Market Thoughts and Analysis

Adapted from my original post here: Why e is the coolest number - June 18, 2008

Okay, this is obviously not going to be a blog about stocks or Elliott Wave counts. It will be about investing only in a very general and relatively abstract sense. But growth (and more importantly exponential growth) is why all of us are investing in the first place. And it is interesting to think about the fact that all exponential growth has its basis in one very cool number: e.

Why am I writing this? Who knows, binv271828 is a strange character [Note: My original Caps username is binv271828 and my new username is binve]. I really wanted to share why I like the number so much that I put it in my name, and why it is related to investing. Let me warn you now that there will be a lot of ideas, mostly math based and some very uninteresting except to those that really like math. Please feel free to skip, and I won't be offended :).

So, it should be abundantly clear to anybody who has read my blog posts that I am a fairly large nerd. I like math… a lot. Numbers are cool. But the relationships between numbers and how they describe physical phenomena are even more interesting. e is a number that describes a whole class of relationships like this. But if you read a math textbook or looked it up on Wikipedia you would have no idea how universally cool it is. So here is the dry definition: e, also called Euler's number, is a transcendental number approximately equal to 2.71828182845904523536…. Okay, who cares. So here is some more dry definition: the mathematical constant e is the unique real number such that the value of the function e^x equals the slope of its tangent line, for all values of x. … again, who cares!

Okay, let's see why e is so cool. Everybody is familiar with compound interest. You begin with a starting amount of money, and then you earn interest.
The next period you earn interest on the principal + interest from the first period, and this continues until you are rich! So let's say you have an account where interest is calculated once a year. So the growth comes in yearly chunks. If you start with $1 and you get an interest rate of 100%, then at the end of the year you will have $2 (the interest earned on $1 at a rate of 100% is $1, and $1 + $1 = $2). Well, what if interest was calculated once every half year? Then after 6 months you will earn $0.50 (100% interest for half a year, or 50% earned on the $1) for a total of $1.50. At the end of the next 6 months you will earn interest on the $1.50. This interest is 50% (for half a year), giving you $0.75. Add that back to the $1.50, which gives you $2.25. Right on, so calculating in more intervals gives you more money. So now you have 2 payments instead of one step at the end of the year. Next imagine the interest was calculated once a month, or once a day, or once an hour, or once a minute, or once a second, or once a nanosecond…. What this does is increase the number of steps, which makes your growth curve "smoother". Eventually, with an infinite number of steps in which your interest is calculated, your growth will trace a continuous curve. That is an interesting relationship. And this relationship can be expressed as: (1 + 100%/n)^n, where n is the number of steps taken. So let's list this relationship for an increasing number of steps:

Steps → Growth
1 → 2.0
2 → 2.25
3 → 2.37
5 → 2.48832
10 → 2.59374246
100 → 2.704813829
1,000 → 2.716923932
10,000 → 2.718145927
100,000 → 2.718268237
1,000,000 → 2.718280469
10,000,000 → 2.718281694
100,000,000 → 2.718281786
1,000,000,000 → 2.718282031 (the last couple of entries drift slightly past e = 2.7182818… because of floating-point rounding)

And as you can see, the relationship converges, and lo and behold, it's e! So this is where e comes in: it is this idea of continuous growth. What this actually is, is a limit. Okay, I am going to throw some calculus at you: e = the limit as n goes to infinity of (1 + 1/n)^n.
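A minimal Python sketch of this convergence (nothing here beyond the formula in the table above):

```python
import math

def growth(n):
    """Value of $1 after one year at 100% interest compounded n times."""
    return (1 + 1 / n) ** n

for n in [1, 2, 10, 100, 10_000, 1_000_000]:
    print(n, growth(n))   # approaches e = 2.718281828...
print("e =", math.e)
```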
This is an exceptionally important relationship, very useful in describing all kinds of phenomena, and it has some very unique properties in relation to derivatives and integrals (more in a minute). What is even more interesting is that if you start looking at any exponential relationship (interest calculated at rates other than 100%, population growth, cell division, bacteria replication, etc.) you can express it as a function of e. Absolutely any exponential relationship at all. What this means is that every single continuous growth relationship in existence can be thought of as a scaled version of e! How cool is that! Since all of us are looking for continuous exponential growth in our portfolio returns :), e is always on our minds subconsciously.

Okay, so that's cool, but what's up with all the derivative and integral stuff? Because of the shape of this exponential curve, and remembering the original dry Wikipedia definition ("the mathematical constant e is the unique real number such that the function e^x has the same value as the slope of its tangent line, for all values of x"), an interesting property is discovered: the derivative d/dx (e^x) = e^x. This is very useful when linearizing functions. Another reason e is useful involves imaginary numbers. It can be shown that e^(ix) is actually a trig relationship (sines and cosines) in the imaginary domain: Euler's formula, e^(ix) = cos(x) + i·sin(x). Now that is really abstract, but you can think of imaginary numbers as describing an oscillating signal or motion. Any motion that can be described by a magnitude and an angle or phase (such as a pendulum moving back and forth, a wing vibrating through the air, a cesium atom oscillating in an atomic clock) can be thought of in terms of imaginary numbers, which can then be compactly represented with e. e also has usefulness in integrals. Since e can represent imaginary numbers, it can represent any oscillatory signal.
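The "trig relationship in the imaginary domain" is Euler's formula, e^(iθ) = cos θ + i·sin θ, and it takes two lines of Python's cmath to check it numerically:

```python
import cmath
import math

theta = 0.7  # any angle works
z = cmath.exp(1j * theta)
assert abs(z - complex(math.cos(theta), math.sin(theta))) < 1e-12

# The famous special case theta = pi gives e^(i*pi) = -1:
print(cmath.exp(1j * math.pi))
```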
Any oscillating signal can decay (think of a bar door when you open it, rocking back and forth on its hinge until it eventually comes to rest), stay stable, or grow (if you are not familiar with the bridge "Galloping Gertie", the old Tacoma Narrows Bridge, check this out and be sure to watch the video under the collapse section). Growing oscillating signals are bad for mechanical systems, like Galloping Gertie, and are also bad for electrical systems. Exponentially growing signals cannot be easily analyzed since their integrals do not converge. So you cannot even analyze the effects of a system with a non-converging integral. That is, until you throw in some e! Since e^(ix) is an oscillatory number, you can multiply by a decaying exponential in e with a sufficiently large negative exponent in order to force the integral to converge. This is the principle behind the Laplace transform. Figuring out how much decay must be added tells you something about the stability of a system, and is a very useful technique in controls. e is just so cool!

Okay, okay, enough geeking out. If you want to use e in some useful formulas for investing calculations, here are a few:

growth = e^(total rate * time)
annualized growth rate = e^(ln(total return multiple) / number of years) - 1

where ln is the natural logarithm (another cool relationship that is related to e). If you also have a love of e, please feel free to share! If you have skipped everything in the middle and come down to the end, well, I don't blame you :)

For more reading on e, check out: Why e is the coolest number
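The two investing formulas above translate directly to Python (math.log is the natural logarithm ln):

```python
import math

def growth_multiple(total_rate, years):
    # growth = e^(total rate * time), i.e. continuous compounding
    return math.exp(total_rate * years)

def annualized_rate(total_return_multiple, years):
    # annualized growth rate = e^(ln(total return multiple) / years) - 1
    return math.exp(math.log(total_return_multiple) / years) - 1

# Tripling your money over 10 years works out to about 11.6% per year:
print(annualized_rate(3, 10))
```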
{"url":"http://marketthoughtsandanalysis.blogspot.com/2009/09/why-e-is-coolest-number.html","timestamp":"2014-04-17T06:48:15Z","content_type":null,"content_length":"117625","record_id":"<urn:uuid:8f3ff162-95a3-481d-9e58-6d38f9b54f95>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Whitman, MA Math Tutor

Find a Whitman, MA Math Tutor

...I have tutored math prep classes for 4th grade MCAS and am acquainted with the Scott Foresman math curriculum and the McGraw-Hill curriculum. I have taught phonics using the Bradley phonics system but am acquainted with the Tellian TLC methodology. I have assessed using DIBELS and also understand how to d...
9 Subjects: including prealgebra, reading, vocabulary, grammar

My name is Lauren and I am currently a math teacher, soccer and lacrosse coach, and tutor. I am looking for more tutoring hours during the school year and over the summer as well! I am a certified math teacher for grades 5-12 and have years of experience working with all levels!
12 Subjects: including calculus, elementary (k-6th), trigonometry, soccer

...I can help you with your legal studies. I have many years of experience tutoring the SAT.
29 Subjects: including algebra 2, trigonometry, linear algebra, ACT Math

...I work with students in the core academic subjects: math, science, social studies, and English. I also work on study skills, organization and time management with students as needed. I have extensive experience working with students with ADD/ADHD.
31 Subjects: including calculus, European history, special needs, dyslexia

...I have successfully tutored high, middle, and elementary school students in many subjects: math through calculus, history (including US history, Western Civ, and Modern European history), advanced and introductory Latin, AP English, SAT/SSAT prep, reading and writing, study skills, and more. I...
69 Subjects: including precalculus, logic, elementary (k-6th), physics
{"url":"http://www.purplemath.com/Whitman_MA_Math_tutors.php","timestamp":"2014-04-18T11:06:37Z","content_type":null,"content_length":"23655","record_id":"<urn:uuid:7da3d914-ecc8-4637-8fd9-4fa37a39dbe6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Categorifications of the Fibonacci Fusion Ring arising from Conformal Field Theory

I was reading about realizations of the "Fibonacci" fusion ring $X \otimes X = X \oplus 1$ in Fusion Categories of Rank 2 by Victor Ostrik. Apparently, there are two of them, and they arise in various contexts:

• integer-spin representations of integrable $\widehat{sl}_2$-modules of level 3
• the minimal model $\mathcal{M}(2,5)$ of the Virasoro algebra (central charge c = -22/5)
• representations of $U_q(sl_2)$ with $q = e^{\pi i /5}, e^{3\pi i / 5}$.

I would like to understand each of these specific categorifications of the Fibonacci fusion ring. Can someone here explain the basics of integrable $\widehat{sl}_2$-modules or of the $\mathcal{M}(2,5)$ minimal model from Conformal Field Theory? I would also like to learn about $U_q(sl_2)$, though it's probably written in many texts.

1 I don't understand your question. What does "how is it realized" mean? Usually you realize an abstractly described object by providing a construction, but then you've already answered your own question by providing several constructions. – Noah Snyder Dec 30 '09 at 21:42

Ostrik's paper explains this category pretty well in the abstract; it's the constructions themselves I do not understand. In other words, where can I learn the basics about $\widehat{sl}_2$-modules or $\mathcal{M}(2,5)$? It would be even better if someone explained them here. – john mangual Dec 31 '09 at 1:05

1 Thanks for clarifying what you are after. – Chris Schommer-Pries Dec 31 '09 at 2:02

2 You would like to learn about $U_q(sl_2)$ "even though" it is written in many texts? I think what you mean is, you'd like a mathoverflow summary of those texts. With all due respect, I hope that no one obliges you, because a good book such as Kassel is the right way to learn the material.
The rest of your question, a specific understanding of the Fibonacci category in its two Galois conjugate forms, is better. – Greg Kuperberg Dec 31 '09 at 2:29

It sounds like the theory of planar algebras arises naturally in attempts to understand all these physical models. It was probably unrealistic to assume it can all be explained here. These book suggestions really help. – john mangual Dec 31 '09 at 16:11

2 Answers

Unfortunately all three of those realizations are the sort of thing you need to read a book about, not an MO post. I agree with Greg that Kassel's book is a great place to start for the quantum group construction (I don't know the other two constructions well; presumably for the affine algebra construction you'd want to start with Kac's book?). On the other hand there is an easier to explain elementary diagrammatic description. As usual with diagram categories you only construct a full subcategory, and then you'll need to take the additive and idempotent completions to get an abelian category. Consider the Temperley-Lieb subcategory, whose objects are indexed by integers and whose morphisms m->n are given by linear combinations of planar diagrams of nonintersecting arcs with m boundary points at the bottom and n boundary points at the top, modulo a single relation that a circle can be removed for a multiplicative factor of either the golden ratio or its conjugate. Composition is stacking, tensor product is disjoint union. There's an explicit 4-strand projection (called a Jones-Wenzl idempotent) here that has the property that any way you close it off you get zero. Kill that idempotent. Now look at the "even part", i.e. the full subcategory whose objects are even integers. This is your category. Its simple objects are the 0- and 2-strand Jones-Wenzl idempotents. There's another way to think of this example. First checkerboard shade the regions of all your even diagrams so that they're unshaded on the outside.
Then collapse all the dark regions to lines. What you end up with now has half as many boundary points and is allowed to have internal 3-valent and 1-valent vertices. It's easy to see that they satisfy an I=H relation and a relation allowing absorbing vertices. This gives a construction of the Fibonacci category using the Yamada polynomial relations (I think to get the usual Yamada polynomial on the nose here you want to actually throw in a bunch of JW2s everywhere, but it's six of one, half dozen of the other). Finally there's a slightly different diagram description given in the appendix of one of my papers with Emily Peters and Scott Morrison. In our notation there the Fibonacci category is (the additive and idempotent completion of) the tadpole planar algebra T_2.

The Virasoro minimal model $\mathcal{M}(2,5)$ (or in some conventions $\mathcal{M}(5,2)$) is the conformal field theory which describes the critical behaviour of the Lee-Yang edge singularity. It is described, for example, in Conformal Field Theory, by di Francesco, Mathieu and Sénéchal; albeit the description of the Lee-Yang singularity itself is perhaps a little too physicsy. Still, their treatment of minimal models should be amenable to mathematicians without prior exposure to physics. At any rate, googling Lee-Yang edge singularity might reveal other sources easier to digest. In general it is the Verlinde formula which relates the fusion ring and the Virasoro characters, and at least for the case of the Lee-Yang singularity, these can be related in turn to Temperley-Lieb algebras and Ocneanu path algebras on a suitable graph. Some details appear in this paper. The relation between the Virasoro minimal models and the representations of $\widehat{sl}_2$ goes by the name of the coset construction in the physics conformal field theory literature, or also Drinfeld-Sokolov reduction.
This procedure gives a cohomology theory (a version of semi-infinite cohomology for a nilpotent subalgebra) which produces Virasoro modules from $\widehat{sl}_2$ modules. Relevant words to google are W-algebras, Casimir algebras,... Of course here we are dealing with the simplest case of $\widehat{sl}_2$ and Virasoro, which is the tip of a very large iceberg. The case of the Lee-Yang edge singularity is simple enough that it appears in many papers as an example from which to understand more general constructions. I know less about the quantum group story, but this paper of Gaberdiel might be a good starting point.
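For readers who just want to see where "Fibonacci" enters: iterating the fusion rule from the question, X ⊗ X = 1 ⊕ X, gives X^⊗n = a_n·1 ⊕ b_n·X with Fibonacci multiplicities. A small sketch (independent of any of the constructions above; ⊗ and ⊕ written in ASCII in the comments):

```python
def fib_fusion(n):
    """Multiplicities (a_n, b_n) of 1 and X in X^{(x)n}, given X (x) X = 1 (+) X."""
    a, b = 1, 0          # X^{(x)0} is the unit object 1
    for _ in range(n):
        a, b = b, a + b  # tensoring once more by X: a X (+) b (1 (+) X)
    return a, b

print(fib_fusion(3))  # X^{(x)3} = 1 (+) 2X
```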
{"url":"http://mathoverflow.net/questions/10174/categorifications-of-the-fibonacci-fusion-ring-arising-from-conformal-field-theo","timestamp":"2014-04-17T07:19:21Z","content_type":null,"content_length":"65280","record_id":"<urn:uuid:789328b2-c52e-4e29-be89-d7d86678956c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving a quotient ring is a field

November 16th 2009, 02:50 PM #1

We are given R/I, where R := { $\left(\begin{array}{cc}q&r\\0&s\end{array}\right)$ : q, r, s are in the rational numbers} and I := { $\left(\begin{array}{cc}0&r\\0&s\end{array}\right)$ : r, s also in the rational numbers}, and the defined set I forms an ideal of R.

Prove (or disprove) that R/I is a field.

I'm not sure where to even start. I know for other quotient rings, it was a question of finding factors of the "denominator" - if any existed, then it could not be a field since it would not be irreducible. But since we're dealing with an ideal, I'm not so sure. I've tried finding zero-divisors, but at this point it seems a fruitless exercise, so I turn to your collective expertise.

Last edited by flabbergastedman; November 16th 2009 at 03:39 PM. Reason: b,c = r, s in my world.

You can check: you didn't ask anything, but I guess that you need to show that the quotient ring $R\slash I$ is a field. Well, define $f: R\rightarrow \mathbb{Q}$ by $f\left(\begin{array}{cc}q&r\\0&s\end{array}\right) = q$, and now just check that f is a ring homomorphism and $I = Ker(f)$.

Oops! I will change the opening post, but yes, the problem is merely to show whether or not R/I is a field. Also, I'm a little unclear as to how that proves we have a field.
If we have a mapping f: R -> S, isn't that simply showing that the quotient ring is isomorphic to the image of the mapping?

Edited for confusing rings and fields again.

Last edited by flabbergastedman; November 16th 2009 at 05:04 PM.
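A quick numerical sanity check of the suggested map f (projection onto the top-left entry) on sample matrices from R; since f is a surjective ring homomorphism with kernel exactly I, the first isomorphism theorem gives R/I ≅ Q, which is a field:

```python
from fractions import Fraction as F

def mul(A, B):
    """2x2 matrix product over the rationals."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def f(M):
    """The proposed homomorphism R -> Q: top-left entry."""
    return M[0][0]

A = ((F(1, 2), F(3)), (F(0), F(5)))
B = ((F(2), F(-1)), (F(0), F(7, 3)))

assert f(mul(A, B)) == f(A) * f(B)   # multiplicative on this sample
assert mul(A, B)[1][0] == 0          # R is closed under multiplication
# ker f = matrices with top-left entry 0, which is exactly I
```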
{"url":"http://mathhelpforum.com/advanced-algebra/114994-proving-quotient-ring-field.html","timestamp":"2014-04-16T05:01:44Z","content_type":null,"content_length":"40832","record_id":"<urn:uuid:5f27c5d2-dfab-40e9-9efb-332968cda3e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
In ancient times, mathematics was mainly used in an auxiliary or applied role. Thus, mathematical methods were used to solve problems in architecture and construction (as in the public works of the Harappan civilization), in astronomy and astrology (as in the works of the Jain mathematicians) and in the construction of Vedic altars (as in the case of the Shulba Sutras of Baudhayana and his successors). By the sixth or fifth century BCE, mathematics was being studied for its own sake, as well as for its applications in other fields of knowledge.

Supplementary to the Vedas are the Shulba Sutras. These texts are considered to date from 800 to 200 BCE. Four in number, they are named after their authors: Baudhayana (600 BCE), Manava (750 BCE), Apastamba (600 BCE), and Katyayana (200 BCE). The sutras contain the famous theorem commonly attributed to Pythagoras. Some scholars (such as Seidenberg) feel that this theorem originated in the Indian tradition, as opposed to the geometric proof that the Greeks, and possibly the Chinese, were aware of. The Shulba Sutras introduce the concept of irrational numbers, numbers that are not the ratio of two whole numbers. For example, the square root of 2 is one such number. The sutras give a way of approximating the square root of a number using rational numbers through a recursive procedure which in modern language would be a 'series expansion'. This predates, by far, the European use of Taylor series.

Jain Mathematics (600 BCE to 500 CE)

Jain cosmology led to ideas of the infinite. This, in turn, led to the development of the notion of orders of infinity as a mathematical concept. By orders of infinity, we mean a theory by which one set could be deemed to be 'more infinite' than another. In modern language, this corresponds to the notion of cardinality. For a finite set, its cardinality is the number of elements it contains. However, we need a more sophisticated notion to measure the size of an infinite set.
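The recursive approximation of the square root of 2 mentioned in the Shulba Sutras section above can be illustrated with a Heron-style iteration on exact rationals (a modern reconstruction, not the Sutras' own procedure); two steps from the guess 3/2 already reproduce the classical Shulba Sutra value 577/408 = 1 + 1/3 + 1/(3·4) - 1/(3·4·34):

```python
from fractions import Fraction

def approx_sqrt2(steps):
    x = Fraction(3, 2)        # coarse starting guess
    for _ in range(steps):
        x = (x + 2 / x) / 2   # average x with 2/x; error roughly squares each step
    return x

print(approx_sqrt2(2))        # 577/408, about 1.4142157
```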
In Europe, it was not until Cantor's work in the nineteenth century that a proper concept of cardinality was developed.

No account of Indian mathematics would be complete without a discussion of Indian numerals, the place-value system, and the concept of zero. The numerals that we use even today can be traced to the Brahmi numerals that seem to have made their appearance in 300 BCE. But Brahmi numerals were not part of a place-value system. They evolved into the Gupta numerals around 400 CE and subsequently into the Devnagari numerals, which developed slowly between 600 and 1000 CE. By 600 CE, a place-value decimal system was well in use in India. This means that when a number is written down, each symbol that is used has an absolute value, but also a value relative to its position. For example, the numbers 1 and 5 have a value on their own, but also have a value relative to their position in the number 15. The importance of a place-value system need hardly be emphasized. It would suffice to cite an often-quoted remark by Laplace: 'It is India that gave us the ingenious method of expressing all numbers by means of ten symbols, each symbol receiving a value of position as well as an absolute value; a profound and important idea which appears so simple to us now that we ignore its true merit. But its very simplicity and the great ease which it has lent to computations put our arithmetic in the first rank of useful inventions; and we shall appreciate the grandeur of the achievement the more when we remember that it escaped the genius of Archimedes and Apollonius, two of the greatest men produced by antiquity.'

The elevation of zero to the same status as other numbers involved difficulties that many brilliant mathematicians struggled with. The main problem was that the rules of arithmetic had to be formulated so as to include zero. While addition, subtraction, and multiplication with zero were mastered, division was a more subtle question.
Today, we know that division by zero is not well-defined and so has to be excluded from the rules of arithmetic. But this understanding did not come all at once, and took the combined efforts of many minds. It is interesting to note that it was not until the seventeenth century that zero was being used in Europe, and the path of mathematics from India to Europe is the subject of much historical research.

The Classical Era of Indian Mathematics (500 to 1200 CE)

The most famous names of Indian mathematics belong to what is known as the classical era. This includes Aryabhata I (500 CE), Brahmagupta (700 CE), Bhaskara I (900 CE), Mahavira (900 CE), Aryabhata II (1000 CE) and Bhaskaracharya, or Bhaskara II (1200 CE).

One of Aryabhata's discoveries was a method for solving linear equations of the form ax + by = c. Here a, b, and c are whole numbers, and we are seeking values of x and y in whole numbers satisfying the above equation. For example, if a = 5, b = 2, and c = 8, then x = 8 and y = -16 is a solution. In fact, there are infinitely many solutions: x = 8 - 2m, y = 5m - 16, where m is any whole number, as can easily be verified. Aryabhata devised a general method for solving such equations, and he called it the kuttaka (or pulverizer) method. He called it the pulverizer because it proceeded by a series of steps, each of which required the solution of a similar problem, but with smaller numbers. Thus, a, b, and c were pulverized into smaller numbers.

Among other important contributions of Aryabhata is his approximation of pi to four decimal places (3.1416). By comparison, the Greeks were using the weaker approximation 3.1429. Also of importance is Aryabhata's work on trigonometry, including his tables of values of the sine function as well as algebraic formulae for computing the sine of multiples of an angle.

Mathematics in the Modern Age

Ramanujan (1887-1920) is perhaps the most famous of modern Indian mathematicians.
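Aryabhata's kuttaka from the previous section is, in modern terms, the extended Euclidean algorithm; a short sketch (function names are mine) reproduces the worked example a = 5, b = 2, c = 8:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b), by 'pulverizing' a and b."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def kuttaka(a, b, c):
    """One integer solution of a*x + b*y = c, when gcd(a, b) divides c."""
    g, x, y = ext_gcd(a, b)
    assert c % g == 0, "no integer solutions"
    return x * (c // g), y * (c // g)

print(kuttaka(5, 2, 8))  # (8, -16), matching the example in the text
```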
Though he produced significant and beautiful results in many aspects of number theory, his most lasting discovery may be the arithmetic theory of modular forms. In an important paper published in 1916, he initiated the study of the tau function. The values of this function are the Fourier coefficients of the unique normalized cusp form of weight 12 for the modular group SL2(Z). Ramanujan proved some properties of the function and conjectured many more. As a result of his work, the modern arithmetic theory of modular forms, which occupies a central place in number theory and algebraic geometry, was developed by Hecke.

"Education should be started with mathematics. For it forms well designed brains that are able to reason right. It is even admitted that those who have studied mathematics during their childhood should be trusted, for they have acquired solid bases for arguing which become to them a sort of second nature."

Mathematics is around us. It is present in different forms whenever we pick up the phone, manage money, travel to some place, play soccer, or meet new friends; unintentionally, in all these things mathematics is involved. There are many illustrations that testify to the presence of mathematics in everything that we are doing. With some good understanding of simple and compound interest, you can manage the way your money grows. The mathematical concept that deals with the chance of winning a lottery game is probability. If you want to calculate how much paint, wallpaper, flooring, carpeting or tile you have to buy for your project, then you must know the area of the wall or floor. Examples of geometry in everyday life: geometry in clothing, geometry in house decoration, geometry in art (the Volkswagen logo), geometry in architecture (the Eiffel Tower, Paris). Every area of mathematics has its own unique applications to the different career options.
For example, Algebra: computer science, cryptology, networking, the study of symmetry in chemistry and physics. Calculus (differential equations): chemistry, biology, physics, engineering, the motion of water, rocket science, molecular structure, option price modeling in business and economics, etc.
{"url":"http://www.meritnation.com/ask-answer/question/mathematics-in-india-past-present-future/math/2265917","timestamp":"2014-04-17T18:26:07Z","content_type":null,"content_length":"224385","record_id":"<urn:uuid:97fdb0dc-fc3c-40d3-9210-541710ca2484>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Design of Basic Elements of Molecular Computers Based on Quantum Mechanical Investigation of Photoactive Organic and Fullerene Molecules

A. Tamulis*
Institute of Theoretical Physics and Astronomy, Laboratory of Theoretical Molecular Electronics

This is an abstract for a poster to be presented at the Fifth Foresight Conference on Molecular Nanotechnology. There will be a link from here to the full article when it is available on the web.

Work is underway in our group to design and produce molecular-level implementations of the basic elements of photovoltaic cells, solar cells, molecular devices for electronic genome regulation, and digital and postdigital computers. The resultant classical and quantum molecular devices could be used for much faster, low-power logic and simplified high-speed memory in digital (classical and quantum logic) molecular computers, cellular automata and neuromolecular networks. The molecular implementations (MI) of two-, three-, and four-variable logic functions, summators of neuromolecular networks, and cells of molecular cellular automata have been designed based on the results of quantum mechanical calculations of photo-induced electron donors, electron insulators, and electron acceptors, as well as fullerene and endohedral fullerene molecules. A complete set of sixteen MI of two-variable logic functions (for example OR, AND, IMPLICATION, EQUIVALENCE, DIFFERENCE, etc.) has been designed, and the use of initial basis sets of two-variable molecular logic functions has been proposed ({OR, AND, NEGATION}, or {NOR}, or/and {NAND}); see [2-6] for more detail. We have plans to perform quantum mechanical searches for novel advanced photoactive molecules to be used to design and construct classical and quantum nano- and pico-size molecular devices.
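The claim implicit in the basis sets above, that {NAND} alone suffices, can be verified mechanically: closing the two input truth tables under pointwise NAND generates all sixteen two-variable logic functions. A small illustration (independent of the molecular implementations themselves):

```python
# Truth tables over inputs (a, b) in ((0,0), (0,1), (1,0), (1,1))
A = (0, 0, 1, 1)
B = (0, 1, 0, 1)

def nand(x, y):
    """Pointwise NAND of two truth tables."""
    return tuple(1 - (p & q) for p, q in zip(x, y))

funcs = {A, B}
while True:  # closure: keep NANDing until no new truth tables appear
    new = {nand(x, y) for x in funcs for y in funcs} - funcs
    if not new:
        break
    funcs |= new

print(len(funcs))  # 16: NAND generates every two-variable logic function
```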
Simulations of MI of photovoltaic cells, solar electromagnetic radiation energy converters, variable resistors, and summators, as well as theoretical design of molecular and quantum neural networks, will be carried out using already completed quantum mechanical calculations of organic electron insulators, photo-induced electron donor and electron acceptor molecules, photoactive supermolecules, electron donor and electron acceptor oligomers, empty and endohedral fullerene molecules, and self-assemblies of supramolecules. Changes in the electronic characteristics of certain supramolecules and supermolecules (which are prospective components of MI of basic elements of classical and quantum digital and postdigital computers), caused by defined energy quanta of light or single electrons acting on them, will be evaluated. Their electronic resistance and accumulation of electrons become altered due to charge transfer processes which depend on certain quantum parameters of molecules: point groups, energies of electron levels, dipole (multipole) moments, electron affinity, ionization potential, molecular orbitals, electron density, electrostatic-potential derived charges, bond orders, net atomic charges, free valences, total energy, energy of formation, singlet and triplet UV/visible spectra, IR and Raman spectra, polarizabilities, hyperpolarizabilities, magnetic moments, NMR properties, geometry optimization, atoms-in-molecules properties, etc. These quantum parameters of molecules and molecular derivatives will be calculated using quantum chemical and mechanical methods: MNDO, AM1, PM3, Hartree-Fock, MP2, MP3, MP4, MP5, CI, CIS, and density functional (XAlpha, LYP, BLYP, VWN5, PW91, Becke3), using the MOPAC-7, GAMESS, and Gaussian 94 programs. In order to investigate electrons and holes localized on the molecular derivatives under investigation, calculations of molecular ions will be performed; quantum coherence time will be evaluated in all molecular devices.
The designed MI of cells of classical and quantum cellular automata will be investigated through quantum mechanical calculations, and the probabilities of electrons hopping to the various branches of elementary MI of cells of classical and quantum cellular automata will be evaluated. Quantum mechanical investigations of the designed MI of two-, three-, and four-variable classical and quantum logic functions will be performed. Complete basis sets of MI of classical and quantum logic functions will be designed from the MI of the initial basis sets, in order to design integrated MI of classical and quantum circuits. A quantum mechanical search for the magnetic properties of molecule-based materials will be performed for the design of magnetically active molecular devices. Once designed and constructed, these classical and quantum molecular devices could be used for the development of classical and quantum logic molecular implementation digital computers, cellular automata, neuromolecular networks and molecular devices for electronic genome regulation.

References (published):
[1]. Tamulis, A.; Braga, M. and Klimkans, A. (1995) "Quantum Chemical Investigations of Two Fullerene C_60 Molecules", Fullerene Science and Technology, Vol. 3, No. 5, pp. 603-610.
[2]. Tamulis, A. and Tamulis, V. (1994) "Molecular Electronics - Advanced Technology", Science and Arts of Lithuania, Vol. 2, No. 4, pp. 40-47 (in Lithuanian).
[3]. Tamulis, A.; Stumbrys, E.; Tamulis, V.; Tamuliene, J. (1996) "Quantum Mechanical Investigations of Photoactive Molecules, Supermolecules, Supramolecules and Design of Basic Elements of Molecular Computers", NATO ASI series "Photoactive Organic Materials - Science and Applications", edited by F. Kajzar, V.M. Agranovich and C.Y.-C. Lee, 3. High Technology - Vol. 9, pp. 53-66.

Reported in conferences:
[4]. Tamulis, A.; Giceviciute-Tamuliene, J.; Stumbrys, E.; Tamulis, V. and Nakas, A.
(1995) "Quantum Mechanical Design of Self-Assembly of Photoactive Supramolecules and Design of Basic Elements of Molecular Devices", in: Book of Abstracts of NATO ASI on "Physics of Biomaterials: Fluctuations, Self-assembly and Evolution", held in Geilo, Norway, 27 March to 06 April 1995, p. 52.
[5]. Tamulis, A.; Stumbrys, E.; Tamulis, V.; Giceviciute-Tamuliene, J. and Nakas, A. (1995) "Stability Investigations of Small Empty and Endohedral Fullerene Molecules, Disc-like Supramolecules and Design of Basic Elements of Molecular Computers", in: Book of Abstracts, NATO ASI on "Localized and Itinerant Molecular Magnetism: From Molecular Assemblies to the Devices", 22 April to 03 May 1995, Tenerife, Spain, p. 520.

Accepted by journals:
[6]. Tamulis, A. and Tamulis, V. (1995) "Quantum Mechanical Design of Basic Elements of Molecular Computers", accepted: Newsletter #8 of International Society for Molecular Electronics and BioComputing, 4 figures.
[7]. Balevicius, L.M.; Stumbrys, E.; Tamulis, A. (1996) "Conformations and Electronic Structure of Fullerene C_24 and C_26 Molecules", accepted: Fullerene Science and Technology, Vol. 5, No. 1, 1997, 12 pages, 2 figures, 1 table.

*Corresponding address: Dr. Arvydas Tamulis, senior research fellow, Institute of Theoretical Physics and Astronomy, Laboratory of Theoretical Molecular Electronics, A. Gostauto 12, Vilnius 2600, Lithuania. Home address: Didlaukio 27-40, Vilnius 2057, Lithuania. Tel: work +(370-2)-620861 or home +(370-2)-778743; fax: +(370-2)-224694 or +(370-2)-225361; e-mail: TAMULIS@ITPA.LT or GICEVIC@ITPA.LT
Differential Equations and Computational Simulations III Electron. J. Diff. Eqns., Conf. 01, 1997, pp. 109-117. A bifurcation result for Sturm-Liouville problems with a set-valued term Georg Hetzer Abstract: It is established in this note that has a multiple bifurcation point at Published November 12, 1998. Mathematics Subject Classification: 34B15, 34C23, 47H04, 86A10. Key words and phrases: Differential inclusion, Sturm-Liouville problem, Rabinowitz bifurcation. Show me the PDF file (115K), TEX file, and other files for this article. Georg Hetzer Department of Mathematics, Auburn University Auburn, AL 36849-5310, USA E-mail address: hetzege@mail.auburn.edu Return to the Proceedings of Conferences: Electr. J. Diff. Eqns.
depth-first search of a graph

May 18th, 2013, 06:07 PM
depth-first search of a graph
Hi!!! I need some help... The exercise is: You have to implement a data structure to represent graphs, directed or undirected, that tries to avoid both the wasted space of the adjacency-matrix representation of a graph and the difficulty of searching for edges in the adjacency-list representation. We assume that the vertices are numbered from 1 to nverts and that the exit degree of each vertex is at most MAXDEG. If deg[i] is the exit degree of vertex i, then the neighbors of vertex i can be stored in the array edge[i][j], 1 <= j <= deg[i]. Write a program that reads the data from a file: whether the graph is directed or undirected (1 or 0), the number of vertices (nverts), the number of edges (nedges), and the first and last vertex of each edge. Write a function dfs that, taking as argument the data structure you implemented for the representation of a graph, prints the edges found by a depth-first search of the graph. What I've done so far is: I wrote a program that reads this information from a file, calculates the exit degree of each vertex and creates the array edge[i][j]. What data structure do I have to implement???

May 19th, 2013, 04:35 AM
Re: depth-first search of a graph

May 19th, 2013, 04:53 AM
Re: depth-first search of a graph
So do I only have to implement the array edge[i][j], 1 <= j <= deg[i], and then write the function dfs that takes this array as its argument (void dfs(int edges[][]))?

May 19th, 2013, 01:36 PM
Re: depth-first search of a graph
It's actually not correct to call this a matrix, because it may not have the same number of columns in each row. The correct term would be jagged array. This also means that you cannot create it as a static array (at least, not easily); you'll have to make a dynamic array or (better) use a std::vector. Then you can pass it to a function.

May 19th, 2013, 01:58 PM
Re: depth-first search of a graph
Ok.. Thank you very much!!!!!! :p
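For readers following the thread, the jagged-array representation and a depth-first traversal over it can be sketched as follows. Python is used here for brevity; in the C++ setting discussed above, `edge` would naturally be a `std::vector<std::vector<int>>`, and the recursion carries over directly. The example graph is my own, not from the thread:

```python
# Jagged adjacency structure: edge[i] holds only the deg[i] neighbours
# of vertex i, avoiding the O(nverts^2) space of an adjacency matrix.
def dfs(edge, nverts):
    visited = [False] * (nverts + 1)   # vertices numbered 1..nverts
    tree_edges = []

    def visit(u):
        visited[u] = True
        for v in edge[u]:
            if not visited[v]:
                tree_edges.append((u, v))
                visit(v)

    for u in range(1, nverts + 1):     # also covers disconnected graphs
        if not visited[u]:
            visit(u)
    return tree_edges

# Example: undirected graph with 4 vertices, stored as neighbour lists.
edge = [[], [2, 3], [1, 4], [1], [2]]  # index 0 unused
print(dfs(edge, 4))                    # prints the DFS tree edges
```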
Relation Problem!

May 17th 2009, 07:15 PM #1
Let $R$ and $S$ be the relations on $\mathbb{Z}$ defined by
$a \mathrel{R} b \iff a \equiv b \pmod 9$
$a \mathrel{S} b \iff a \equiv b \pmod 6$
Find $(R \cup S)^{\infty}$ and $\frac{A}{(R \cup S)^{\infty}}$.
Last edited by mr fantastic; May 22nd 2009 at 03:41 AM. Reason: Tried to make the post intelligible

May 17th 2009, 11:52 PM #2
Do you mean find $(R \cup S)^\infty$ as in everything in all the sets in $(R \cup S)$? There are a total of $19$ such sets. If $(a, b) \in R \cap S$ then $a = 9r + b = 6s + b \Rightarrow 3r = 2s$. Thus, $2 \mid r$ and $3 \mid s$ (although the $s$ follows from the formula every time you plug in the $r$). So, the elements related to $i \in \mathbb{Z}$ under both $R$ and $S$ are the elements $\{18r + i : r \in \mathbb{Z}\}$. What do you mean by the set $A$?
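As a computational aside (my own working, not from the thread): $(R \cup S)^{\infty}$ is the transitive closure of the union, and a chain of $R$-steps and $S$-steps changes an integer by a sum of terms $\pm 9$ and $\pm 6$, i.e. by an arbitrary element of $9\mathbb{Z} + 6\mathbb{Z} = \gcd(9,6)\mathbb{Z} = 3\mathbb{Z}$. So the closure is congruence mod 3, and if $A = \mathbb{Z}$ then $A/(R \cup S)^{\infty}$ has exactly three classes. A small breadth-first search over a finite window makes this concrete:

```python
from math import gcd

# a R b iff a = b (mod 9); a S b iff a = b (mod 6).
# A chain of R- and S-steps changes a number by sums of +-9 and +-6,
# i.e. by any element of the subgroup 9Z + 6Z = gcd(9, 6)Z = 3Z.
assert gcd(9, 6) == 3

# Verify on a finite window: search from 0 with steps +-9, +-6 kept in [0, 30].
frontier, reachable = [0], {0}
while frontier:
    x = frontier.pop()
    for step in (9, -9, 6, -6):
        y = x + step
        if 0 <= y <= 30 and y not in reachable:
            reachable.add(y)
            frontier.append(y)

print(sorted(reachable))  # exactly the multiples of 3 in [0, 30]
```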
3.5.1 HIGH PERFORMANCE ADDITION

The ripple-carry adder that we reviewed in Section 3.2.2 may introduce too much delay into a system. The longest path through the adder is from the inputs of the least significant full adder to the outputs of the most significant full adder. The process of summing the inputs at each bit position is relatively fast (a small two-level circuit suffices), but the carry propagation takes a long time to work its way through the circuit. In fact, the propagation time is proportional to the number of bits in the operands. This is unfortunate, since more significant figures in an addition translate to more time to perform the addition. For many applications, the speed of arithmetic operations is the bottleneck to performance. Most supercomputers, such as the Cray, the Tera, and the Intel Hypercube, are considered “super” because they excel at performing fixed and floating point arithmetic. In this section we discuss a number of ways to improve the speed of addition, subtraction, multiplication, and division.

3.4.2 FLOATING POINT MULTIPLICATION AND DIVISION

Floating point multiplication and division are performed in a manner similar to floating point addition and subtraction, except that the sign, exponent, and fraction of the result can be computed separately. If the operands have the same sign, then the sign of the result is positive. Unlike signs produce a negative result. The exponent of the result before normalization is obtained by adding the exponents of the source operands for multiplication, or by subtracting the divisor exponent from the dividend exponent for division. The fractions are multiplied or divided according to the operation, followed by normalization.

3.4.1 FLOATING POINT ADDITION AND SUBTRACTION

Floating point arithmetic differs from integer arithmetic in that exponents must be handled as well as the magnitudes of the operands.
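The carry chain described in 3.5.1 above is easy to watch in simulation. The sketch below (illustrative only; the bit-vector encoding is my own) adds two operands one full adder at a time, exactly mirroring how the carry must ripple from the least significant stage to the most significant one:

```python
def ripple_add(a_bits, b_bits):
    """Add two equal-length bit vectors (least significant bit first)
    one full adder at a time, returning the sum bits and the final
    carry out of the word."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                    # full-adder sum
        carry = (a & b) | (carry & (a ^ b))  # full-adder carry out
        out.append(s)
    return out, carry

# Worst case for an n-bit adder: 111...1 + 000...1 makes the carry
# ripple through every single stage.
n = 8
a = [1] * n               # 11111111 (255), LSB first
b = [1] + [0] * (n - 1)   # 00000001 (1), LSB first
s, cout = ripple_add(a, b)
print(s, cout)            # sum bits are all 0, carry out is 1: 255 + 1 = 256
```

The worst-case pattern forces the carry to traverse every stage, which is why the propagation time of a ripple-carry adder grows linearly with the operand width.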
As in ordinary base 10 arithmetic using scientific notation, the exponents of the operands must be made equal for addition and subtraction. The fractions are then added or subtracted as appropriate, and the result is normalized. This process of adjusting the fractional part, and also rounding the result, can lead to a loss of precision in the result. Consider the unsigned floating point addition (.101 × 2^3 + .111 × 2^4) in which the fractions have three significant digits. We start by adjusting the smaller exponent to be equal to the larger exponent, and adjusting the fraction accordingly.

3.4 Floating Point Arithmetic

Arithmetic operations on floating point numbers can be carried out using the fixed point arithmetic operations described in the previous sections, with attention given to maintaining aspects of the floating point representation. In the sections that follow, we explore floating point arithmetic in base 2 and base 10, keeping the requirements of the floating point representation in mind.

3.3.3 SIGNED MULTIPLICATION AND DIVISION

If we apply the multiplication and division methods described in the previous sections to signed integers, then we will run into some trouble. Consider multiplying −1 by +1 using four-bit words, as shown in the left side of Figure 3-16. The eight-bit equivalent of +15 is produced instead of −1. What went wrong is that the sign bit did not get extended to the left of the result. This is not a problem for a positive result because the high order bits default to 0, producing the correct sign bit 0.

3.3.2 UNSIGNED DIVISION

In longhand binary division, we must successively attempt to subtract the divisor from the dividend, using the fewest number of bits in the dividend as we can. Figure 3-13 illustrates this point by showing that (11)₂ does not “fit” in 0 or 01, but does fit in 011, as indicated by the pattern 001 that starts the quotient.
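Returning to the worked example in 3.4.1 above, the align-add-normalize sequence, and the precision it can cost, can be traced in a few lines. The sketch below uses plain truncation at every step (one of several possible rounding policies; the textbook's figures may round differently):

```python
def fp_add(f1, e1, f2, e2, bits=3):
    """Add two unsigned base-2 floating point numbers with `bits`
    significant fraction bits. Fractions are integers scaled by
    2**-bits (e.g. f = 0b101 means .101). Truncation is used
    throughout, to keep the illustration simple."""
    # Step 1: align the smaller exponent to the larger one.
    if e1 < e2:
        f1, e1, f2, e2 = f2, e2, f1, e1
    f2 >>= (e1 - e2)          # shifting right discards low bits
    # Step 2: add the aligned fractions.
    f, e = f1 + f2, e1
    # Step 3: normalize so the fraction is again .1xx * 2^e.
    while f >= (1 << bits):
        f >>= 1               # another bit can be lost here
        e += 1
    return f, e

# .101 * 2^3 + .111 * 2^4 with three significant bits:
f, e = fp_add(0b101, 3, 0b111, 4)
value = f / 2**3 * 2**e
print(f"{f:03b} * 2^{e} = {value}")   # .100 * 2^5 = 16.0
```

The exact sum is 5 + 14 = 19, but with only three significant bits the aligned addend loses a bit and the normalization step loses another, leaving .100 × 2^5 = 16.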
Computer-based division of binary integers can be handled similarly to the way that binary integer multiplication is carried out, but with the complication that the only way to tell whether the dividend does not “fit” is to actually do the subtraction and test whether the remainder is negative. If the remainder is negative, then the subtraction must be “backed out” by adding the divisor back in, as described below.

3.3.1 UNSIGNED MULTIPLICATION

Multiplication of unsigned binary integers is handled similarly to the way it is carried out by hand for decimal numbers. Figure 3-10 illustrates the multiplication process for two unsigned binary integers. Each bit of the multiplier determines whether or not the multiplicand, shifted left according to the position of the multiplier bit, is added into the product. When two unsigned n-bit numbers are multiplied, the result can be as large as 2n bits. For the example shown in Figure 3-10, the multiplication of two four-bit operands results in an eight-bit product.

3.3 Fixed Point Multiplication and Division

Multiplication and division of fixed point numbers can be accomplished with addition, subtraction, and shift operations. The sections that follow describe methods for performing multiplication and division of fixed point numbers in both unsigned and signed forms using these basic operations. We will first cover unsigned multiplication and division, and then we will cover signed multiplication and division.

3.2.3 ONE’S COMPLEMENT ADDITION AND SUBTRACTION

Although it is not heavily used in mainstream computing anymore, the one’s complement representation was used in early computers. One’s complement addition is handled somewhat differently from two’s complement addition: the carry out of the leftmost position is not discarded, but is added back into the least significant position of the integer portion, as shown in Figure 3-7. This is known as an end-around carry.

Up until now we have focused on algorithms for addition and subtraction.
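The shift-and-add procedure of 3.3.1 above translates almost directly into code. A minimal sketch (mine, not from the text):

```python
def multiply_unsigned(multiplicand, multiplier, n=4):
    """Shift-and-add multiplication of two n-bit unsigned integers.
    Each multiplier bit decides whether the multiplicand, shifted left
    by that bit's position, is added into the product; the result
    fits in 2n bits."""
    product = 0
    for i in range(n):
        if (multiplier >> i) & 1:
            product += multiplicand << i
    return product & ((1 << 2 * n) - 1)

# Two four-bit operands give an eight-bit product, as in Figure 3-10.
print(multiply_unsigned(0b1101, 0b1011))  # 13 * 11 = 143 = 0b10001111
```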
Now we will take a look at implementations of simple adders and subtractors.

Ripple-Carry Addition and Ripple-Borrow Subtraction

In Appendix A, a design of a four-bit ripple-carry adder is explored. The adder is modeled after the way that we normally perform decimal addition by hand, by summing digits in one column at a time while moving from right to left. In this section, we review the ripple-carry adder, and then take a look at a ripple-borrow subtractor. We then combine the two into a single addition/subtraction unit.

3.2.1 TWO’S COMPLEMENT ADDITION AND SUBTRACTION

In this section, we look at the addition of signed two’s complement numbers. As we explore the addition of signed numbers, we also implicitly cover subtraction as well, as a result of the arithmetic identity a − b = a + (−b). We can negate a number by complementing it (and adding 1, for two’s complement), and so we can perform subtraction by complementing and adding. This results in a savings of hardware because it avoids the need for a hardware subtractor. We will cover this topic in more detail later. We will need to modify the interpretation that we place on the results of addition when we add two’s complement
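The complement-and-add identity a − b = a + (−b) from 3.2.1 can be sketched at a fixed word size; the helper names below are my own:

```python
WORD = 8                      # word size in bits
MASK = (1 << WORD) - 1

def twos_complement_sub(a, b):
    """Compute a - b by complementing b and adding 1, discarding the
    carry out of the word, as a two's complement ALU would."""
    neg_b = (~b + 1) & MASK   # two's complement negation of b
    return (a + neg_b) & MASK

def to_signed(x):
    """Reinterpret an unsigned word as a signed two's complement value."""
    return x - (1 << WORD) if x & (1 << (WORD - 1)) else x

print(to_signed(twos_complement_sub(5, 7)))     # -2
print(to_signed(twos_complement_sub(100, 36)))  # 64
```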
Probability of an independent event

March 7th 2013, 06:35 PM
Probability of an independent event
Each day, Mr. Samms randomly chooses 2 students from his class to serve as helpers. If there are 15 boys and 10 girls in the class, what is the probability that Mr. Samms will choose 2 girls to be helpers?

March 7th 2013, 06:46 PM
Re: Probability of an independent event
First, this is "sampling without replacement" (he can't choose the same person twice), so the events are NOT "independent". The probability that he chooses two girls is equal to the probability that the first student chosen is a girl, multiplied by the probability that the second student chosen is a girl given that the first student chosen was a girl. Initially, there are 25 students, 10 of whom are girls. Assuming all students are equally likely to be chosen, what is the probability that the first student chosen is a girl? Now how many students are left to choose from? How many of them are girls?
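Carrying the hints through to the end (the numbers below are my own working, not stated in the thread): the first factor is 10/25 and the second is 9/24, and their product agrees with a direct counting argument over equally likely pairs:

```python
from fractions import Fraction
from math import comb

# P(first is a girl) * P(second is a girl | first was a girl)
sequential = Fraction(10, 25) * Fraction(9, 24)

# Equivalently, count the ways to choose 2 girls out of the C(25, 2)
# equally likely pairs of helpers.
counting = Fraction(comb(10, 2), comb(25, 2))

print(sequential, counting)  # both are 3/20
```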
Posts by Mark McClure—Wolfram|Alpha Blog
Blog Posts from this author:

Some common questions from the many student users of Wolfram|Alpha include “Isn’t cbrt(-8) = -2?” and “Why doesn’t the plot of the cube root include the negative part?” The answers are that -2 is just one of the three cube roots of -8, and that Mathematica, the computational engine of Wolfram|Alpha, has always chosen the principal root, which is complex valued. More generally, odd roots of negative numbers are typically assumed to be complex. You can see this in the output of (-8)^(1/3). More »

Wolfram|Alpha has been steadily growing since its initial release nearly three years ago, and this growth is directed, in part, by the queries it receives. For example, the Wolfram Education Portal was created largely in response to the obvious demand for Wolfram|Alpha in the classroom. As a more specific example, we’ve recently enabled Wolfram|Alpha to respond to domain and range queries for real functions. The domain of a real function is the set of real numbers that can be plugged in so that the function returns a real value. If, for example, we wish to evaluate f(x) = √(x + 2) / (x – 1), then we should ensure that x + 2 ≥ 0 and x – 1 ≠ 0.

Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more.

Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies…

Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes!

Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon?

Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step.
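Python happens to make the same choice described in the first post above: raising a negative number to a fractional power yields the principal (complex) root, not the real one. A quick illustration (the variable names are mine):

```python
import cmath

z = (-8) ** (1 / 3)          # principal cube root, as in Wolfram|Alpha
print(z)                     # approximately 1 + 1.732j, not -2

# All three cube roots of -8 lie on a circle of radius 2 in the
# complex plane, at angles pi/3, pi, and 5*pi/3.
roots = [2 * cmath.exp(1j * (cmath.pi + 2 * cmath.pi * k) / 3) for k in range(3)]

# The real root -2 is among them: it is the one with (essentially) zero
# imaginary part.
real_root = min(roots, key=lambda w: abs(w.imag))
print(real_root)             # approximately -2
```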
Minimal classical logic and control operators
Results 1 - 10 of 17

- Journal of Logic and Computation, 2004. Cited by 23 (1 self).
We present a formulae-as-types interpretation of Subtractive Logic (i.e. bi-intuitionistic logic). This presentation is two-fold: we first define a very natural restriction of the λµ-calculus which is closed under reduction and whose type system is a constructive restriction of Classical Natural Deduction. Then we extend this deduction system conservatively to Subtractive Logic. From a computational standpoint, the resulting calculus provides a type system for first-class coroutines (a restricted form of first-class continuations). Keywords: Curry-Howard isomorphism, Subtractive Logic, control operators, coroutines.

- Mathematical Structures of Computer Science, 2008. Cited by 16 (16 self).
X is an untyped continuation-style formal language with a typed subset which provides a Curry-Howard isomorphism for a sequent calculus for implicative classical logic. X can also be viewed as a language for describing nets by composition of basic components connected by wires. These features make X an expressive platform on which algebraic objects and many different (applicative) programming paradigms can be mapped.
In this paper we will present the syntax and reduction rules for X and in order to demonstrate the expressive power of X, we will show how elaborate calculi can be embedded, like the λ-calculus, Bloo and Rose’s calculus of explicit substitutions λx, Parigot’s λµ and Curien and Herbelin’s λµ ˜µ. "... Abstract. We study the π-calculus, enriched with pairing and non-blocking input, and define a notion of type assignment that uses the type constructor →. We encode the circuits of the calculus X into this variant of π, and show that all reduction (cut-elimination) and assignable types are preserved. ..." Cited by 12 (12 self) Add to MetaCart Abstract. We study the π-calculus, enriched with pairing and non-blocking input, and define a notion of type assignment that uses the type constructor →. We encode the circuits of the calculus X into this variant of π, and show that all reduction (cut-elimination) and assignable types are preserved. Since X enjoys the Curry-Howard isomorphism for Gentzen’s calculus LK, this implies that all proofs in LK have a representation in π. - In Rewriting Technics and Application, RTA’05, volume 3467 of LNCS , 2005 "... Abstract. We consider the relation of the dual calculus of Wadler (2003) to the λµ-calculus of Parigot (1992). We give translations from the λµ-calculus into the dual calculus and back again. The translations form an equational correspondence as defined by Sabry and Felleisen (1993). In particular, ..." Cited by 11 (0 self) Add to MetaCart Abstract. We consider the relation of the dual calculus of Wadler (2003) to the λµ-calculus of Parigot (1992). We give translations from the λµ-calculus into the dual calculus and back again. The translations form an equational correspondence as defined by Sabry and Felleisen (1993). In particular, translating from λµ to dual and then ‘reloading ’ from dual back into λµ yields a term equal to the original term. 
Composing the translations with duality on the dual calculus yields an involutive notion of duality on the λµ-calculus. A previous notion of duality on the λµcalculus has been suggested by Selinger (2001), but it is not involutive. Note This paper uses color to clarify the relation of types and terms, and of source and target calculi. If the URL below is not in blue please download the color version from - Logical Methods in Computer Science "... www.lmcs-online.org ..." - In Proc. Functional and Logic Programming, Springer Lecture Notes in Comput. Sci , 2004 "... Abstract. We propose a semantic framework for modelling the linear usage of continuations in typed call-by-name programming languages. On the semantic side, we introduce a construction for categories of linear continuations, which gives rise to cartesian closed categories with “linear classical disj ..." Cited by 6 (4 self) Add to MetaCart Abstract. We propose a semantic framework for modelling the linear usage of continuations in typed call-by-name programming languages. On the semantic side, we introduce a construction for categories of linear continuations, which gives rise to cartesian closed categories with “linear classical disjunctions ” from models of intuitionistic linear logic with sums. On the syntactic side, we give a simply typed call-by-name λµcalculus in which the use of names (continuation variables) is restricted to be linear. Its semantic interpretation into a category of linear continuations then amounts to the call-by-name continuation-passing style (CPS) transformation into a linear lambda calculus with sum types. We show that our calculus is sound for this CPS semantics, hence for models given by the categories of linear continuations. "... Abstract. We study call-by-need from the point of view of the duality between call-by-name and call-by-value. We develop sequent-calculus style versions of call-by-need both in the minimal and classical case. 
As a result, we obtain a natural extension of call-by-need with control operators. This lea ..." Cited by 4 (4 self) Add to MetaCart Abstract. We study call-by-need from the point of view of the duality between call-by-name and call-by-value. We develop sequent-calculus style versions of call-by-need both in the minimal and classical case. As a result, we obtain a natural extension of call-by-need with control operators. This leads us to introduce a call-by-need λµ-calculus. Finally, by using the dualities principles of λµ˜µ-calculus, we show the existence of a new call-by-need calculus, which is distinct from call-by-name, call-byvalue and usual call-by-need theories. 1 - Lingua , 1992 "... Abstract. This paper revisits the results of Barendregt and Ghilezan [3] and generalizes them for classical logic. Instead of λ-calculus, we use here λµ-calculus as the basic term calculus. We consider two extensionally equivalent type assignment systems for λµ-calculus, one corresponding to classic ..." Cited by 1 (1 self) Add to MetaCart Abstract. This paper revisits the results of Barendregt and Ghilezan [3] and generalizes them for classical logic. Instead of λ-calculus, we use here λµ-calculus as the basic term calculus. We consider two extensionally equivalent type assignment systems for λµ-calculus, one corresponding to classical natural deduction, and the other to classical sequent calculus. Their relations and normalisation properties are investigated. As a consequence a short proof of Cut elimination theorem is obtained. "... The work presented here is an extension of a previous work realised jointly with Pierre-Louis Curien [CH00]. The current work focuses on the pure calculus of variables and binders that operates at the core of the duality between call-by-name and call-by-value evaluations. A Curry-Howard-de Bruijn co ..." Cited by 1 (0 self) Add to MetaCart The work presented here is an extension of a previous work realised jointly with Pierre-Louis Curien [CH00]. 
The current work focuses on the pure calculus of variables and binders that operates at the core of the duality between call-by-name and call-by-value evaluations. A Curry-Howard-de Bruijn correspondence is given that shed light on some aspects of Gentzen’s sequent calculus. This includes a sequent-free presentation of it.
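Several of the abstracts above turn on continuation-passing style (CPS), in which control flow is made explicit by passing each function the "rest of the computation" as an extra argument. As an informal illustration only (the papers work in typed λµ-calculi and sequent calculi, not in Python), here is a minimal CPS sketch; all function names are invented for this example.

```python
# Minimal sketch of continuation-passing style (CPS): instead of returning,
# each function hands its result to an explicit continuation k.
# These function names are hypothetical, invented for this illustration.

def square_cps(x, k):
    return k(x * x)

def add_cps(x, y, k):
    return k(x + y)

def sum_of_squares_cps(x, y, k):
    # The nesting of continuations makes the evaluation order explicit,
    # which is what the CPS transformations discussed above exploit.
    return square_cps(x, lambda x2:
           square_cps(y, lambda y2:
           add_cps(x2, y2, k)))

# Passing the identity continuation recovers the direct-style result.
print(sum_of_squares_cps(3, 4, lambda r: r))  # prints 25
```

In a language with first-class continuations, control operators (the subject of the listing above) let a program capture and invoke `k` itself; the Python sketch only shows the plumbing that such operators manipulate.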
Imagine a parking lot with 999 cars with license plates [#permalink] 12 Aug 2003, 06:45

AkamaiBrah (GMAT Instructor) wrote:
Imagine a parking lot with 999 cars with license plates numbered from 001 to 999 and no two cars having the same license plate number. At 5pm, they all leave the lot one by one. What is the probability that the license plate numbers of the first four cars to leave are in increasing order of magnitude?

Re: Challenge: Increasing license plates [#permalink] 12 Aug 2003, 06:58

mciatto wrote:
The number of the first car can be anywhere, but the AVERAGE of all of the places it could be is right in the middle, so the average chance of the second car being greater than this value is 1/2. Now the third car has to be above the second, and the chance of this is 1/4. And finally, the fourth must be still above that, so 1/8. 1/2 * 1/4 * 1/8 = 1/64.

kpadma wrote:
Is it 999C4 * (1/2)^4 ?

mciatto wrote:
Kpadma, your answer equals way over 1, with 1 being a certain probability. I don't think that passes a sanity check.

kpadma wrote:
Here I come charging again: P = (997)^2 / (8 * 999 * 998) = (approx) 1/8. I think I may be close to the answer, but not sure it is the correct answer.

AkamaiBrah wrote:
Sounds like Yogi Berra.

License plate solution [#permalink] 13 Aug 2003, 07:14

AkamaiBrah wrote:
Any four cars have an equal chance of leaving the lot first, so we can concentrate on just one specific bunch of four cars. (Whether there are 999 or just 4 cars in the lot is irrelevant.) For a given set of four cars, they can leave the lot in 4! or 24 ways, in only one of which the license plate numbers will be in increasing order. Hence, the answer is 1/24.

SVP wrote:
A nice trick! Remember a question with six letters to be distributed among six envelopes? What is the probability of having them all distributed correctly? 1/6! Again, there is only one right case.
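The symmetry argument in the thread (only the relative order of the first four cars matters, so the answer is 1/4! = 1/24) is easy to check numerically. The sketch below is illustrative only; the plate numbers in it are made up, and it both enumerates all orderings of one set of four plates and runs a quick Monte Carlo with the full lot of 999 to confirm the lot size is irrelevant.

```python
import random
from itertools import permutations

# Exact check: among the 4! = 24 orderings of any four distinct plates,
# exactly one is increasing, exactly as the posted solution argues.
plates = (17, 203, 560, 941)  # hypothetical plate numbers; any four work
orderings = list(permutations(plates))
n_increasing = sum(1 for p in orderings if list(p) == sorted(p))
print(n_increasing, "out of", len(orderings))  # prints: 1 out of 24

# Monte Carlo with the full lot: the first four cars to leave are a
# uniformly random ordered sample of four distinct plates out of 999.
rng = random.Random(0)
trials = 100_000
hits = 0
for _ in range(trials):
    first_four = rng.sample(range(1, 1000), 4)
    if first_four == sorted(first_four):
        hits += 1
print(hits / trials)  # close to 1/24, about 0.0417
```

The Monte Carlo estimate lands near 1/24 regardless of how many cars are in the lot, which is the point of the "any four cars" argument.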
An Approach
The Lessons
1. Continuous versus discrete. The definition of a "continuous" quantity.
2. Limits. A sequence of rational numbers. The definition of the limit of a variable. The limit of a function. Theorems on limits. Limits of polynomials.
3. Continuous functions. The definition of "a function is continuous at a value of x."
4. Infinity (∞). The definition of "a variable becomes infinite." Limits of rational functions.
5. The derivative. The slope of a tangent line to a curve. The difference quotient and the definition of the derivative. Notations for the derivative. The equation of a tangent to a curve.
6. Rules for derivatives. The derivative of a constant. The derivative of y = x. The product rule. The power rule. The derivative of the square root function.
7. The chain rule. The derivative of a function of a function.
8. More rules for derivatives. The quotient rule. Implicit differentiation. The derivative of inverse functions.
9. Instantaneous velocity and related rates.
10. Maximum and minimum values. The turning points of a graph. Critical values.
11. Applications of maximum and minimum values.
12. Derivatives of trigonometric functions.
13. Derivatives of inverse trigonometric functions.
14. Derivatives of exponential and logarithmic functions. The system of natural logarithms. The general power rule.
15. Evaluating e.
Appendix. The mathematical existence of numbers. Is there an arithmetical continuum? Is a line really composed of points?
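Lesson 15, "Evaluating e", rests on the limit e = lim (1 + 1/n)^n as n grows without bound. A quick numerical sketch (not part of the course page itself) shows the convergence:

```python
import math

# e as the limit of the compound-interest expression (1 + 1/n)^n.
for n in (1, 2, 10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9}: (1 + 1/n)^n = {(1 + 1/n) ** n:.9f}")

print(f"math.e        = {math.e:.9f}")
# The gap behaves roughly like e/(2n), so n = 1,000,000 already agrees
# with math.e to about six decimal places.
```

For very large n this direct formula eventually loses floating-point accuracy (1 + 1/n rounds to 1), which is why numerical libraries compute e by other means.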
[SciPy-dev] sparse array indexing.
Viral Shah vshah@interactivesupercomputing....
Mon Apr 14 13:45:15 CDT 2008

Nathan, and others,

A few more questions about sparse array manipulation. Also, is there a sparse document in the works other than the one at http://www.scipy.org/SciPy_Tutorial ? I would be more than happy to put all that I am finding out in a document, but if one is in the works, I can edit that instead.

1. I notice that A[I, J] behaves differently than in Matlab. This is documented. My question is: if one wants Matlab-like behaviour, is there a better way to do this than A[I, :], A[:, J] ?

2. Do the sparse data structures support deletion? I got the svn, and it seems that the delete function that works on numpy arrays does not work on sparse arrays yet. Until I figure this out, I am doing deletion with:

    from numpy import arange, delete

    def deleterowcol(self, A, delrow, delcol):
        A = A.tocsc()
        m = A.shape[0]
        n = A.shape[1]
        keep = delete(arange(0, m), delrow)
        A = A[keep, :]
        keep = delete(arange(0, n), delcol)
        A = A[:, keep]
        return A

3. How do I get the rows and columns as vectors out of the coo format, so that I can do the equivalent of find() in MATLAB?

4. I forwarded the SuperLU bug to the upstream author, Sherry Li. Perhaps she will fix it in the next release. In general, I believe that Sherry does not spend much time maintaining the sequential SuperLU, but Tim Davis does support UMFPACK more actively. UMFPACK, however, does not have single precision, so the choice of default sparse direct solver is not as obvious. I would vote in favour of UMFPACK being the default solver for double precision, if it is possible to put it in the scipy tree.

More information about the Scipy-dev mailing list
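For question 3, the coo ("coordinate") format exposes the row and column index vectors directly as attributes, and scipy.sparse also ships a find() analogue. A minimal sketch, assuming a scipy install; the matrix values here are made up for illustration:

```python
import numpy as np
from scipy import sparse

A = sparse.csr_matrix(np.array([[0, 2, 0],
                                [3, 0, 4]]))

# The coo format stores the nonzeros as three parallel vectors:
# row indices, column indices, and values.
coo = A.tocoo()
print(coo.row, coo.col, coo.data)

# scipy.sparse.find is the closest analogue of MATLAB's find():
# it returns the (row, col, value) triples of the nonzero entries.
I, J, V = sparse.find(A)
print(I, J, V)
```

Note that the two calls may report the nonzeros in different orders (coo conversion from csr is row-major), so code should not rely on a particular ordering of the triples.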
Past 2001 Seminars 2:30 p.m., Thursday, January 4, 2001 Algebraic Geometry Seminar Zinovy Reichstein, Department of Mathematics, Oregon State University ``Recent results on G-torsors'' WMAX 216 3:30 p.m., Friday, January 5, 2001 Mathematics Colloquium Zinovy Reichstein, Department of Mathematics, Oregon State University ``Simplifying polynomials by Tschirnhaus transformations'' Math 100 3:30 p.m., Monday, January 8, 2001 Mathematics Colloquium Bert Wiest, PDF, PIMS and Department of Mathematics, UBC ``Orderable groups in topology'' Math 100 1:00 p.m., Tuesday, January 9, 2001 IAM Colloquium Boye K. Ahlborn, Department of Physics, UBC ``How Big and How Small Can Active Animals Be? Thermodynamic Limits of the Body Dimensions of Warm Blooded Animals'' LSK Bldg. Room 301 4:30 p.m., Thursday, January 11, 2001 PIMS-MITACS Mathematical Finance Seminar A. Lazrak, USC and U. d'Evry ``Incomplete Information with Recursive Preferences'' WMAX 216 3:30 p.m., Friday, January 12, 2001 Mathematics Colloquium Professor Jiaping Wang, University of Minnesota ``Harmonic functions and topology of complete manifolds'' Math 100 3:30 p.m., Monday, January 15, 2001 Mathematics Colloquium Sergey Gavrilets, University of Tennessee ``Evolutionary dynamics on holey adaptive landscapes'' Math 100 10:30 a.m., Tuesday, January 16, 2001 Special Math Biology Seminar Sergey Gavrilets, University of Tennessee ``Evolution of female mate choice by sexual conflict'' Math Annex 1102 1:00 p.m., Tuesday, January 16, 2001 IAM-PIMS Distinguished Colloquium David Baillie, Department of Molecular Biology and Biochemistry, SFU ``Comparative Genomics'' LSK Bldg. Room 301 3:30 p.m., Wednesday, January 17, 2001 Algebraic Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``Differential Graded Schemes II'' WMAX 216 4:30 p.m., Wednesday, January 17, 2001 Student Seminar on topics in algebraic geometry Organizational Meeting.
All interested students are invited to attend. WMAX 216 3:30 p.m., Friday, January 19, 2001 Mathematics Colloquium Greg Martin, University of Toronto ``The Distribution of Primes: What we know, and how'' Math 100 1:30 p.m., Monday, January 22, 2001 Special Number Theory Seminar Greg Martin, University of Toronto ``Biases in the Shanks-Renyi Prime Number Race'' Math Annex 1102 1:00 p.m., Tuesday, January 23, 2001 IAM Colloquium Fred Brauer, Department of Mathematics, UBC ``Disease Transmission Models with Recoveries and Disease Fatalities'' LSK Bldg. Room 301 3:30 p.m., Wednesday, January 24, 2001 Algebraic Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``Differential Graded Schemes III'' WMAX 216 4:30 p.m., Wednesday, January 24, 2001 Student Seminar on Algebraic Geometry Behrang Noohi, Department of Mathematics, UBC ``Serre Duality'' WMAX 216 2:30 p.m., Thursday, January 25, 2001 Special Geometry Seminar Ping Xu, Pennsylvania State University ``Quantum Groupoids'' Math Annex 1118 3:30 p.m., Friday, January 26, 2001 Mathematics Colloquium Ping Xu, Pennsylvania State University ``Poisson geometry and some applications'' Math 100 1:00 p.m., Tuesday, January 30, 2001 IAM Colloquium Matthew W. Choptuik, Department of Physics and Astronomy, UBC ``Critical Phenomena in Gravitational Collapse'' LSK Bldg. Room 301 4:30 p.m., Wednesday, January 31, 2001 Student Seminar on Algebraic Geometry Behrang Noohi, Department of Mathematics, UBC ``Serre Duality, II'' WMAX 216 2:30 p.m., Thursday, February 1, 2001 Algebra/Topology Seminar Bert Wiest, PIMS PDF anbd Department of Mathematics, UBC ``Configuration spaces of graphs'' WMAX 216 (PIMS) 3:30 p.m., Friday, February 2, 2001 Mathematics Colloquium V.S. Sunder, The Institute of Mathematical Sciences, Chennai, India (currently visiting MSRI, Berkeley) ``Principal Graphs of Subfactors'' Math 100 1:00 p.m., Tuesday, February 6, 2001 IAM Colloquium Gerda de Vries, Department of Mathematical Sciences, Univ. 
of Alberta ``From Spikers to Bursters via Coupling: Effects of Noise and Heterogeneity'' LSK Bldg. Room 301 2:30 p.m., Tuesday, February 6, 2001 MITACS/Math Biology Seminar Nima Geffen, Tel-Aviv University ``A Simple Geometric Model for a Simple Unicellular Organism'' WMAX 216 (PIMS) 4:30 p.m., Wednesday, February 7, 2001 Student Seminar on Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces'' WMAX 216 (PIMS) 12:30 p.m., Thursday, February 8, 2001 Graduate Student Seminar Patrick Ingram, Department of Mathematics, UBC ``Random graphs'' Location: TBA 2:30 p.m., Thursday, February 8, 2001 Algebra/Topology Seminar Denis Sjerve, Department of Mathematics, UBC ``Equations for group actions on Riemann surfaces'' WMAX 216 4:30 p.m., Thursday, February 8, 2001 PIMS-MITACS Mathematical Finance Seminar Tan Wang, UBC Finance ``Model Misspecification and Under-Diversification'' WMAX 216 3:30 p.m., Friday, February 9, 2001 Mathematics Colloquium Dirk Hundertmark, Caltech ``An optimal L^p-bound on the Krein spectral shift function'' Math 100 1:00 p.m., Tuesday, February 13, 2001 IAM Colloquium David Kirkpatrick, Department of Computer Science, UBC ``Optimal Motion of a Ladder in the Presence of Polygonal Obstacles: Characterization, Complexity and Construction'' LSK Bldg. Room 301 3:30 p.m., Wednesday, February 14, 2001 Probability Seminar THIS SEMINAR IS POSTPONED UNTIL FEB 28, (WED). Akira Sakai, Department of Mathematics, UBC ``Mean-field critical behavior for contact processes'' Math Annex 1118 3:30 p.m., Wednesday, February 14, 2001 Special Mathematics Colloquium Jim Bryan*, Tulane University, New Orleans, LA ``The enumerative geometry of K3 surfaces and modular forms'' WMAX 216 (PIMS Seminar Room, 1933 West Mall) *Jim Bryan is a candidate for a position in the Department. 
4:30 p.m., Wednesday, February 14, 2001 Algebraic Geometry Seminar Jim Bryan, Tulane University ``BPS States and Gromov-Witten Invariants'' WMAX 216 2:30 p.m., Thursday, February 15, 2001 Algebra/Topology Seminar Denis Sjerve, Department of Mathematics, UBC ``Equations for group actions on Riemann surfaces, II'' Math Annex 1102 *note change of room location 3:30 p.m., Friday, February 16, 2001 Mathematics Colloquium Antonin Novotny, University of Toulon, France ``Existence, uniqueness and qualitative properties of solutions to Navier-Stokes equations for compressible fluids'' Math 100 1:00 p.m., Tuesday, February 27, 2001 IAM Colloquium Gordon Slade, Department of Mathematics, UBC ``Statistical Mechanics and Super-Brownian Motion'' Room 301, LSK Bldg. 2:30 p.m., Tuesday, February 27, 2001 MITACS/Math Biology Seminar Amy Norris, Department of Mathematics, UBC ``Survey of methods for studying large scale gene expression data'' WMAX Room 216, (PIMS Seminar Room) 3:30 p.m., Wednesday, February 28, 2001 Probability Seminar Akira Sakai, Department of Mathematics, UBC ``Mean-field critical behavior for contact processes'' Math Annex 1118 3:30 p.m., Wednesday, February 28, 2001 Algebraic Geometry Seminar Stefan Kebekus, Bayreuth University ``Families of singular rational curves'' WMAX 216 4:30 p.m., Wednesday, February 28, 2001 Student Seminar on Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces'' WMAX 216 4:30 p.m., Thursday, March 1, 2001 PIMS-MITACS Mathematical Finance Seminar Simon MacNair, Department of Mathematics, UBC ``Delta Hedging and Survival Probabilities in Markets with Frictions'' WMAX 216 3:30 p.m., Friday, March 2, 2001 Mathematics Colloquium Brian Wetton, Department of Mathematics, UBC ``The MITACS/Ballard Collaborative Project: Rivulets and Condensation Front Modelling'' Math 100 3:30 p.m., Monday, March 5, 2001 Algebraic Geometry Seminar Stefan Kebekus, Bayreuth University ``Contact Manifolds'' Math 105 1:00 p.m., Tuesday, 
March 6, 2001 IAM-PIMS Distinguished Colloquium Gunther Uhlmann, Department of Mathematics, University of Washington, Seattle, WA ``The Mathematics of Reflection Seismology'' Room 301, LSK Bldg. 2:30 p.m., Thursday, March 8, 2001 Algebra-Topology Seminar Mark MacLean, Department of Mathematics and Science One, UBC ``Asymptotic homotopy'' WMAX 216 3:30 p.m., Friday, March 9, 2001 Mathematics Colloquium Professor Dilip B. Madan, Robert H. Smith School of Business, University of Maryland ``Levy Processes in Financial Modeling'' Math 100 1:00 p.m., Tuesday, March 13, 2001 Institute of Applied Mathematics Colloquium David Hargreaves, MacDonald Detwiller and Associates ``A Tour of the Mathematics in Space-Based Earth Observation'' Room 301, LSK Bldg. 2:30 p.m., Tuesday, March 13, 2001 Math Biology Seminar Yue-Xian Li, Department of Mathematics, UBC ``Paradoxical Role of Ca2+-activated K+(BK) Channels in Controlling the Firing Patterns of Anterior Pituitary Cells -- A Modelling Study'' WMAX 216 (PIMS Seminar Room) 3:30 p.m., Wednesday, March 14, 2001 Algebraic Geometry Seminar Jim Carrell, Department of Mathematics, UBC ``Singular loci of Schubert varieties'' WMAX 216 4:30 p.m., Wednesday, March 14, 2001 Student Seminar in Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces, II'' WMAX 216 2:30 p.m., Thursday, March 15, 2001 Algebra-Topology Seminar Mark MacLean, Department of Mathematics and Science One, UBC ``Asymptotic homotopy, II'' WMAX 216 1:00 p.m., Tuesday, March 20, 2001 Institute of Applied Mathematics Colloquium Konstantin Kabin, Department of Chemistry, UBC ``Shocks in Magnetohydrodynamics'' Room 301, LSK Bldg. 2:30 p.m., Tuesday, March 20, 2001 MITACS/Math Biology Seminar Leah Edelstein-Keshet, Department of Mathematics, UBC ``Spatial Regulation of Actin Dynamics in Cell Motion'' WMAX 216 (PIMS Seminar Room) 3:30 p.m., Wednesday, March 21, 2001 Algebraic Geometry Seminar Sandor Kovacs, Univ. 
of Washington ``Boundedness and hyperbolicity for families of varieties of general type'' WMAX 216 4:30 p.m., Wednesday, March 21, 2001 Student Seminar in Algebraic Geometry Boris Tschirschwitz, Department of Mathematics, UBC ``The Mukai Vector of Sheaves on K3 Surfaces'' Location: Wreck Beach 2:30 p.m., Thursday, March 22, 2001 Algebra-Topology Seminar Catherine Webster, Department of Mathematics, UBC ``Braid groups and cryptography'' WMAX 216 4:30 p.m., Thursday, March 22, 2001 PIMS-MITACS Mathematical Finance Seminar Alan King, IBM Research Division ``A Contingent Claims Approach to Setting the Franchise Fee for Capacity Constrained, Quantity-Flexible Supply Contracts'' WMAX 216 1:30 p.m., Friday, March 23, 2001 Mathematical Physics Seminar Christian Borgs, Microsoft Research ``Complex Zeros of Partition Functions: A Generalized Lee-Yang Theorem'' MATX 1102 3:30 p.m., Friday, March 23, 2001 Mathematics Colloquium Jennifer Chayes, Microsoft Research ``Phase Transitions in Computer Science'' Math 100 1:00 p.m., Tuesday, March 27, 2001 IAM-PIMS Distinguished Colloquium Speaker Bengt Fornberg, Department of Applied Mathematics, University of Colorado ``T Radial Basis Functions - A Future Way to Solve PDEs to Spectral Accuracy on Irregular Multidimensional Domains?'' Room 301, LSK Bldg. 2:30 p.m., Tuesday, March 27, 2001 MITACS/Math Biology Seminar Magdalena Luca and Alexandra Chavez-Ross, Department of Mathematics, UBC ``Application of Chemotaxis Models to Alzheimer's Disease'' WMAX 216 (PIMS Seminar Room) 4:30 p.m., Thursday, March 29, 2001 PIMS-MITACS Mathematical Finance Seminar Robert Jones, SFU ``Valuing Revolving Lines of Credit under Jump-Diffusion Credit Quality'' WMAX 216 3:30 p.m., Wednesday, April 4, 2001 Algebraic Geometry Seminar Jim Carrell, Department of Mathematics, UBC ``Singularities of Schubert Varieties, II'' WMAX 216 10:30 a.m., Monday, April 9, 2001 Special Algebraic Geometry Seminar S.T. 
Yau, UIC ``Hyperplane Arrangements in P^2'' WMAX 216 (PIMS Seminar Room) Refreshments will be served at 10:15 a.m. 2:30 p.m., Tuesday, April 10, 2001 MITACS/Mathbiology Seminar Adriana Dawes, IAM & Department of Mathematics, UBC ``Estrogen biosynthesis: a modelling approach'' WMAX 216 3:30 p.m., Thursday, April 12, 2001 Special PIMS Colloquium David Eisenbud, Director, Mathematical Science Research Institute (Berkeley) ``Chow Forms and Resultants -- old and new'' WMAX 216 A reception will follow the seminar in the PIMS lounge. 3:00 p.m., Wednesday, September 5, 2001 Probability Seminar Professor Y. Ogura, Saga University, Japan ``On a completion of a class of one-dimensional diffusion processes'' Math Annex 1102 2:00 p.m., Thursday, September 6, 2001 Math Biology Seminar Marek Labecki, IAM PDF, UBC ``Protein transport in hollow-fibre bioreactors for mammalian cell culture'' West Mall Annex Room 216 (second floor) 4:30 p.m., Thursday, September 6, 2001 PIMS-MITACS Mathematical Finance Seminar Ali Lazrak, U. d'Evry and UBC ``Incomplete Information with Recursive Preferences'' West Mall Annex, Room 216 (second floor) 12:30 p.m., Tuesday, September 11, 2001 Algebra/Topology Seminar Laura Scull, Department of Mathematics, UBC ``Equivariant homotopy theory'' West Mall Annex, PIMS Seminar Room 216 (second floor) Coffee and refreshments will be available preceding the seminar. Feel free to bring a bag lunch, if you like. Grad students are encouraged to attend. 3:00 p.m., Wednesday, September 12, 2001 Probability Seminar Alexander E. 
Holroyd, UCLA ``How to find an extra head: optimal random shifts of Bernoulli and Poisson random fields'' Math Annex 1102 3:00 p.m., Wednesday, September 12, 2001 Geometry/PDE Seminar Jiguang Bao, PIMS ``Liouville and regularity properties of a Hessian equation'' West Mall Annex, PIMS Room 216 4:00 p.m., Wednesday, September 12, 2001 Algebraic/Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``C^*-equivariant vector fields and cohomology algebras stable map spaces'' West Mall Annex, PIMS Room 216 1:00 p.m., Thursday, September 13, 2001 Cancelled Special Seminar Stephanie van Willigenburg, Cornell Univ. ``Pieri Operators and Eulerian Enumeration'' Math Annex 1101 2:00 p.m., Thursday, September 13, 2001 Math Biology Seminar Nima Geffen, Tel-Aviv Univ. ``Line and point singularities for sources in two and three dimensions'' West Mall Annex, PIMS Room 216 3:30 p.m., Thursday, September 13, 2001 Number Theory Seminar David Boyd, Department of Mathematics, UBC ``Mahler measure and unusual models for elliptic curves'' Math Annex 1102 3:00 p.m., Friday, September 14, 2001 Mathematics Colloquium Stephanie van Willigenburg, Cornell Univ. ``The algebra of card shuffling'' Math 100 3:00 p.m., Monday, September 17, 2001 Cancelled Institute of Applied Mathematics Colloquium Herschel Rabitz, Department of Chemistry, Princeton University ``High Dimensional Model Representations with Applications in the Chemical/Physical Sciences'' LSK Bldg. 
Room 301
12:30 p.m., Tuesday, September 18, 2001 Algebra/Topology Seminar Sadok Kallel, University of Lille ``On the topology of some algebraic function spaces from curves to projective spaces'' WMAX 216 (PIMS Seminar Room)
3:00 p.m., Wednesday, September 19, 2001 Geometry/PDE Seminar Colleen Robles, Department of Mathematics, UBC ``Finsler geometry and some interesting examples'' WMAX 216
4:00 p.m., Wednesday, September 19, 2001 Algebraic/Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``C^*-equivariant vector fields and cohomology algebras stable map spaces, II'' WMAX 216
2:00 p.m., Thursday, September 20, 2001 Math Biology Seminar Amy Norris, Department of Mathematics, UBC ``Investigation of Epidermal Growth Factor simulation data'' WMAX 216
3:30 p.m., Thursday, September 20, 2001 Number Theory Seminar Michael Bennett, Department of Mathematics, UBC ``Variants of Fermat's last theorem, d'apres Wiles'' Math Annex 1102
10:00 a.m., Monday, September 24, 2001 Special Seminar Ailana Fraser, Department of Mathematics, Brown University ``Fundamental groups of manifolds of positive isotropic curvature'' Math Annex 1102
3:00 p.m., Monday, September 24, 2001 Mathematics Colloquium Ailana Fraser, Department of Mathematics, Brown University ``The free boundary problem for minimal disks and applications'' Math 100
12:30 p.m., Tuesday, September 25, 2001 Algebra/Topology Seminar Dale Rolfsen, Department of Mathematics, UBC ``Free Lie algebras and the figure-of-eight knot'' WMAX 216 (PIMS Seminar Room)
3:00 p.m., Wednesday, September 26, 2001 Probability Seminar Akira Sakai, Department of Mathematics, UBC ``Mean-field behavior for the contact process'' Math Annex 1102
3:30 p.m., Thursday, September 27, 2001 Number Theory Seminar Nils Bruin, PIMS, SFU, UBC ``Walking around a local-global obstruction for elliptic curves'' WMAX 216 (note permanent change of location)
4:30 p.m., Thursday, September 27, 2001 Cancelled PIMS-MITACS Mathematical Finance Seminar R. Tompkins, T.U. Vienna ``Pricing, No-arbitrage Bounds and Robust Hedging of Installment Options'' WMAX 216
3:00 p.m., Friday, September 28, 2001 Mathematics Colloquium Alex Iosevich, University of Missouri-Columbia ``Some combinatorial problems associated with the study of convex bodies'' Math 100
3:00 p.m., Monday, October 1, 2001 Institute of Applied Mathematics Colloquium Philippe Spalart, Boeing, Seattle ``Detached-Eddy Simulation'' LSK Bldg. Room 301
3:00 p.m., Wednesday, October 3, 2001 Probability Seminar Akira Sakai, Department of Mathematics, UBC ``Hyperscaling inequalities for the contact process'' MATX 1102
3:00 p.m., Wednesday, October 3, 2001 Geometry/PDE Seminar Izabella Laba, Department of Mathematics, UBC ``Spectral Cantor measures'' WMAX 216
3:00 p.m., Wednesday, October 3, 2001 (note different time and location for this week's seminar only) Algebraic/Geometry Seminar Rekha Thomas, Department of Mathematics, Univ. of Washington, Seattle ``The Combinatorics of the Toric Hilbert Scheme'' MATX 1118
2:00 p.m., Thursday, October 4, 2001 MITACS Math Biology Seminar Kerry Landman, Department of Mathematics and Statistics, Univ. of Melbourne, Australia ``Part 1. Can you still read the fine print? Part II. Development of the nervous system of the gut'' WMAX 216
3:30 p.m., Thursday, October 4, 2001 Number Theory Seminar Chris Smyth, University of Edinburgh ``Polylogs and Mahler measures'' WMAX 216
3:00 p.m., Wednesday, October 10, 2001 Probability Seminar Antal Jarai, Department of Mathematics, UBC ``On a problem in percolation theory'' MATX 1102
3:00 p.m., Wednesday, October 10, 2001 Geometry/PDE Seminar Nassif Ghoussoub, Department of Mathematics, UBC ``New Hardy-Sobolev Inequalities'' WMAX 216
4:00 p.m., Wednesday, October 10, 2001 Algebraic/Geometry Seminar Jim Bryan, Department of Mathematics, UBC ``An informal discussion on the Gopakumar-Vafa conjecture and related topics'' WMAX 216
2:00 p.m., Thursday, October 11, 2001 MITACS Math Biology Seminar Leah Keshet, Department of Mathematics, UBC ``Applications of mathematical modelling to social aggregation and swarming behaviour'' WMAX 216
3:30 p.m., Thursday, October 11, 2001 Number Theory Seminar Nike Vatsal, Department of Mathematics, UBC ``Uniform distribution of Heegner points'' WMAX 216
4:30 p.m., Thursday, October 11, 2001 PIMS-MITACS Mathematical Finance Seminar Jaksa Cvitanic, Univ. of Southern California ``Computation of Hedging Portfolios for Options with Discontinuous Payoffs'' WMAX 216
3:00 p.m., Friday, October 12, 2001 Mathematics Colloquium Mark Haiman, Department of Mathematics, Univ. of California, Berkeley ``The geometric significance of Macdonald positivity'' Math 100
3:00 p.m., Monday, October 15, 2001 Institute of Applied Mathematics Colloquium William Reinhardt, Chemistry Department, Univ. of Washington ``The 2001 Nobel Prize in Physics: The Gaseous Bose-Einstein Condensate, a Field Day for Physics and Applied Maths'' LSK Bldg. Room 301
3:00 p.m., Wednesday, October 17, 2001 Geometry/PDE Seminar Jingyi Chen, Department of Mathematics, UBC ``Mean curvature flow of surface in 4-manifolds'' WMAX 216
4:00 p.m., Wednesday, October 17, 2001 Algebraic/Geometry Seminar (postponed from October 10) Jim Bryan, Department of Mathematics, UBC ``An informal discussion on the Gopakumar-Vafa conjecture and related topics'' WMAX 216
2:00 p.m., Thursday, October 18, 2001 MITACS Math Biology Seminar Dan Reinders, Department of Bio-resource Engineering, UBC ``Computer Modelling of Endometrial Thermal Ablation for Menorrhagia'' WMAX 216
3:30 p.m., Thursday, October 18, 2001 Number Theory Seminar Greg Martin, Department of Mathematics, UBC ``Egyptian fractions with lots and lots and lots of terms'' WMAX 216
11:00 a.m., Saturday, October 20, 2001 Third North West Probability Seminar (4 talks) David Brydges, Department of Mathematics, UBC ``Branched Polymers and Dimensional Reduction'' Savery Hall 249, University of Washington
3:00 p.m., Monday, October 22, 2001 Institute of Applied Mathematics Colloquium Susan Baldwin, Department of Chemical and Biological Engineering, UBC ``Mathematical Modelling of Thermal Damage in Human Tissues'' LSK Bldg. Room 301
3:00 p.m., Wednesday, October 24, 2001 Probability Seminar Marten Klok, Delft University of Technology ``Performance analysis of advanced third generation receivers'' MATX 1102
4:00 p.m., Wednesday, October 24, 2001 Algebraic/Geometry Seminar Jim Bryan, Department of Mathematics, UBC ``The enumerative geometry of K3 surfaces and modular forms'' WMAX 216
2:00 p.m., Thursday, October 25, 2001 MITACS Math Biology Seminar Stan Maree, IAM, Department of Mathematics, UBC ``Small variations in multiple parameters account for wide variations in HIV-1 set points: a novel modelling approach'' WMAX 216
3:30 p.m., Thursday, October 25, 2001 Number Theory Seminar Izabella Laba, Department of Mathematics, UBC ``A characterization of finite sets that tile the integers'' WMAX 216
4:30 p.m., Thursday, October 25, 2001 PIMS-MITACS Mathematical Finance Seminar Joern Sass, Department of Mathematics, UBC ``Maximizing the asymptotic growth rate under fixed and proportional transaction costs'' WMAX 216
3:00 p.m., Monday, October 29, 2001 Institute of Applied Mathematics/Pacific Institute of Mathematical Sciences Distinguished Colloquium David Gottlieb, Division of Applied Mathematics, Brown University ``Spectral Methods for Discontinuous Problems'' LSK Bldg. Room 301
3:00 p.m., Wednesday, October 31, 2001 Probability Seminar David Brydges, Department of Mathematics, UBC ``Branched polymers and dimensional reduction, I'' MATX 1102
3:00 p.m., Wednesday, October 31, 2001 Geometry/PDE Seminar Mihail Cocos, Department of Mathematics, UBC ``Square integrable harmonic forms and the heat flow on complete manifolds'' WMAX 216
3:00 p.m., Wednesday, October 31, 2001 Institute of Applied Mathematics Colloquium Herschel Rabitz, Department of Chemistry, Princeton University ``High Dimensional Model Representations with applications in the Chemical/Physical Sciences'' LSK Bldg. Room 301
3:30 p.m., Thursday, November 1, 2001 Number Theory Seminar Imin Chen, SFU ``Rational points on a certain modular curve of level p^2'' WMAX 216
3:00 p.m., Friday, November 2, 2001 Mathematics Colloquium Joel Spencer, Courant Institute of Mathematical Sciences, New York University ``Erdos' Magic'' Math 100
3:00 p.m., Monday, November 5, 2001 Institute of Applied Mathematics Colloquium Reinhard Illner, Department of Mathematics and Statistics, University of Victoria ``An Enskog Equation for Inelastic Particle Dynamics: Energy Dissipation and Diffusive Equilibria'' LSK Bldg. Room 301
3:00 p.m., Wednesday, November 7, 2001 Probability Seminar David Brydges, Department of Mathematics, UBC ``Branched polymers and dimensional reduction, II'' MATX 1102
4:00 p.m., Wednesday, November 7, 2001 Algebraic/Geometry Seminar Zinovy Reichstein, Department of Mathematics, UBC ``A brief introduction to geometric invariant theory and the Kirwan resolution'' WMAX 216
2:00 p.m., Thursday, November 8, 2001 MITACS Math Biology Seminar Muhammad A.S. Chaudry, Biotech Lab, UBC ``Mathematical Modeling of Epidermal Growth Factor Signal Transduction Pathway'' WMAX 216
3:30 p.m., Thursday, November 8, 2001 Number Theory Seminar Hugh Edgar, San Jose State University (emeritus) WMAX 216
3:00 p.m., Friday, November 9, 2001 Mathematics Colloquium Nassif Ghoussoub, Director, Pacific Institute for the Mathematical Sciences, Department of Mathematics, UBC ``Phase transitions, Domain walls and minimal surfaces'' Math 100
3:00 p.m., Wednesday, November 14, 2001 Institute of Applied Mathematics Special Seminar David Kan, COMSOL ``Femlab-Multiphysics Modeling'' LSK Bldg. Room 301
4:00 p.m., Wednesday, November 14, 2001 Algebraic/Geometry Seminar Zinovy Reichstein, Department of Mathematics, UBC ``A brief introduction to geometric invariant theory and the Kirwan resolution, II'' WMAX 216
2:00 p.m., Thursday, November 15, 2001 MITACS Math Biology Seminar Donald Ludwig, Mathematics and Zoology, UBC ``Ecology, Conservation and Public Policy'' WMAX 216
3:00 p.m., Friday, November 16, 2001 Mathematics Colloquium John Friedlander, Department of Mathematics, University of Toronto ``Sieve methods, old and new'' Math 100
3:00 p.m., Monday, November 19, 2001 Institute of Applied Mathematics Colloquium Holger Hoos, Department of Computer Science, UBC ``Stochastic Local Search -- Foundations and Applications'' LSK Bldg., Room 301
12:30 p.m., Tuesday, November 20, 2001 Algebra/Topology Seminar Laura Scull, Department of Mathematics, UBC ``Rational S^1-equivariant homotopy theory'' WMAX 216
3:00 p.m., Wednesday, November 21, 2001 Probability Seminar Remco van der Hofstad, Delft University of Technology ``Weak interaction limits of one dimensional polymers'' MATX 1102
2:00 p.m., Thursday, November 22, 2001 MITACS Math Biology Seminar Colin Clark, Department of Mathematics, UBC ``The logic of fisheries management failures'' WMAX 216
3:00 p.m., Friday, November 23, 2001 Mathematics Colloquium Michael Doebeli, Departments of Zoology and Mathematics, UBC ``Evolutionary branching and speciation'' Math 100
3:00 p.m., Monday, November 26, 2001 IAM-PIMS Distinguished Colloquium Joel H. Ferziger, Flow Physics & Computation Division, Stanford University ``Numerical Simulation of Turbulence'' LSK Bldg., Room 301
12:30 p.m., Tuesday, November 27, 2001 Algebra/Topology Seminar Sadok Kallel, University of Lille ``On the geometry of configuration spaces, their loop spaces and the N-body problem'' WMAX 216
3:00 p.m., Wednesday, November 28, 2001 Probability Seminar Gianluca Guadagni, Department of Mathematics, UBC ``Is it really Gaussian? Looking at an ``almost" Gaussian integral through renormalization group glasses'' MATX 1102
2:00 p.m., Thursday, November 29, 2001 MITACS Math Biology Seminar Gerald Lim, Bio-Physics Group, Department of Physics, SFU ``Three-Dimensional Simulation of the Shapes and Shape Transformations of the Human Red Blood Cell (The Stomatocyte-Discocyte-Echinocyte Cycle and more)'' WMAX 216
3:30 p.m., Thursday, November 29, 2001 Number Theory Seminar Michael Bennett, Department of Mathematics, UBC ``Cubic Thue equations'' Math Annex 1102
4:15 p.m., Thursday, November 29, 2001 Special Seminar Michael Thaddeus, Department of Mathematics, Columbia University ``Mirror symmetry and Higgs bundles'' Math 225
3:00 p.m., Friday, November 30, 2001 Mathematics Colloquium Michael Thaddeus, Department of Mathematics, Columbia University ``Mirror symmetry and Langlands duality'' Math 100
3:00 p.m., Monday, December 3, 2001 Institute of Applied Mathematics Colloquium Remco W. Van der Hofstad, Faculty of Information Technology and Systems, Delft University of Technology ``Improving Performance of Third Generation Wireless Communication Systems'' LSK Bldg., Room 301
2:00 p.m., Thursday, December 6, 2001 MITACS Math Biology Seminar Michael Shelley, Courant Institute and the Center for Neural Science, New York University ``The Simple and the Complex in Visual Cortex Dynamics'' WMAX 216
3:30 p.m., Thursday, December 6, 2001 Number Theory Seminar Kevin O'Bryant, University of Illinois ``The algebraic life of a combinatorial object arising in the analytic theory of Diophantine approximation'' WMAX 216

2001 Seminars

2:30 p.m., Thursday, January 4, 2001 Algebraic Geometry Seminar Zinovy Reichstein, Department of Mathematics, Oregon State University ``Recent results on G-torsors'' WMAX 216
3:30 p.m., Friday, January 5, 2001 Mathematics Colloquium Zinovy Reichstein, Department of Mathematics, Oregon State University ``Simplifying polynomials by Tschirnhaus transformations'' Math 100
3:30 p.m., Monday, January 8, 2001 Mathematics Colloquium Bert Wiest, PDF, PIMS and Department of Mathematics, UBC ``Orderable groups in topology'' Math 100
1:00 p.m., Tuesday, January 9, 2001 IAM Colloquium Boye K. Ahlborn, Department of Physics, UBC ``How Big and How Small Can Active Animals Be? Thermodynamic Limits of the Body Dimensions of Warm Blooded Animals'' LSK Bldg. Room 301
4:30 p.m., Thursday, January 11, 2001 PIMS-MITACS Mathematical Finance Seminar A. Lazrak, USC and U. d'Evry ``Incomplete Information with Recursive Preferences'' WMAX 216
3:30 p.m., Friday, January 12, 2001 Mathematics Colloquium Professor Jiaping Wang, University of Minnesota ``Harmonic functions and topology of complete manifolds'' Math 100
3:30 p.m., Monday, January 15, 2001 Mathematics Colloquium Sergey Gavrilets, University of Tennessee ``Evolutionary dynamics on holey adaptive landscapes'' Math 100
10:30 a.m., Tuesday, January 16, 2001 Special Math Biology Seminar Sergey Gavrilets, University of Tennessee ``Evolution of female mate choice by sexual conflict'' Math Annex 1102
1:00 p.m., Tuesday, January 16, 2001 IAM-PIMS Distinguished Colloquium David Baillie, Department of Molecular Biology and Biochemistry, SFU ``Comparative Genomics'' LSK Bldg. Room 301
3:30 p.m., Wednesday, January 17, 2001 Algebraic Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``Differential Graded Schemes II'' WMAX 216
4:30 p.m., Wednesday, January 17, 2001 Student Seminar on topics in algebraic geometry Organizational Meeting. All interested students are invited to attend. WMAX 216
3:30 p.m., Friday, January 19, 2001 Mathematics Colloquium Greg Martin, University of Toronto ``The Distribution of Primes: What we know, and how'' Math 100
1:30 p.m., Monday, January 22, 2001 Special Number Theory Seminar Greg Martin, University of Toronto ``Biases in the Shanks-Renyi Prime Number Race'' Math Annex 1102
1:00 p.m., Tuesday, January 23, 2001 IAM Colloquium Fred Brauer, Department of Mathematics, UBC ``Disease Transmission Models with Recoveries and Disease Fatalities'' LSK Bldg. Room 301
3:30 p.m., Wednesday, January 24, 2001 Algebraic Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``Differential Graded Schemes III'' WMAX 216
4:30 p.m., Wednesday, January 24, 2001 Student Seminar on Algebraic Geometry Behrang Noohi, Department of Mathematics, UBC ``Serre Duality'' WMAX 216
2:30 p.m., Thursday, January 25, 2001 Special Geometry Seminar Ping Xu, Pennsylvania State University ``Quantum Groupoids'' Math Annex 1118
3:30 p.m., Friday, January 26, 2001 Mathematics Colloquium Ping Xu, Pennsylvania State University ``Poisson geometry and some applications'' Math 100
1:00 p.m., Tuesday, January 30, 2001 IAM Colloquium Matthew W. Choptuik, Department of Physics and Astronomy, UBC ``Critical Phenomena in Gravitational Collapse'' LSK Bldg. Room 301
4:30 p.m., Wednesday, January 31, 2001 Student Seminar on Algebraic Geometry Behrang Noohi, Department of Mathematics, UBC ``Serre Duality, II'' WMAX 216
2:30 p.m., Thursday, February 1, 2001 Algebra/Topology Seminar Bert Wiest, PIMS PDF and Department of Mathematics, UBC ``Configuration spaces of graphs'' WMAX 216 (PIMS)
3:30 p.m., Friday, February 2, 2001 Mathematics Colloquium V.S. Sunder, The Institute of Mathematical Sciences, Chennai, India (currently visiting MSRI, Berkeley) ``Principal Graphs of Subfactors'' Math 100
1:00 p.m., Tuesday, February 6, 2001 IAM Colloquium Gerda de Vries, Department of Mathematical Sciences, Univ. of Alberta ``From Spikers to Bursters via Coupling: Effects of Noise and Heterogeneity'' LSK Bldg. Room 301
2:30 p.m., Tuesday, February 6, 2001 MITACS/Math Biology Seminar Nima Geffen, Tel-Aviv University ``A Simple Geometric Model for a Simple Unicellular Organism'' WMAX 216 (PIMS)
4:30 p.m., Wednesday, February 7, 2001 Student Seminar on Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces'' WMAX 216 (PIMS)
12:30 p.m., Thursday, February 8, 2001 Graduate Student Seminar Patrick Ingram, Department of Mathematics, UBC ``Random graphs'' Location: TBA
2:30 p.m., Thursday, February 8, 2001 Algebra/Topology Seminar Denis Sjerve, Department of Mathematics, UBC ``Equations for group actions on Riemann surfaces'' WMAX 216
4:30 p.m., Thursday, February 8, 2001 PIMS-MITACS Mathematical Finance Seminar Tan Wang, UBC Finance ``Model Misspecification and Under-Diversification'' WMAX 216
3:30 p.m., Friday, February 9, 2001 Mathematics Colloquium Dirk Hundertmark, Caltech ``An optimal L^p-bound on the Krein spectral shift function'' Math 100
1:00 p.m., Tuesday, February 13, 2001 IAM Colloquium David Kirkpatrick, Department of Computer Science, UBC ``Optimal Motion of a Ladder in the Presence of Polygonal Obstacles: Characterization, Complexity and Construction'' LSK Bldg. Room 301
3:30 p.m., Wednesday, February 14, 2001 Probability Seminar THIS SEMINAR IS POSTPONED UNTIL FEB 28 (WED). Akira Sakai, Department of Mathematics, UBC ``Mean-field critical behavior for contact processes'' Math Annex 1118
3:30 p.m., Wednesday, February 14, 2001 Special Mathematics Colloquium Jim Bryan*, Tulane University, New Orleans, LA ``The enumerative geometry of K3 surfaces and modular forms'' WMAX 216 (PIMS Seminar Room, 1933 West Mall) *Jim Bryan is a candidate for a position in the Department.
4:30 p.m., Wednesday, February 14, 2001 Algebraic Geometry Seminar Jim Bryan, Tulane University ``BPS States and Gromov-Witten Invariants'' WMAX 216
2:30 p.m., Thursday, February 15, 2001 Algebra/Topology Seminar Denis Sjerve, Department of Mathematics, UBC ``Equations for group actions on Riemann surfaces, II'' Math Annex 1102 *note change of room location
3:30 p.m., Friday, February 16, 2001 Mathematics Colloquium Antonin Novotny, University of Toulon, France ``Existence, uniqueness and qualitative properties of solutions to Navier-Stokes equations for compressible fluids'' Math 100
1:00 p.m., Tuesday, February 27, 2001 IAM Colloquium Gordon Slade, Department of Mathematics, UBC ``Statistical Mechanics and Super-Brownian Motion'' Room 301, LSK Bldg.
2:30 p.m., Tuesday, February 27, 2001 MITACS/Math Biology Seminar Amy Norris, Department of Mathematics, UBC ``Survey of methods for studying large scale gene expression data'' WMAX Room 216 (PIMS Seminar Room)
3:30 p.m., Wednesday, February 28, 2001 Probability Seminar Akira Sakai, Department of Mathematics, UBC ``Mean-field critical behavior for contact processes'' Math Annex 1118
3:30 p.m., Wednesday, February 28, 2001 Algebraic Geometry Seminar Stefan Kebekus, Bayreuth University ``Families of singular rational curves'' WMAX 216
4:30 p.m., Wednesday, February 28, 2001 Student Seminar on Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces'' WMAX 216
4:30 p.m., Thursday, March 1, 2001 PIMS-MITACS Mathematical Finance Seminar Simon MacNair, Department of Mathematics, UBC ``Delta Hedging and Survival Probabilities in Markets with Frictions'' WMAX 216
3:30 p.m., Friday, March 2, 2001 Mathematics Colloquium Brian Wetton, Department of Mathematics, UBC ``The MITACS/Ballard Collaborative Project: Rivulets and Condensation Front Modelling'' Math 100
3:30 p.m., Monday, March 5, 2001 Algebraic Geometry Seminar Stefan Kebekus, Bayreuth University ``Contact Manifolds'' Math 105
1:00 p.m., Tuesday, March 6, 2001 IAM-PIMS Distinguished Colloquium Gunther Uhlmann, Department of Mathematics, University of Washington, Seattle, WA ``The Mathematics of Reflection Seismology'' Room 301, LSK Bldg.
2:30 p.m., Thursday, March 8, 2001 Algebra-Topology Seminar Mark MacLean, Department of Mathematics and Science One, UBC ``Asymptotic homotopy'' WMAX 216
3:30 p.m., Friday, March 9, 2001 Mathematics Colloquium Professor Dilip B. Madan, Robert H. Smith School of Business, University of Maryland ``Levy Processes in Financial Modeling'' Math 100
1:00 p.m., Tuesday, March 13, 2001 Institute of Applied Mathematics Colloquium David Hargreaves, MacDonald Detwiller and Associates ``A Tour of the Mathematics in Space-Based Earth Observation'' Room 301, LSK Bldg.
2:30 p.m., Tuesday, March 13, 2001 Math Biology Seminar Yue-Xian Li, Department of Mathematics, UBC ``Paradoxical Role of Ca2+-activated K+(BK) Channels in Controlling the Firing Patterns of Anterior Pituitary Cells -- A Modelling Study'' WMAX 216 (PIMS Seminar Room)
3:30 p.m., Wednesday, March 14, 2001 Algebraic Geometry Seminar Jim Carrell, Department of Mathematics, UBC ``Singular loci of Schubert varieties'' WMAX 216
4:30 p.m., Wednesday, March 14, 2001 Student Seminar in Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces, II'' WMAX 216
2:30 p.m., Thursday, March 15, 2001 Algebra-Topology Seminar Mark MacLean, Department of Mathematics and Science One, UBC ``Asymptotic homotopy, II'' WMAX 216
1:00 p.m., Tuesday, March 20, 2001 Institute of Applied Mathematics Colloquium Konstantin Kabin, Department of Chemistry, UBC ``Shocks in Magnetohydrodynamics'' Room 301, LSK Bldg.
2:30 p.m., Tuesday, March 20, 2001 MITACS/Math Biology Seminar Leah Edelstein-Keshet, Department of Mathematics, UBC ``Spatial Regulation of Actin Dynamics in Cell Motion'' WMAX 216 (PIMS Seminar Room)
3:30 p.m., Wednesday, March 21, 2001 Algebraic Geometry Seminar Sandor Kovacs, Univ. of Washington ``Boundedness and hyperbolicity for families of varieties of general type'' WMAX 216
4:30 p.m., Wednesday, March 21, 2001 Student Seminar in Algebraic Geometry Boris Tschirschwitz, Department of Mathematics, UBC ``The Mukai Vector of Sheaves on K3 Surfaces'' Location: Wreck Beach
2:30 p.m., Thursday, March 22, 2001 Algebra-Topology Seminar Catherine Webster, Department of Mathematics, UBC ``Braid groups and cryptography'' WMAX 216
4:30 p.m., Thursday, March 22, 2001 PIMS-MITACS Mathematical Finance Seminar Alan King, IBM Research Division ``A Contingent Claims Approach to Setting the Franchise Fee for Capacity Constrained, Quantity-Flexible Supply Contracts'' WMAX 216
1:30 p.m., Friday, March 23, 2001 Mathematical Physics Seminar Christian Borgs, Microsoft Research ``Complex Zeros of Partition Functions: A Generalized Lee-Yang Theorem'' MATX 1102
3:30 p.m., Friday, March 23, 2001 Mathematics Colloquium Jennifer Chayes, Microsoft Research ``Phase Transitions in Computer Science'' Math 100
1:00 p.m., Tuesday, March 27, 2001 IAM-PIMS Distinguished Colloquium Speaker Bengt Fornberg, Department of Applied Mathematics, University of Colorado ``Radial Basis Functions - A Future Way to Solve PDEs to Spectral Accuracy on Irregular Multidimensional Domains?'' Room 301, LSK Bldg.
2:30 p.m., Tuesday, March 27, 2001 MITACS/Math Biology Seminar Magdalena Luca and Alexandra Chavez-Ross, Department of Mathematics, UBC ``Application of Chemotaxis Models to Alzheimer's Disease'' WMAX 216 (PIMS Seminar Room)
4:30 p.m., Thursday, March 29, 2001 PIMS-MITACS Mathematical Finance Seminar Robert Jones, SFU ``Valuing Revolving Lines of Credit under Jump-Diffusion Credit Quality'' WMAX 216
3:30 p.m., Wednesday, April 4, 2001 Algebraic Geometry Seminar Jim Carrell, Department of Mathematics, UBC ``Singularities of Schubert Varieties, II'' WMAX 216
10:30 a.m., Monday, April 9, 2001 Special Algebraic Geometry Seminar S.T. Yau, UIC ``Hyperplane Arrangements in P^2'' WMAX 216 (PIMS Seminar Room) Refreshments will be served at 10:15 a.m.
2:30 p.m., Tuesday, April 10, 2001 MITACS/Math Biology Seminar Adriana Dawes, IAM & Department of Mathematics, UBC ``Estrogen biosynthesis: a modelling approach'' WMAX 216
3:30 p.m., Thursday, April 12, 2001 Special PIMS Colloquium David Eisenbud, Director, Mathematical Sciences Research Institute (Berkeley) ``Chow Forms and Resultants -- old and new'' WMAX 216 A reception will follow the seminar in the PIMS lounge.
3:00 p.m., Wednesday, September 5, 2001 Probability Seminar Professor Y. Ogura, Saga University, Japan ``On a completion of a class of one-dimensional diffusion processes'' Math Annex 1102
2:00 p.m., Thursday, September 6, 2001 Math Biology Seminar Marek Labecki, IAM PDF, UBC ``Protein transport in hollow-fibre bioreactors for mammalian cell culture'' West Mall Annex Room 216 (second floor)
4:30 p.m., Thursday, September 6, 2001 PIMS-MITACS Mathematical Finance Seminar Ali Lazrak, U. d'Evry and UBC ``Incomplete Information with Recursive Preferences'' West Mall Annex, Room 216 (second floor)
12:30 p.m., Tuesday, September 11, 2001 Algebra/Topology Seminar Laura Scull, Department of Mathematics, UBC ``Equivariant homotopy theory'' West Mall Annex, PIMS Seminar Room 216 (second floor) Coffee and refreshments will be available preceding the seminar. Feel free to bring a bag lunch, if you like. Grad students are encouraged to attend.
3:00 p.m., Wednesday, September 12, 2001 Probability Seminar Alexander E. Holroyd, UCLA ``How to find an extra head: optimal random shifts of Bernoulli and Poisson random fields'' Math Annex 1102
3:00 p.m., Wednesday, September 12, 2001 Geometry/PDE Seminar Jiguang Bao, PIMS ``Liouville and regularity properties of a Hessian equation'' West Mall Annex, PIMS Room 216
4:00 p.m., Wednesday, September 12, 2001 Algebraic/Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``C^*-equivariant vector fields and cohomology algebras stable map spaces'' West Mall Annex, PIMS Room 216
1:00 p.m., Thursday, September 13, 2001 Cancelled Special Seminar Stephanie van Willigenburg, Cornell Univ. ``Pieri Operators and Eulerian Enumeration'' Math Annex 1101
2:00 p.m., Thursday, September 13, 2001 Math Biology Seminar Nima Geffen, Tel-Aviv Univ. ``Line and point singularities for sources in two and three dimensions'' West Mall Annex, PIMS Room 216
3:30 p.m., Thursday, September 13, 2001 Number Theory Seminar David Boyd, Department of Mathematics, UBC ``Mahler measure and unusual models for elliptic curves'' Math Annex 1102
3:00 p.m., Friday, September 14, 2001 Mathematics Colloquium Stephanie van Willigenburg, Cornell Univ. ``The algebra of card shuffling'' Math 100
3:00 p.m., Monday, September 17, 2001 Cancelled Institute of Applied Mathematics Colloquium Herschel Rabitz, Department of Chemistry, Princeton University ``High Dimensional Model Representations with Applications in the Chemical/Physical Sciences'' LSK Bldg.
Room 301
Looking at an ``almost" Gaussian integral through renormalization group glasses'' MATX 1102 2:00 p.m., Thursday, November 29, 2001 MITACS Math Biology Seminar Gerald Lim, Bio-Physics Group, Department of Physics, SFU ``Three-Dimensional Simulation of the Shapes and Shape Transformations of the Human Red Blood Cell (The Stomatocyte-Discocyte-Echinocyte Cycle and more)'' WMAX 216 3:30 p.m., Thursday, November 29, 2001 Number Theory Seminar Michael Bennett, Department of Mathematics, UBC ``Cubic Thue equations'' Math Annex 1102 4:15 p.m., Thursday, November 29, 2001 Special Seminar Michael Thaddeus, Department of Mathematics, Columbia University ``Mirror symmetry and Higgs bundles'' Math 225 3:00 p.m., Friday, November 30, 2001 Mathematics Colloquium Michael Thaddeus, Department of Mathematics, Columbia University ``Mirror symmetry and Langlands duality'' Math 100 3:00 p.m., Monday, December 3, 2001 Institute of Applied Mathematics Colloquium Remco W. Van der Hofstad, Faculty of Information Technology and Systems, Delft University of Technology ``Improving Performance of Third Generation Wireless Communication Systems'' LSK Bldg., Room 301 2:00 p.m., Thursday, December 6, 2001 MITACS Math Biology Seminar Michael Shelley, Courant Institute and the Center for Neural Science, New York University ``The Simple and the Complex in Visual Cortex Dynamics'' WMAX 216 3:30 p.m., Thursday, December 6, 2001 Number Theory Seminar Kevin O'Bryant, University of Illinois ``The algebraic life of a combinatorial object arising in the analytic theory of Diophantine approximation'' WMAX 216 2:30 p.m., Thursday, January 4, 2001 Algebraic Geometry Seminar Zinovy Reichstein, Department of Mathematics, Oregon State University ``Recent results on G-torsors'' WMAX 216 3:30 p.m., Friday, January 5, 2001 Mathematics Colloquium Zinovy Reichstein, Department of Mathematics, Oregon State University ``Simplifying polynomials by Tschirnhaus transformations'' Math 100 3:30 p.m., Monday, January 8, 2001 
Mathematics Colloquium Bert Wiest, PDF, PIMS and Department of Mathematics, UBC ``Orderable groups in topology'' Math 100 1:00 p.m., Tuesday, January 9, 2001 IAM Colloquium Boye K. Ahlborn, Department of Physics, UBC ``How Big and How Small Can Active Animals Be? Thermodynamic Limits of the Body Dimensions of Warm Blooded Animals'' LSK Bldg. Room 301 4:30 p.m., Thursday, January 11, 2001 PIMS-MITACS Mathematical Finance Seminar A. Lazrak, USC and U. d'Evry ``Incomplete Information with Recursive Preferences'' WMAX 216 3:30 p.m., Friday, January 12, 2001 Mathematics Colloquium Professor Jiaping Wang, University of Minnesota ``Harmonic functions and topology of complete manifolds'' Math 100 3:30 p.m., Monday, January 15, 2001 Mathematics Colloquium Sergey Gavrilets, University of Tennessee ``Evolutionary dynamics on holey adaptive landscapes'' Math 100 10:30 a.m., Tuesday, January 16, 2001 Special Math Biology Seminar Sergey Gavrilets, University of Tennessee ``Evolution of female mate choice by sexual conflict'' Math Annex 1102 1:00 p.m., Tuesday, January 16, 2001 IAM-PIMS Distinguished Colloquium David Baillie, Department of Molecular Biology and Biochemistry, SFU ``Comparative Genomics'' LSK Bldg. Room 301 3:30 p.m., Wednesday, January 17, 2001 Algebraic Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``Differential Graded Schemes II'' WMAX 216 4:30 p.m., Wednesday, January 17, 2001 Student Seminar on topics in algebraic geometry Organizational Meeting. All interested students are invited to attend. 
WMAX 216 3:30 p.m., Friday, January 19, 2001 Mathematics Colloquium Greg Martin, University of Toronto ``The Distribution of Primes: What we know, and how'' Math 100 1:30 p.m., Monday, January 22, 2001 Special Number Theory Seminar Greg Martin, University of Toronto ``Biases in the Shanks-Renyi Prime Number Race'' Math Annex 1102 1:00 p.m., Tuesday, January 23, 2001 IAM Colloquium Fred Brauer, Department of Mathematics, UBC ``Disease Transmission Models with Recoveries and Disease Fatalities'' LSK Bldg. Room 301 3:30 p.m., Wednesday, January 24, 2001 Algebraic Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``Differential Graded Schemes III'' WMAX 216 4:30 p.m., Wednesday, January 24, 2001 Student Seminar on Algebraic Geometry Behrang Noohi, Department of Mathematics, UBC ``Serre Duality'' WMAX 216 2:30 p.m., Thursday, January 25, 2001 Special Geometry Seminar Ping Xu, Pennsylvania State University ``Quantum Groupoids'' Math Annex 1118 3:30 p.m., Friday, January 26, 2001 Mathematics Colloquium Ping Xu, Pennsylvania State University ``Poisson geometry and some applications'' Math 100 1:00 p.m., Tuesday, January 30, 2001 IAM Colloquium Matthew W. Choptuik, Department of Physics and Astronomy, UBC ``Critical Phenomena in Gravitational Collapse'' LSK Bldg. Room 301 4:30 p.m., Wednesday, January 31, 2001 Student Seminar on Algebraic Geometry Behrang Noohi, Department of Mathematics, UBC ``Serre Duality, II'' WMAX 216 2:30 p.m., Thursday, February 1, 2001 Algebra/Topology Seminar Bert Wiest, PIMS PDF and Department of Mathematics, UBC ``Configuration spaces of graphs'' WMAX 216 (PIMS) 3:30 p.m., Friday, February 2, 2001 Mathematics Colloquium V.S. Sunder, The Institute of Mathematical Sciences, Chennai, India (currently visiting MSRI, Berkeley) ``Principal Graphs of Subfactors'' Math 100 1:00 p.m., Tuesday, February 6, 2001 IAM Colloquium Gerda de Vries, Department of Mathematical Sciences, Univ.
of Alberta ``From Spikers to Bursters via Coupling: Effects of Noise and Heterogeneity'' LSK Bldg. Room 301 2:30 p.m., Tuesday, February 6, 2001 MITACS/Math Biology Seminar Nima Geffen, Tel-Aviv University ``A Simple Geometric Model for a Simple Unicellular Organism'' WMAX 216 (PIMS) 4:30 p.m., Wednesday, February 7, 2001 Student Seminar on Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces'' WMAX 216 (PIMS) 12:30 p.m., Thursday, February 8, 2001 Graduate Student Seminar Patrick Ingram, Department of Mathematics, UBC ``Random graphs'' Location: TBA 2:30 p.m., Thursday, February 8, 2001 Algebra/Topology Seminar Denis Sjerve, Department of Mathematics, UBC ``Equations for group actions on Riemann surfaces'' WMAX 216 4:30 p.m., Thursday, February 8, 2001 PIMS-MITACS Mathematical Finance Seminar Tan Wang, UBC Finance ``Model Misspecification and Under-Diversification'' WMAX 216 3:30 p.m., Friday, February 9, 2001 Mathematics Colloquium Dirk Hundertmark, Caltech ``An optimal L^p-bound on the Krein spectral shift function'' Math 100 1:00 p.m., Tuesday, February 13, 2001 IAM Colloquium David Kirkpatrick, Department of Computer Science, UBC ``Optimal Motion of a Ladder in the Presence of Polygonal Obstacles: Characterization, Complexity and Construction'' LSK Bldg. Room 301 3:30 p.m., Wednesday, February 14, 2001 Probability Seminar THIS SEMINAR IS POSTPONED UNTIL FEB 28, (WED). Akira Sakai, Department of Mathematics, UBC ``Mean-field critical behavior for contact processes'' Math Annex 1118 3:30 p.m., Wednesday, February 14, 2001 Special Mathematics Colloquium Jim Bryan*, Tulane University, New Orleans, LA ``The enumerative geometry of K3 surfaces and modular forms'' WMAX 216 (PIMS Seminar Room, 1933 West Mall) *Jim Bryan is a candidate for a position in the Department. 
4:30 p.m., Wednesday, February 14, 2001 Algebraic Geometry Seminar Jim Bryan, Tulane University ``BPS States and Gromov-Witten Invariants'' WMAX 216 2:30 p.m., Thursday, February 15, 2001 Algebra/Topology Seminar Denis Sjerve, Department of Mathematics, UBC ``Equations for group actions on Riemann surfaces, II'' Math Annex 1102 *note change of room location 3:30 p.m., Friday, February 16, 2001 Mathematics Colloquium Antonin Novotny, University of Toulon, France ``Existence, uniqueness and qualitative properties of solutions to Navier-Stokes equations for compressible fluids'' Math 100 1:00 p.m., Tuesday, February 27, 2001 IAM Colloquium Gordon Slade, Department of Mathematics, UBC ``Statistical Mechanics and Super-Brownian Motion'' Room 301, LSK Bldg. 2:30 p.m., Tuesday, February 27, 2001 MITACS/Math Biology Seminar Amy Norris, Department of Mathematics, UBC ``Survey of methods for studying large scale gene expression data'' WMAX Room 216, (PIMS Seminar Room) 3:30 p.m., Wednesday, February 28, 2001 Probability Seminar Akira Sakai, Department of Mathematics, UBC ``Mean-field critical behavior for contact processes'' Math Annex 1118 4:30 p.m., Wednesday, February 28, 2001 Student Seminar on Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces'' WMAX 216 4:30 p.m., Thursday, March 1, 2001 PIMS-MITACS Mathematical Finance Seminar Simon MacNair, Department of Mathematics, UBC ``Delta Hedging and Survival Probabilities in Markets with Frictions'' WMAX 3:30 p.m., Friday, March 2, 2001 Mathematics Colloquium Brian Wetton, Department of Mathematics, UBC ``The MITACS/Ballard Collaborative Project: Rivulets and Condensation Front Modelling'' Math 100 3:30 p.m., Monday, March 5, 2001 Algebraic Geometry Seminar Stefan Kebekus, Bayreuth University ``Contact Manifolds'' Math 105 1:00 p.m., Tuesday, March 6, 2001 IAM-PIMS Distinguished Colloquium Gunther Uhlmann, Department of Mathematics, University of Washington, Seattle, WA ``The Mathematics of 
Reflection Seismology'' Room 301, LSK Bldg. 2:30 p.m., Thursday, March 8, 2001 Algebra-Topology Seminar Mark MacLean, Department of Mathematics and Science One, UBC ``Asymptotic homotopy'' WMAX 216 3:30 p.m., Friday, March 9, 2001 Mathematics Colloquium Professor Dilip B. Madan, Robert H. Smith School of Business, University of Maryland ``Levy Processes in Financial Modeling'' Math 100 1:00 p.m., Tuesday, March 13, 2001 Institute of Applied Mathematics Colloquium David Hargreaves, MacDonald Detwiller and Associates ``A Tour of the Mathematics in Space-Based Earth Observation'' Room 301, LSK Bldg. 2:30 p.m., Tuesday, March 13, 2001 Math Biology Seminar Yue-Xian Li, Department of Mathematics, UBC ``Paradoxical Role of Ca2+-activated K+(BK) Channels in Controlling the Firing Patterns of Anterior Pituitary Cells -- A Modelling Study'' WMAX 216 (PIMS Seminar Room) 3:30 p.m., Wednesday, March 14, 2001 Algebraic Geometry Seminar Jim Carrell, Department of Mathematics, UBC ``Singular loci of Schubert varieties'' WMAX 216 4:30 p.m., Wednesday, March 14, 2001 Student Seminar in Algebraic Geometry Mark Jackson, Department of Mathematics, UBC ``K3 Surfaces, II'' WMAX 216 2:30 p.m., Thursday, March 15, 2001 Algebra-Topology Seminar Mark MacLean, Department of Mathematics and Science One, UBC ``Asymptotic homotopy, II'' WMAX 216 1:00 p.m., Tuesday, March 20, 2001 Institute of Applied Mathematics Colloquium Konstantin Kabin, Department of Chemistry, UBC ``Shocks in Magnetohydrodynamics'' Room 301, LSK Bldg. 2:30 p.m., Tuesday, March 20, 2001 MITACS/Math Biology Seminar Leah Edelstein-Keshet, Department of Mathematics, UBC ``Spatial Regulation of Actin Dynamics in Cell Motion'' WMAX 216 (PIMS Seminar Room) 3:30 p.m., Wednesday, March 21, 2001 Algebraic Geometry Seminar Sandor Kovacs, Univ.
of Washington ``Boundedness and hyperbolicity for families of varieties of general type'' WMAX 216 4:30 p.m., Wednesday, March 21, 2001 Student Seminar in Algebraic Geometry Boris Tschirschwitz, Department of Mathematics, UBC ``The Mukai Vector of Sheaves on K3 Surfaces'' Location: Wreck Beach 2:30 p.m., Thursday, March 22, 2001 Algebra-Topology Seminar Catherine Webster, Department of Mathematics, UBC ``Braid groups and cryptography'' WMAX 216 4:30 p.m., Thursday, March 22, 2001 PIMS-MITACS Mathematical Finance Seminar Alan King, IBM Research Division ``A Contingent Claims Approach to Setting the Franchise Fee for Capacity Constrained, Quantity-Flexible Supply Contracts'' WMAX 216 1:30 p.m., Friday, March 23, 2001 Mathematical Physics Seminar Christian Borgs, Microsoft Research ``Complex Zeros of Partition Functions: A Generalized Lee-Yang Theorem'' MATX 1102 3:30 p.m., Friday, March 23, 2001 Mathematics Colloquium Jennifer Chayes, Microsoft Research ``Phase Transitions in Computer Science'' Math 100 1:00 p.m., Tuesday, March 27, 2001 IAM-PIMS Distinguished Colloquium Speaker Bengt Fornberg, Department of Applied Mathematics, University of Colorado ``Radial Basis Functions - A Future Way to Solve PDEs to Spectral Accuracy on Irregular Multidimensional Domains?'' Room 301, LSK Bldg. 2:30 p.m., Tuesday, March 27, 2001 MITACS/Math Biology Seminar Magdalena Luca and Alexandra Chavez-Ross, Department of Mathematics, UBC ``Application of Chemotaxis Models to Alzheimer's Disease'' WMAX 216 (PIMS Seminar Room) 4:30 p.m., Thursday, March 29, 2001 PIMS-MITACS Mathematical Finance Seminar Robert Jones, SFU ``Valuing Revolving Lines of Credit under Jump-Diffusion Credit Quality'' WMAX 216 3:30 p.m., Wednesday, April 4, 2001 Algebraic Geometry Seminar Jim Carrell, Department of Mathematics, UBC ``Singularities of Schubert Varieties, II'' WMAX 216 10:30 a.m., Monday, April 9, 2001 Special Algebraic Geometry Seminar S.T.
Yau, UIC ``Hyperplane Arrangements in P^2'' WMAX 216 (PIMS Seminar Room) Refreshments will be served at 10:15 a.m. 2:30 p.m., Tuesday, April 10, 2001 MITACS/Mathbiology Seminar Adriana Dawes, IAM & Department of Mathematics, UBC ``Estrogen biosynthesis: a modelling approach'' WMAX 216 3:30 p.m., Thursday, April 12, 2001 Special PIMS Colloquium David Eisenbud, Director, Mathematical Science Research Institute (Berkeley) ``Chow Forms and Resultants -- old and new'' WMAX 216 A reception will follow the seminar in the PIMS lounge. 3:00 p.m., Wednesday, September 5, 2001 Probability Seminar Professor Y. Ogura, Saga University, Japan ``On a completion of a class of one-dimensional diffusion processes'' Math Annex 1102 2:00 p.m., Thursday, September 6, 2001 Math Biology Seminar Marek Labecki, IAM PDF, UBC ``Protein transport in hollow-fibre bioreactors for mammalian cell culture'' West Mall Annex Room 216 (second floor) 4:30 p.m., Thursday, September 6, 2001 PIMS-MITACS Mathematical Finance Seminar Ali Lazrak, U. d'Evry and UBC ``Incomplete Information with Recursive Preferences'' West Mall Annex, Room 216 (second floor) 12:30 p.m., Tuesday, September 11, 2001 Algebra/Topology Seminar Laura Scull, Department of Mathematics, UBC ``Equivariant homotopy theory'' West Mall Annex, PIMS Seminar Room 216 (second floor) Coffee and refreshments will be available preceding the seminar. Feel free to bring a bag lunch, if you like. Grad students are encouraged to attend. 3:00 p.m., Wednesday, September 12, 2001 Probability Seminar Alexander E.
Holroyd, UCLA ``How to find an extra head: optimal random shifts of Bernoulli and Poisson random fields'' Math Annex 1102 3:00 p.m., Wednesday, September 12, 2001 Geometry/PDE Seminar Jiguang Bao, PIMS ``Liouville and regularity properties of a Hessian equation'' West Mall Annex, PIMS Room 216 4:00 p.m., Wednesday, September 12, 2001 Algebraic/Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``C^*-equivariant vector fields and cohomology algebras stable map spaces'' West Mall Annex, PIMS Room 216 1:00 p.m., Thursday, September 13, 2001 Cancelled Special Seminar Stephanie van Willigenburg, Cornell Univ. ``Pieri Operators and Eulerian Enumeration'' Math Annex 1101 2:00 p.m., Thursday, September 13, 2001 Math Biology Seminar Nima Geffen, Tel-Aviv Univ. ``Line and point singularities for sources in two and three dimensions'' West Mall Annex, PIMS Room 216 3:30 p.m., Thursday, September 13, 2001 Number Theory Seminar David Boyd, Department of Mathematics, UBC ``Mahler measure and unusual models for elliptic curves'' Math Annex 1102 3:00 p.m., Friday, September 14, 2001 Mathematics Colloquium Stephanie van Willigenburg, Cornell Univ. ``The algebra of card shuffling'' Math 100 3:00 p.m., Monday, September 17, 2001 Cancelled Institute of Applied Mathematics Colloquium Herschel Rabitz, Department of Chemistry, Princeton University ``High Dimensional Model Representations with Applications in the Chemical/Physical Sciences'' LSK Bldg. 
Room 301 12:30 p.m., Tuesday, September 18, 2001 Algebra/Topology Seminar Sadok Kallel, University of Lille ``On the topology of some algebraic function spaces from curves to projective spaces'' WMAX 216 (PIMS Seminar Room) 3:00 p.m., Wednesday, September 19, 2001 Geometry/PDE Seminar Colleen Robles, Department of Mathematics, UBC ``Finsler geometry and some interesting examples'' WMAX 216 4:00 p.m., Wednesday, September 19, 2001 Algebraic/Geometry Seminar Kai Behrend, Department of Mathematics, UBC ``C^*-equivariant vector fields and cohomology algebras stable map spaces, II'' WMAX 2:00 p.m., Thursday, September 20, 2001 Math Biology Seminar Amy Norris, Department of Mathematics, UBC ``Investigation of Epidermal Growth Factor simulation data'' WMAX 216 3:30 p.m., Thursday, September 20, 2001 Number Theory Seminar Michael Bennett, Department of Mathematics, UBC ``Variants of Fermat's last theorem, d'apres Wiles'' Math Annex 1102 10:00 a.m., Monday, September 24, 2001 Special Seminar Ailana Fraser, Department of Mathematics, Brown University ``Fundamental groups of manifolds of positive isotropic curvature'' Math Annex 1102 3:00 p.m., Monday, September 24, 2001 Mathematics Colloquium Ailana Fraser, Department of Mathematics, Brown University ``The free boundary problem for minimal disks and applications'' Math 100 12:30 p.m., Tuesday, September 25, 2001 Algebra/Topology Seminar Dale Rolfsen, Department of Mathematics, UBC ``Free Lie algebras and the figure-of-eight knot'' WMAX 216 (PIMS Seminar Room) 3:00 p.m., Wednesday, September 26, 2001 Probability Seminar Akira Sakai, Department of Mathematics, UBC ``Mean-field behavior for the contact process'' Math Annex 1102 3:30 p.m., Thursday, September 27, 2001 Number Theory Seminar Nils Bruin, PIMS, SFU, UBC ``Walking around a local-global obstruction for elliptic curves'' WMAX 216 (note permanent change of location) 4:30 p.m., Thursday, September 27, 2001 Cancelled PIMS-MITACS Mathematical Finance Seminar R. 
Tompkins, T.U. Vienna ``Pricing, No-arbitrage Bounds and Robust Hedging of Installment Options'' WMAX 216 3:00 p.m., Friday, September 28, 2001 Mathematics Colloquium Alex Iosevich, University of Missouri-Columbia ``Some combinatorial problems associated with the study of convex bodies'' Math 100 3:00 p.m., Monday, October 1, 2001 Institute of Applied Mathematics Colloquium Philippe Spalart, Boeing, Seattle ``Detached-Eddy Simulation'' LSK Bldg. Room 301 3:00 p.m., Wednesday, October 3, 2001 Probability Seminar Akira Sakai, Department of Mathematics, UBC ``Hyperscaling inequalities for the contact process'' MATX 1102 3:00 p.m., Wednesday, October 3, 2001 Geometry/PDE Seminar Izabella Laba, Department of Mathematics, UBC ``Spectral Cantor measures'' WMAX 216 3:00 p.m., Wednesday, October 3, 2001 (note different time and location for this week's seminar only) Algebraic/Geometry Seminar Rekha Thomas, Department of Mathematics, Univ. of Washington, Seattle ``The Combinatorics of the Toric Hilbert Scheme'' MATX 1118 2:00 p.m., Thursday, October 4, 2001 MITACS Math Biology Seminar Kerry Landman, Department of Mathematics and Statistics, Univ. of Melbourne, Australia ``Part 1. Can you still read the fine print? Part II. 
Development of the nervous system of the gut'' WMAX 216 3:30 p.m., Thursday, October 4, 2001 Number Theory Seminar Chris Smyth, University of Edinburgh ``Polylogs and Mahler measures'' WMAX 216 3:00 p.m., Wednesday, October 10, 2001 Probability Seminar Antal Jarai, Department of Mathematics, UBC ``On a problem in percolation theory'' MATX 1102 3:00 p.m., Wednesday, October 10, 2001 Geometry/PDE Seminar Nassif Ghoussoub, Department of Mathematics, UBC ``New Hardy-Sobolev Inequalities'' WMAX 216 4:00 p.m., Wednesday, October 10, 2001 Algebraic/Geometry Seminar Jim Bryan, Department of Mathematics, UBC ``An informal discussion on the Gopakumar-Vafa conjecture and related topics'' WMAX 216 2:00 p.m., Thursday, October 11, 2001 MITACS Math Biology Seminar Leah Keshet, Department of Mathematics, UBC ``Applications of mathematical modelling to social aggregation and swarming behaviour'' WMAX 216 3:30 p.m., Thursday, October 11, 2001 Number Theory Seminar Nike Vatsal, Department of Mathematics, UBC ``Uniform distribution of Heegner points'' WMAX 216 4:30 p.m., Thursday, October 11, 2001 PIMS-MITACS Mathematical Finance Seminar Jaksa Cvitanic, Univ. of Southern California ``Computation of Hedging Portfolios for Options with Discontinuous Payoffs'' WMAX 216 3:00 p.m., Friday, October 12, 2001 Mathematics Colloquium Mark Haiman, Department of Mathematics, Univ. of California, Berkeley ``The geometric significance of Macdonald positivity'' Math 100 3:00 p.m., Monday, October 15, 2001 Institute of Applied Mathematics Colloquium William Reinhardt, Chemistry Department, Univ. of Washington ``The 2001 Nobel Prize in Physics: The Gaseous Bose-Einstein Condensate, a Field Day for Physics and Applied Maths'' LSK Bldg. 
Room 301
Dynamical systems: Mapping chaos with R

July 13, 2012 By Corey Chivers

Chaos. Hectic, seemingly unpredictable, complex dynamics. In a word: fun. I usually stick to the warm and fuzzy world of stochasticity and probability distributions, but this post will be (almost) entirely devoid of randomness. While chaotic dynamics are entirely deterministic, their sensitivity to initial conditions can trick the observer into seeing independent random draws. In ecology, chaotic dynamics can emerge from a very simple model of population growth:

$x_{t+1} = r x_t(1-x_t)$

Here the population at time step t+1 depends on the population at time step t and on an intrinsic rate of growth, r. This is known as the logistic (or quadratic) map. For any starting value of x at t_0, the entire evolution of the system can be computed exactly. However, there are some values of r for which the system will diverge substantially given even a very slight change in the initial conditions.

We can see the behaviour of this model by simply plotting the time series of population sizes. Another, and particularly instructive, way of visualizing the dynamics is through the use of a cobweb plot. In this representation, we can see how the population x at time t maps to population x at time t+1 by reflecting through the 1:1 line. Each representation is plotted here:

You can plot realizations of the system using the following R script (reconstructed here so that it runs as a whole; the trace and cobweb sections follow the original comments):

q_map <- function(r = 3.9, start = 0.5, N = 100) {
  par(mfrow = c(1, 2))
  ############# Trace #############
  x <- numeric(N)
  x[1] <- start
  for(i in 2:N) x[i] <- r * x[i-1] * (1 - x[i-1])
  plot(x, type = 'l', xlab = 't', ylab = 'x')
  ########## Quadratic Map ########
  curve(r * x * (1 - x), from = 0, to = 1, xlab = 'x[t]', ylab = 'x[t+1]')
  abline(0, 1, lty = 2)                                   # the 1:1 line
  lines(x = c(start, start), y = c(0, r * start * (1 - start)))
  for(i in 1:(2 * N)) {
    x1 <- r * start * (1 - start)
    lines(x = c(start, x1), y = c(x1, x1))                # across to the 1:1 line
    lines(x = c(x1, x1), y = c(x1, r * x1 * (1 - x1)))    # back to the map
    start <- x1
  }
}

To use, simply call the function with any value of r and a starting position between 0 and 1. Fun, right? Now that you've tried a few different values of r at a few starting positions, it's time to look a little closer at which ranges of r values produce chaotic behaviour, which result in stable orbits, and which lead to dampening oscillations toward fixed points.
There is a rigorous mathematics behind this kind of analysis of dynamical systems, but we're just going to do some numerical experimentation using trusty R and a bit of cpu time. To do this, we'll need to iterate across a range of r values, and at each one start a dynamical system at a random starting point (told you there would be some randomness in this post). After some large number of time steps, we'll record where the system ended up. Plotting the results, we can see a series of period-doubling (2, 4, 8, etc.) bifurcations interspersed with regions of chaotic dynamics (the script is reconstructed here so that it runs; the plotting call follows the original):

res <- 2000                       # resolution along the r axis
N <- 1000                         # iterations per r value
r_seq <- seq(2.5, 4, length.out = res)
bi <- matrix(NA, nrow = res, ncol = 2)
for(i in 1:res) {
  x <- runif(1)                   # random starting point
  for(t in 2:N) x <- r_seq[i] * x * (1 - x)
  bi[i, ] <- c(r_seq[i], x)
}
# warning: Even in parallel with 4 cores, this is by no means fast code!
plot(bi, col = 'green', xlab = 'R', ylab = 'n --> inf', main = '', pch = 15, cex = 0.2)

This plot is known as a bifurcation diagram and is likely a familiar sight. Hopefully working through the R code and running it yourself will help you interpret cobweb plots, as well as bifurcation diagrams. It is really quite amazing how the simple-looking logistic map equation can lead to such interesting behaviour.
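The period-doubling sequence is also easy to check numerically in any language. Here is a small Python sketch (the function name and parameters are my own, not from the post): it iterates the map past its transient and counts the distinct values the orbit settles onto.

```python
def logistic_attractor(r, x0=0.2, burn=2000, keep=64):
    """Iterate x -> r*x*(1-x) past the transient, then collect the
    distinct (rounded) values the orbit keeps visiting."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

print(len(logistic_attractor(2.5)))   # stable fixed point at 1 - 1/r = 0.6
print(len(logistic_attractor(3.2)))   # stable period-2 orbit
print(len(logistic_attractor(3.9)))   # chaotic band: many distinct values
```

For r = 2.5 the orbit collapses to the single fixed point 1 - 1/r; at r = 3.2 it alternates between two values; by r = 3.9 it wanders over a whole band, which is exactly what the green smear in the bifurcation diagram shows.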
DESY 93-133
hep-lat/9309020  28 Sep 1993

A Portable High-Quality Random Number Generator for Lattice Field Theory Simulations

Martin Luscher
Deutsches Elektronen-Synchrotron DESY, Notkestrasse 85, D-22603 Hamburg, Germany

The theory underlying a proposed random number generator for numerical simulations in elementary particle physics and statistical mechanics is discussed. The generator is based on an algorithm introduced by Marsaglia and Zaman, with an important added feature leading to demonstrably good statistical properties. It can be implemented exactly on any computer complying with the IEEE-754 standard for single precision floating point arithmetic.

1. Introduction

Numerical simulations in elementary particle physics and statistical mechanics are increasingly performed on massively parallel computers. These machines offer unmatched computing power, thus making it possible to simulate larger systems and to achieve greater statistical precision. It is well-known that the random number generators employed in these computations can be a source of systematic error. In fact many of the popular generators used to date failed to give correct results in some recent simulations of the 2-dimensional Ising model [1,2]. While the Ising model is a rather special case, with unusual regularity, the lesson clearly is that random number generators should be chosen with care, especially when one aims for high-precision results.

The generator discussed in this paper derives from an algorithm originally proposed by Marsaglia and Zaman [3]. It has a very long period and excellent statistical properties on short and long time scales. The quality of the generator is established using some mathematical results on chaotic dynamical systems, the spectral test and a number of empirical tests.
The algorithm has been implemented on the APE-100, a parallel computer now intensively used in elementary particle physics (for a short description and guide to the literature see ref. [4]). One may also easily write a FORTRAN code for the generator, which will run correctly on any machine complying with the IEEE-754 standard for single precision floating point arithmetic.

The definition and basic properties of the Marsaglia-Zaman algorithm are reviewed in sect. 2. For appropriately chosen parameters the period of the generator can be proved to be very large [3]. Its statistical properties are however not as good as initially assumed. In particular, the generator fails in the classical gap test [5] and an unfavourable lattice structure in the distribution of random numbers in high dimensions has been discovered [6,7].

The important new observation made in this paper is that the Marsaglia-Zaman algorithm is closely related to a dynamical system, which is known to be chaotic in a strong sense (it is a so-called K-system [11]). One then infers that the correlations detected in the gap test, for example, are short ranged in time. A sequence of random numbers with much better statistical properties is therefore obtained by picking out elements of the original sequence at time intervals greater than the correlation time. All this is explained in sects. 3 and 4, and the quality of the so improved generator is evaluated in sect. 5. Implementation details and timing benchmarks for various machines are included for completeness.

2. Marsaglia-Zaman generator

The random number generator defined below is based on a so-called subtract-with-borrow algorithm [3]. For the particular choice of parameters specified in subsect. 2.4 the generator is known by the name of RCARRY [8].

2.1 Definition

Let b be an arbitrary integer greater than 1, referred to as the base, and define X to be the set of integers x satisfying 0 <= x < b.
The algorithm generates a random sequence x_0, x_1, x_2, ... of elements of X recursively, together with a sequence c_0, c_1, c_2, ... of "carry bits". The latter take values 0 or 1 and are used internally, i.e. the interesting output of the algorithm are the numbers x_n, or rather x_n/b, if one requires random numbers uniformly distributed between 0 and 1. The recursion involves two fixed lags, r and s, which are assumed to satisfy r > s >= 1. For n >= r one first computes the difference

  Delta_n = x_{n-s} - x_{n-r} - c_{n-1},   (2.1)

and then determines x_n and c_n through

  x_n = Delta_n,      c_n = 0,  if Delta_n >= 0,
  x_n = Delta_n + b,  c_n = 1,  if Delta_n < 0.   (2.2)

It is trivial to verify that x_n is contained in X if x_{n-s} and x_{n-r} are and if c_{n-1} is 0 or 1. The name "carry bit" for c_n is now quite intuitive, since c_n simply indicates whether a shift by the base b was necessary when computing x_n. To start the recursion, the first r values x_0, x_1, ..., x_{r-1} together with an initial carry bit c_{r-1} must be provided. The configurations

  x_0 = x_1 = ... = x_{r-1} = 0,      c_{r-1} = 0,   (2.3)
  x_0 = x_1 = ... = x_{r-1} = b - 1,  c_{r-1} = 1,   (2.4)

should be avoided, because the algorithm yields uninteresting sequences of numbers in these cases. All other choices of initial values are admitted in the following and we shall then say that the generator has been properly initialized.

2.2 Period of the generator

For some values of the base b and the lags r, s, the period of the sequence generated through eqs. (2.1), (2.2) can be determined rigorously. Define

  m = b^r - b^s + 1   (2.5)

and let q be the smallest positive integer such that

  b^q = 1 mod m.   (2.6)

The existence of q is guaranteed since m and b are relatively prime. An important mathematical result of Marsaglia and Zaman now is [3]

Theorem 2.1. If m is a prime number, the period of the generator defined through eqs. (2.1), (2.2) is equal to q. More precisely, if the generator has been properly initialized, the following is true.
1. For all n >= r we have x_{n+q} = x_n.
2.
Any number p, such that x_{n+p} = x_n for more than r successive values of n, is an integer multiple of q.

It should be emphasized that the period is independent of the chosen initial values x_0, x_1, ..., x_{r-1}. Note that this particular string of numbers may not occur anywhere else in the sequence, i.e. in general the algorithm gets into a loop only after the first r updates have been made. Another comment is that the period of the generator must be expected to depend on the initial values, if m is not prime. Such generators are not safe and should be avoided unless all periods can be shown to be large.

2.3 Associated linear congruential generator

The algorithm of Marsaglia and Zaman is closely related to the standard linear congruential generator with multiplier

  a = m - (m - 1)/b   (2.7)

and modulus m [6]. Such generators have been studied vigorously in the past and we shall later rely on some of this theory when we discuss the statistical properties of the random number sequence produced by the Marsaglia-Zaman algorithm.

The linear congruential generator alluded to above operates on the set of all integers y in the range 0 < y < m. Starting from an initial value y_0, a sequence of random numbers y_0, y_1, y_2, ... is obtained recursively through

  y_n = a y_{n-1} mod m.   (2.8)

The multiplier a satisfies

  a b = 1 mod m   (2.9)

and the recursion is thus equivalent to

  y_n = b y_{n+1} mod m.   (2.10)

It is not difficult to show that the period of the sequence is equal to q if m is prime.

The relation between this generator and the Marsaglia-Zaman generator is summarized by [6]

Theorem 2.2. Let (x_n)_{n>=0} be a sequence of random numbers generated through the Marsaglia-Zaman algorithm, with carry bits (c_n)_{n>=0} and proper initial values. Then, for all n >= r, the integers

  y_n = sum_{k=0}^{r-1} x_{n-r+k} b^k - sum_{k=0}^{s-1} x_{n-s+k} b^k + c_{n-1}   (2.11)

are in the range 0 < y_n < m. Moreover the relation

  b y_{n+1} - y_n = m x_n   (2.12)

holds and the sequence (y_n)_{n>=r} is thus generated through the recursion (2.8).
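Because Python integers have arbitrary precision, theorem 2.2 can be checked directly on a computer: run the subtract-with-borrow recursion, build y_n from eq. (2.11), and confirm eq. (2.12). The following sketch is an editorial illustration, not part of the original paper; the variable names are mine.

```python
b, r, s = 2**24, 24, 10
m = b**r - b**s + 1                      # eq. (2.5)

# Run the recursion (2.1)-(2.2), keeping the full history of x_n and c_n.
xs = list(range(1, r + 1))               # proper (not all-equal) initial values
cs = [0] * r                             # initial carry bit c_{r-1} = 0
for n in range(r, 200):
    delta = xs[n - s] - xs[n - r] - cs[n - 1]
    xs.append(delta + b if delta < 0 else delta)
    cs.append(1 if delta < 0 else 0)

def y(n):                                # eq. (2.11)
    return (sum(xs[n - r + k] * b**k for k in range(r))
            - sum(xs[n - s + k] * b**k for k in range(s))
            + cs[n - 1])

for n in range(r, 150):
    assert 0 < y(n) < m                  # range claimed by theorem 2.2
    assert b * y(n + 1) - y(n) == m * xs[n]   # eq. (2.12)
```

The second assertion holds exactly, with 576-bit integers on both sides, which makes the equivalence between the subtract-with-borrow recursion and the huge-modulus linear congruential generator quite tangible.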
The theorem shows at once that the Marsaglia-Zaman algorithm is essentially a clever way to implement certain linear congruential generators with huge moduli. Manipulations of large integers are avoided by breaking them up into a vector of smaller numbers which are then processed one by one.

2.4 Choice of parameters

Most computers used for large scale numerical simulations have been designed to yield maximum performance for floating point operations. The parameters b, r and s should thus be chosen so as to be able to implement the generator using floating point arithmetic.

Single precision real numbers on computers complying with the IEEE-754 standard are represented by a string of 32 bits, with 23 bits reserved for the mantissa and the rest for the sign and exponent of the number. Signed integers of absolute magnitude up to 2^24 can thus be dealt with exactly on such machines using floating point arithmetic. So if we choose

  b = 2^24,   (2.13)

all elements of X (and b itself) will be computer representable numbers. As for the lags r and s, we take

  r = 24,  s = 10,   (2.14)

a choice proposed by Marsaglia and Zaman [3] and recommended by James [8]. The difference Delta_n in the recursion (2.2) then is

  Delta_n = x_{n-10} - x_{n-24} - c_{n-1},   (2.15)

and 24 integers x_0, x_1, ..., x_23 in the range 0 <= x_k < 2^24 plus a carry bit c_23 are required to initialize the generator.† Note that no rounding occurs in the computation of Delta_n, since the final and intermediate results are representable numbers, i.e. the algorithm is implemented exactly.

The modulus m and multiplier a for this choice of parameters are given by

  m = 2^576 - 2^240 + 1,   (2.16)
  a = 2^576 - 2^552 - 2^240 + 2^216 + 1.   (2.17)

Using elementary number theory and the complete decomposition of m - 1 into prime factors, it is possible to prove that m is a prime number [3].

† The FORTRAN code for this algorithm printed in ref. [8] contains an error. A correct program is obtained by interchanging the indices I24 and J24 in the line UNI=SEEDS(I24)-SEEDS(J24)-CARRY [9].

The
period of the generator is thus determined by theorem 2.1. Some further work then yields

  q = (m - 1)/48 ~ 5.2 x 10^171,   (2.18)

which is a very long period indeed. There is no chance that, on any earthly computer, one will ever come close to exhausting this sequence of random numbers.

In the following the parameters of the generator are assumed to be as specified above. The reader should however meet no difficulty in carrying over the discussion to any other case of interest.

3. Origin of statistical correlations

The Marsaglia-Zaman generator is now known to fail in several empirical tests of randomness, a particularly simple case being the gap test ([5]; for a lucid description of the test see ref. [10], p. 60f). As explained below there are in fact some rather obvious correlations between successive vectors of r random numbers. They are seen most clearly when the generator is described in the language of dynamical systems.

3.1 Geometrical preliminaries

The unit hyper-cube in r dimensions is the set of all vectors

  v = (v_0, v_1, ..., v_{r-1})   (3.1)

with real components between 0 and 1. If opposite faces of the hyper-cube are identified one obtains an r dimensional torus T^r. The points on this manifold are also represented by vectors v, as above, with the understanding that v and w describe the same point if v_k = w_k mod 1 for all k. T^r contains a discrete subset, Ṫ^r, which consists of all vectors v with components of the form

  v_k = n_k/b,  n_k = 0, 1, 2, ..., b - 1.   (3.2)

Ṫ^r is an r dimensional hyper-cubic lattice with spacing 1/b, which may be regarded as a discrete approximation of the torus. The distance between any two points v and w on T^r is defined through

  d(v, w) = max_k d_k,  d_k = min(|v_k - w_k|, 1 - |v_k - w_k|).   (3.3)

It is straightforward to check that d has all the properties required for a decent distance function on T^r. In particular, it is invariant under translations modulo 1.
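As an editorial aside, the metric (3.3) takes only a line of code; a Python sketch (the function name is mine):

```python
def torus_dist(v, w):
    """Distance on the torus T^r, eq. (3.3): the largest per-coordinate
    separation, each measured the short way around the circle."""
    return max(min(abs(a - c), 1 - abs(a - c)) for a, c in zip(v, w))

# Two points that are far apart in the unit square but close on the torus:
print(torus_dist((0.1, 0.9), (0.9, 0.1)))   # 0.2 in each coordinate, up to rounding
```

Each coordinate pair is 0.8 apart in the plane but only 0.2 apart once opposite faces are identified, so the torus distance is 0.2.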
3.2 The Marsaglia-Zaman generator as a dynamical system

Let us now consider a sequence of random numbers x_0, x_1, x_2, ... generated through the Marsaglia-Zaman algorithm, with carry bits (c_n)_{n>=0} and proper initial values. The vectors

  v(t) = (x_n, x_{n+1}, ..., x_{n+r-1})/b,  n = rt,   (3.4)

define a point on the (discrete) torus T^r which moves as the "time" t progresses from 0 in steps of 1. If we also introduce a time dependent carry bit,

  c(t) = c_{rt+r-1},   (3.5)

it is clear that the evolution of v(t) and c(t) is determined by the recursion (2.1), (2.2).

We are thus led to interpret the Marsaglia-Zaman generator as a discrete dynamical system, consisting of a set S of states and a mapping τ : S -> S. A state is defined by a point on the discrete torus and a carry bit. τ maps any such state onto the next one, viz.

  (v(t+1), c(t+1)) = τ(v(t), c(t)).   (3.6)

Note that τ does not refer to any of the previous states. One only needs to know the current state to be able to compute the next one.

3.3 Continuity and statistical correlations

For a good generator one requires that successive vectors of random numbers be statistically independent. That is, if (v, c) runs through all possible states, the joint distribution of (v, c) and τ(v, c) should be uniform on S x S. Of course this cannot be true since τ operates on a finite set of states. The distribution is at best approximately uniform. Since one can only generate a relatively small number of states in practice, one is anyway unable to test the distribution very precisely. One should however be worried by correlations that are strong enough to give a measurable effect in any simple statistical test.

We now show that such correlations exist. Let us first ignore the carry bits. The recursion (2.1), (2.2) then reads

  x_n = x_{n-s} - x_{n-r} mod b   (3.7)

and τ becomes a linear transformation of the torus. An important consequence of this fact is that nearby points are mapped onto nearby ones.
So if one chooses a set of random points v in some small volume, their successors τ(v) are contained in some other small volume. In particular, they are not scattered over the whole torus, as one would expect if τ(v) were statistically independent of v.

The carry bits only affect the least significant digits of the random numbers and so cannot destroy the basic continuity of τ. More precisely, if we define

  (v̂, ĉ) = τ(v, c),   (3.8)

it is possible to show that

  d(v̂, ŵ) <= 4 d(v, w) + 3/b.   (3.9)

The distance between two points on T^r thus increases by at most a factor 4 plus 3 lattice spacings. In particular, small regions are mapped onto small regions and so we again conclude that successive vectors of random numbers are strongly correlated.

It should be emphasized that the effects caused by these correlations are readily seen in empirical tests. In particular, the failure of the Marsaglia-Zaman generator in the gap test can be explained in this way. Note, incidentally, that similar correlations are present in all lagged Fibonacci generators using addition or subtraction as the binary operation.

4. Deterministic chaos

A characteristic feature of chaotic dynamical systems is that trajectories starting at nearby states diverge exponentially with time. Even if the evolution is locally continuous, such a system appears to behave randomly on larger time scales. One could also say that any state specified to some finite precision has an exponentially deteriorating memory of its history. We now show that the dynamical system underlying the Marsaglia-Zaman generator is chaotic in this sense.

4.1 Numerical experiment

It is helpful to start with a simple experiment illustrating the chaotic nature of the mapping τ. The experiment consists in choosing a random sample of 1000 pairs of trajectories (v(t), c(t)) and (v'(t), c'(t)), with initial values separated by 1 lattice spacing, viz.
  d(v(0), v'(0)) = 1/b.   (4.1)

One then computes the average distance

  δ(t) = <d(v(t), v'(t))>   (4.2)

as a function of the evolution time t. Fig. 1 shows that the trajectories are rapidly diverging. In the range 4 <= t <= 16 the data are well described by

  δ(t) = A e^{νt},  A = 5 x 10^{-8},   (4.3)

i.e. the separation is growing exponentially with a rate close to 1. Around t = 17, δ(t) levels off and assumes a value equal to 12/25 within statistical errors. This is the average distance between two randomly chosen points on the torus, thus indicating that v(t) and v'(t) are no longer correlated.

Fig. 1. Average distance δ(t) between neighbouring trajectories as a function of the evolution time t.

4.2 Continuum limit

For the further study of deterministic chaos it is now useful to pass to the continuum limit 1/b -> 0, where the space of states S becomes equal to the full torus T^r and the carry bit is neglected. This is an accurate approximation to the discrete system on short time scales and if all distances of interest are much greater than the lattice spacing. In particular, the evolution of diverging trajectories can be expected to be correctly described when they are sufficiently far apart.

In the continuum limit the mapping τ reduces to

  τ(v) = L^r v mod 1,   (4.4)

where L is the linear transformation

  L v = (v_1, v_2, ..., v_{r-1}, v_{r-s} - v_0).   (4.5)

L can be considered an r x r matrix with entries 0, 1 and -1. It is then trivial to verify that det L = 1 and τ is hence invertible and volume preserving.

According to the established mathematical terminology, the continuum system (T^r, μ, τ) (where μ denotes the standard measure on T^r) is a classical dynamical system. The occurrence of chaos in such systems has been studied extensively and many deep results have been obtained. In the rest of this section the system (T^r, μ, τ) will be discussed from the point of view of the mathematical theory.
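The divergence experiment of subsect. 4.1 is easy to reproduce in miniature. The Python sketch below is an editorial illustration with my own helper names and a single trajectory pair rather than 1000: two generators start with states differing in one entry by one unit, and their distance in the metric (3.3) is tracked per time step (r = 24 element updates).

```python
import random

B, R, S = 2**24, 24, 10

def step(xs, c):
    """One subtract-with-borrow update, eqs. (2.1)-(2.2)."""
    d = xs[-S] - xs[-R] - c
    x, c = (d + B, 1) if d < 0 else (d, 0)
    xs.append(x)
    xs.pop(0)
    return c

def dist(v, w):
    """Metric (3.3) on the discrete torus, with coordinates x/B."""
    return max(min(abs(a - b), B - abs(a - b)) for a, b in zip(v, w)) / B

random.seed(0)
xs1 = [random.randrange(B) for _ in range(R)]
xs2 = list(xs1)
xs2[0] = (xs2[0] + 1) % B        # separate the states by one lattice spacing
c1 = c2 = 0
ds = []
for t in range(25):
    for _ in range(R):           # one time step t = R element updates
        c1 = step(xs1, c1)
        c2 = step(xs2, c2)
    ds.append(dist(xs1, xs2))
print(ds[0], ds[12], ds[-1])
```

Starting from a separation of one lattice spacing (about 6 x 10^{-8}), the distance grows by orders of magnitude and saturates near the typical distance between random points, in line with fig. 1.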
Although no previous knowledge on dynamical systems is required, the reader may now find it useful to consult one or the other book on the subject such as refs. [11-13], for example.

4.3 Liapunov exponent

In the continuum system the exponential rate of divergence of neighbouring trajectories can be computed analytically as follows. Suppose v(t) and v'(t) are two trajectories such that their distance is very much smaller than 1 at t = 0. Let us define the difference vector

  u(t) = v'(t) - v(t) mod 1,  -1/2 < u_k(t) <= 1/2.   (4.6)

It is clear that the norm of this vector,

  ||u(t)|| = max_k |u_k(t)|,   (4.7)

is equal to the distance between the trajectories at time t. Furthermore, from eq. (4.4) one infers that

  u(t+1) = L^r u(t)   (4.8)

if ||L^r u(t)|| < 1/2, a condition which is satisfied as long as the trajectories are sufficiently close. The dominant exponential growth of the deviation vector u(t) is hence determined by the largest eigenvalues of L. The characteristic equation of L,

  λ^r - λ^{r-s} + 1 = 0,   (4.9)

can easily be solved numerically and one finds that all eigenvalues are complex and non-degenerate. There are 4 eigenvalues with maximal absolute value given by

  |λ|_max = 1.04299...   (4.10)

Now if the initial deviation vector u(0) has a non-zero component in the direction of the corresponding eigenvectors (which is the generic case), one concludes that

  ||u(t)|| ∝ e^{νt}   (4.11)

at large times t, where

  ν = r ln |λ|_max = 1.01027...   (4.12)

Of course eq. (4.11) only holds as long as the evolution equation (4.8) applies. By considering smaller and smaller initial deviations, this condition will be fulfilled for any desired length of time. Eq. (4.11) then becomes asymptotically exact.

The exponent ν is referred to as the Liapunov exponent of the system. As already noted in subsect. 4.2, the evolution of diverging trajectories in the discrete system is expected to be accurately described by the continuum system.
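The growth rate can also be estimated without any eigenvalue package, by power iteration: iterate the linear recursion u_n = u_{n-s} - u_{n-r} on real numbers and measure the average logarithmic growth of the norm. The sketch below is an editorial illustration (function name, seed, and step count are mine).

```python
import math
import random

def growth_rate(r=24, s=10, steps=60000):
    """Estimate ln|lambda|_max for u_n = u_{n-s} - u_{n-r} by power
    iteration on a window of the last r values, with renormalization."""
    random.seed(1)
    u = [random.uniform(-1.0, 1.0) for _ in range(r)]
    log_norm = 0.0
    for n in range(1, steps + 1):
        u.append(u[-s] - u[-r])
        u.pop(0)
        if n % 100 == 0:                 # renormalize to avoid overflow
            norm = max(abs(x) for x in u)
            log_norm += math.log(norm)
            u = [x / norm for x in u]
    return log_norm / steps

lam_max = math.exp(growth_rate())        # approaches 1.04299...
nu = 24 * math.log(lam_max)              # Liapunov exponent, eq. (4.12)
print(lam_max, nu)
```

After a short transient the per-update growth settles at |λ|_max of eq. (4.10), and multiplying its logarithm by r = 24 reproduces the Liapunov exponent ν of eq. (4.12) to good accuracy.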
A comparison of the result of the experiment, eq. (4.3), with the value of the Liapunov exponent confirms this. We have thus shown that the chaotic behaviour of the Marsaglia-Zaman generator can be traced back to the instability of the underlying lagged Fibonacci generator.

4.4 Kolmogorov entropy and mixing

The continuum system (T^r, μ, τ) can be proved to belong to a class of strongly unstable systems. While the relevance of this remark for the discrete system is not completely obvious, it does provide some further insight into how repeated application of a smooth mapping can lead to randomness.

The mapping τ is in many respects similar to the famous cat map of Arnold. In particular, under the action of τ the torus is stretched in r/2 directions and shrunk in r/2 complementary directions. After many iterations any region in T^r (a cat's body, for example) is first made very long and thin and then wrapped on the torus. As a result the region is scattered over the whole manifold.

These heuristic remarks can be made much more precise and it is then possible to show, using the theorems discussed in ref. [11], that (T^r, μ, τ) is a so-called K-system. This means that it has a positive Kolmogorov entropy and that consequently it is mixing and ergodic. The property of mixing is particularly intuitive. It states that

  μ(A ∩ τ^t(B)) -> μ(A) μ(B)  as t -> infinity,   (4.13)

for all measurable sets A, B. In other words, if the set B is evolved for a long time, it will be uniformly distributed over the torus and thus occupies a fraction μ(B) of every other set A (recall that τ is volume preserving).

The Kolmogorov entropy is a substantially more difficult notion. Basically it is the rate at which the knowledge about the system is lost as it evolves from an only imprecisely specified initial state. A positive entropy thus implies that one loses information exponentially fast.

5. Improved generator

The important qualitative implication of the chaotic nature of τ is that the correlations discovered in sect.
3 are short ranged in time. A sequence of random numbers with significantly better statistical properties is therefore obtained by keeping only a fraction of the full sequence of numbers produced by the Marsaglia-Zaman algorithm. The precise rule is given below and several statistical tests are performed to confirm the expected improvement.

5.1 Definition

We again start from a sequence of random numbers x_0, x_1, x_2, ... generated through the Marsaglia-Zaman algorithm, with carry bits (c_n)_{n>=0} and proper initial values. Instead of using all numbers x_n, we now read r successive elements of the sequence, discard the next p - r numbers, read r numbers, and so on. The integer p >= r is a fixed parameter which allows us to monitor the fraction of random numbers "thrown away". In particular, the old generator corresponds to p = r, where no numbers are discarded.

The numbers selected in this manner define a history of states (v(t), c(t)),

  v(t) = (x_n, x_{n+1}, ..., x_{n+r-1})/b,  n = pt,
  c(t) = c_{n+r-1}.   (5.1)

As before the time evolution is generated by a well-defined mapping τ_p : S -> S such that

  (v(t+1), c(t+1)) = τ_p(v(t), c(t)).   (5.2)

In the continuum limit τ_p reduces to the linear transformation

  τ_p(v) = L^p v mod 1,   (5.3)

where L is given by eq. (4.5).

The discussion in sect. 4 now suggests that deterministic chaos leads to a complete decorrelation of successive states for values of p greater than about 16r = 384. For such p the corresponding sequence of random numbers is expected to possess excellent statistical properties. In practice one may be satisfied with a smaller value of p, as a full decorrelation, down to the level of the least significant bits, may in many cases be unnecessary. The statistical tests reported in the following subsections help to clarify the situation and a more definite recommendation as to which value of p to choose will be issued after that.
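The decimation rule can be sketched in a few lines of Python. This is an editorial illustration of the idea behind the improved generator, not James's actual RANLUX code; the function and parameter names are mine.

```python
B, R, S = 2**24, 24, 10

def raw_stream(state, carry):
    """Endless subtract-with-borrow stream, eqs. (2.1)-(2.2)."""
    state = list(state)
    while True:
        d = state[-S] - state[-R] - carry
        x, carry = (d + B, 1) if d < 0 else (d, 0)
        state.append(x)
        state.pop(0)
        yield x

def decimated(n_blocks, p=223, seed_state=tuple(range(1, R + 1))):
    """Deliver n_blocks blocks of R numbers in [0,1), discarding
    p - R raw numbers between blocks (sect. 5.1)."""
    gen = raw_stream(seed_state, 0)
    blocks = []
    for _ in range(n_blocks):
        blocks.append([next(gen) / B for _ in range(R)])
        for _ in range(p - R):
            next(gen)                    # thrown away
    return blocks

# With p = 2R the generator keeps every other block of the raw sequence:
plain = decimated(3, p=R)
skip = decimated(2, p=2 * R)
assert skip[0] == plain[0] and skip[1] == plain[2]
```

Setting p = R recovers the old generator, while larger p simply skips further ahead in the raw sequence between delivered blocks, which is what lets the chaotic dynamics decorrelate successive states.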
5.2 Spectral test

For any state (v, c) an integer y in the range 0 < y < m may be defined by

  y = sum_{k=0}^{r-1} v_k b^{k+1} - sum_{k=0}^{s-1} v_{r-s+k} b^{k+1} + c,   (5.4)

where v_0, v_1, ..., v_{r-1} are the components of v (cf. theorem 2.2). y should be regarded as an observable constructed from the given state. In particular, a trajectory (v(t), c(t)) of states, generated by the mapping τ_p, is associated with a sequence of values y(t). Theorem 2.2 tells us that

  y(t+1) = a^p y(t) mod m,   (5.5)

i.e. τ_p is related to a linear congruential generator with modulus m and multiplier a^p mod m. The multi-dimensional distributions of y(t) can be studied by applying the powerful spectral test for linear congruential generators. The test effectively probes the statistical independence of successive states (v(t), c(t)), since any correlation between the values of y(t) can be regarded as a correlation among the corresponding states.

For a detailed description of the spectral test the reader is referred to Knuth's book [10]. Here we merely introduce the necessary notations and discuss the results of the test. An infamous property of linear congruential generators is that vectors of D successive random numbers fall on parallel hyper-planes with often appreciable spacing. The spectral test consists in calculating the maximal spacing h_D, or rather the "accuracy" ν_D = 1/h_D, for low dimensionalities D. The outcome of the spectral test may be rated through the figures of merit

  μ_D = π^{D/2} (ν_D)^D / (Γ(D/2 + 1) m).   (5.6)

Good generators achieve values of μ_D greater than 1 for say D = 2, ..., 6.

Table 1. Merits μ_D of some generators with modulus m and multiplier a^p mod m

  p     D=2    D=3    D=4    D=5        D=6          D=7        D=8
  48    0.20   0.07   0.03   9x10^-23   5.08x10^-33  2x10^-33   2x10^-31
  96    2.67   1.04   1.64   0.04       1.60         0.14       0.10
  192   1.82   0.67   0.70   1.53       2.69         4.78       1.54
  384   0.56   0.82   2.30   1.56       0.84         4.60       0.29
  768   1.63   2.59   3.08   0.59       0.96         1.29       1.12
  223   1.80   0.87   2.39   3.79       2.29         0.78       2.29
  389   2.27   3.46   3.92   2.49       2.98         4.23       0.46

[m and a are given by eqs. (2.16), (2.17)]
On the other hand, if the merit is significantly smaller than 0.1 for some of these dimensions, one has picked a particularly bad multiplier.

The results of the spectral test are listed in table 1. The first line corresponds to the original generator where no random numbers are discarded. As already noted in refs. [6,7], there are strong correlations between successive values of the observable y in this case, for any dimensionality D. Evidently this generator is a poor source of random numbers. In general the merits are quite acceptable for p greater than about 200. The merits for two favoured values around 200 and 400 are listed in the last two lines of table 1. All this is very much in line with what one expects from deterministic chaos. It should however be emphasized that the spectral test is a full period test, while the decorrelation through diverging trajectories takes place on short time scales.

5.3 Further statistical tests

a. Serial correlation test. This test is applied to the associated linear congruential generator. It is a full-period theoretical test, where one computes the correlation coefficient between successive values of y exactly (see ref. [10] for further explanations). For values of p greater than about 100 it is passed.

b. Gap test. In ref. [5] the original generator (p = 24) has been subjected to a large number of empirical tests. All tests were passed with the exception of the gap test. This test has now been repeated for various values of p, with the same test parameters, and no significant statistical correlations were detected for p >= 48.

c. Ising model. Simulations of the 2-dimensional Ising model, using cluster algorithms, have proved to be a particularly sensitive test of random number generators [1,2]. Such a test has recently been performed by Wolff [14] for p = 223 and p = 389. In both cases no discrepancy between the simulation data and the exact analytic results was found.

d. SU(2) lattice gauge theory.
The generator with p = 223 is now being used in some high-precision calculations of the running coupling in the SU(2) lattice gauge theory [15]. So far all results obtained are compatible with earlier computations where shift register generators were employed.

5.4 Recommended values of p

From the theoretical discussion and the tests of the improved generator one concludes that the remaining statistical correlations are small when p is greater than about 200. The recommended default value is p = 223, and if one has any doubts that the simulation results might be biased by the random number generator, one may still set p = 389. A decorrelation of successive vectors of r random numbers down to the least significant digits is then guaranteed. To take still larger values of p appears to be pointless, since no empirical test or theoretical consideration indicates that a further improvement will be achieved.

Table 2. Average time needed to produce 1 new random number (p = 223)

  machine            time [μs]
  SUN 10-41          5
  HP 9000/735        2
  CRAY YMP (1 CPU)   0.7
  APE-100 (1 node)   5

5.5 Implementation and timing

As discussed in sect. 2, the Marsaglia-Zaman algorithm can be implemented exactly using single precision floating point arithmetic. If random numbers between 0 and 1 are desired, it is advantageous to work directly with the numbers x_n/b instead of x_n. No rounding is implied by this renormalization since b is a power of 2, i.e. the implementation remains exact.

A portable FORTRAN code for the improved generator has been developed by James [16] and is available through the CPC library. The name of the program is RANLUX. It comes with an initialization subroutine and further entry points to save and read the state of the generator.

The generator has also been implemented on the APE-100 parallel computer [17]. The program may be obtained through anonymous ftp by dialing 141.108.16.27 and copying the contents of the directory pub/random, or by writing to the author (luscher@ips102.desy.de).
Since one uses only a fraction of the basic sequence of random numbers, the improved generator tends to be slow. For numerical simulations of lattice field theories, where large quantities of random numbers are requested, it is hence important to take full advantage of any pipelining capabilities of the hardware. A difficulty here is that the Marsaglia-Zaman recursion (2.1),(2.2) refers to the carry bit c_{n-1} computed in the preceding step and so is not suitable for vectorization. The problem can be overcome by running several copies of the generator in parallel, with different initial values. The arithmetic operations are then pipelined horizontally, i.e. when looping over the copies. On the APE-100, for example, a good efficiency is achieved with 24 copies on each node. Some care should of course be taken to properly initialize the generators. In view of the astronomical period of the generator, the chances that any two of the copies yield significantly correlated random numbers are however extremely slim.

Some timing benchmarks for the improved generator with p = 223 are listed in table 2 [14,17]. The programs were written in FORTRAN and APESE, a high-level language for the APE-100. It is obvious that the numbers quoted depend on many technical details. They should hence be interpreted as a rough estimate of what can be achieved with a modest programming effort.

6. Concluding remarks

A well-known problem with random number generators is that their quality is difficult to assess in any rigorous way. Some confidence in the reliability of any given generator can of course be gained by performing a large number of statistical tests. But doubts will always remain that the generator might fail in the next test.

There exists an impressive list of classical dynamical systems which have been shown to be strongly chaotic. The states in these systems move randomly on time scales substantially greater than a certain characteristic time, related to the Liapunov exponent of the system.
It should be emphasized that randomness can be given a precise mathematical meaning in this framework. The random number generator discussed in this paper may be considered a discrete approximation to such a chaotic dynamical system. A theoretical understanding of why the algorithm yields statistically independent random numbers is thus obtained. On longer time scales, theoretical support for the good quality of the generator comes from the spectral test and the fact that the period can be shown to be extremely long.

One might object that the generator is too slow for large scale applications. But other parts of the program are often much more costly, so that the extra computer time needed for the generator is insignificant. One may also prefer to pay the price rather than take any risk of producing corrupted data, especially when spending months of parallel computer time on a single project.

I would like to thank Ulli Wolff for performing the Ising model tests and providing some of the timing benchmarks quoted in table 2. I am also indebted to Fred James for much useful information and constant encouragement. Helpful discussions with Kari Kankaala, Rainer Sommer, Marcus Speh, Frank Steiner and Peter Weisz are gratefully acknowledged.

[1] A. M. Ferrenberg, D. P. Landau and Y. J. Wong, Phys. Rev. Lett. 69 (1992) 3382
[2] P. D. Coddington, Analysis of Random Number Generators Using Monte Carlo Simulation, preprint, Northeast Parallel Architectures Center, Syracuse University (1993)
[3] G. Marsaglia and A. Zaman, Ann. Appl. Prob. 1 (1991) 462
[4] E. Marinari, Nucl. Phys. B (Proc. Suppl.) 30 (1993) 122
[5] I. Vattulainen, K. Kankaala, J. Saarinen and T. Ala-Nissila, A Comparative Study of Some Pseudorandom Number Generators, preprint, University of Helsinki HU-TFT-93-22, hep-lat/9304008
[6] S. Tezuka, P. L'Ecuyer and R. Couture, On the Lattice Structure of the Add-With-Carry and Subtract-With-Borrow Random Number Generators, preprint (1993)
[7] R. Couture and P.
L'Ecuyer, On the Lattice Structure of Certain Linear Congruential Sequences Related to AWC/SWB Generators, preprint
[8] F. James, Comp. Phys. Commun. 60 (1990) 329
[9] F. James, private communication (1993)
[10] D. E. Knuth, Semi-Numerical Algorithms, in: The Art of Computer Programming, vol. 2, 2nd ed. (Addison-Wesley, Reading MA, 1981)
[11] V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics (Addison-Wesley, Redwood City, 1989)
[12] H. G. Schuster, Deterministic Chaos, 2nd ed. (VCH Verlagsgesellschaft, Weinheim, 1989)
[13] A. M. Ozorio de Almeida, Hamiltonian Systems: Chaos and Quantization (Cambridge University Press, Cambridge, 1988)
[14] U. Wolff, private communication (1993)
[15] R. Frezzotti, M. Guagnelli, M. Luscher, R. Petronzio, R. Sommer, P. Weisz and U. Wolff, work in progress
[16] F. James, Comp. Phys. Commun., to appear
[17] M. Luscher, A Random Number Generator for the APE-100 Parallel Computer, unpublished internal report (June 1993)
│               │ Thursday  │ Friday    │ Saturday  │ Sunday        │
│ 9:00 - 10:00  │ Ozsváth   │ Kotschick │ Kotschick │ Schönenberger │
│ 10:15 - 11:15 │ Ozsváth   │ Bourgeois │ Braun     │ Szabó         │
│ 11:30 - 12:30 │ Strle     │ Colin     │ Némethi   │               │
│ 14:30 - 15:30 │           │ Geiges    │ Rollin    │               │
│ 15:45 - 16:45 │ Jacobsson │ Lisca     │ Bobadilla │               │

Javier Fernandez Bobadilla: TBA

Frederic Bourgeois: Fundamental group of the space of tight contact structures on torus bundles
We show that the fundamental group of the space of tight contact structures on torus bundles coincides with the infinite cyclic subgroup described by Geiges and Gonzalo. The proof uses results on characteristic foliations, convex surfaces and bypasses by Giroux and Honda. This work was initiated jointly with Fabien Ngo.

Gábor Braun: Recovering a surface singularity from its resolution graph
Let us consider an isolated surface singularity at the origin given by a polynomial in three variables. For Newton non-degenerate singularities whose link is a rational homology sphere, we give an algorithm which computes the singularity from the resolution graph.

Vincent Colin: Reeb vector fields and open book decompositions: the periodic case
We prove that any contact structure supported by an open book whose monodromy is (isotopic to) a periodic diffeomorphism satisfies the Weinstein conjecture. The approach is to study holomorphic curves for a particularly nice Reeb vector field. It also allows one to deal with the topology of the manifold. This is joint work with Ko Honda.

Hansjörg Geiges: On the classification of Legendrian knots
This is a report on joint work with Fan Ding about certain knot and link types whose Legendrian realisations (e.g. in the 3-sphere with its standard contact structure) are classified by the two classical invariants (Thurston-Bennequin invariant and rotation number). I shall try to present the general scheme (due to Etnyre and Honda) behind such classification results.
Magnus Jacobsson: A review of Khovanov homology with applications relevant to this workshop.

Dieter Kotschick: Foliations and symplectic structures
We shall discuss the interplay between symplectic geometry and the theory of foliations, concentrating on foliations with symplectic holonomy. Even the special case when there are two complementary symplectic foliations is very interesting. We shall consider connections to the group homology of symplectomorphism groups as discrete groups, to the theory of bi-Hamiltonian systems, to symplectic pairs and to holomorphic symplectic structures.

Paolo Lisca: 2-bridge knots and the ribbon conjecture
The long-standing ribbon conjecture states that a smoothly slice knot in the 3-sphere is ribbon. I will describe a proof of the ribbon conjecture for the special class of 2-bridge knots.

András Némethi: The canonical contact structure of isolated singularities
On the local link of any complex analytic isolated singularity one has a canonical contact structure induced by the complex structure. We show that this structure is supported by all the Milnor open book decompositions associated with analytic germs defined on the singularity. Moreover, we prove that for surface singularities the contact structure is determined up to a contactomorphism by the topology of the link (a fact which is not true in higher dimensions).

Peter Ozsvath: Floer homology and knots and links
Given a Heegaard diagram for a closed three-manifold, one can associate an invariant defined by counting pseudo-holomorphic curves in a symmetric product of the Heegaard surface. This construction can be adapted to the case of knots. For knots in the three-sphere, this gives an invariant whose Euler characteristic, in a suitable sense, is the Alexander polynomial, and the invariant detects the Seifert genus of the knot. I will describe this construction and some of its applications. This material is joint work with Zoltan Szabo.
In the second lecture I will discuss Floer homology and links -- some extensions of the earlier construction to the case of links.

Yann Rollin: Contact invariants and Monopole Floer homology
An element of the monopole Floer homology is associated to every contact structure on a 3-dimensional manifold. We show that this contact invariant is functorial for the category of special symplectic cobordisms. We use this property to relate the contact invariant to the Floer homology of mapping tori.

Stephan Schönenberger: Determining symplectic fillings from planar open books

Saso Strle: Definite four-manifolds with boundary
According to a celebrated theorem of Donaldson, if the intersection form of a smooth closed four-manifold is definite then it is diagonalizable. Later proofs of this result use Elkies' characterisation of the diagonal definite unimodular form. I will describe a generalization of Elkies' theorem to forms of arbitrary determinant. Combined with a theorem of Ozsváth and Szabó, this gives a generalization of Donaldson's theorem to four-manifolds with boundary. This is joint work with Brendan Owens.

Zoltán Szabó: Link Floer homology and the Thurston norm
In this lecture we compute the link Floer homology HFL for various links in S^3. We also study a relationship between HFL and the Thurston norm of the link complement.
FOM: Re: Arbitrary Objects
charles silver silver_1 at mindspring.com
Sat Feb 9 10:26:46 EST 2002

Fine, Kit, _Arbitrary Objects_ (Aristotelian Society Series, Vol 3, Blackwell, Oxford, 1985).

I... The General Framework
II.. Some Standard Systems
III. Systems in General
IV.. Non-Standard Systems

The Preface begins: "This book had its origin in the classroom. I was teaching natural deduction to a group of students and had come to the point at which the rule of universal generalization is introduced. I had wanted to give an explanation of the rule in terms of arbitrary objects. But my sense of rigour got in the way, and I gave instead an explanation in terms of schematic names. When I left the classroom, I gave the matter some more thought. I hold it as a general methodological principle that when there is a clash between intuition and rigour, when one's sense of rigour prevents one from saying what, from an intuitive point of view, it seems that one can say, then it is rigour and not intuition that should give way. Applying this principle to the case at hand, it seemed that there should be an account of arbitrary objects upon the basis of which a satisfactory explanation of the rule of universal generalization could be given. It was the attempt to develop such an account that led to the present work."

The Introduction begins: "This book deals with certain problems in understanding natural deduction and ordinary reasoning." He singles out two rules for special consideration: universal generalization (UG) and existential instantiation (EI).

Part I, The General Framework, begins with Chapter 1, Arbitrary Objects Defended. In this section, he says (p. 5): "An arbitrary object has those properties common to the individual objects in its range. So, an arbitrary number is odd or even, an arbitrary man is mortal, since each number is odd or even, each individual man is mortal...." "Such a view used to be quite common, but has now fallen into disrepute."
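The rule of universal generalization that Fine wants to explain is, incidentally, exactly what the ∀-introduction step in a modern proof assistant does: one fixes an arbitrary object, proves the property for it, and concludes the universal claim. A minimal Lean 4 sketch (my own illustration, not from Fine's book):

```lean
-- Universal generalization: fix an arbitrary natural number n,
-- prove the property for that arbitrary n, conclude the ∀-statement.
theorem every_nat_is_even_or_odd : ∀ n : Nat, n % 2 = 0 ∨ n % 2 = 1 := by
  intro n      -- "let n be an arbitrary number"
  omega        -- the property holds of this arbitrary n
```

Whatever one makes of Fine's metaphysics, the `intro n` step is the formal counterpart of reasoning about "an arbitrary number."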
Fine says that "Frege led the way" in this, and "[w]here Frege led, others have been glad to follow. Among the many subsequent philosophers who have spoken against arbitrary objects, we might mention Russell, Lesniewski, Tarski, Church, Quine, Rescher, and Lewis. [Fine provides references for each.] ..."

"In the face of such united opposition, it might appear rash to defend any form of the theory of arbitrary objects. But that is precisely what I intend to do. Indeed, I would want to claim, not only that a form of the theory is defensible, but also that it is extremely valuable. In application to a wide variety of topics -- the logic of generality, the use of variables in mathematics, the role of pronouns in natural language -- the theory provides explanations that are as good as those of standard quantification theory and sometime ..." (p. 6)

He then mounts a defense of arbitrary objects by replying to criticisms of them, and in the process of responding to criticisms reveals what exactly he takes arbitrary objects to be. After he has dealt with criticisms of arbitrary objects, a theory of them emerges, for which he then provides a technical account in Chapter 2 (of Part I), The Models. Whereas the earlier discussion was exclusively philosophical, this chapter is quite technical. He develops the model theory for arbitrary objects and proves a number of lemmas pertaining to them.

One aspect of this which deserves to be singled out is his account, both philosophical and mathematical, of how certain arbitrary objects come to depend on others. This leads to "dependency relations" among arbitrary objects, which, later on, is more fully developed (and applied to various standard natural deduction systems) in terms of "dependency diagrams." Fine provides fully convincing reasons (philosophical and technical) that a satisfactory account of arbitrary objects must take their dependency relationships into consideration.
The "A-model," explained in Chapter 2, then has various conditions applied to it in Chapter 3, The Conditions. Four types of conditions on A-models are considered: i) the extendibility of value assignments, ii) the existence of A-objects (i.e., arbitrary objects), iii) their identity, and iv) their multiplicity. Incidentally, he also calls A-objects "generic objects," and A-models "generic models," which may be of interest to category theorists (though I am not qualified to say whether there are important comparisons or not).

You may be wondering what this is leading up to. Among other things Fine does with A-models, he uses them to prove a kind of "generic completeness theorem," and similarly, generic soundness. His view and treatment of arbitrary objects thus leads to a way of evaluating each natural deduction system not only in terms of its completeness and soundness, but also of appraising it on the basis of its "naturalness" and "intuitiveness." Thus, the arbitrary-object approach is pressed into service for helping us to see which natural deduction systems seem preferable over others, and on what basis.

Fine considers several well-known systems of natural deduction, among them systems of Hilbert, Gentzen, Quine, Copi (as made sound by Kalish), Kalish & Montague, and numerous others. He offers variants of these systems, compares them, and sometimes creates hybrid systems by combining two distinct ones. In each case, he employs his notions of genericity in evaluating them.

There is much, much more to Fine's book, but I hope the above gives some idea of it. He ends with a short chapter on Inclusive and Intuitionistic Systems; though this chapter is brief, it is very ... At any rate, I hope FOMers will read Fine's book, because I would like to know their opinions of Fine's treatment of the issues raised in it.

P.S. To Thomas: I hope this short summary suffices to take your "God-given" graduate students off the hook.
What is the minimum number of covering subsets?

Fix a positive integer constant $C$. Let $S_1, S_2, \cdots, S_k$ be subsets of $\{1, 2, \cdots, N\}$. Let us call $S_1, S_2, \cdots, S_k$ a $C$-cover if for every subset $T$ of $\{1, 2, \cdots, N\}$, there exist $i_1, i_2, \cdots, i_C$, not necessarily distinct, such that $T\subseteq S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_C}$ and $C \cdot |T| > |S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_C}|$.

For all positive integers $N$, what is the smallest $k$ such that there exist $k$ subsets that form a $C$-cover? I'm not looking for an exact answer but rather asymptotic bounds. Specifically, I'm wondering if the minimal $k$ is polynomial in $N$. I'm sorry if I'm stating this really badly.

Comment (Max Horn, Sep 9 '10): If $T$ is empty, then $C \cdot |T| = 0$, which cannot exceed $|S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_C}|$. So maybe you want to assume $T$ is not empty? Or make the inequality non-sharp? Likewise, if $T=\{1,\cdots,N\}$ then $|T| = N = |S_{i_1} \cup S_{i_2} \cup \cdots \cup S_{i_C}|$, so that would preclude the possibility $C=1$. Maybe this is intentional, though? Moreover, if $C > N$ then your inequality is automatically satisfied (except for the edge cases I mentioned above). Again, possibly intentional? Perhaps you could clarify a bit what you intended in these edge cases?

Answer: Let $C=2$. Consider sets $T$ of some fixed cardinality $m$. There are $\binom Nm$ such sets. Each of them must be covered by some set of the form $S_{i_1}\cup S_{i_2}$ of cardinality at most $2m$. A set of cardinality $\le 2m$ can cover no more than $\binom{2m}m\le 2^{2m}$ sets of cardinality $m$. Hence you need at least $2^{-2m}\binom Nm$ distinct sets of the form $S_{i_1}\cup S_{i_2}$. But there are only $k(k+1)/2\le k^2$ sets of the form $S_{i_1}\cup S_{i_2}$. Hence $k\ge 2^{-m}{\binom Nm}^{1/2}$.
For a fixed $m$ and $N\to\infty$, $\binom Nm\sim c(m)N^m$, hence $k$ grows faster than a polynomial of degree $m/2$. And since $m$ is arbitrary, $k$ grows faster than any polynomial. To get an explicit exponential lower bound, take $m=N/4$ and use Stirling's formula to estimate the binomial coefficients.

For a larger $C$, the same argument shows that $k$ grows faster than any polynomial (the only difference is that you get degree $m/C$ rather than $m/2$). I have not checked the exponential lower bound but I am sure a suitable choice of $m$ will do it.

Reply from the asker: Thanks for the response. Yes, $T$ is nonempty. The $>$ should be a $\ge$. So $C$ is constant, but $N$ is not. I guess it's safe to assume that $N \gg C$. Sorry I didn't take note of the edge cases. I'm really only interested in an asymptotic bound for the minimum $k$ in terms of $N$ and $C$ and the constructions for the bounds.
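The exponential behaviour of the answer's lower bound is easy to see numerically. Here is a small Python check (my own illustration) that evaluates $k \ge 2^{-m}\binom{N}{m}^{1/2}$ with the suggested choice $m = N/4$:

```python
from math import comb

# Evaluate the lower bound k >= 2**(-m) * C(N, m)**0.5 from the answer,
# with m = N // 4 as suggested for an explicit exponential bound.
def lower_bound(N):
    m = N // 4
    return 2.0 ** (-m) * comb(N, m) ** 0.5

for N in (16, 32, 64, 128):
    print(N, lower_bound(N))
```

The successive ratios of the bound themselves grow as $N$ doubles, which is exactly the super-polynomial (indeed exponential) growth the answer derives.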
Highland Park, IL Algebra Tutor

Find a Highland Park, IL Algebra Tutor

...My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. I consistently monitor progress and adjust lessons to meet the specific needs of each individual student. Thank you for considering my services.
12 Subjects: including algebra 1, algebra 2, calculus, geometry

I have an extensive background in math - both academically: PhD in Math and teaching at two large universities, as well as in the business arena: being part of Research & Development teams in major corporations for 12 years. I love when students go from "this is too big for me, I can't do it" to "y...
10 Subjects: including algebra 1, algebra 2, geometry, SAT math

...Even a few tutoring sessions can help improve your scores. I welcome the opportunity to discuss your student's needs and strengths to begin an enjoyable and productive learning experience with your student. I am certified to teach students in all core subjects from pre-k through age 21. I am als...
34 Subjects: including algebra 1, reading, English, ASVAB

I have a PhD in microbial genetics and have worked in academic research as a university professor and for commercial companies in the biotechnology manufacturing sector. I have a broad background in science and math, a love of written and oral communication and a strong desire to share the knowledg...
35 Subjects: including algebra 2, SAT math, geometry, physics

...In addition, I teach history (U.S., European and World), politics and government. I have worked with students who not only need to know about these subjects, but also have to write papers on them. Let's get together and learn! I am qualified to tutor Hebrew for several reasons.
38 Subjects: including algebra 1, algebra 2, English, reading
Pascal's Triangle

November 30th 2008, 02:44 PM — #1 (Junior Member, joined Oct 2008, Dallas, TX)

Suppose b is an integer with b >= 7. Use the Binomial Theorem and the appropriate row of Pascal's triangle to find the base-b expansion of ((11)_b)^4 (that is, the fourth power of the number (11)_b in base-b notation).

I don't understand what exactly the question is asking. Do I need to write out a particular row of Pascal's triangle? What is meant by the base-b notation? No row of Pascal's triangle contains 11^4 = 14641 in it, so what is meant by the fourth power? Thanks much.

December 1st 2008, 11:02 AM — #2 (Super Member, joined May 2006, Lexington, MA (USA))

Hello, aaronrj!

I had to read it twice to catch on . . .

Quote: Suppose $b$ is an integer with $b \geq 7$. Use the Binomial Theorem and the appropriate row of Pascal's triangle to find the base-b expansion of $(11_b)^4$ (that is, the fourth power of the number $11_b$ in base b notation).

We are expected to be familiar with number bases. For example, $3104_5$ means: $3\!\cdot\!5^3 + 1\!\cdot\!5^2 + 0\!\cdot\!5 + 4\!\cdot\!1 \:=\:404$

Then we see that: . $11_b$ means: . $1\!\cdot\!b + 1 \;=\;b + 1$

And they are asking for: . $(b + 1)^4$ . . Got it?
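The hint in the reply can be checked mechanically: the fourth row of Pascal's triangle, 1 4 6 4 1, supplies the base-b digits of $(b+1)^4$, and the condition $b \ge 7$ guarantees each coefficient is a legal digit. A small Python sketch (my own illustration, not from the thread):

```python
from math import comb

# The 4th row of Pascal's triangle gives the base-b digits of (11_b)^4,
# since (b + 1)^4 = sum over k of C(4, k) * b^(4-k).  For b >= 7 each
# coefficient 1, 4, 6, 4, 1 is smaller than b, hence a valid digit.
def base_b_expansion_of_11_pow4(b):
    assert b >= 7
    digits = [comb(4, k) for k in range(5)]             # [1, 4, 6, 4, 1]
    value = sum(d * b ** (4 - i) for i, d in enumerate(digits))
    assert value == (b + 1) ** 4                        # (11_b) = b + 1
    return digits
```

So in every base b >= 7 the expansion reads 14641, which also explains the asker's observation that 11^4 = 14641 in base ten.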
Glendale, WI ACT Tutor

Find a Glendale, WI ACT Tutor

...I have been a successful cook for several years. I currently just focus on my own health and nutrition in cooking. I also took several cooking classes at the college level.
46 Subjects: including ACT Math, chemistry, algebra 1, statistics

...The results are then discussed with the client to determine if the problems were due to carelessness or due to a lack of understanding of a particular skill or concept. I have a wealth of resources to supplement the instruction of any concept or skill needed to succeed on the TEAS. A customized...
36 Subjects: including ACT Math, English, GED, reading

...Though I possess two Bachelor's degrees in Economics and International Studies, my tutoring specialty is high school and lower division college math courses. I particularly enjoy modeling the critical thinking skills required for Geometry courses. Additionally, I have had the opportunity to tea...
16 Subjects: including ACT Math, calculus, statistics, geometry

...For 3 years I was studying Psychology at the University of Wisconsin-Milwaukee, but discovered that my heart was not in it as much as I wanted it to be. So I decided to follow my true passion of cooking and photography. I am now a Culinary Arts student, studying Photography as well, at Milwaukee Area Technical College.
21 Subjects: including ACT Math, reading, writing, ESL/ESOL

...I love to help and watch others succeed. I went to school originally for engineering, but I always had a passion for math more than anything else and so I switched to a math degree. Currently, I work Monday-Friday until 3:30, but my nights are free and I am willing to work weekends as well.
23 Subjects: including ACT Math, English, reading, calculus
Mathematician David Gale leaves legacy

Earlier this month, UC Berkeley professor emeritus of mathematics David Gale passed away. Gale made a number of significant contributions to mathematics, and he loved puzzles, games, and finding beauty in the subject. Gale's daughter had this to say:

"He thought math was beautiful, and he wanted people to understand that," said his daughter, Katharine Gale. "But he was emphatic that the way for people to get the beauty and elegance of mathematics was to engage in it, not just be told about it."

She recalled that her father would discuss his mathematical work at the dinner table and share with his children his fascination with chess puzzles, card games, puzzle blocks and interlocking puzzles, as well as with all types of math games. Throughout his life, Gale would insist that visitors look at the newest puzzle he was working on. According to his daughter, just before his death, Gale e-mailed a colleague to discuss the mathematics of Sudoku, a popular game where players place numbers on a nine-by-nine grid.

Gale invented two games: Bridg-it and Chomp. He also wrote a recreational math book, Tracking the Automatic Ant: And Other Mathematical Explorations. In 2003, Gale developed an interactive math "museum." The site is really wonderful for exploration, and it does not require much mathematical sophistication to understand and enjoy. There are currently three major "exhibits" in the museum: Dissecting Triangles and Squares, Sorting Bricks and Sticks, and Geometric Orbits. All three are hands-on interactive exhibits. I really enjoyed the dissection exhibit, as it showed me how to systematically dissect various shapes and put them back together to form other shapes. I had always wondered how to do basic dissections, and now I know. The sorting exhibit is a hands-on exploration into how sorting algorithms work. The geometric orbits exhibit is one that students with a basic understanding of geometry should become engrossed by.
I'm delighted that David Gale has left his legacy of a mathematical museum to all curious students.
One-point perspective works fine if you happen to be looking directly at the front of something or standing in the middle of some railroad tracks, but what if the scene is viewed from the side? Then you shift into two-point perspective.

Two-point perspective has two vanishing points on the horizon line. All lines, except the vertical, will converge onto one of the two vanishing points. In one-point perspective, the height and the width of the object are parallel to the picture plane. In two-point perspective, only the height is parallel to the picture plane. The other dimensions, the two sides, recede into the picture depth; therefore, each must have its own set of imaginary lines and vanishing point. These vanishing points will also be established on the horizon line. The farther apart the points, the more we see of the sides. The closer together the points, the less we see of the sides. In order to keep our proportions fairly close to reality, we should lightly sketch to judge how much of each side we really see. Judge one side against the other. Notice that the parallel sides of the cube now appear smaller as they move into the depth of the picture plane. Our viewing point is also established and will remain constant for all objects placed in the picture.

Instead of viewing the cube from a straight-on approach as in one-point perspective, in two-point perspective we are viewing it from an angle. In two-point perspective, the corner of the cube is the point closest to us. When we draw the angles of the top and bottom edges of the sides, the extended lines meet on the horizon line, establishing the vanishing points. The vanishing points are usually placed outside of the picture plane; if one vanishing point were much closer to the object than the other, one of the vanishing points would be inside of the picture plane and one would be outside. The real world is rarely so organized as to align objects facing the viewer, nor are we often standing in the correct position to observe objects so directly.
Because we view most objects from an angle, and not directly from the front or sides, two-point perspective allows us to represent our world more realistically by orienting two faces of an object obliquely to the picture plane. The book illustration shows an example of two-point perspective. Other than the obvious difference in having two vanishing points, it is also important to note that objects drawn using this method have an edge closest to the picture plane rather than a face as in one-point. The horizon line in the book image is higher in the two-point example than the horizon in the one-point perspective image. The higher horizon suggests a viewpoint from a higher position, such as looking down upon a book on a table. The position of the horizon line represents the viewer’s eye level and affects how the viewer interprets the image. A lower horizon suggests that the scene is viewed either from a greater distance or that the viewer is lower to the “ground.” A higher horizon could also be used to suggest the viewer was looking out a window from a tall building. Horizon line placement is similar to using a “bird’s eye view” or a “bug’s eye view” in photography. These extremes are useful for creating more dramatic visual results. Look for this technique in comic books, where horizon placement and exaggerated perspective are used to suggest action and create more visual interest. The two-point perspective drawing uses two vanishing points that are both situated on a horizon line. The further apart these vanishing points are on the horizon line, the more relaxed or realistic the perspective will visually seem. In contrast, the closer one VP is to the other VP on the horizon line, the more squashed or forced the perspective will become. Understand that a cube is made up of parallel lines for its height, width and length. Perspective will not physically change this; only visually will it seem to change.
The two point perspective drawing has the y-axis lines converging to one vanishing point and all the x-axis lines converging to the other. The example below demonstrates the forced and distorted creation of a cube when the vanishing points are placed closer together. Drawing a Box in Two Point Perspective: Step 1: Draw a horizon line, mark a vanishing point near each end of it, and draw a vertical line between them. This vertical line will become the front corner of the box. Step 2: Next, draw lines from the top and bottom of the line you drew in step one back towards your two vanishing points. These lines will make the sides of the box. You should notice that these lines will naturally make triangles. If you can imagine this box as being so large that it went all the way back to the horizon, it would appear to get smaller and smaller as it gets closer to the horizon line. Step 3: Now draw 2 more vertical lines between each of the triangle shapes. These lines will define the length and width of the box. Step 4: From the top of the lines that you added in step three, draw another set of lines that go back to the vanishing points. You should note that these lines will cross. The point where they cross is the back corner of the top of your box. In the last step we’ll clean up the construction lines and finish off the 2 point perspective drawing. Step 5: Remove any lines that are not necessary to define the box. I colored in my perspective box to make it clearer.
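Step 4 above locates the back corner by intersecting two lines drawn to the opposite vanishing points. A minimal sketch of that construction in Python (the screen coordinates below are made-up placeholders, not from the tutorial):

```python
# Find the back corner of the box top: the line from the left edge's top to
# the RIGHT vanishing point crosses the line from the right edge's top to the
# LEFT vanishing point.
def intersect(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4 (assumed not parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

vp_left, vp_right = (0.0, 300.0), (800.0, 300.0)  # both on the horizon line
top_left_edge = (350.0, 150.0)    # top of the left-side vertical from step 3
top_right_edge = (500.0, 180.0)   # top of the right-side vertical from step 3
back_corner = intersect(top_left_edge, vp_right, top_right_edge, vp_left)
print(back_corner)
```

The same intersection routine works for the bottom face, or for checking that a hand-drawn corner actually lies on both converging lines.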
November 16th 2008, 06:09 AM Let $A$ be a ring, $a,b \in A$ such that $ab=1$. Suppose that $X=\{x \in A;\ ax=0\}$ is finite. Prove that $ba=1$ We can see that $1-ba \in X$, so if $X=\{0\}$, it's over. Any idea? :) November 16th 2008, 10:52 AM we have: $a^{k-1} - ba^k \in X, \ \forall k \in \mathbb{N}.$ hence, since $X$ is finite, there exist $n, m \in \mathbb{N}$ such that $a^n-ba^{n+1}=a^m -ba^{m+1}, \ n > m,$ which gives us: $(1-ba) a^n=a^m-ba^{m+1}. \ \ \ \ (1)$ now multiply both sides of (1) from the right by $b^n$ and note that $\forall k \in \mathbb{N}: \ a^kb^k=1.$ then you'll get: $1-ba=(a^m-ba^{m+1})b^n=a^mb^n - ba^{m+1}b^n=b^{n-m}-b^{n-m}=0. \ \ \ \ \ \ \ \ \ \ \Box$ November 16th 2008, 11:16 AM Thanks for your answer. I don't see why there exists $n$ such that $a^n-ba^{n+1}=a^{n-1}-ba^n$. The conclusion is still the same with $a^n-ba^{n+1}=a^{k}-ba^{k+1}$ for a $k<n ,$ but there is something I'm missing... November 16th 2008, 11:47 AM you're right, thanks! it was just one of those really weird mistakes that i make sometimes! it's fixed now! November 16th 2008, 12:13 PM You're welcomed. But for me the problem is that doing this, we assume that every element of $X$ is equal to a $a^{q}-ba^{q+1}$, $q\in \mathbb{N}$. Why can we say that? November 16th 2008, 12:23 PM i didn't say that! what we have is this: $A=\{a^{k-1} - ba^k: \ k \in \mathbb{N} \} \subseteq X,$ and we're given that $X$ is finite. so $A$ must also be finite and thus we can find two elements of $A$ which are equal. November 16th 2008, 12:42 PM Great NonCommAlg, I finally got it! Thank you.
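A small sanity check of the statement, in a setting where the hypothesis holds automatically: in the matrix ring $M_2(\mathbb{F}_2)$ every set $\{x : ax = 0\}$ is finite, so $ab=1$ must force $ba=1$. This brute-force check over all 256 pairs is only an illustration, not the proof above:

```python
import itertools
import numpy as np

# All 2x2 matrices over F_2, and the identity.
I = np.eye(2, dtype=int)
mats = [np.array(bits, dtype=int).reshape(2, 2)
        for bits in itertools.product([0, 1], repeat=4)]

# Whenever a*b = 1 in M_2(F_2), check that b*a = 1 as well.
for a in mats:
    for b in mats:
        if np.array_equal(a @ b % 2, I):
            assert np.array_equal(b @ a % 2, I)
print("checked", len(mats) ** 2, "pairs")
```

The interesting content of the theorem is of course the infinite case, where one-sided inverses (e.g. shift operators) need not be two-sided unless $X$ is finite.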
Eigenvalues of the free sphere

Consider the usual sphere $S^{n-1}\subset\mathbb R^n$. By Stone-Weierstrass $C(S^{n-1})$ is generated by the standard coordinates $x_1,\ldots,x_n:\mathbb R^n\to\mathbb R$, and in fact we have the presentation result $C(S^{n-1})=C^*_{comm}(x_1,\ldots,x_n|x_i=x_i^*,\sum x_i^2=1)$. The Riemannian structure of $S^{n-1}$, or at least part of it, can be recaptured from this formula. Indeed, the eigenspaces of $D=\sqrt{d^*d}$ are $E_k=H_k\cap H_{k-1}^\perp$, where $H_k=span(x_{i_1} \ldots x_{i_r}|r\leq k)$, and the corresponding eigenvalues are $\lambda_k=k(k+n-2)$. This leads to the following question: • What is the free analogue of $\lambda_k$? More precisely, consider the algebra $A=C^*(x_1,\ldots,x_n|x_i=x_i^*,\sum x_i^2=1)$, corresponding to the NCG-theoretic "free sphere". One can construct spaces $H_k,E_k$ as above, so this free sphere has indeed a spectral triple structure, and the question is to find the correct eigenvalues for $D=\sqrt{d^*d}$. I would emphasize that the main difference of the "free sphere" from the ordinary sphere is that the $x_i$ do NOT COMMUTE (just emphasizing, for easier reading). At the moment it is not clear to me how to define "d" in the free setup. Also not clear to me is the definition of $\perp$, both free and non-free. Can we define "d" for a "free space", I mean if we do not impose the condition $\sum x_i^2 =1$? What will be the answer in this case? – Alexander Chervov Jan 6 '13 at 14:51 Please read the second sentence as: at the moment it is not clear to me how to define "d" IN the free setup. – Alexander Chervov Jan 6 '13 at 14:53 Also, the metric enters in a more subtle way in the definition of $d^*$. The notion of adjoint uses some metric. – Liviu Nicolaescu Jan 6 '13 at 15:22 For the usual sphere we can do everything with algebra and NO analysis - sl(n) will act on the sphere and the Laplacian (=dd^*) = Casimir (center of U(sl)), and hence representation theory of sl(n) applies.
Do you expect something like this for the "free sphere"? At least, do you expect that non-commutative polynomials of degree less than "k" will be preserved by the hypothetical Laplacian $dd^*$? – Alexander Chervov Jan 6 '13 at 20:40 Actually I do not see correct analogs of "Casimirs" in the free setup... that is why what you write is somewhat surprising for me... I may be quite wrong... Just feelings... – Alexander Chervov Jan 6 '13 at 22:14

Answer (accepted): How about $\lambda_k = \frac{U'_k(n)}{U_k(n)}$, where the $U_k$ denote the Chebyshev polynomials of the second kind, $U_0(x)=1$, $U_1(x)=x$, and $U_k(x)=xU_{k-1}(x)-U_{k-2}(x)$ for $k\ge 2$. In Section 10 of http://arxiv.org/abs/1210.6768 (see in particular Remark 10.4) we try to classify "Brownian motions" on $O_n^+$. The formula above follows, if you use the co-action of the free orthogonal quantum group on the free sphere to define an action of the generator of "$O_n^+$-BM" on the free sphere. +1, but it seems the definition of "d" in the question was not actually given; the OP seems to write that no one knows it, or am I not correct? So is it a "theorem" or a "guess"? – Alexander Chervov Jan 8 '13 at 13:26 We are asking the same question, but in different terminology. "What is the Laplace operator on ...?" becomes "What is Brownian motion on ...?" For quantum groups we have Schürmann's theory of Lévy processes; the question becomes "which of the many Lévy processes on a given quantum group deserves to be called a Brownian motion?" On $O^+$ we discovered that invariance under the adjoint action is a nice condition that leads to a subclass that we were able to classify. This is not the only condition one could imagine, but it is one that also works for compact simple connected Lie groups. – Uwe Franz Jan 8 '13 at 16:42 The generator of the Brownian motion is then our candidate for a Laplace operator.
From the Laplace operator we get a Dirichlet form, under certain conditions and after fixing a reference state. If the reference state is tracial, then we can apply the construction by Cipriani and Sauvageot to get a derivation that implements the Dirichlet form via $\mathcal {E}[a]=||\partial a||^2$ and the Laplace operator as $\partial^*\partial$. The free sphere does have a tracial state that is invariant for the co-action of $O_n^+$, yes? – Uwe Franz Jan 8 '13 at 16:53 Thank you for your comments! – Alexander Chervov Jan 9 '13 at 7:49
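The proposed eigenvalues $\lambda_k = U'_k(n)/U_k(n)$ are easy to evaluate exactly from the recurrence given in the answer ($U_0 = 1$, $U_1 = x$, $U_k = xU_{k-1} - U_{k-2}$, which is the answer's normalization, not the classical $2x$ recurrence). A small exact-arithmetic sketch:

```python
from fractions import Fraction

def poly_mul_x(p):          # multiply a coefficient list (lowest degree first) by x
    return [0] + p

def poly_sub(p, q):         # p - q, padding the shorter list with zeros
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def U(k):                   # coefficients of U_k via the recurrence above
    a, b = [1], [0, 1]      # U_0 = 1, U_1 = x
    for _ in range(k):
        a, b = b, poly_sub(poly_mul_x(b), a)
    return a

def poly_eval(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

def poly_deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def lam(k, n):              # lambda_k = U_k'(n) / U_k(n), exactly
    return Fraction(poly_eval(poly_deriv(U(k)), n), poly_eval(U(k), n))

print(lam(2, 3))  # U_2 = x^2 - 1, so lambda_2 = 2n/(n^2 - 1) = 3/4 at n = 3
```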
Cotangent: Introduction to the trigonometric functions (subsection Trigonometrics/01) The six trigonometric functions sine, cosine, tangent, cotangent, cosecant, and secant are well known and among the most frequently used elementary functions. The most popular functions sin, cos, tan, and cot are taught worldwide in high school programs because of their natural appearance in problems involving angle measurement and their wide applications in the quantitative sciences. The trigonometric functions share many common properties.
Recursive types in Kleisli categories

1997 — "A general abstract theory for computation involving shared resources is presented. We develop the models of sharing graphs, also known as term graphs, in terms of both syntax and semantics. According to the complexity of the permitted form of sharing, we consider four situations of sharing graphs. The simplest is first-order acyclic sharing graphs represented by let-syntax, and others are extensions with higher-order constructs (lambda calculi) and/or cyclic sharing (recursive letrec binding). For each of the four settings, we provide the equational theory for representing the sharing graphs, and identify the class of categorical models which are shown to be sound and complete for the theory. The emphasis is put on the algebraic nature of sharing graphs, which leads us to the semantic account of them. We describe the models in terms of the notions of symmetric monoidal categories and functors, additionally with symmetric monoidal adjunctions and traced …" Cited by 62 (10 self)

1997 — "Cyclic sharing (cyclic graph rewriting) has been used as a practical technique for implementing recursive computation efficiently. To capture its semantic nature, we introduce categorical models for lambda calculi with cyclic sharing (cyclic lambda graphs), using notions of computation by Moggi / Power and Robinson and traced monoidal categories by Joyal, Street and Verity. The former is used for representing the notion of sharing, whereas the latter for cyclic data structures. Our new models provide a semantic framework for understanding recursion created from cyclic sharing, which includes traditional models for recursion created from fixed points as special cases. Our cyclic lambda calculus serves as a uniform language for this wider range of models of recursive computation. 1 Introduction: One of the traditional methods of interpreting a recursive program in a semantic domain is to use the least fixed-point of continuous functions. However, in the real implementations of …" Cited by 45 (5 self)

In 8th Annual Symposium on Logic in Computer Science, 1993 — "This paper describes a mixed induction/co-induction property of relations on recursively defined domains. We work within a general framework for relations on domains and for actions of type constructors on relations introduced by O'Hearn and Tennent [20], and draw upon Freyd's analysis [7] of recursive types in terms of a simultaneous initiality/finality property. The utility of the mixed induction/co-induction property is demonstrated by deriving a number of families of proof principles from it. One instance of the relational framework yields a family of induction principles for admissible subsets of general recursively defined domains which extends the principle of structural induction for inductively defined sets. Another instance of the framework yields the co-induction principle studied by the author in [22], by which equalities between elements of recursively defined domains may be proved via `bisimulations'. 1 Introduction: A characteristic feature of higher-order functional lan…" Cited by 15 (2 self)

In Proceedings of Computer Science Logic, 1995 — "This paper extends Curry-Howard interpretations of Intuitionistic Logic (IL) and Intuitionistic Linear Logic (ILL) with rules for recursion. The resulting term languages, the rec-calculus and the linear rec-calculus respectively, are given sound categorical interpretations. The embedding of proofs of IL into proofs of ILL given by the Girard Translation is extended with the rules for recursion, such that an embedding of terms of the rec-calculus into terms of the linear rec-calculus is induced via the extended Curry-Howard isomorphisms. This embedding is shown to be sound with respect to the categorical interpretations. Full version of paper to appear in Proceedings of CSL '94, LNCS 933, 1995." Cited by 11 (0 self)

HIGHER-ORDER AND SYMBOLIC COMPUT, 2001 — "We propose an axiomatization of fixpoint operators in typed call-by-value programming languages, and give its justifications in two ways. First, it is shown to be sound and complete for the notion of uniform T-fixpoint operators of Simpson and Plotkin. Second, the axioms precisely account for Filinski's fixpoint operator derived from an iterator (infinite loop constructor) in the presence of first-class continuations, provided that we define the uniformity principle on such an iterator via a notion of effect-freeness (centrality). We then explain how these two results are related in terms of the underlying categorical structures." Cited by 11 (5 self)
SparkNotes: SAT Subject Test: Math Level 2: Identifying the Graphs of Polynomial Functions

Many of the functions on the Math IIC are polynomial functions. Although they can be difficult to sketch and identify, there are a few tricks to make it easier. If you can find the roots of a function, identify the degree, or understand the end behavior of a polynomial function, you will usually be able to pick out the graph that matches the function and vice versa. The roots (or zeros) of a function are the x values for which the function equals zero, or, graphically, the values where the graph intersects the x-axis (where y = 0). To solve for the roots of a function, set the function equal to 0 and solve for x. A question on the Math IIC that tests your knowledge of roots and graphs will give you a function like f(x) = x^2 + x – 12 along with five graphs and ask you to determine which graph is that of f(x). To approach a question like this, you should start by identifying the general shape of the graph of the function. For f(x) = x^2 + x – 12, you should recognize that the graph of the function is a parabola that opens upward because of a positive leading coefficient. This basic analysis should immediately eliminate several possibilities but might still leave two or three choices. Solving for the roots of the function will usually get you to the one right answer. To solve for the roots, factor the function: f(x) = (x + 4)(x – 3). The roots are –4 and 3, since those are the values at which the function equals 0. Given this additional information, you can choose the answer choice with the upward-opening parabola that intersects the x-axis at –4 and 3.
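The factoring above is easy to cross-check numerically. A quick sketch using NumPy's root finder (any polynomial root finder works; NumPy here is just a convenient assumption):

```python
import numpy as np

# f(x) = x^2 + x - 12 factors as (x + 4)(x - 3),
# so the graph crosses the x-axis at x = -4 and x = 3.
roots = np.roots([1, 1, -12])  # coefficients of x^2 + x - 12, highest first
print(sorted(roots.real))
```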
The degree of a polynomial function is the highest exponent to which the independent variable is raised. For example, f(x) = 4x^5 – x^2 + 5 is a fifth-degree polynomial, because its highest exponent is 5. A function’s degree can give you a good idea of its shape. The graph produced by an n-degree function can have as many as n – 1 “bumps” or “turns.” These “bumps” or “turns” are technically called “extreme points.” Once you know the degree of a function, you also know the greatest number of extreme points a function can have. A fourth-degree function can have at most three extreme points; a tenth-degree function can have at most nine extreme points. If you are given the graph of a function, you can simply count the number of extreme points. Once you’ve counted the extreme points, you can figure out the smallest degree that the function can be. For example, if a graph has five extreme points, the function that defines the graph must have at least degree six. If the function has two extreme points, you know that it must be at least third degree. The Math IIC will ask you questions about degrees and graphs that may look like this: If the graph above represents a portion of the function g(x), then which of the following could be g(x)? (A) a (B) ax +b (C) ax^2 + bx + c (D) ax^3 + bx^2 + cx + d (E) ax^4 + bx^3 + cx^2 + dx + e To answer this question, you need to use the graph to learn something about the degree of the function. Since the graph has three extreme points, you know the function must be at least of the fourth degree. The only function that fits that description is E. Note that the answer could have been any function of degree four or higher; the Math IIC test will never present you with more than one right answer, but you should know that even if answer choice E had read ax^7 + bx^6 + cx^5 + dx^4 + ex^3 + fx^2 + gx + h it still would have been the right answer.
Function Degree and Roots The degree of a function is based on the largest exponent found in that function. For instance, the function f(x) = x^2 + 3x + 2 is a second-degree function because its largest exponent is a 2, while the function g(x) = x^4 + 2 is a fourth-degree function because its largest exponent is a 4. If you know the degree of a function, you can tell how many roots that function will have. A second-degree function will have two roots, a third-degree function will have three roots, and a ninth-degree function will have nine roots. Easy, right? Right, but with one complication. In some cases, all the roots of a function will be distinct. Take the function: g(x) = x^2 + 3x + 2. The factors of g(x) are (x + 2) and (x + 1), which means that its roots occur when x equals –2 or –1. In contrast, look at the function h(x) = x^2 + 4x + 4 = (x + 2)^2. While h(x) is a second-degree function and has two roots, both roots occur when x equals –2. In other words, the two roots of h(x) are not distinct. The Math IIC may occasionally present you with a function and ask you how many distinct roots the function has. As long as you are able to factor out the function and see how many of the factors overlap, you can figure out the right answer. Whenever you see a question that asks about the roots in a function, make sure you determine whether the question is asking about roots or distinct roots. End Behavior The end behavior of a function is a description of what happens to the value of f(x) as x approaches infinity and negative infinity. Think about what happens to a polynomial containing x if you let x equal a huge number, like 1,000,000,000. The polynomial is going to end up being an enormous positive or negative number. The point is that every polynomial function either approaches infinity or negative infinity as x approaches positive and negative infinity. Whether a function will approach positive or negative infinity in relation to x is called the function’s end behavior.
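The distinct-roots distinction above can be checked numerically. A small sketch (using NumPy's root finder as an assumption; a tolerance is needed because repeated roots are found only approximately):

```python
import numpy as np

# g(x) = x^2 + 3x + 2 = (x + 2)(x + 1): two distinct roots, -2 and -1.
# h(x) = x^2 + 4x + 4 = (x + 2)^2: two roots, but only ONE distinct root.
def distinct_roots(coeffs, tol=1e-4):
    found = []
    for r in np.roots(coeffs):
        if not any(abs(r - s) < tol for s in found):
            found.append(r)
    return sorted(round(r.real, 6) for r in found)

print(distinct_roots([1, 3, 2]))  # two distinct roots
print(distinct_roots([1, 4, 4]))  # one distinct root
```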
There are rules of end behavior that can allow you to use a function’s end behavior to figure out its algebraic characteristics or to figure out its end behavior based on its definition: • If the degree of the polynomial is even, the function behaves the same way as x approaches both positive and negative infinity. If the coefficient of the term with the greatest exponent is positive, f(x) approaches positive infinity at both ends. If the leading coefficient is negative, f(x) approaches negative infinity at both ends. • If the degree of the polynomial function is odd, the function exhibits opposite behavior as x approaches positive and negative infinity. If the leading coefficient is positive, the function increases as x increases and decreases as x decreases. If the leading coefficient is negative, the function decreases as x increases and increases as x decreases. For the Math IIC, you should be able to determine a function’s end behavior by simply looking at either its graph or definition. Function Symmetry Another type of question you might see on the Math IIC involves identifying a function’s symmetry. Some functions have no symmetry whatsoever. Others exhibit one of two types of symmetry and are classified as either even functions or odd functions. Even Functions An even function is a function for which f(x) = f(–x). Even functions are symmetrical with respect to the y-axis. This means that a line segment connecting f(x) and f(–x) is a horizontal line. Some examples of even functions are f(x) = cos x, f(x) = x^2, and f(x) = |x|. Here is a figure with an even function: Odd Functions An odd function is a function for which f(x) = –f(–x). Odd functions are symmetrical with respect to the origin. This means that a line segment connecting f(x) and f(–x) contains the origin. Some examples of odd functions are f(x) = sin x and f(x) = x. 
Here is a figure with an odd function: Symmetry Across the x-Axis No function can have symmetry across the x-axis, but the Math IIC will occasionally include a graph that is symmetrical across the x-axis to fool you. A quick check with the vertical line test proves that the equations that produce such curves are not functions.
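The end-behavior rules and the even/odd definitions above lend themselves to quick numeric spot-checks. A minimal sketch (the sample functions and test points are arbitrary choices, not from the text):

```python
import math

# End behavior: even degree with a positive leading coefficient rises on both
# ends; odd degree with a positive leading coefficient has opposite ends.
f = lambda x: 2 * x**4 - x**2 + 3   # even degree, positive lead
g = lambda x: x**3 - 5 * x          # odd degree, positive lead
big = 1e4
assert f(big) > 0 and f(-big) > 0   # both ends toward +infinity
assert g(big) > 0 and g(-big) < 0   # opposite end behavior

# Symmetry: x**2 is even (f(x) == f(-x)); sin is odd (f(-x) == -f(x)).
for x in (0.3, 1.7, 2.5):
    assert x**2 == (-x) ** 2
    assert math.isclose(math.sin(-x), -math.sin(x))
print("all spot-checks passed")
```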
C program to implement Digital Differential Analyzer Line drawing algorithm

#include <stdio.h>
#include <stdlib.h>
#include <graphics.h>
#include <conio.h>

void main()
{
    int gd = DETECT, gm;
    int x1, y1, x2, y2, dx, dy, steps, k;
    float xi, yi, x, y;

    initgraph(&gd, &gm, "C:\\TC\\BGI");

    printf("Enter the co-ordinates of the first point \n");
    printf("x1= ");
    scanf("%d", &x1);
    printf("y1= ");
    scanf("%d", &y1);
    printf("Enter the co-ordinates of the second point \n");
    printf("x2= ");
    scanf("%d", &x2);
    printf("y2= ");
    scanf("%d", &y2);

    dx = x2 - x1;
    dy = y2 - y1;

    /* take the larger of |dx| and |dy| so the line has no gaps */
    if (abs(dx) > abs(dy))
        steps = abs(dx);
    else
        steps = abs(dy);

    xi = dx / (float)steps;   /* per-step increment along x */
    yi = dy / (float)steps;   /* per-step increment along y */

    x = x1;
    y = y1;
    for (k = 0; k <= steps; k++)
    {
        putpixel((int)(x + 0.5), (int)(y + 0.5), BLUE);
        x += xi;
        y += yi;
    }

    getch();
    closegraph();
}

3 comments:
thank you so much!!! it helped a lotttttt!!!
thanks a lot.................. it improved my programming logic
it is not working when both the points are having the same y value. that is when y1 and y2 are same. for eg: (300,120) and (180,120). please help me out with this.
Homework Help

Posted by Amanda on Wednesday, April 4, 2012 at 4:57pm.

The ages (in years) of 10 infants and the number of hours each slept in a day:
Age, x: 0.1, 0.2, 0.4, 0.7, 0.6, 0.9, 0.1, 0.2, 0.4, 0.9
Hours slept, y: 14.9, 14.5, 13.9, 14.1, 13.9, 13.7, 14.3, 13.9, 14.0, 14.1
Find the equation of the regression line. Then use the regression equation to predict the value of y for the given x, if meaningful. If it is not meaningful, explain why.
a. x = 0.3 years
b. x = 3.9 years
c. x = 0.6 years
d. x = 0.8 years

• Statistics - MathGuru, Wednesday, April 4, 2012 at 7:57pm

If you need to show the work by hand, you can develop the regression equation in the following format: predicted y = a + bx ...where a represents the y-intercept and b the slope. To get to that point, here are some formulas to calculate along the way.
Note: E here means to add up or to find the total.
To find a: a = (Ey/n) - b(Ex/n)
To find b: b = SSxy/SSxx
To find SSxy: SSxy = Exy - [(Ex)(Ey)]/n
To find SSxx: SSxx = Ex^2 - [(Ex)(Ex)]/n
Once you have the equation, substitute the values for x and solve for predicted y. I hope this will help get you started.
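The formulas in the answer above can be worked through directly for the given data. A small sketch (E written as explicit sums; only the data from the question is used):

```python
# Least-squares regression line for the infant-sleep data,
# following predicted y = a + b*x with the SS formulas above.
x = [0.1, 0.2, 0.4, 0.7, 0.6, 0.9, 0.1, 0.2, 0.4, 0.9]
y = [14.9, 14.5, 13.9, 14.1, 13.9, 13.7, 14.3, 13.9, 14.0, 14.1]
n = len(x)

Ex, Ey = sum(x), sum(y)
Exy = sum(xi * yi for xi, yi in zip(x, y))
Exx = sum(xi * xi for xi in x)

SSxy = Exy - Ex * Ey / n
SSxx = Exx - Ex * Ex / n
b = SSxy / SSxx           # slope
a = Ey / n - b * Ex / n   # y-intercept

print(a, b)               # regression line: predicted y = a + b*x
print(a + b * 0.3)        # e.g. prediction at x = 0.3 years (meaningful:
                          # 0.3 lies inside the observed age range)
```

Note that part b (x = 3.9 years) falls far outside the observed ages (0.1 to 0.9), so using the line there is extrapolation and not meaningful.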
Pembroke Pines Math Tutor Find a Pembroke Pines Math Tutor ...I work with students at their level of need whether it is basic phonics instruction, fluency, comprehension, vocabulary, or a combination of the above. Reading is essential for success in life, and my goal is to make sure that all students are prepared for the future. I have worked with elementary, middle, high school and even college level writers. 34 Subjects: including algebra 1, special needs, elementary (k-6th), grammar ...My first language is Spanish, but I was born and raised in the states so I am fully bilingual (read, write, and speak), and have no accent. I have a lot of interest in History because my dad is a History buff. He is also a college professor with a background in Political Science and Social Work so it has rubbed off on me. 24 Subjects: including ACT Math, SAT math, chemistry, geometry ...I am experienced in helping students to set, meet and exceed their personal goals. I have helped individuals learn how to use computers, seek employment, improve their life skills, read, write, and improve their grammar. If you want to know how to learn, I am the teacher for you. 14 Subjects: including prealgebra, English, reading, writing I graduated from Florida International University with a masters degree in Accounting. I have had prior experience working as a substitute teacher in elementary and middle school, teaching Math and English. I am highly skilled in the subject matter of math and algebra and I will provide you with t... 3 Subjects: including algebra 2, algebra 1, prealgebra ...Also, differentials become the basis for some fundamental equations used in everyday mathematics. Chemistry involves more than just boring theories and difficult lab experiments. The student learns how certain physical phenomena exist and how to use household products to create an interesting presentation. 23 Subjects: including statistics, differential equations, linear algebra, ACT Math
Centripetal/Centrifugal Force and Moment of Inertia

Hello all. I have a problem and I'm not sure if my analysis is 100% accurate. I hope this is the correct spot since it deals with dynamics (mechanics). Here's my scenario: I have a forklift and a pallet with a box on top. I'm trying to calculate the required velocity for the box+pallet to flip around a turn, and at what angle. Pictures have been uploaded to visualize the problem better.

Here are my assumptions:
- The forklift is 100% capable of carrying this load (the forklift will NOT tilt on the turn, so I can neglect the forklift altogether).
- The box is latched down to the pallet, creating a rigid body.
- The center of gravity (CoG) is directly in the middle of the box.
- The forklift is at constant speed (tangential acceleration = 0), so the centripetal acceleration acts only in the normal direction, into the curve.
- No slipping will occur between the fork and the pallet + box.
- I can calculate the velocity, radius of curvature, and all dimensions of the box/pallet.

The centripetal acceleration is a_c = v^2/r (in the normal direction, toward the center of the curve), so the centripetal force is F_c = m*v^2/r. In the rotating frame, the centrifugal force is the same in magnitude but acts in the opposite direction at the CoG. I calculated the angle at which my rigid system will fall over by having the CoG align vertically with point P (as seen in the picture) in a static situation.

How would I go about calculating how much force is required to have my rigid body reach the angle I calculated (so my pallet + box will flip)? I have thought about calculating the moment at point P and at the CoG (summing moments at a point in the free body diagram and equating them to the sum of moments, at the same point, in the mass-acceleration diagram or kinetic diagram), but I can't seem to relate it to everything else. I'm not sure how to go about calculating the angular acceleration either. Can't seem to put everything together.

I don't think I have left anything out. Any theories or comments would be appreciated. Thank you for your time!
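One standard way to attack this question (not necessarily what the poster's textbook intends) is a quasi-static moment balance about the outer bottom edge P: the load tips once the centrifugal moment about P exceeds the gravity moment. A small sketch, with all dimensions illustrative:

```python
# Quasi-static tipping check for the pallet+box scenario described above.
# This is a sketch of one common approach (moment balance about the outer
# bottom edge P), not a full dynamic analysis; the numbers are made up.
import math

g = 9.81  # gravitational acceleration, m/s^2

def critical_speed(r, b, h):
    """Speed at which the centrifugal moment about P equals the gravity
    moment: m*(v^2/r)*h = m*g*b  ->  v = sqrt(g*r*b/h).  Mass cancels.
    r: turn radius (m); b: horizontal distance from CoG to edge P (m);
    h: height of the CoG above the pallet base (m)."""
    return math.sqrt(g * r * b / h)

def tip_angle(b, h):
    """Static tip-over angle: the CoG sits vertically above P when
    tan(theta) = b/h."""
    return math.degrees(math.atan2(b, h))

# Illustrative load: CoG 0.6 m up, 0.4 m in from the edge, 5 m turn radius.
v_crit = critical_speed(r=5.0, b=0.4, h=0.6)
theta_tip = tip_angle(b=0.4, h=0.6)
```

Above the critical speed the net moment about P becomes positive and the body starts to rotate; relating that excess moment to angular acceleration is then sum-of-moments = I_P * alpha, with I_P the moment of inertia about the edge P.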
question on continuity of functions

November 12th 2009, 12:36 PM #1

Okay, so in general, if I wanted to do a proof to show that some function f(x) is continuous for all x, how do I do that? I mean, general proofs of continuity are easy, but how do you prove it to be continuous for all x in a set?

Choose any point in the set, say $c$. Then showing that the function $f$ is continuous at $c$ suffices to show that $f$ is continuous on the set.

Why do you think that there is anything vague about that method? You suppose that $c\in S$ and show that $f$ is continuous at $c$. There is absolutely nothing vague about that. Maybe you don't understand the definition of continuity. Is that it?

I guess "general" was more the word I was looking for. I understand the idea behind continuity: the limit of the function as x approaches a point equals the value of the function at that point, and also how to formally work with the definition. The book didn't give any examples of a proof in which you'd prove continuity of a function in general.
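For completeness, the definition the replies are pointing at, written out in the standard ε-δ form:

```latex
f \text{ is continuous at } c \;\Longleftrightarrow\;
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x :
\; |x - c| < \delta \;\Longrightarrow\; |f(x) - f(c)| < \varepsilon .
```

To prove continuity on a set $S$, let $c \in S$ be arbitrary and exhibit such a $\delta$ (it may depend on both $\varepsilon$ and $c$). For example, for $f(x) = x^2$ one can take $\delta = \min(1, \varepsilon/(2|c|+1))$: if $|x-c| < \delta \le 1$ then $|x+c| \le |x-c| + 2|c| < 2|c|+1$, hence $|x^2 - c^2| = |x-c|\,|x+c| < \varepsilon$.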
Given the following quadratic equation, y = x^2 + 6x + 5, find

1. Given the following quadratic equation, y = x^2 + 6x + 5, find
a. the vertex
b. the axis of symmetry
c. the intercepts
d. the domain
e. the range
f. the interval where the function is increasing, and
g. the interval where the function is decreasing
h. Graph the function.
SHOW ALL WORK.

2. Given the following polynomial, f(x) = (x+5)(x-3)(x+3), find
a. the zeros and the multiplicity of each
b. where the graph crosses or touches the x-axis
c. the number of turning points
d. the end behavior
SHOW ALL WORK.

3. Using the 7 steps outlined in Section 4.3 of your book, analyze the graph of the following function: R(x) = x+1/x
SHOW ALL WORK.

4. Solve the following inequality. Write your solution in interval notation.
x^3 + 81x > 0
SHOW ALL WORK.

5. Find the domain of the composite function f∘g.
f(x) = x^2 + 5; g(x) = √(x-8)

6. Find the inverse of the following function. Find the domain, range, and asymptotes of each function. Graph both functions on the same coordinate plane.
f(x) = 12e^-x
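Problem 1 can be checked numerically with the standard vertex and quadratic formulas (this is a sketch of the arithmetic, not the required "show all work" solution):

```python
# Checking the parts of problem 1 for y = x^2 + 6x + 5 using the
# standard formulas: vertex at x = -b/(2a), roots from the quadratic formula.
import math

a, b, c = 1, 6, 5

x_vertex = -b / (2 * a)                 # axis of symmetry: x = -3
y_vertex = a * x_vertex**2 + b * x_vertex + c   # vertex y-coordinate: -4
disc = b**2 - 4 * a * c                 # discriminant: 16
roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)])  # x-intercepts: -5, -1
y_intercept = c                         # value at x = 0

# Since a > 0 the parabola opens upward: domain is all reals,
# range is [y_vertex, inf), decreasing on (-inf, -3], increasing on [-3, inf).
```

So the vertex is (-3, -4), the axis of symmetry is x = -3, the intercepts are (-5, 0), (-1, 0) and (0, 5), the domain is all reals and the range is [-4, ∞).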
ON Course

Analysing Network Visualization Statistics

Posted on June 1st, 2012 by Jamie Mahoney

As mentioned in a previous post, there are many statistics that can be derived from the network visualizations that I have been generating from the course data I have been collecting. At the moment, these are the particular numbers that I have been paying attention to:
• Mean Degree of Nodes – the mean number of connections per node on the graph.
• Mean Weighted Degree of Nodes – the mean weight of connections per node on the graph.
• Graph Density – the ratio of the number of edges in the graph to the number of possible edges.
• Modularity – a measure of the strength of division of a network into modules. Networks with high modularity have dense connections between the nodes within modules but sparse connections between nodes in different modules.
• Mean Clustering Coefficient – the degree to which nodes in the graph tend to cluster together.
So, in terms of applying these to the networks generated with awards data:
• Mean Degree of Nodes – the mean number of connections for each award, i.e. the mean number of awards that each award is connected to.
• Mean Weighted Degree of Nodes – the mean weight of connections for each award, i.e. the mean number of modules shared by that award with other awards.
• Graph Density – the number of connections per award when compared to the total number of awards in the network (more affected than the others by an increase in awards offered).
• Modularity – a higher modularity suggests that awards are very highly connected with specific other awards, but have very few 'odd' connections to other awards in the network. A very high modularity would suggest that a group of awards shared a lot of modules between themselves.
• Mean Clustering Coefficient – a low coefficient would suggest that awards did not group together, and therefore did not share modules between them.
A high coefficient would suggest that most of the awards in the network formed clusters with other awards. The numbers generated for the weighted connections between awards for the academic years 2006/07 through 2012/13 are as follows:
│Academic Year│Mean Degree│Mean Weighted Degree│Graph Density│Modularity│Mean Clustering Coefficient │
│ 2006 – 2007 │ 0.804 │ 1.821 │ 0.069 │ 0.657 │ 0.357 │
│ 2007 – 2008 │ 0.763 │ 1.711 │ 0.041 │ 0.726 │ 0.408 │
│ 2008 – 2009 │ 0.500 │ 1.324 │ 0.030 │ 0.588 │ 0.224 │
│ 2009 – 2010 │ 0.405 │ 1.432 │ 0.023 │ 0.574 │ 0.124 │
│ 2010 – 2011 │ 0.720 │ 1.880 │ 0.029 │ 0.777 │ 0.212 │
│ 2011 – 2012 │ 0.716 │ 2.486 │ 0.020 │ 0.810 │ 0.259 │
│ 2012 – 2013 │ 0.651 │ 4.349 │ 0.021 │ 0.847 │ 0.267 │
So what do these numbers show, and are they actually useful? Well, mean degree shows the number of awards that each award is connected to, on average. If we look at mean weighted degree instead, we then take into consideration the weight of a connection between a pair of nodes, i.e. the number of joins between them, rather than just the fact that a join exists. Plotting this graphically helps to show the pattern that emerges. From the graph above it becomes clear that there is a definite drop in MWD (mean weighted degree) from the academic year 07/08 to the year 08/09 (around 22%), showing that the average number of links between awards dropped fairly considerably. Through looking back at the university's history, this can be explained: this was the point in time when the number of points per module of study was altered, meaning that, essentially, multiple versions of the same award were running in tandem, some with the old weighting, some with the new. This also explains the steady increase in MWD up to 11-12, which is the first year in which the old weightings would not have been active at all.
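The degree and density measures in the table can be recomputed outside Gephi directly from a weighted edge list. A minimal pure-Python sketch (the award codes and weights below are made up for illustration):

```python
# Sketch: computing mean degree, mean weighted degree and graph density
# from a weighted, undirected edge list, mirroring the measures above.
from collections import defaultdict

def network_stats(nodes, edges):
    """edges: list of (a, b, weight) tuples for an undirected graph."""
    degree = defaultdict(int)
    wdegree = defaultdict(float)
    for a, b, w in edges:
        degree[a] += 1; degree[b] += 1
        wdegree[a] += w; wdegree[b] += w
    n = len(nodes)
    mean_degree = sum(degree[v] for v in nodes) / n
    mean_weighted_degree = sum(wdegree[v] for v in nodes) / n
    density = len(edges) / (n * (n - 1) / 2)  # edges / possible edges
    return mean_degree, mean_weighted_degree, density

awards = ["A1", "A2", "A3", "A4"]             # hypothetical award codes
shared = [("A1", "A2", 3), ("A2", "A3", 1)]   # weight = modules shared
md, mwd, dens = network_stats(awards, shared)
```

Isolated awards (here "A4") pull both means down, which is why the mean degree in the table can sit below 1 even though connected awards have several links each.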
From the highest point of the old weighting to this point in the new weighting, there is an increase of over 36% in the number of joins between awards offered at the university. This shows (assuming that increased modularity is good in terms of curriculum design) that the provision has been improved through the alteration of module weightings. Taking into account the overall increase in the number of awards offered, this also shows that the restructuring of the modules had a significant impact on the sharing of teaching and assessment across different awards.

The number given for the 'modularity' of the graphs shows a couple of interesting things. As noted above, the modularity shows how well the nodes on the graph (i.e. the awards) form into self-contained clusters. A value of 1 would suggest that the awards form perfectly into self-contained clusters, having lots of connections between themselves but no connections with other clusters; a value of 0 would suggest the opposite. As you can see from the graph above, in 06/07 the modularity was reasonably high, quite possibly due to the smaller number of awards offered at the university. This figure rises over the next year, and then drops for two consecutive years as the weighting of modules at the university goes through a period of change. As the change is fully implemented, the modularity rises significantly and continues to rise, almost at a constant rate, from 2010-11 through to 2012-13. This would suggest (though is not necessarily the case) that, either by design or good fortune, the awards offered at the university are starting to form into self-contained groups or areas of specialism. This is interesting to note, as the university has recently gone through an organizational restructuring whereby three colleges were formed – could these clusters be contained within the colleges?
Though this has only looked at two series of numbers generated for each of these visualizations, it does show that visualizing course data produces extra data that cannot be collected when the data is in its raw form. Further to this, it also shows that this data accurately reflects historical changes in provision within the university. If these principles can be applied retrospectively to show changes, in which ways can they be applied to decision-making processes, to help assess the impact of potential changes?

Back to Visualizing Course Data!

Posted on May 25th, 2012 by Jamie Mahoney

After having worked on creating a badge system for universities over the past few weeks, I've now gone back to looking at how the massive amount of course data that I currently have can be visualized in a meaningful and useful way. My first bout of visualization resulted in a series of A0 posters showing the links between all of the modules currently being delivered at the university. Whilst these visualizations are very useful for showing the complexity of course structure and relationships, it becomes fairly difficult to extract any information that is particularly useful. For example, the edges in the network denote a connection between two modules in terms of the award that the combination is delivered on. A collection of edges of the same colour shows a group of connections for the same award, i.e. a group of modules delivered as part of one particular award.
By changing to this level of abstraction, it means that a) there are far fewer nodes on the graph, making it easier to see the information, and b) it is easier for people to relate to an award (i.e. it is more easily recognizable what the node refers to) than to a module. At the moment, the visualizations consider data for awards that are 'Active', i.e. have students on all levels and have a full-time, 'traditional' degree 'feel'. I chose to do this as taking into account awards that are on their way in or way out, and part-time variations on a theme offered in a full-time course, started to distort the data, flooding the networks with nodes and edges that are essentially replicas of other nodes and edges in the graph. Obviously the visualization exercise could be repeated for part-time or postgraduate courses, or extended to include them. Narrowing the data down as described above, and running it through the trusted Gephi, this time using a circular layout algorithm, produces visualizations such as the following:

Each node around the edge of the graph represents an award that was active at the university for that particular year. With the university being relatively young in the grand scheme of things, the time-span between the first visualization (06-07) and the final one (12-13) represents a substantial proportion of the university's history (in its current form). Award codes have been used as they are fairly short and remain similar within groups of awards offered by the same departments or schools. By doing so, the relative position of awards is more or less maintained in each visualization. For example, the pattern created between Computer Science awards and Media awards exists and can be easily spotted in each of the visualizations, even though the number of awards in each visualization changes and the exact codes of individual awards may change.
The full collection of visualizations can be accessed here: 2006 – 2007, 2007-2008, 2008-2009, 2009-2010, 2010-2011, 2011-2012, 2012-2013. These visualizations show three different sets of information: the number of active awards for each year, the codes for the active awards, and the relationships (where they exist) between the awards, i.e. where they share modules in common. By taking into account the number of modules shared between the awards and including this in the visualizations, we get a different view of the data. We can not only see where links exist, but also the strength of the links between the awards. Including the number of modules shared between awards as the weight of each of the edges produces the following visualizations:

The full collection of these visualizations can be found here: 2006-07, 2007-08, 2008-09, 2009-10, 2010-11, 2011-12, 2012-13

By introducing the weighted edges into the network, we can learn new pieces of information through the visualizations. Whilst the Computer Science – Media pattern exists across several years, we can see it move from being one of the more dominant links (2007-08 / 09-10) to being overshadowed by the number of modules being shared by, for instance, Film and Television and Media Production, and History & Social Science awards. As well as making pretty pictures with the course data, the statistics associated with these networks can also be analyzed, but that will be the focus of my next blog post.

What to Do with Six Years of Course Data?!?!

Posted on March 30th, 2012 by Jamie Mahoney

After asking colleagues in Planning, I came across some stored reports that contain information about the various awards/courses offered at the university, along with the modules that constitute those awards – from short certificates to full undergraduate and postgraduate degrees.
Whilst the reports date back to the 90s, the data within them is substantial enough to be used from 2006-07 onwards; in total this comes to around 50,000 individual award->module relationships spread over the 6 academic years represented in the data. The first question that arose was: 'What to do with six years of course data?!?!?!'. After speaking with Tony Hirst last week, we came to the conclusion that this data would also have a great benefit if utilised in new ways within the university itself, as well as presenting the course information (and related datasets) to current and prospective students.

The first way I decided to look at all of this information was to visualise the relationships between modules and courses offered at the university. The data shows how different awards share certain modules in common; this can be seen in small-scale examples within the raw data itself, but how would the entire dataset for a year look? To find out, I extracted the pertinent information from everything that was currently being stored, and eventually narrowed it down to a set of data that showed the relationships between modules – basically pairs of modules offered on the same awards. Modules formed the nodes of the graph, and the links between the nodes – the edges – represent the various courses that the modules are offered on.

With this dataset prepared, I loaded the data into Gephi, selected an appropriate layout algorithm and let Gephi work its magic. As a result, we get graphs like this: allmodules_11_12. (Each node is a module, each edge is an award that the module is available on, and edge colours represent a single award.) From these graphs we can see that clusters of courses form that share many modules in common, mainly around joint degrees (which makes sense!); we can also see that many courses 'float away' from these hubs as they are entirely self-contained and share no modules with any other award offered at the university.
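The pairing step described above — turning award->module records into module-pair edges labelled by award — can be sketched in a few lines (the record layout and codes here are illustrative, not the university's actual schema):

```python
# Sketch: building module-pair edges for Gephi from award->module records.
from itertools import combinations

records = [                       # (award_code, module_code) pairs
    ("CS101A", "MOD1"), ("CS101A", "MOD2"), ("CS101A", "MOD3"),
    ("MED202", "MOD2"), ("MED202", "MOD4"),
]

modules_by_award = {}
for award, module in records:
    modules_by_award.setdefault(award, []).append(module)

# One edge per pair of modules co-occurring on the same award; the award
# code labels the edge, so Gephi can colour edges of one award alike.
edges = [
    (a, b, award)
    for award, mods in modules_by_award.items()
    for a, b in combinations(sorted(mods), 2)
]
```

An award with k modules contributes k·(k−1)/2 edges, which is why large joint degrees dominate the resulting hairballs.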
The other graphs can be seen here: all modules 06 07, all modules 07 08, all modules 08 09, all modules 09 10 and all modules 10 11.

So apart from making pretty pictures with course data, what purpose has this served? Well, firstly, I now know that I can get a vast amount of data covering the past six years of courses and modules offered at the university. Secondly, I now have a better understanding of the inner workings of Gephi, which will no doubt serve me well over the rest of the project. Thirdly, I also now know just who to pester in the right departments to get even more data. Finally, we now have A0 printouts of these graphs plastered around the office walls – I certainly didn't envisage using course data as wallpaper when I started on this project.

Being able to quickly see the connections between modules, particularly where one module is used for multiple awards, could be very useful for those involved in curriculum planning. Obviously I'm not suggesting that they consult one of these A0 posters to assess the impact of changing one module, but being able to quickly find the impact of changing it would be useful. Take, for instance, a module that contains an element of group work. Five courses use this module, four of which are run by one particular college; the fifth course is run by a completely separate college. The four courses have far too much group work, it is decided, so the decision is made to remove the group work element from the module. Do those involved in the decision know that the module is used by a course in College B, and that the module is the only element of group work within a year's study on that course? Removing the group work element would mean that the course doesn't contain all of the required elements to be re-validated, obviously causing problems further down the line. Combining the data used to produce the visualisations above with other data sources could help to resolve this issue.

So where to go from here?
Well, abstracting slightly further from the course->module level, we (I) can start to compare inter-departmental and inter-disciplinary sharing of modules at a department, faculty or college level within the university. Combining with other data that we make available through data.lincoln, we can look at how departments share modules across the physical space of the campuses that make up the university (more on that in another blog post). Combining the data with student numbers, we can look at the subscription levels of the modules that form a focal point for multiple awards. If/when I can get hold of full datasets for learning outcomes and module descriptors, I can start to look at modules that don't necessarily share any course in common, but may be similar in terms of the learning outcomes they address or the topics they cover (as described in the module descriptions). There really are many ways to combine all of the information that I'm starting to stumble across, and it is just a case of finding interesting combinations of datasets and assessing how useful the results are. As a result of this digging around and tidying up of various data sources, all of the data that can be made accessible through data.lincoln will be made available – in a nice format, unlike the multitude of document types and messy data that I've been dealing with recently. If you have any suggestions of ways to mash up some data, or ideas for new visualisations, feel free to leave me a comment or three below!
Radiation Patterns and Antenna Characteristics

This chapter describes how to calculate the radiation fields. It also provides general information about the antenna characteristics that can be derived based on the radiation fields.

About Radiation Patterns

Once the currents on the circuit are known, the electromagnetic fields can be computed. They can be expressed in the spherical coordinate system attached to your circuit as shown in Co-polarization angle. The electric and magnetic fields contain terms that vary as 1/r, 1/r^2, etc. It can be shown that the terms that vary as 1/r^2, 1/r^3, ... are associated with the energy storage around the circuit. They are called the reactive field or near-field components. The terms having a 1/r dependence become dominant at large distances and represent the power radiated by the circuit. Those are called the far-field components (E[ff], H[ff]). In the direction parallel to the substrate (theta = 90 degrees), parallel-plate modes or surface-wave modes, which vary as 1/sqrt(r), may be present too. Although they will dominate in this direction, and account for a part of the power emitted by the circuit, they are not considered to be part of the far-fields.

The radiated power is a function of the angular position and the radial distance from the circuit. The variation of power density with angular position is determined by the type and design of the circuit. It can be graphically represented as a radiation pattern. The far-fields can only be computed at those frequencies that were calculated during a simulation. The far-fields will be computed for a specific frequency and for a specific excitation state. They will be computed in all directions (theta, phi) in the open half space above and/or below the circuit. Besides the far-fields, derived radiation pattern quantities such as gain, directivity, axial ratio, etc. are computed.
About Antenna Characteristics

Based on the radiation fields, polarization and other antenna characteristics such as gain, directivity, and radiated power can be derived. The far-field can be decomposed in several ways. You can work with the basic decomposition in (E[co], E[cross]), which is a decomposition based on an antenna measurement set-up. For circular polarized antennas, a decomposition into left-hand and right-hand polarized field components (E[lhp], E[rhp]) is most appropriate. Below you can find how the different components are related to each other.

The fields can be normalized with respect to:

Circular Polarization

Below is shown how the left-hand and right-hand circular polarized field components are derived. From those, the circular polarization axial ratio (AR[cp]) can be calculated. The axial ratio describes how well the antenna is circularly polarized. If its amplitude equals one, the fields are perfectly circularly polarized. It becomes infinite when the fields are linearly polarized.

Linear Polarization

Below, the equations to decompose the far-fields into a co- and cross-polarized field are given ( is the co-polarization angle). From those, a "linear polarization axial ratio" (AR[lp]) can be derived. This value illustrates how well the antenna is linearly polarized. It equals one when perfect linear polarization is observed and becomes infinite for a perfectly circular polarized antenna. E[co] is defined as collinear, and E[cross] implies a component orthogonal to E[co]. For a perfectly linear polarized antenna, E[cross] is zero and the axial ratio AR = 1. If E[cross] = E[co], you no longer have linear polarization but circular polarization, resulting in AR = infinity.
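The actual equations did not survive in this copy of the chapter. The standard textbook forms of these decompositions, with the usual caveat that which sign is called left- or right-hand depends on the time convention of the particular tool, are:

```latex
\begin{aligned}
E_{\mathrm{lhp}} &= \tfrac{1}{\sqrt{2}}\,(E_\theta + jE_\phi), &
E_{\mathrm{rhp}} &= \tfrac{1}{\sqrt{2}}\,(E_\theta - jE_\phi), &
AR_{\mathrm{cp}} &= \frac{|E_{\mathrm{lhp}}| + |E_{\mathrm{rhp}}|}{\bigl|\,|E_{\mathrm{lhp}}| - |E_{\mathrm{rhp}}|\,\bigr|},\\[4pt]
E_{\mathrm{co}} &= E_\theta \cos\alpha + E_\phi \sin\alpha, &
E_{\mathrm{cross}} &= -E_\theta \sin\alpha + E_\phi \cos\alpha, &
AR_{\mathrm{lp}} &= \frac{|E_{\mathrm{co}}| + |E_{\mathrm{cross}}|}{\bigl|\,|E_{\mathrm{co}}| - |E_{\mathrm{cross}}|\,\bigr|},
\end{aligned}
```

where α is the co-polarization angle. These forms reproduce the behaviour stated in the text: AR[cp] = 1 when one circular component vanishes and → ∞ for linear polarization; AR[lp] = 1 when E[cross] = 0 and → ∞ when |E[cross]| = |E[co]|. Verify the circular-polarization sign convention against the tool's own output before relying on it.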
Co-polarization angle

Radiation Intensity

The radiation intensity in a certain direction, in watts per steradian, is given by:

U(theta, phi) = (r^2 / (2*eta)) * |E[ff](r, theta, phi)|^2

For a certain direction, the radiation intensity will be maximal and equals:

U_max = max over all (theta, phi) of U(theta, phi)

Radiated Power

The total power radiated by the antenna, in watts, is represented by:

P_rad = integral of U(theta, phi) dOmega over the sphere

Effective Angle

This parameter is the solid angle through which all power emanating from the antenna would flow if the maximum radiation intensity were constant for all angles over the beam area. It is measured in steradians and is represented by:

Omega_A = P_rad / U_max

Directivity

Directivity is dimensionless and is represented by:

D(theta, phi) = 4*pi*U(theta, phi) / P_rad

The maximum directivity is given by:

D_max = 4*pi*U_max / P_rad = 4*pi / Omega_A

Gain

The gain of the antenna is represented by:

G(theta, phi) = 4*pi*U(theta, phi) / P_inj

where P_inj is the real power, in watts, injected into the circuit. The maximum gain is given by:

G_max = 4*pi*U_max / P_inj

Effective Area

The effective area, in square meters, of the antenna circuit is given by:

A_eff = (lambda^2 / (4*pi)) * G_max

Planar (Vertical) Cut

For the planar cut, the angle phi (Cut Angle), which is relative to the x-axis, is kept constant. The angle theta, which is relative to the z-axis, is swept to create a planar cut. Theta is swept from 0 to 360 degrees. This produces a view that is perpendicular to the circuit layout plane. Planar (vertical) cut illustrates a planar cut.

In layout, there is a fixed coordinate system such that the monitor screen lies in the XY-plane. The X-axis is horizontal, the Y-axis is vertical, and the Z-axis is normal to the screen. To choose which plane is probed for a radiation pattern, the cut angle must be specified. For example, if the circuit is rotated by 90 degrees, the cut angle must also be changed by 90 degrees if you wish to obtain the same radiation pattern from one orientation to the next.

Conical Cut

For a conical cut, the angle theta, which is relative to the z-axis, is kept constant. Phi, which is relative to the x-axis, is swept to create a conical cut. Phi is swept from 0 to 360 degrees. This produces a view that is parallel to the circuit layout plane.
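As a sanity check on these definitions, the directivity can be evaluated numerically from a sampled radiation intensity. The sketch below (pure Python, with made-up test patterns) integrates U over the sphere with a midpoint rule:

```python
# Sketch: numerical radiated power and maximum directivity from a sampled
# radiation intensity U(theta, phi), following the standard definitions:
#   P_rad = integral of U sin(theta) dtheta dphi,  D_max = 4*pi*U_max/P_rad.
import math

def directivity_max(U, n_theta=180, n_phi=360):
    """Midpoint-rule integration of U over the full sphere."""
    dt, dp = math.pi / n_theta, 2 * math.pi / n_phi
    p_rad, u_max = 0.0, 0.0
    for i in range(n_theta):
        t = (i + 0.5) * dt
        for j in range(n_phi):
            p = (j + 0.5) * dp
            u = U(t, p)
            p_rad += u * math.sin(t) * dt * dp
            u_max = max(u_max, u)
    return 4 * math.pi * u_max / p_rad

# Isotropic radiator: directivity 1 (0 dB).
d_iso = directivity_max(lambda t, p: 1.0)
# Short-dipole pattern U ~ sin^2(theta): directivity 1.5 (1.76 dB).
d_dipole = directivity_max(lambda t, p: math.sin(t) ** 2)
```

The two analytic cases (1.0 and 1.5) make a convenient regression test for any pattern post-processing code.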
Conical cut illustrates a conical cut.

Viewing Results Automatically in Data Display

If you choose to view results immediately after the far-field computation is complete, enable Open display when computation completed. When Data Display is used for viewing the far-field data, a data display window containing default plot types of the data display template of your choice will be automatically opened when the computation is finished. The default template, called FarFields, bundles four groups of plots:
• Linear Polarization with E[co], E[cross], AR[lp].
• Circular Polarization with E[lhp], E[rhp], AR[cp].
• Absolute Fields with
• Power with Gain, Directivity, Radiation Intensity, Efficiency.
For more information, please refer to About Antenna Characteristics.

Exporting Far-Field Data

If 3D Visualization is selected in the Radiation Pattern dialog, the normalized electric far-field components for the complete hemisphere are saved in ASCII format in the file <project_dir>/mom_dsn/<design_name>/proj.fff. The data is saved in the following format:

#Frequency <f> GHz /* loop over <f> */
#Excitation #<i> /* loop over <i> */
#Begin cut /* loop over phi */
<theta> <phi_0> <real(E_theta)> <imag(E_theta)> <real(E_phi)> <imag(E_phi)> /* loop over <theta> */
#End cut
#Begin cut
<theta> <phi_1> <real(E_theta)> <imag(E_theta)> <real(E_phi)> <imag(E_phi)> /* loop over <theta> */
#End cut
#Begin cut
<theta> <phi_n> <real(E_theta)> <imag(E_theta)> <real(E_phi)> <imag(E_phi)> /* loop over <theta> */
#End cut

In the proj.fff file, E_theta and E_phi represent the theta and phi components, respectively, of the far-field values of the electric field. Note that the fields are described in the spherical co-ordinate system (r, theta, phi) and are normalized.
The normalization constant for the fields can be derived from the values found in the proj.ant file and equals:

The proj.ant file, stored in the same directory, contains the antenna characteristics. The data is saved in the following format:

Excitation <i> /* loop over <i> */
Frequency <f> GHz /* loop over <f> */
Maximum radiation intensity <U> /* in Watts/steradian */
Angle of U_max <theta> <phi> /* both in deg */
E_theta_max <mag(E_theta_max)> ; E_phi_max <mag(E_phi_max)>
E_theta_max <real(E_theta_max)> <imag(E_theta_max)>
E_phi_max <real(E_phi_max)> <imag(E_phi_max)>
Ex_max <real(Ex_max)> <imag(Ex_max)>
Ey_max <real(Ey_max)> <imag(Ey_max)>
Ez_max <real(Ez_max)> <imag(Ez_max)>
Power radiated <excitation #i> <prad> /* in Watts */
Effective angle <eff_angle_st> steradians <eff_angle_deg> degrees
Directivity <dir> dB /* in dB */
Gain <gain> dB /* in dB */

The maximum electric field components (E_theta_max, E_phi_max, etc.) are those found at the angular position where the radiation intensity is maximal. They are all in volts.

Displaying Radiation Results

In EMDS for ADS Visualization, you can view the following radiation data:
• Far-fields including E fields for different polarizations and axial ratio in 3D and 2D formats
• Antenna parameters such as gain, directivity, and direction of main radiation in tabular format
This section describes how to view the data. In EMDS for ADS RF mode, radiation results are not available for display. For general information about radiation patterns and antenna parameters, refer to About Radiation Patterns.

Loading Radiation Results

In EMDS for ADS, computing the radiation results is included as a post-processing step. The Far Field menu item appears in the main menu bar only if radiation results are available. If a radiation results file is available, it is loaded automatically. The command Set Port Solution Weights (in the Current menu) has no effect on the radiation results.
The excitation state for the far-fields is specified in the radiation pattern dialog box before the computation is started.

You can also read in far-field data from other projects. First, select the project containing the far-field data that you want to view, then load the data:

1. Choose Projects > Select Project.
2. Select the name of the Momentum or Agilent EMDS project that you want to use.
3. Click Select Momentum or Select Agilent EMDS.
4. Choose Projects > Read Field Solution.
5. When the data is finished loading, it can be viewed in far-field plots and as antenna parameters.

Displaying Far-fields in 3D

The 3D far-field plot displays far-field results in 3D. To display a 3D far-field plot:

1. Choose Far Field > Far Field Plot.
2. Select the view in which you want to insert the plot.
3. Select the E Field format:
□ E = sqrt(mag(E Theta)^2 + mag(E Phi)^2)
□ E Theta
□ E Phi
□ E Left
□ E Right
□ Circular Axial Ratio
□ E Co
□ E Cross
□ Linear Axial Ratio
4. If you want the data normalized to a value of one, enable Normalize. For the Circular and Linear Axial Ratio choices, set the Minimum dB. Also set the Polarization Angle for E Co, E Cross, and Linear Axial Ratio.
5. By default, a linear scale is used to display the plot. If you want to use a logarithmic scale, enable Log Scale. Set the minimum magnitude that you want to display, in dB.
6. Click OK.

Selecting Far-field Display Options

You can change the translucency of the far-field and set a constant phi angle:

1. Click Display Options.
2. A white, dashed line appears lengthwise on the far-field. You can adjust the position of the line by setting the Constant Phi Value, in degrees, using the scroll bar.
3. Adjust the translucency of the far-field by using the scroll bar under Translucency.
4. Click Done.

Defining a 2D Cross Section of a Far-field

You can take a 2D cross section of the far-field and display it on a polar or rectangular plot.
The cut type can be either planar (phi is fixed, theta is swept) or conical (theta is fixed, phi is swept). The figure below illustrates a planar cut (or phi cut) and a conical cut (or theta cut), and the resulting 2D cross section as it would appear on a polar plot. The procedure that follows describes how to define the 2D cross section.

To define a cross section of the 3D far-field:

1. Choose Far Field > Cut 3D Far Field.
2. If you want a conical cut, choose Theta Cut. If you want a planar cut, choose Phi Cut.
3. Set the angle of the conical cut using the Constant Theta Value scroll bar, or set the angle of the planar cut using the Constant Phi Value scroll bar.
4. Click Apply to accept the setting. The cross section is added to the Cut Plots list.
5. Repeat these steps to define any other cross sections.
6. Click Done to dismiss the dialog box.

Displaying Far-fields in 2D

Once you have defined a 2D cross section of the 3D far-field plot, you can display the cross section on one of these plot types:

• On a polar plot
• On a rectangular plot, in magnitude versus angle

In the figure below, a cross section is displayed on a polar and rectangular plot.

To display a 2D far-field plot:

1. Choose Far Field > Plot Far Field Cut.
2. Select a 2D cross section from the 2D Far Field Plots list. The type of cut (phi or theta) and the angle identifies each cross section.
3. Select the view that you want to use to display the plot.
4. Select the E-field format.
5. Select the plot type, either Cartesian or Polar.
6. If you want the data normalized to a value of one, enable Normalize.
7. By default, a linear scale is used to display the plot. If you want to use a logarithmic scale, enable Log Scale. If available, set the minimum magnitude that you want to display, in dB; also, set the polarization angle.
8. Click OK.

Displaying Antenna Parameters

Choose Far Field > Antenna Parameters to view gain, directivity, radiated power, maximum E-field, and direction of maximum radiation.
The data is based on the frequency and excitation state as specified in the radiation pattern dialog. The parameters include:

• Radiated power, in watts
• Effective angle, in degrees
• Directivity, in dB
• Gain, in dB
• Maximum radiation intensity, in watts per steradian
• Direction of maximum radiation intensity, theta and phi, both in degrees
• E_theta, in magnitude and phase, in this direction
• E_phi, in magnitude and phase, in this direction
• E_x, in magnitude and phase, in this direction
• E_y, in magnitude and phase, in this direction
• E_z, in magnitude and phase, in this direction

In the antenna parameters, the magnitude of the E-fields is in volts.

© Agilent 2000-2008
Permutation Combination - Practice Questions

A collection of questions that typically appear from the topic of Permutation and Combination.

1. Question 1: A college has 10 basketball players. A 5-member team and a captain will be selected out of these 10 players. How many different selections can be made? Explanatory Answer »
2. Question 2: Badri has 9 pairs of dark blue socks and 9 pairs of black socks. He keeps them all in the same bag. If he picks out three socks at random, what is the probability he will get a matching pair? Solution »
3. Question 3: How many words of 4 consonants and 3 vowels can be made from 12 consonants and 4 vowels, if all the letters are different? Solution »
4. Question 4: If the letters of the word CHASM are rearranged to form 5-letter words such that none of the words repeat, and the results are arranged in ascending order as in a dictionary, what is the rank of the word CHASM? Solution »
5. Question 5: How many four-letter distinct initials can be formed using the alphabets of the English language such that the last of the four letters is always a consonant? Solution »
6. Question 6: When four fair dice are rolled simultaneously, in how many outcomes will at least one of the dice show 3? Explanatory Answer »
7. Question 7: In how many ways can the letters of the word EDUCATION be rearranged so that the relative position of the vowels and consonants remains the same as in the word EDUCATION? Solution »
8. Question 8: How many ways can 10 letters be posted in 5 post boxes, if each of the post boxes can take more than 10 letters? Solution »
9. Question 9: How many numbers are there between 100 and 1000 such that at least one of their digits is 6? Solution »
10. Question 10: A team of 8 students goes on an excursion in two cars, of which one can seat 5 and the other only 4. In how many ways can they travel? Solution »
11. Question 11: How many ways can 4 prizes be given away to 3 boys, if each boy is eligible for all the prizes?
Solution »
12. Question 12: There are 12 yes or no questions. How many ways can these be answered? Solution »
13. Question 13: How many words can be formed by re-arranging the letters of the word ASCENT such that A and T occupy the first and last position respectively? Solution »
14. Question 14: Four dice are rolled simultaneously. What is the number of possible outcomes in which at least one of the dice shows 6? Solution »
15. Question 15: How many alphabets need to be there in a language if one were to make 1 million distinct 3 digit initials using the alphabets of the language? Solution »
16. Question 16: In how many ways can the letters of the word MANAGEMENT be rearranged so that the two As do not appear together? Solution »
17. Question 17: There are 5 Rock songs, 6 Carnatic songs and 3 Indi pop songs. How many different albums can be formed using the above repertoire if the albums should contain at least 1 Rock song and 1 Carnatic song? Solution »
18. Question 18: What is the value of 1*1! + 2*2! + 3*3! + ... + n*n!, where n! means n factorial or n(n-1)(n-2)...1? Solution »
19. Question 19: How many times will the digit '7' be written when listing the integers from 1 to 1000? Solution »
20. Question 20: 36 identical chairs must be arranged in rows with the same number of chairs in each row. Each row must contain at least three chairs and there must be at least three rows. A row is parallel to the front of the room. How many different arrangements are possible? TANCET 2008 Question: Answer & Explanation »
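A few of these can be sanity-checked numerically. For Question 18 (reading the third term as 3*3!, consistent with the pattern n*n!), the sum telescopes to (n+1)! - 1, since k*k! = (k+1)! - k!. A quick check in Python:

```python
from math import factorial

def series(n):
    """1*1! + 2*2! + ... + n*n!"""
    return sum(k * factorial(k) for k in range(1, n + 1))

# k*k! = (k+1)! - k!, so the sum telescopes to (n+1)! - 1
for n in range(1, 12):
    assert series(n) == factorial(n + 1) - 1
```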
What is significant about the half-sum of positive roots?

I apologize for the somewhat vague question: there may be multiple answers but I think this is phrased in such a way that precise answers are possible. Let $\mathfrak{g}$ be a semisimple Lie algebra (say over $\mathbb{C}$) and $\mathfrak{h} \subset \mathfrak{g}$ a Cartan subalgebra. All the references I have seen which study the representation theory of $\mathfrak{g}$ in detail make use of the half-sum of positive roots, which is an element of $\mathfrak{h}^\ast$: e.g. Gaitsgory's notes on the category O introduce the "dotted action" of the Weyl group on $\mathfrak{h}^\ast$, the definition of which involves this half-sum. Is there a good general explanation of why this element of $\mathfrak{h}^\ast$ is important? The alternative, I suppose, is that it is simply convenient in various situations, but this is rather unsatisfying.

rt.representation-theory lie-algebras

I don't know why the math won't display properly: the code looks fine to me. Maybe someone who knows more about LaTeX and/or MathOverflow can fix it. – Justin Campbell Nov 5 '11 at 20:56

By itself, the asterisk is a control character; to display one, use \ast in LaTeX. Wow, little earthquake just now. 14:52 PST. – Will Jagy Nov 5 '11 at 21:53

earthquake.usgs.gov/earthquakes/recenteqscanv/FaultMaps/… – Will Jagy Nov 5 '11 at 21:56

This question and the wonderful answers are a really great example of why MO is good. – Mariano Suárez-Alvarez♦ Nov 9 '11 at 2:32

I hope this very good question is not closed yet. I have heard that $\rho$ of $G$ can be related to the curvature of the homogeneous space $G/H$, where $H$ is a closed subgroup of $G$. I am interested in particular when $G$ is a noncompact semisimple Lie group and $H$ is its maximal compact subgroup. Can anybody elaborate on this, please?
– spr Jun 30 '12 at 5:15

9 Answers

I don't think there is a one-line answer to this question, since it depends a lot on the direction from which you approach semi-simple Lie theory. For one thing, it's probably best at first to emphasize just integral weights, among which the dominant ones parametrize irreducible finite dimensional representations. Here the weight usually denoted $\rho$ plays a ubiquitous role in the classical Weyl theory, but that too can be developed in a number of different ways. (There was some early experimentation with the notation; the alternative symbol $\delta$ also had widespread use before the Bourbaki preference for $\rho$ started to take over in 1968.) While it's important in proofs of the Weyl character formula to view $\rho$ as the half-sum of positive roots (given a fixed positive or simple system), it's also essential to identify it with the sum of fundamental dominant weights for many purposes. In this guise it's the smallest regular dominant weight, fixed by no element of the Weyl group except the identity. When passing from integral weights to line bundles on an associated flag variety $G/B$ (with $B$ a Borel subgroup associated to positive roots relative to a fixed maximal torus which it contains), the weight $\rho$ has the distinction of defining an ample line bundle. This property is crucial in geometric approaches to Weyl's formula, as well as in spin-offs in prime characteristic due to Andersen and others. Ultimately the importance of the weight $\rho$ is probably appreciated best in the setting of representation theory, where the finite dimensional theory is enriched by treatment of highest weight modules in more generality and the shift by $\rho$ is again ubiquitous. By the way, the convenient "dot" notation $w \cdot \lambda := w(\lambda +\rho) - \rho$ is apparently due to Robert Moody.
In the earlier literature the more awkward full notation appears, or else is replaced in the Paris notation by a hidden $\rho$-shift. None of what I've said is a complete answer to the question asked, but in any case it's more than a matter of "convenience" to emphasize $\rho$.

Note that for Kac-Moody groups, where there are infinitely many positive roots, one still defines $\rho$ as the sum of the fundamental weights. – Allen Knutson Nov 8 '11 at 1:21

@Allen: Yes, there's a lot more to be said in that direction. The two ways of looking at $\rho$ come up classically in the Weyl denominator formula, which Kac creatively generalized to certain representations of symmetrizable Kac-Moody algebras (though in that setting $\rho$ is not quite uniquely determined). – Jim Humphreys Nov 8 '11 at 13:46

From the point of view of geometry, the crucial fact about $\rho$ is that the corresponding line bundle on the flag manifold is (up to a sign) a (the) square-root of the canonical bundle (the top exterior power of $T^*_B G/B \simeq b_-$ is the sum of the negative roots). This is of course equivalent to Alain Valette's description in terms of the modular character of the Borel. In other words its sections in the real world are half-densities (things for which we can define the $L^2$ inner product). It is a universal fact that passage from the classical world to the quantum world (in particular the geometric construction of representations) involves a shift by the square root of the canonical bundle. There are many ways to explain or motivate this. For example if we seek unitary representations we need to be able to define an $L^2$ inner product, which means considering not sections of the bundle we might have expected but sections times half-densities (again this is Alain's answer restated).
From the point of view of rings of differential operators, the adjoint of a differential operator acting on functions (or on sections of a bundle $L$) is invariantly not another diffop (on $L$) but a differential operator acting on volume forms (or on sections of $L$ tensor the canonical bundle) --- so the self-dual twist of differential operators is by half-forms, i.e. $\rho$-shifted. (Put another way, Serre duality is a reflection centered at half-forms!) My favorite explanation is in Beilinson-Bernstein's Proof of Jantzen Conjectures and doesn't involve self-adjointness or unitarity: it's a consistency condition for deformation quantization of symbols (functions on the cotangent bundle): if you want this deformation quantization to be correctly normalized to order two (this is not the right question to go into that) you find you need to look at differential operators twisted by half-forms, not functions. On the flag variety this means a $\rho$-shift, and from the D-module POV on representation theory this is one fundamental place where that shift is forced on you, independent of thinking of inner products. This is in particular one way to see why it comes up in the Weyl character formula, through the geometric proof via Atiyah-Bott or via the BGG resolution, both of which involve the geometry of the flag variety.

This is actually a fairly deep question. Your suspicion that there may be multiple answers is correct, but there might be some surprising connections between seemingly unrelated answers. Let me give one possible thread of explanation. The underlying principle is that the appearance of $\rho$ and the "dot" action $w\cdot\lambda=w(\lambda+\rho)-\rho$ in representation theory is closely related to the geometry of the flag variety. One of the first places one meets $\rho$ (and the dot action) is in the Weyl character formula.
A theorem of Kostant shows that the formula can be written as the ratio of two Lie algebra cohomology Euler characteristics. From this perspective, the appearance of $w \cdot \lambda$ and $w\cdot 0$ in the WCF is ultimately explained by the fact that these are the weights appearing in the weight space decomposition of the relevant Lie algebra cohomology modules, namely $H^\ast(\mathfrak n, V^\lambda)$ and $H^\ast(\mathfrak n, V^0)$, where $\mathfrak n = \bigoplus_{\alpha>0} \mathfrak g_\alpha$ and $V^\mu$ denotes the irrep of highest weight $\mu$.

We can rephrase this in geometric terms by invoking the "geometric analogue" of Kostant's theorem, i.e. the Borel–Weil–Bott theorem. Kostant's description of the Lie algebra cohomology of $\mathfrak n = \mathfrak g/\mathfrak b^-$ with coefficients in an irrep translates into a representation-theoretic description of the sheaf cohomology of certain line bundles $L_\lambda$ (constructed using integral weights $\lambda$) over the flag variety $G/B^-$ of $\mathfrak g$. Consequently, the dot action shows up in this description, and this time it's accompanied by a shift in degree. This in turn can be explained by Serre duality; the key fact is that the canonical bundle of $G/B^-$ turns out to be $L_{-2\rho}$. So, in some sense, the appearance of $\rho$ and the dot action in the WCF can be thought of as a manifestation of Serre duality.

[N.B. This is a condensed version of my lengthy original answer. The old version can be found in the edit history.]

While I appreciate Dave Ben-Zvi's half-densities answer, I'm going to put forth the contrary opinion that it's largely a bookkeeping artifact. The most familiar place that $\rho$ shows up is in the WCF of the irrep $V$ with highest weight $\lambda$,
$$ \mathrm{Tr}(t|_{V}) = \frac{\sum_w (-1)^{\ell(w)} t^{w(\lambda+\rho)-\rho}}{\prod_{\Delta_+} (1-t^{-\beta})}.
$$
This version of WCF is good for suggesting the existence of the BGG resolution, or for taking the Fourier transform of and obtaining the Kostant multiplicity formula. But otherwise, I claim that it's the worst way to write it down, and suggest instead
$$ \mathrm{Tr}(t|_V) = \sum_w w \cdot \frac{t^{\lambda}}{\prod_{\Delta_+} (1-t^{-\beta})}. $$
Hurray, it's manifestly $W$-invariant, and no $\rho$ in sight! This is the natural version that one obtains by applying the Atiyah-Bott-Riemann-Roch-Lefschetz Woods Hole localization formula to the flag manifold $G/B$, as A&B mention in their paper. You really notice it if you try to write down a WCF for nonregular weights, which corresponds to applying the localization formula to a partial flag manifold $G/P$. Then you can no longer flip weights to put everything over the same denominator, so the first version is badly broken. The second, $W$-invariant, version works just fine in this case (the denominator is a product over only part of $\Delta_+$).

EDIT: I suppose it's too strong to say it's badly broken. It's just that it's not in lowest terms.

A general remark: You get that formula if you apply A–B to the $\overline\partial$ operator acting on $\Omega^{(0,q)}(G/B,L_\lambda)$. In another paper Bott mentions that you can make the other formula for the Weyl denominator (the one with $\rho$ in it!) show up if you approach things differently: $G/B$ is spin, so we can work with its (elliptic) Dirac operator $S^+ \to S^-$. Incidentally, the existence of a spin structure on $G/B$ is related to $\rho$: the canonical bundle of $G/B$ is $L_{-2\rho}$ so a holomorphic square root is given by $L_{-\rho}$, and this determines the spin structure. – Faisal Nov 8 '11 at 4:11
To an extent you don't even expect that in the non-simply-laced case, because a dual elements, half of the sum of the coroots, is sometimes comparably important. However, I know that the Weyl element has long been important for $q$-analogue" reasons, which in more modern work has become more and more important because it points to quantum groups and eventually even categorification. Consider the elementary fact that there are $\binom{n}{k} = \frac{n!}{k!(n-k)!}$ subsets of size $k$ of the set $\{1,\ldots,n\}$. This counting fact is a special case of the Weyl dimension formula for the dimension of an irreducible representation of a complex simple Lie algebra. In this model case the representation is $\mathfrak{sl}(n,\mathbb{C})$ acting on $\Lambda^k(\mathbb {C}^n)$. You could look at the same counting problem again with multiplicative weights for the elements of $\{1,\ldots,n\}$. If you make the weight of $j$ be some variables $x_j$, the total weight is an irreducible polynomial --- equivalent to the full character of $\Lambda^k(\mathbb{C}^n)$. Magically, if you let $x_j = q^j$ for a single variable $q$, you get (up to a power of $q$) the Gaussian binomial coefficient $\binom{n}{k}_q$. This is a special case of the Weyl $q$-dimension formula which gives the character of the Weyl element. That is, the dimension of an irreducible representation $V_\lambda$ of $\mathfrak{g}$ is given by a tidy ratio, and the $q$-dimension still is. The full character is not as simple, and therefore neither are most 1-variable specializations. up vote 7 down Bourbaki, and maybe Weyl himself, used the $q$-dimension to prove the dimension formula by plugging in $q=1$. (It can happen in combinatorics that a $q$-analogue is easier than the original vote question.) 
In modern representation theory the $q$-dimension is even more important, because it's also (after centering the powers of $q$) the quantum dimension of the same representation $V_\lambda$ (or we can say, the same-named representation) of the quantum group $U_q(\mathfrak{g})$. The Weyl element also arises in many other ways in the representation theory of the quantum group. Actually, that's an understatement: This $q$ is the variable of the Jones polynomial and its generalizations. All of this $q$-structure remains important in the even newer categorifications of quantum groups. (A caveat: Because half-integer powers of $q$ commonly arise, there is a substitution of $q^2$ for $q$ in passing from $q$-analogues to quantum groups. I never liked this mismatch of conventions, in fact as a student I didn't even realize/believe it, but it is an established standard.)

There is another formula for the Weyl element: It's the sum of the fundamental weights. It's already interesting that these two formulas agree.

Let $G=KAN$ be the Iwasawa decomposition of a semi-simple Lie group $G$. Then the modular function of the Borel subgroup (= minimal parabolic) $B=MAN$ is $\Delta_B(man)=e^{2\rho(\log a)}$ where $\rho$ is half the sum of the positive roots of the root system $\Delta(\mathfrak{g},\mathfrak{a})$. This is relevant for the definition of the principal series of representations of $G$, say in the compact picture: for $\nu\in i\mathfrak{a}^*$, the Hilbert space of $Ind_B^G(1\otimes e^\nu \otimes 1)$ is $L^2(K/M)$, with action given by $(\pi_\nu(g)f)(k)=e^{-(\nu+\rho)H(g^{-1}k)}f(\kappa(g^{-1}k))$, where $g=\kappa(g)e^{H(g)}n$ in the Iwasawa decomposition. So $e^{-\rho}$ appears as the square root of the Radon-Nikodym cocycle, needed to make the representation unitary, since the measure on $G/B=K/M$ is not $G$-invariant. For more on this, see sections 5.6 and 7.1 in A.W.
Knapp, Representation theory of semisimple groups (an overview based on examples), Princeton Math. Series 36, 1986.

One answer that hasn't appeared here yet is that $\rho$ is the highest weight of a spinor representation. Instead of the Dolbeault operator on the flag manifold, one can work with the Dirac operator. This leads to the subject of "Dirac Cohomology", see the book "Dirac Operators in Representation Theory". Here, instead of working with the Universal Enveloping Algebra and modules for it, one works with the tensor product of this with a Clifford algebra. Irreducible modules then acquire a spinor representation factor. This explains nicely the shift by $\rho$. Hopefully this spring I'll finish this, which explains all this in more detail:

I fear this question is getting a little crowded, but I do have my own hobby-horse to ride, so why hold back: For a holomorphic symplectic variety with nice enough behavior, quantizations of said variety are in canonical bijection with power series in $H^2(X)$ (this follows from work of Bezrukavnikov and Kaledin). Furthermore, for $X=T^*G/B$, there is a canonical isomorphism of $H^2(T^*G/B)$ with $\mathfrak{h}^*$, the dual abstract Cartan of $\mathfrak{g}$. So, picking any deformation quantization gives a power series in $H^2(T^*G/B)$, and there is one that we know and love the best: differential operators (of course, David Ben-Zvi was arguing above that maybe you shouldn't love this one best, but set that aside for a moment). What power series does this correspond to? Of course, it's $\rho$ (this is essentially because the differential operators twisted in half-forms really are the most canonical thing, and thus correspond to 0). So, if you believe that differential operators in functions are particularly important as compared to other TDO's, you think $\rho$ is important.
If I'm not mistaken this correspondence with power series is, up to order two, precisely what Beilinson-Bernstein discuss: i.e. they say we should normalize a quantization by requiring it gives the zero power series to order two, resulting in half-forms. They do this via an elementary observation: a deformation quantization to order two, when symmetrized, still gives a commutative associative algebra (this fails to higher order), so one gets a canonical 1-jet of a path into Poisson structures. – David Ben-Zvi Nov 8 '11 at 5:10

Right. I don't claim that this is really that different from your answer. It just emphasizes in a slightly different way just how canonical $\rho$ really is. – Ben Webster♦ Nov 10 '11 at

If you have any free abelian group with an integral bilinear form embedded in the Lorentz space $\mathbb{R}^{n,1}$, you may consider the group of automorphisms generated by roots, i.e., reflections in vectors. The reflection hyperplanes will split the corresponding hyperbolic $n$-space into fundamental domains, and if you fix a chamber, you can choose simple roots corresponding to its walls. This setting includes all finite, affine, and hyperbolic Weyl groups. If there is a vector $\rho$ in the span of the roots satisfying $\Vert r - \rho \Vert = \Vert \rho \Vert$ for all simple roots $r$, then it is called a Weyl vector. This always exists in the finite and affine cases, and the other answers on this page give some description of the relevant geometry. The existence of a Weyl vector gives a hyperbolic reflection group some arithmetic significance, and non-existence is generic - Lorentzian lattices of rank greater than 26 can't have Weyl vectors.
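In the finite case the defining condition is easy to verify concretely. For example, for the root system $A_2$ (simple roots in the standard embedding in $\mathbb{R}^3$), the half-sum of the three positive roots satisfies $\Vert r - \rho \Vert = \Vert \rho \Vert$ for both simple roots $r$; a small numerical check:

```python
# Simple roots of A2 in the standard embedding in R^3
a1 = (1.0, -1.0, 0.0)
a2 = (0.0, 1.0, -1.0)
positive_roots = [a1, a2, tuple(x + y for x, y in zip(a1, a2))]

# rho = half the sum of the positive roots
rho = tuple(sum(r[i] for r in positive_roots) / 2 for i in range(3))

def norm2(v):
    return sum(x * x for x in v)

# Weyl-vector condition: ||r - rho|| = ||rho|| for every simple root r
for r in (a1, a2):
    diff = tuple(ri - pi for ri, pi in zip(r, rho))
    assert abs(norm2(diff) - norm2(rho)) < 1e-12
```

Here $\rho = (1, 0, -1) = \alpha_1 + \alpha_2$, matching the sum-of-fundamental-weights description for $A_2$.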
From Barnard's thesis (and earlier work of Gritsenko and Nikulin in small rank cases), if one has a lattice generated by roots with a Weyl vector, one may attach a vector-valued modular form, whose coefficients describe the root multiplicities of a Borcherds-Kac-Moody Lie algebra whose real simple roots are precisely those of the reflection group. The Lie algebra in turn has a Weyl denominator product that is a cusp expansion of an automorphic form on $O(n+1,2)$. In the most extreme case, one may start with the 26-dimensional even unimodular Lorentzian lattice $I\!I_{25,1}$, and choose a chamber for its reflection group. The Dynkin diagram is naturally an affine space on the Leech lattice (by a theorem of Conway), and there is a norm zero Weyl vector $\rho$. There is an action of Leech, identified with the lattice quotient $\rho^\perp/\mathbb{Z}\rho$, on the fundamental domain by parabolic translation. Because there is a Weyl vector, one has a modular form whose coefficients control root multiplicities of a Lie algebra. In this case, we have the weight -12 form $1/\Delta$, and the Lie algebra is the fake monster Lie algebra, which apparently describes bosonic strings propagating in a 26-torus. The roots of norm $2n$ have multiplicity $p_{24}(1-n)$, i.e., the number of partitions of $1-n$ into parts of 24 colors.

In a different direction, there is a generalization of the Weyl character formula that holds for any Borcherds-Kac-Moody Lie algebra (not just hyperbolic), and $\rho$ appears here as any vector in the root space that satisfies the relation $\Vert r - \rho \Vert = \Vert \rho \Vert$ (equivalently, $r-2\rho \perp r$) for all simple roots $r$. In the BGG interpretation (worked out in detail in Jurisich's thesis), we find that $H_k(\mathfrak{n}_+, \mathbb{C})$ is spanned by the elements of $\bigwedge^k \mathfrak{n}_+$ whose weight $r$ satisfies $\Vert r - \rho \Vert = \Vert \rho \Vert$.
In other words, when we throw away finiteness (and hence well-behaved flag varieties), $\rho$ still plays a role of selecting the part of the exterior power that contributes to the homology.
The difference between a gradient vector to a surface and the normal vector to that surface

Hey nicksbyman. If you have a definition, you should post it (if it's from lecture notes or a book), but a gradient vector can refer to the tangential vector with respect to a particular variable (i.e. one based on the partial derivative of the surface at a particular point). It's hard to say with certainty without knowing more information.

I think your problem is where you refer to the "gradient vector" of a surface. There is no such thing. Rather, the gradient vector is the gradient of a function. If we have a function f(x,y,z) then the "gradient of f", also written $\nabla f$, is the "vector" $\frac{\partial f}{\partial x}\vec{i}+ \frac{\partial f}{\partial y}\vec{j}+ \frac{\partial f}{\partial z}\vec{k}$. Given such a function, the equation f(x,y,z)= constant could, theoretically, be "solved" for one of the variables in terms of the other two. Since we can then have z= g(x,y), say, that equation defines a surface. Given the equation f(x,y,z)= C, $\nabla f$, the gradient vector of the function, is a normal vector to the surface at every point. (In Britain, the term "gradient" can be used to refer to the derivative of a function, which then is the slope of the tangent line. Chiro may be thinking of that situation. Why in the world can't those Brits speak English!)

Last edited by HallsofIvy; November 1st 2012 at 07:08 PM.
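The claim that $\nabla f$ is normal to the level surface f(x,y,z) = C can be tested numerically: step from a point in a direction with the gradient component projected out, and f should change only at second order in the step size. A rough sketch (finite-difference gradient; the particular function and point are arbitrary choices for illustration):

```python
def grad(f, p, h=1e-6):
    """Central-difference approximation to the gradient of f at the point p."""
    g = []
    for i in range(len(p)):
        hi = list(p); lo = list(p)
        hi[i] += h; lo[i] -= h
        g.append((f(hi) - f(lo)) / (2 * h))
    return g

def tangent_step(f, p, v, h):
    """Step from p by h along v with the gradient component projected out."""
    g = grad(f, p)
    coef = sum(vi * gi for vi, gi in zip(v, g)) / sum(gi * gi for gi in g)
    t = [vi - coef * gi for vi, gi in zip(v, g)]
    return [pi + h * ti for pi, ti in zip(p, t)]

# A level surface f(x,y,z) = C of a smooth function
f = lambda p: p[0] ** 2 + 2 * p[1] ** 2 + 3 * p[2] ** 2
p = [1.0, 1.0, 1.0]
q = tangent_step(f, p, [0.3, -0.5, 0.2], 1e-4)

# Moving tangentially changes f only at second order in the step size
assert abs(f(q) - f(p)) < 1e-6
```

By contrast, a step of the same size taken along an unprojected direction changes f at first order, which is exactly the normality of $\nabla f$.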
double vs int64

July 31st, 2013, 01:32 AM #1

You'd think this would be a simple thing to find out from the internet but I must admit, I've struggled to find the answer

1) What is the range of numbers covered by a 64-bit double?
2) Ignoring fractions, is the above range wider or narrower than the range covered by an int64?

"A problem well stated is a problem half solved." - Charles F. Kettering

Re: double vs int64

Data Type Ranges

Best regards,

Re: double vs int64

Also look at limits.h

All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.

Re: double vs int64

Thanks guys. If I'm using my rusty old calculator correctly it looks like double has got a MUCH wider range than int64 - and even the humble float is almost comparable to int64. float seems to be roughly +/-1.1 x 10^17. int64 approx. +/-9.25 x 10^18. int32 is +/-2.1 x 10^9.

Last edited by John E; July 31st, 2013 at 04:01 AM.

Re: double vs int64

Originally Posted by John E
Thanks guys. If I'm using my rusty old calculator correctly it looks like double has got a MUCH wider range than int64 - and even the humble float is almost comparable to int64. float seems to be roughly +/-1.1 x 10^17. int64 approx. +/-9.25 x 10^18. int32 is +/-2.1 x 10^9.

But you realize that there is a big difference between using integral and floating point values, correct? That difference being accuracy. Floating point variables are not exact (unless they are sums of inverse powers of 2). An int64 is always exact, since it is an integer. Calculations that require exact math cannot be done reliably using floats and doubles. So the reasons for using float/double versus int64 are about much more than range.
For example, for money calculations it is advantageous to use integers representing the smallest unit of currency (for example, for the USA it would be cents instead of dollars). Then the int64 represents pure cents rather than dollars-and-cents. Paul McKenzie Last edited by Paul McKenzie; July 31st, 2013 at 04:48 AM. Re: double vs int64 Hi Paul, Yes, I understand about the inherent inaccuracies with float and double. Here's the problem I'm considering:-

void some_func(int64_t a, int64_t b)
{
    printf("%u\n", abs(a - b));
}

I'm working on a program (originally written for Linux) which consistently sends 64-bit values to abs(). That's just a simple example above. The actual functions are usually more convoluted. The problem is that VC++ doesn't seem to have a version of abs() that accepts int64_t. The only types available support float, double, int or long. I'm trying to figure out which type I should use so that I don't lose accuracy (or at least, I lose as little accuracy as possible). "A problem well stated is a problem half solved." - Charles F. Kettering Re: double vs int64 An int64 has an effective accurate range from -2^63 all the way to +2^63-1. A double has a 53-bit mantissa (with an implied leading 1) and it has a separate sign bit, so it has an effective accurate integer range from -2^54 all the way to +2^54. Or to put it another way, a double can accurately represent any value an int55 (assuming such a thing existed) can. Now, a double can store larger values (and it can store fractions), but none of those will guarantee accurate integer values, nor are they in a continuous range. Or put another way, any other values not in the "int55" range will be approximations. Re: double vs int64 John E The problem is that VC++ doesn't seem to have a version of abs() that accepts int64_t. make your own...
int64_t abs(int64_t val)
{
    if (val < 0)
        return -val;
    return val;
}

Depending on need, you may have to do something special in case val is -2^63, because that can't be represented in a positive int. A potential solution is returning an unsigned int64_t, but that may not fit your problem domain. Re: double vs int64 John E Thanks guys. If I'm using my rusty old calculator correctly it looks like double has got a MUCH wider range than int64 - and even the humble float is almost comparable to int64. float seems to be roughly +/-1.1 x 10^17. int64 approx. +/-9.25 x 10^18. int32 is +/-2.1 x 10^9. Yes, it has a wider range, but that range isn't continuous. Try storing 144.115.188.075.855.873 in a double, then reading it back out. Also note that most calculators don't work with a "double", but work with a floating point type that is larger than a double. So even your rusty old calculator probably exceeds the capabilities of a double. Re: double vs int64 make your own...

int64_t abs(int64_t val)
{
    if (val < 0)
        return -val;
    return val;
}

Good suggestion, Thanks. I also realised that for 64-bit values on Linux, they should really be calling llabs(), rather than abs(). A convenience macro can then be used to map llabs() to __abs64() which is the Windows equivalent. "A problem well stated is a problem half solved." - Charles F. Kettering Re: double vs int64 try storing 144.115.188.075.855.873 in a double, then reading it back out. Presumably I was supposed to remove all the periods? Interestingly, the compiler told me the number would get truncated from int64 to double. But according to the debugger it looked like the right number. "A problem well stated is a problem half solved." - Charles F. Kettering Re: double vs int64 John E Presumably I was supposed to remove all the periods? The dot is the thousands separator in Belgium (where I presume ORueben is posting from). Paul McKenzie Re: double vs int64 Yes, sorry about that. Also, a correction.
I initially looked up the value of DBL_MANT_DIG to post the above, and DBL_MANT_DIG is defined as 53. I knew a double has an implied 1 in front, so I added this on, but DBL_MANT_DIG apparently already has it built in as well. (Doh!) So change my above to: a double has a 52-bit mantissa (with an implied leading 1) and it has a separate sign bit, so it has an effective accurate integer range from -2^53 all the way to +2^53. Or to put it another way, a double can accurately represent any value an int54 (assuming such a thing existed) can. Interestingly, the compiler told me the number would get truncated from int64 to double. But according to the debugger it looked like the right number Well yes, storing it in a double right away as in double x = 144115188075855873; I would have expected the compiler to output a warning (which in and by itself should already have been a clue of its own). What you were getting is probably the compiler seeing it is a const and displaying the full const value without stuffing it into an actual double. Any sort of "simple" code is probably going to need some form of "don't optimize this" to actually prove the point I was trying to make.

double x = 144115188075855873;
__int64 i = (__int64)x;

Running this in a debug build or with all optimisations off results in i being equal to 144115188075855872 on VC2010 (and I would expect the same result on any compiler given how truncating/rounding should work). Last edited by OReubens; August 1st, 2013 at 07:23 AM. Re: double vs int64 Thanks again for that full explanation OReubens. I tried that assignment, like you suggested (double to int64_t) and you were absolutely right. The int64_t was 1 less than the original number. Actually I think there's something else I haven't fully understood in all this (the meaning of the letter 'E'). Looking at that web page that Igor linked to, I noticed that type float can hold a maximum (positive) number of 3.4E38.
I originally thought that 'E' meant 'e', the natural logarithm base (i.e. 2.7182818). So I calculated 3.4E38 to mean:- 3.4 x e^38 - or in other words, ((2.7182818^38) * 3.4) But my debugger suggests that my assumption was completely wrong! It gives the impression that 3.4E38 actually means (3.4 * 10^38) How confusing... "A problem well stated is a problem half solved." - Charles F. Kettering Re: double vs int64 E stands for exponent and is used for scientific notation. Your impression is right: 3.4E38 means 3.4 * 10^38. All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
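To make the thread's range and precision conclusions concrete, here is a quick check in Python, whose float is the same 64-bit IEEE 754 double discussed above. The single-precision maximum is hard-coded as an approximation, since Python has no built-in 32-bit float type.

```python
import sys
from math import isclose

# Range: a 64-bit double reaches ~1.8e308, dwarfing int64's ~9.22e18;
# even single-precision float's ~3.4e38 maximum exceeds int64's range.
INT64_MAX = 2**63 - 1
FLOAT32_MAX = 3.4028235e38  # approximate IEEE 754 single-precision max
assert sys.float_info.max > INT64_MAX
assert FLOAT32_MAX > INT64_MAX

# Precision: the integer from the thread is 2**57 + 1, past the 2**53
# limit below which a double represents every integer exactly.
n = 144115188075855873
assert n == 2**57 + 1
print(int(float(n)))                     # 144115188075855872, as seen in VC2010

assert float(2**53) == 2**53             # still exact
assert float(2**53 + 1) == float(2**53)  # first integer that is not

# And 'E' is scientific notation, not Euler's number: 3.4E38 = 3.4 * 10^38.
assert isclose(float("3.4E38"), 3.4 * 10**38, rel_tol=1e-12)
```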
The SIAM Journal on Mathematical Analysis features research articles of the highest quality employing innovative analytical techniques to treat problems in the natural sciences. Every paper should have content that is primarily analytical and that employs mathematical methods in such areas as partial differential equations, the calculus of variations, functional analysis, approximation theory, harmonic or wavelet analysis, or dynamical systems. Secondly, every paper should relate to a model for natural phenomena in such areas as fluid mechanics, materials science, quantum mechanics, biomathematics, mathematical physics, or to the computational analysis of such.
Claudius Ptolemy Home -> Ptolemaic System Notable People Claudius Ptolemy Claudius Ptolemy lived in Alexandria, in Egypt, in the middle of the second century CE. He was descended from an Egyptian family and associated with the renowned library situated in the city. He is celebrated for his great work of astronomy, known today as the Almagest. In fact, the very title is testament to its status and its tortuous history. Almagest is a transliteration of the name given to the work when it was rendered into Arabic in the early middle ages. The Islamic scholars, aware of what they possessed, called it The greatest work, and the name stuck when it was later retranslated into Latin. Ptolemy himself, however, had used the more modest Greek title, Syntaxis - Treatise (or System) of mathematics. The Almagest provided what was as far as we know the first complete guide to calculating the planetary motions. To achieve this, Ptolemy adopted a number of conceptually simple but mathematically complex procedures. He reasoned that a planet must move on a combination of circular motions, which must be uniform - but that in certain circumstances the circles need not center on the Earth, and their motions need not be uniform about either the Earth or their own centers. The result was a series of discrete mathematical models, one for each planet. Ptolemy's astronomy was therefore simple and elegant in the theoretical building-blocks it used, but disjointed in the model it proposed for the cosmos as a whole. And it was mathematically very demanding: the simulation of Ptolemaic theorizing integrated into Microcosmos necessarily simplifies its processes. Ptolemy also produced other works. Most important of them was the Geography, a survey of the known world that was to be recovered in the Renaissance and inspire a scholarly tradition of its own. 
In addition, he wrote the Tetrabiblos, a work on astrology, or, more properly, on celestial causation, that eschewed the mathematical approach of the Almagest for a natural philosopher's attention to substance and process. This emphasis he also displayed in his Planetary Hypotheses, which attempted to provide a physical account capable of accommodating his mathematical arguments in the Almagest. Ptolemy also produced highly successful work on light and vision that would survive in incomplete form as his Optics. And he completed a work on music theory, the Harmonics, that would again be recovered and prove influential in the Renaissance. This image from a fifteenth-century translation of Ptolemy's Geography confuses the astronomer with the Pharaoh of the same name, to whom he was in fact not related. Note the profusion of books and instruments, including at least one astrolabe, to be seen in his chamber. From A. Turner, Early Scientific Instruments (1987).
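The deferent-and-epicycle scheme described above can be illustrated numerically. The sketch below uses invented radii and angular speeds, not Ptolemy's actual parameters for any planet, simply to show how two uniform circular motions compose into one planetary path:

```python
from math import cos, sin

def epicycle_position(t, R=10.0, r=3.0, w_def=1.0, w_epi=8.0):
    """Toy deferent-plus-epicycle model: the planet rides a small circle
    (radius r) whose center moves uniformly on a large circle (radius R)
    centered on the Earth. All parameters here are invented for illustration."""
    x = R * cos(w_def * t) + r * cos(w_epi * t)
    y = R * sin(w_def * t) + r * sin(w_epi * t)
    return x, y

# At t = 0 the two circles line up, so the planet sits at R + r from Earth.
print(epicycle_position(0.0))  # (13.0, 0.0)
```

Traced over time, such a path loops back on itself, which is how the model reproduces apparent retrograde motion.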
PowerPoint Presentation
• Network is evaluated in topological order.
• At each node its fanins have a specific vector of values.
• The relation at the node determines a set of possible output values of that node.
• One of these is chosen randomly and broadcast to the fanouts.
The NS-behavior is the set of all PI/PO vectors that can be obtained this way; it is in general an MV (multi-valued) Boolean relation.
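A minimal sketch of the evaluation procedure on the slide, in Python; the node and relation data structures here are invented for illustration:

```python
import random

def evaluate(nodes, primary_inputs):
    """One nondeterministic evaluation pass. Nodes come in topological order
    as (name, fanins, relation), where relation maps a tuple of fanin values
    to a *set* of legal outputs (a Boolean relation, not a function)."""
    values = dict(primary_inputs)
    for name, fanins, relation in nodes:
        inputs = tuple(values[f] for f in fanins)
        # the relation yields a set of legal outputs; pick one at random
        values[name] = random.choice(sorted(relation[inputs]))
    return values

# XOR-like node whose output is unconstrained when its inputs agree.
nd_xor = {(0, 0): {0, 1}, (1, 1): {0, 1}, (0, 1): {1}, (1, 0): {0}}
nodes = [("n1", ("a", "b"), nd_xor)]
print(evaluate(nodes, {"a": 0, "b": 1})["n1"])  # 1 (fully constrained here)
```

Collecting the PI/PO pairs reachable over many such passes approximates the NS-behavior set described above.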
Capitol Heights Calculus Tutor I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi... 15 Subjects: including calculus, physics, geometry, GRE ...This includes: I. Functions, graphs, and limits. A. 21 Subjects: including calculus, statistics, geometry, algebra 1 ...I have a degree in Mechanical Engineering and my math skills got me through it; that material is far tougher than discrete math, but more importantly I explain mathematics well. I can effectively tutor the math and logic related questions for the GRE - I am a former full time high school math te... 28 Subjects: including calculus, chemistry, physics, geometry ...I point these out at every opportunity in order to build bridges to material that the student may know well. A simple example of this is the trigonometric identity sin^2 + cos^2 = 1. I explain to the student that this is the trigonometric form of the Pythagorean theorem of geometry. 13 Subjects: including calculus, chemistry, physics, algebra 1 ...My philosophy of teaching has been influenced by my background in social sciences. Specifically, I strive to reduce the power differential between teacher and student and achieve a learner-learner relationship per the critical pedagogical approach of Paulo Freire. Overall, I stress conceptual u... 15 Subjects: including calculus, geometry, statistics, algebra 1
Need Help for C and C++ Avoiding Floating Pt Usage Hi, can anybody help me with the following question? u16 x, y; //u16 means unsigned 16-bit integer x = y * 0.728 Solve the above equation without using the floating point library and without division. The x and y data types also cannot be changed. How should I think about this without using division? Any ideas? Thanks for your reply. Tricky. How accurate does it need to be? Why 0.728? Where does this question come from? Also, what is the possible range of y? It can be solved if you can use 44 bit arithmetic or more: multiply by 0x02E978D5 and shift the result right by 26 bits, which is accurate to at least 4 decimal places (checked on a spreadsheet) but needs that extra headroom. Last edited by xpi0t0s; 25Oct2011 at 22:36.. Thank you very much for your kind help. It is an interview question for me to explain in a second interview. Your idea is very nice, and I tested it in my compiler; it is working. Thanks again. One thing I need to ask you: how did you get that value, and the shift of 26 bits, for that precision? Is there any way to calculate that number 26? I can't spoon-feed you everything. Think about it and have a few guesses. Ok, anyway, thanks for your help so far. I will try to figure that out.
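To make the hint concrete: 0x02E978D5 is round(0.728 × 2^26), so one multiply and one right shift approximate the scaling in pure integer arithmetic. A Python sketch follows (Python integers are arbitrary precision; in C the intermediate product needs a wider type, which is why the reply mentions 44-bit arithmetic):

```python
SCALE = 0x02E978D5  # == 48855253 == round(0.728 * 2**26)
SHIFT = 26

def mul_0_728(y):
    # In C this intermediate needs >= 42 bits (u16 max * SCALE is ~3.2e12),
    # so a 64-bit temporary would be used there.
    return (y * SCALE) >> SHIFT

# Because 0.728 == 91/125 exactly, the fixed-point result can be checked
# against exact integer arithmetic over the whole u16 range.
assert all(mul_0_728(y) == (y * 91) // 125 for y in range(65536))
print(mul_0_728(1000))  # 728
```

The shift of 26 is a trade-off: it must be small enough that y × round(0.728 × 2^shift) fits the available intermediate width, and large enough that the rounding error round(0.728 × 2^shift) − 0.728 × 2^shift, scaled by the largest y, stays below the gap to the next integer.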
Boiling point on top of Mount Everest You use the first equation to get the pressure at the top of Everest. P_atm is the pressure at sea level - the equation is really for a difference in height, so if you take z to be measured from sea level, that is the pressure you are comparing to. The variation of boiling point with altitude follows a similar equation; rather than retype it, I will just give you the link. Interestingly, guess which mountain they use as an example!
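A rough sketch of the two-step calculation being described, combining the isothermal barometric formula with the Clausius-Clapeyron relation; the constants and the assumed mean air temperature are approximations, so the answer is only ballpark:

```python
from math import exp, log

P0 = 101325.0    # sea-level pressure, Pa
T_ATM = 288.0    # assumed mean air temperature, K (a simplification)
M = 0.02896      # molar mass of air, kg/mol
G = 9.81         # gravitational acceleration, m/s^2
R = 8.314        # gas constant, J/(mol K)
H_VAP = 40660.0  # molar heat of vaporization of water, J/mol
Z = 8848.0       # height of Everest above sea level, m

# Step 1: pressure at altitude z, isothermal barometric formula
p = P0 * exp(-M * G * Z / (R * T_ATM))

# Step 2: Clausius-Clapeyron, solved for the temperature at which
# water's vapor pressure equals the ambient pressure p
tb = 1.0 / (1.0 / 373.15 - (R / H_VAP) * log(p / P0))

print(round(p / 1000, 1), "kPa")       # roughly a third of sea-level pressure
print(round(tb - 273.15, 1), "deg C")  # roughly 70-ish C, vs 100 C at sea level
```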
Sluggish Array Calculations Topic: Sluggish Array Calculations Thanks for looking. My page just went from taking 0 to 10 seconds to load. Within the view:

$<%= @project.actual_backlog_array[(Date.today - @project.start_date).to_i][1].to_i - @project.average_backlog_array[(Date.today - @project.start_date).to_i][1].to_i %>

The model methods:

def actual_backlog_array
  @assignments = self.assignments
  first_day = (self.start_date - Date.new(1970, 1, 1)).to_i * 86400000
  # cool, so right now we have the project and all of its assignments. Let's
  # create an array that holds each day's value (initialized to 0)
  backlog = Array.new(((self.end_date - self.start_date).to_i + 1), 0)
  # alright, backlog is now an array with as many elements as there are days
  # between the project's start and end dates
  burn_carry = self.mswa_value.to_f # we'll start out with the firm value on the first day
  backlog[0] = [first_day, burn_carry]
  i = 1
  # awesome, right now the first day of the graph has the initial firm value,
  # before costs are incurred. Now we have to loop through each day and
  # subtract the summation of all the employees' burn rates
  backlog.each do |day|
    day_burn = 0.0 # this day's total burn always starts out at zero
    # we'll advance the current day by a number of days equal to the iteration this loop is on
    current_day = self.start_date.advance(:days => i)
    # right now we're on a single day. This day has several assignments; let's
    # loop through them and add the respective burn to the day's total burn.
    @assignments.each do |assignment|
      if (assignment.start_date..assignment.end_date).to_a.include?(current_day)
        # if the current day falls within the assignment's start date and end
        # date, add the daily burn (returned by the daily_burn method in the
        # assignment model)
        day_burn += assignment.daily_burn
      end
    end
    backlog[i] = [i * 86400000 + first_day, burn_carry - day_burn] unless day_burn == nil
    # this value will be used by the next iteration to decrease the total backlog
    burn_carry = burn_carry - day_burn
    i = i + 1
    break if i == backlog.size
  end
  return backlog
end

def average_backlog_array
  first_day = (self.start_date - Date.new(1970, 1, 1)).to_i * 86400000
  backlog = Array.new(((self.end_date - self.start_date).to_i), 0)
  burn_carry = self.mswa_value.to_f
  day_burn = self.mswa_value.to_f / (self.end_date - self.start_date).to_i
  backlog[0] = [first_day, burn_carry]
  for i in 1...backlog.size
    backlog[i] = [i * 86400000 + first_day, burn_carry - day_burn]
    burn_carry -= day_burn
  end
  return backlog
end

See anything uber-inefficient?

Re: Sluggish Array Calculations I can't see the view section of your post for some reason. By approaching it from the day perspective, you guarantee that you will iterate across every assignment for every day in the array. Also, for every assignment, you call the to_a method on the range just to find out if it's active on the day in question. While this brute force approach works, it won't scale, as it gets rapidly worse as you have longer projects with more assignments. Suppose you initialise the backlog array, then iterate through the assignments one at a time. For each assignment, get the daily burn rate (once), then using the assignment start and end range as an index into the backlog array range, loop through incrementing the value in the backlog array for the appropriate day. When you're done, zoom through the backlog array to calculate the carry value.
Re: Sluggish Array Calculations Thank you sir Re: Sluggish Array Calculations specious wrote: I can't see the view section of your post for some reason. Seems to be a bug in IE6, as I can see it fine in Firefox. Did you get any performance improvement? Re: Sluggish Array Calculations Actually I went to make some changes to it yesterday and I'm struggling a little bit with the new logic, but I think I'm almost there. I'll post what I end up with
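The suggested restructuring — one pass per assignment over only the days it covers, then a single pass to accumulate the carry — can be sketched like this (in Python rather than Ruby, with tuples standing in for the ActiveRecord objects and plain day indices instead of the original's millisecond timestamps):

```python
from datetime import date

def backlog_curve(start, end, initial_value, assignments):
    """Backlog per day; assignments are (a_start, a_end, daily_burn) tuples."""
    days = (end - start).days + 1
    burn = [0.0] * days
    # One pass per assignment, touching only the days it actually covers,
    # instead of scanning every assignment for every day.
    for a_start, a_end, daily in assignments:
        lo = max((a_start - start).days, 0)
        hi = min((a_end - start).days, days - 1)
        for i in range(lo, hi + 1):
            burn[i] += daily
    # Single pass to turn daily burn into a running backlog (the carry value).
    backlog = []
    remaining = float(initial_value)
    for i in range(days):
        backlog.append(remaining)
        remaining -= burn[i]
    return backlog

print(backlog_curve(date(2020, 1, 1), date(2020, 1, 5), 100,
                    [(date(2020, 1, 1), date(2020, 1, 5), 10)]))
# [100.0, 90.0, 80.0, 70.0, 60.0]
```

The cost drops from days × assignments (with a range materialized per check) to roughly the total number of assignment-days plus one linear pass.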
Sun BELCHES twice, mighty plasma loops miss Earth - NASA vid The Sun lashed out with two plasma eruptions one after the other early on Friday morning. NASA's Solar Dynamic Observatory (SDO) spotted the solar flares, when red-hot loops of plasma burst out of the surface of the Sun. The flares both happened within a four-hour window between 6am and 10am GMT, when prominences in the Sun … This topic is closed for new posts. Global Warming Uh, yeah, it's caused by global warming, yeah, that's it! Re: Global Warming The funny thing is you are almost right. It is indeed causing global warming (the sun) Re: Global Warming ...in the same sense that long underwear causes global warming of each of my bollocks. "red-hot loops of plasma" that were "captured in the 304 Angstrom wavelength of extreme ultraviolet light" Re: Lucky? Well, when my kids complain "The sun's shining in my eyes!" I retort "Not a bad shot for 93 million miles" Re: Lucky? Let's see, distance of Earth from Sun, 150 Million Km. Therefore the surface area of the sphere around the Earth's orbit is, hmm, 4.PI.R^2 = 2.8 x 10^17 sq Km. Radius of the earth is 6400 Km, so the area of the earth is the area of a circle of this radius, so PI.r^2 = 1.3 x 10^8 sq Km. So the ratio of these gives us the odds that a random solar flare will hit the earth: approx 2 x 10^9 to 1, or 1 in 2 billion. Re: Lucky? Including other factors? Beermat tells me one in ten-ish. Which amounts to once or twice a year. Which gets duly reported at all your favourite news outlets, including Vulture Central. Wrong equation If you start at the sun, pick a random direction and fire something tiny, like the moon, then the chances of hitting the Earth are about 1 in 2 billion. Coronal mass ejections are big - similar to the size of the sun when they start. They spread out. I could not find a decisive figure for how much they spread out. The closest I could find to a useful number was 0.25au long.
If we pretend that CME's are 0.25au wide when they pass Earth orbit (1au) then the chances of a hit are about 1 in 250. If someone knows a vaguely sensible number for the diameter of a CME when it gets 1au from the sun, please speak up. I have almost no confidence in that 0.25au guess. real time? are those pics in real time? Re: real time? No that was a 4 hour time lapse. Re: real time? should have guessed that, thanks.....still over 4 hours those gasses must have been travelling something. Look, I know The Sun is almost as bad as The Daily Mail, but don't you think that headline is going a bit too far? Ahh the Triffids are just waiting, dormant in orbit, for the World to go blind so they can feast on our flesh. Well, makes a change from the usual Zombie apocalypse.... 8) This topic is closed for new posts.
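The two back-of-the-envelope estimates from the thread, checked numerically:

```python
import math

# Direct-hit odds: a ray fired in a random direction from the Sun
# vs. the Earth's cross-sectional disk at orbital distance.
r_orbit = 150e6    # Earth's orbital radius, km
r_earth = 6400.0   # Earth's radius, km
sphere = 4 * math.pi * r_orbit**2   # ~2.8e17 km^2, sphere at Earth's orbit
disk = math.pi * r_earth**2         # ~1.3e8 km^2, Earth's cross-section
print(f"flare: 1 in {sphere / disk:.2g}")   # 1 in 2.2e+09

# The CME estimate: a blob ~0.25 AU across by the time it reaches 1 AU.
cap = math.pi * 0.125**2            # CME cross-section, AU^2
shell = 4 * math.pi * 1.0**2        # sphere at 1 AU, AU^2
print(f"CME: 1 in {shell / cap:.0f}")       # 1 in 256
```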
medians for degree measurements Steve Howell showell30 at yahoo.com Mon Jan 25 19:36:19 CET 2010 On Jan 24, 5:26 pm, Robert Kern <robert.k... at gmail.com> wrote: > On 2010-01-23 05:52 , Steven D'Aprano wrote: > > On Fri, 22 Jan 2010 22:09:54 -0800, Steve Howell wrote: > >> On Jan 22, 5:12 pm, MRAB<pyt... at mrabarnett.plus.com> wrote: > >>> Steve Howell wrote: > >>>> I just saw the thread for medians, and it reminded me of a problem > >>>> that I need to solve. We are writing some Python software for > >>>> sailing, and we need to detect when we've departed from the median > >>>> heading on the leg. Calculating arithmetic medians is > >>>> straightforward, but compass bearings add a twist. > > [...] > >> I like this implementation, and it would probably work 99.9999% of the > >> time for my particular use case. The only (very contrived) edge case > >> that I can think of is when you have 10 bearings to SSW, 10 bearings to > >> SSE, and the two outliers are unfortunately in the NE and NW quadrants. > >> It seems like the algorithm above would pick one of the outliers. > > The trouble is that median of angular measurements is not a meaningful > > concept. The median depends on the values being ordered, but angles can't > > be sensibly ordered. Which is larger, 1 degree north or 359 degrees? Is > > the midpoint between them 0 degree or 180 degree? > Then don't define the median that way. Instead, define the median as the point > that minimizes the sum of the absolute deviations of the data from that point > (the L1 norm of the deviations, for those familiar with that terminology). For > 1-D data on the real number line, that corresponds to sorting the data and > taking the middle element (or the artithmetic mean of the middle two in the case > of even-numbered data). My definition applies to other spaces, too, that don't > have a total order attached to them including the space of angles. 
> The "circular median" is a real, well-defined statistic that is used for exactly > what the OP intends to use it for. I admitted pretty early in the thread that I did not define the statistic with much rigor, although most people got the gist of the problem, and as Robert points out, you can more clearly define the problem, although I think under any definition, some inputs will have multiple solutions, such as (0, 90, 180, 270) and (0, 120, 240). If you've ever done lake sailing, you probably have encountered days where the wind seems to be coming from those exact angles. This is the code that I'll be using (posted by "Nobody"). I'll report back if it has any issues.

from math import sin, cos, radians, degrees, atan2

def mean(bearings):
    x = sum(sin(radians(a)) for a in bearings)
    y = sum(cos(radians(a)) for a in bearings)
    return degrees(atan2(x, y))

def median(bearings):
    m = mean(bearings)
    bearings = sorted((a - m + 180) % 360 - 180 for a in bearings)
    median = bearings[len(bearings) // 2]
    median += m
    median %= 360
    return median

More information about the Python-list mailing list
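A self-contained version of this approach, with a quick check that the wrap-around near north behaves as intended (the `% 360` normalization of the mean is an addition for tidiness, not part of the original post):

```python
from math import sin, cos, radians, degrees, atan2

def circular_mean(bearings):
    x = sum(sin(radians(a)) for a in bearings)
    y = sum(cos(radians(a)) for a in bearings)
    return degrees(atan2(x, y)) % 360

def circular_median(bearings):
    m = circular_mean(bearings)
    # Re-center on the mean so the 0/360 seam falls opposite the data,
    # then take the ordinary middle element and shift back.
    shifted = sorted((a - m + 180) % 360 - 180 for a in bearings)
    return (shifted[len(shifted) // 2] + m) % 360

# Headings straddling north: naively averaging 350 and 10 gives 180,
# but the circular mean is due north (0, up to floating-point noise).
m = circular_mean([350, 10])
print(min(abs(m), abs(m - 360)) < 1e-9)        # True
print(round(circular_median([350, 355, 10])))  # 355
```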
[IPython-User] Parallel question: Sending data directly between engines Olivier Grisel olivier.grisel@ensta.... Sun Jan 8 06:26:40 CST 2012 AFAIK the traditional way to implement AllReduce is to first build a spanning tree over the nodes / engines. For instance if you have 10 nodes, define a fixed arbitrary binary tree that spans all the nodes involved in the computation: 0 is the root and has 1 and 2 as children, 1 has parent 0 and has 3 and 4 as children, and so on. Each engine is only aware of its parent and 2 direct children. When a node's computation reaches an AllReduce barrier it waits for its two children to send it the partial results, then computes the aggregate with its own internal state and ships the result to its parent. Leaf nodes start first without waiting at all (as they know they don't have any children to wait for). When the root is reached, the final result is recursively broadcast to all the children. This spanning tree strategy ensures that a single node's mailbox will never receive more than 2 messages at once. This is very important to scale to large clusters (e.g. 1000 nodes) since if you have many incoming messages of a couple of megabytes you might saturate the network interface of a single node and potentially its memory buffers if the messages are not consumed in a streamed manner. However as far as I understand IPython.parallel might not be designed to address 1000+ node clusters and the saturation problem probably does not occur before hundreds of nodes if the messages are not too big. Still I think it would be good for the IPython.parallel project to implement such primitives and make some benchmarks to be sure whether the spanning tree is useful or not, especially checking the impact of the arity of the tree and message size on the overall cluster performance. Note that the AllReduce scheme implemented with the spanning tree strategy requires the aggregation function to be commutative and associative. That might not be the case if you implement the naive gather / reduce / broadcast strategy, where you can reorder the partial data before performing the reduce. More information about the IPython-User mailing list
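A toy single-process simulation of the reduce-up-then-broadcast scheme over an implicit binary spanning tree (node i has children 2i+1 and 2i+2); this is an illustration of the dataflow, not IPython.parallel API:

```python
def tree_allreduce(values, op):
    """Combine every engine's value with `op` and give everyone the total,
    walking an implicit binary spanning tree rooted at node 0."""
    n = len(values)

    def reduce_up(i):
        # "Wait for" (here: recurse into) both children and fold in
        # their partial results along with this node's own value.
        total = values[i]
        for child in (2 * i + 1, 2 * i + 2):
            if child < n:
                total = op(total, reduce_up(child))
        return total

    root_total = reduce_up(0)
    # Broadcast phase: the root's result propagates back to every node.
    return [root_total] * n

print(tree_allreduce(list(range(10)), lambda a, b: a + b))
# [45, 45, 45, 45, 45, 45, 45, 45, 45, 45]
```

Because siblings' partials arrive in arbitrary order on a real cluster, `op` must be commutative and associative for the result to be well defined, matching the constraint noted above.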
Wrapping Up the Unit During this final lesson in the unit, the students use the mathematical knowledge and skills developed in the previous lessons as they visit five stations to review comparative subtraction. Assign groups of four students each to work at one of the five stations. [If you need more than five stations, you might choose to provide an extra computer for a sixth station.] Station 1: High versus Low Materials: Twelve index cards, numbered 0 through 11 Station 2: How Many More? Materials: fish-shaped crackers, paper, paper plates, number cubes Station 3: Spin, Spin, Spin Materials: Adjustable Spinner, paper [Set the Adjustable Spinner into 12 parts.] Direct the group to take turns spinning the spinner twice and recording the numbers. When all have recorded two numbers, ask them to subtract the smaller from the greater number. Then ask them to see whether anyone got a difference larger than everyone else. If so, that student wins a point. The student who has earned the most points when time is called wins the game. Station 4: Heads or Tails? Materials: 20 pennies, cup Station 5: What a Difference Materials: Four number cubes, Number Lines Activity Sheet, fish-shaped crackers After each 10-minute interval has passed, assign the students to new stations. When time is up, call them together and ask students to record in their journals which station they liked most and why. Explain to students that they should focus on the mathematics they learned from each station rather than on other aspects of the activity. • Fish-shaped crackers in resealable bags • Index cards • Paper plates • Paper and Crayons • Number cubes • Pennies • Cups You may wish to review the completed Class Notes recording sheets completed throughout this unit. These can guide the summative comments you make for individual students. Questions for Students 1. What addends less than 10 have differences of 2? Of 5? 2.
What subtraction sentence shows that we have compared a set of seven red pencils with a set of five blue pencils?
3. A balance has three crackers on the right side and five on the left side. Which side needs more crackers? How many more?
4. How could you use a number line to compare a plate of eight fish-shaped crackers with a plate of five fish-shaped crackers?
5. If you subtract 0 from a number, what happens?
6. What are the addition facts and the subtraction facts in one family where the sum is 6? When the sum is 8?
7. How did you use subtraction in the games that you played? What activity did you like most? Which was hardest for you? Why?

Teacher Reflection

• With what meanings of subtraction were the majority of the students most comfortable?
• Did all the students display understanding of the subtraction meanings?
• Can the students explain how to compare to find differences?
• Which students met all the objectives of this unit? What extension activities are appropriate for those students?
• Which students are still having difficulty with the objectives of this unit? What additional instructional experiences do they need?
• What were the greatest challenges for the students?
• What will I do differently the next time that I teach this unit?
• What other learning situations would extend their experiences with comparison subtraction?
• How might I connect the essential ideas of this unit with lessons about related mathematics content? (Data is an area that is a logical extension of this unit.)

Students count back to compare plates of fish-shaped crackers, and then they record the comparison in vertical and horizontal format. They apply their skills of reasoning and problem solving during this lesson in several ways. [Because students have associated the word "more" with addition, the comparative approach to subtraction is typically more challenging for the students to understand.]
Students write subtraction problems, model them with sets of fish-shaped crackers, and communicate their findings in words and pictures. They record differences in words and in symbols. The additive identity is reviewed in the context of comparing equal sets.

In this lesson, students determine differences using the number line to compare lengths. Because this meaning is based on linear measurement, it is a distinctly different representation from the meanings presented in Lessons One and Two. At the end of the lesson, the students use reasoning and problem solving to predict differences and to answer puzzles involving subtraction.

This lesson encourages the students to explore another meaning of subtraction, the balance. This meaning leads naturally into recording with equations. The students will imitate the action of a pan balance and record the modeled subtraction facts in equation form.

In this lesson, the relation of addition to subtraction is explored with fish-shaped crackers. The students search for related addition and subtraction facts for a given number and also investigate fact families when one addend or the difference is 0.

Learning Objectives

Students will:
• Review the meanings for subtraction
• Practice comparative subtraction in a variety of formats

Common Core State Standards – Mathematics

Kindergarten, Counting & Cardinality
• CCSS.Math.Content.K.CC.A.2 Count forward beginning from a given number within the known sequence (instead of having to begin at 1).

Kindergarten, Counting & Cardinality
• CCSS.Math.Content.K.CC.B.5 Count to answer ''how many?'' questions about as many as 20 things arranged in a line, a rectangular array, or a circle, or as many as 10 things in a scattered configuration; given a number from 1-20, count out that many objects.
Kindergarten, Algebraic Thinking
• CCSS.Math.Content.K.OA.A.1 Represent addition and subtraction with objects, fingers, mental images, drawings, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations.

Kindergarten, Algebraic Thinking
• CCSS.Math.Content.K.OA.A.2 Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.

Kindergarten, Algebraic Thinking
• CCSS.Math.Content.K.OA.A.5 Fluently add and subtract within 5.

Grade 1, Algebraic Thinking
• CCSS.Math.Content.1.OA.B.4 Understand subtraction as an unknown-addend problem. For example, subtract 10 - 8 by finding the number that makes 10 when added to 8.

Grade 1, Algebraic Thinking
• CCSS.Math.Content.1.OA.C.5 Relate counting to addition and subtraction (e.g., by counting on 2 to add 2).

Grade 1, Algebraic Thinking
• CCSS.Math.Content.1.OA.C.6 Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 - 4 = 13 - 3 - 1 = 10 - 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 - 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13).

Grade 2, Algebraic Thinking
• CCSS.Math.Content.2.OA.B.2 Fluently add and subtract within 20 using mental strategies. By end of Grade 2, know from memory all sums of two one-digit numbers.

Grade 2, Number & Operations
• CCSS.Math.Content.2.NBT.B.5 Fluently add and subtract within 100 using strategies based on place value, properties of operations, and/or the relationship between addition and subtraction.
Grade 2, Number & Operations
• CCSS.Math.Content.2.NBT.B.7 Add and subtract within 1000, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method. Understand that in adding or subtracting three-digit numbers, one adds or subtracts hundreds and hundreds, tens and tens, ones and ones; and sometimes it is necessary to compose or decompose tens or hundreds.

Grade 2, Number & Operations
• CCSS.Math.Content.2.NBT.B.9 Explain why addition and subtraction strategies work, using place value and the properties of operations.

Grade 2, Measurement & Data
• CCSS.Math.Content.2.MD.B.6 Represent whole numbers as lengths from 0 on a number line diagram with equally spaced points corresponding to the numbers 0, 1, 2, ..., and represent whole-number sums and differences within 100 on a number line diagram.

Common Core State Standards – Practice

• CCSS.Math.Practice.MP1 Make sense of problems and persevere in solving them.
• CCSS.Math.Practice.MP4 Model with mathematics.
• CCSS.Math.Practice.MP5 Use appropriate tools strategically.
• CCSS.Math.Practice.MP7 Look for and make use of structure.
Soil Texture Analysis
by Ted Sammis

A simple method to estimate the percent sand, silt, and clay in a soil and determine its texture.

1. Get a quart jar from the supermarket with a lid, or use any jar with a large mouth.
2. Fill the jar half full of soil.
3. Wet the soil to a mud consistency and tap the jar to settle the soil.
4. Mark the level of soil on the jar with a marking pen or whiteout.
5. If you have some Calgon, put a teaspoonful in the jar.
6. Add water to the top of the jar and shake the soil-water mix until the soil is all mixed up in the water.
7. Put the jar on a table and let the soil settle out for 40 seconds, then mark the level of soil on the jar. This is the sand portion of the soil.
8. Wait 6 hours and mark the level of the soil in the jar. The difference between the bottom mark, which is the sand, and the second mark up is the silt portion of the soil. The total sand plus silt is the distance from the bottom of the jar to the second mark.
9. Calculate the percent sand, silt, and clay by measuring, in inches: the distance from the bottom to the first mark up, which is the sand fraction; the distance from the first mark up to the second mark up, which is the silt fraction; and the distance from the bottom to the third mark up from the bottom, which is the sand plus silt plus clay fraction.

Sometimes, when all the sand, silt, and clay has settled, the height of the soil is higher than when you marked the jar after making a mud solution. This can only be determined by letting the jar sit for several days. If you have the time to do this, then a more accurate calculation of percent sand, silt, and clay can be determined based on this new measured total height. Also, the percent sand, silt, and clay is a volume percentage; the soil triangle and table below for soil classification are in percent by weight.
You need to correct this by converting from percent volume to percent weight: multiply the percentage of sand by 1.19, the percentage of silt by 0.87, and the percentage of clay by 0.94. These numbers are the weight ratios of each fraction's bulk density compared to the average bulk density of the material.

10. The percent sand is the depth of the sand divided by the depth of the total soil.
11. The percent silt is the depth of the silt divided by the depth of the total soil.
12. The percent clay is 100 minus the percent sand plus silt.
13. To determine the soil texture from the percent sand, silt, and clay, use the table below:

│Soil classification │Clay soil│Loam soil│Sandy soil │
│percent clay        │40-100%  │7-27%    │1-10%      │
│percent silt        │0-40%    │28-50%   │1-15%      │
│percent sand        │0-45%    │23-52%   │85-100%    │

14. A more precise determination of soil texture can be made from the percent sand, silt, and clay using the soil triangle.

This simple approach to determining texture will not work if the soil contains a lot of gypsum. Soils containing a lot of gypsum are normally pinkish white in color.

│If you have any questions please contact webmaster@weather.nmsu.edu │ Department of Agronomy and Horticulture│
│Updated: Dec 18 1996 │Box 30001 / Dept. 3Q / Las Cruces, NM 88003-8003│
│Copyright © 1996 New Mexico State University │ Telephone: (505)646-3405│
│ │ FAX: (505)646-6041│
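The arithmetic in steps 9 through 12, together with the volume-to-weight correction, can be sketched in a few lines. The function name, the example jar readings, and the final renormalization to 100% (which the article leaves implicit) are ours, not from the article:

```python
def soil_fractions(sand_depth, silt_mark_depth, total_depth):
    """Convert jar-settling depth marks (any consistent unit) into
    percent sand, silt, and clay by weight.

    sand_depth:      bottom of jar to first mark (sand only)
    silt_mark_depth: bottom of jar to second mark (sand + silt)
    total_depth:     bottom of jar to the final mark (sand + silt + clay)
    """
    # Volume percentages from the depth measurements (steps 10-12).
    sand_vol = 100.0 * sand_depth / total_depth
    silt_vol = 100.0 * (silt_mark_depth - sand_depth) / total_depth
    clay_vol = 100.0 - sand_vol - silt_vol

    # Volume-to-weight correction factors from the article.
    sand, silt, clay = sand_vol * 1.19, silt_vol * 0.87, clay_vol * 0.94

    # Rescale so the corrected percentages again sum to 100
    # (a normalization step the article does not spell out).
    total = sand + silt + clay
    return tuple(round(100.0 * f / total, 1) for f in (sand, silt, clay))

# Hypothetical jar readings: 2.0 in of sand, second mark at 3.0 in, 4.0 in total.
print(soil_fractions(2.0, 3.0, 4.0))
```

Comparing the resulting percentages against the table (or the soil triangle) then gives the texture class.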
Radius of Convergence of a divergent function

March 31st 2013, 06:55 PM, #1:
I am supposed to find the radius of convergence of the sum from 0 to infinity of ((n+3)! x^n)/2^(n+1) by using the ratio test. However, this appears to be a divergent series? What does this mean for the radius of convergence?

March 31st 2013, 07:09 PM, #2:
Re: Radius of Convergence of a divergent function
Don't know how to delete this post, but I figured it out. It's 0.
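For completeness, the ratio test computation the poster alludes to (our derivation, not from the thread): with $a_n = (n+3)!\,x^n/2^{n+1}$,

```latex
\left|\frac{a_{n+1}}{a_n}\right|
  = \frac{(n+4)!\,|x|^{n+1}/2^{n+2}}{(n+3)!\,|x|^{n}/2^{n+1}}
  = \frac{(n+4)\,|x|}{2} \;\longrightarrow\; \infty
  \quad\text{for every } x \neq 0,
```

so the series converges only at $x = 0$, and the radius of convergence is indeed $R = 0$.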
Find the volume of the given prism. Round to the nearest tenth if necessary. (1 point)
2,511.5 yd³
1,255.7 yd³
1,025.3 yd³
1,450.0 yd³
How do you demonstrate that f(x + 3π) − f(x) = 0 when f(x) = cot(x) − 1?

Since the period of cot(x) is π, the result follows. Hence proved.
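The one-line answer above can be spelled out (our derivation):

```latex
f(x+3\pi) - f(x)
  = \bigl(\cot(x+3\pi) - 1\bigr) - \bigl(\cot(x) - 1\bigr)
  = \cot(x) - \cot(x)
  = 0,
```

using $\cot(x+\pi) = \cot(x)$ three times.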
Maria has a box of 2 cookies. She gives 2 cookies to each friend. Which expression shows the number of cookies Maria has left after giving cookies to m friends?
A: 2m − 20
B: 20 − 2m
C: 20m − 2
D: 20 + m

Do you mean she has a box of 20 cookies?

Your question probably meant to read a "box of 20 cookies," not 2. She is LESS 2 cookies for each friend, so the slope is a negative 2 and the intercept is 20, because she started with that many. So, 20 − 2m after "m" friends is the number of cookies left.

I am guessing that was a typo and you meant that she had a box of 20 cookies. You said she gave 2 cookies to each friend; that means that 2m, where m is the number of friends, is the number of cookies that she gives away in total. So, to know how many she has left, you subtract 2m from the cookies she had in the first place (20). Therefore, the expression would look like this: \[20-2m\] That is choice B.
MathGroup Archive: January 1996

problem developing a formula

• To: mathgroup at smc.vnet.net
• Subject: [mg2954] problem devolping a formula
• From: ryangall at gpu.srv.ualberta.ca (Bobby Sixkiller)
• Date: Mon, 15 Jan 1996 03:40:15 -0500
• Organization: University of Alberta, Edmonton, Canada

I am wondering if there is any way I can find a formula that will solve the following condition:

CONSTANTS: MAX=1000
Z=SOME NUMBER (arbitrary)
IF N=0 THEN maxx=MAX;
POST CONDITION: SPACE is returned as the answer
FOR N=0 TO N=Z DO N+1 /* RUN THE LOOP Z TIMES */

I hope you understand what I want; I originally wrote this in C, but tried to write it more universally for this post. What I want (if possible) is some formula that won't have to go through this for loop/series sum to get the desired number. Since the Z value can easily be over 10000, the time it takes is unacceptable (I'm using it for

I need some function that takes in a Z value and pops out SPACE as the answer. If any of you know how to do this, or perhaps how to do it some other way, please e-mail me ASAP. One more thing: can you explain how you got the answer? I deal with these sorts of problems all the time, but have no mathematical insight into them. If you don't have time to explain it, could you tell me what I can study to learn about it?

THANKS A LOT FOR YOUR HELP!
Scientific Notation of Zero

Date: 09/09/2002 at 17:12:12
From: Tim
Subject: Scientific notation of zero

What is the scientific notation of zero? Is it 0 x 10^0? How can that be? Zero isn't between 1 and 10, and it never will be by moving the decimal. Does that mean there is no scientific notation for zero?

Date: 09/10/2002 at 07:04:02
From: Doctor Floor
Subject: Re: Scientific notation of zero

Hi, Tim,

Good question! There is no unique scientific notation of zero. It may be 0 x 10^0, 0 x 10^1, or 0 x 10^n for whatever integer n.

See also, from Eric Weisstein's World of Mathematics: Scientific Notation

If you have more questions, just write back.

Best regards,
- Doctor Floor, The Math Forum
PHYS771 Lecture 7: Randomness

(Thanks to Jibran Rashid for help preparing these notes.)

In the last two lectures, we talked about computational complexity up till the early 1970's. Today we'll add a new ingredient to our already simmering stew -- something that was thrown in around the mid-1970's, and that now pervades complexity to such an extent that it's hard to imagine doing anything without it. This new ingredient is randomness.

Certainly, if you want to study quantum computing, then you first have to understand randomized computing. I mean, quantum amplitudes only become interesting when they exhibit some behavior that classical probabilities don't: contextuality, interference, entanglement (as opposed to correlation), etc. So we can't even begin to discuss quantum mechanics without first knowing what it is that we're comparing against.

Alright, so what is randomness? Well, that's a profound philosophical question, but I'm a simpleminded person. So, you've got some probability p, which is a real number in the unit interval [0,1]. That's randomness.

Question: But wasn't it a big achievement when Kolmogorov put probability on an axiomatic basis in the 1930's?

Answer: Yes, it was! But in this class, we'll only care about probability distributions over finitely many events, so all the subtle questions of integrability, measurability, and so on won't arise. In my view, probability theory is yet another example where mathematicians immediately go to infinite-dimensional spaces, in order to solve the problem of having a nontrivial problem to solve in the first place! And that's fine -- whatever floats your boat. I'm not criticizing that. But in theoretical computer science, we've already got our hands full with 2^n choices. We need $2^{\aleph_0}$ choices like we need a hole in the head.

Alright, so given some "event" A -- say, the event that it will rain tomorrow -- we can talk about a real number Pr[A] in [0,1], which is the probability that A will happen.
(Or rather, the probability we think A will happen -- but I told you I'm a simpleminded person.) And the probabilities of different events satisfy some obvious relations, but it might be helpful to see them explicitly if you never have before.

First, the probability that A doesn't happen equals 1 minus the probability that it happens:

Pr[not A] = 1 - Pr[A].

Agree? I thought so. Second, if we've got two events A and B, then

Pr[A or B] = Pr[A] + Pr[B] - Pr[A and B].

Third, an immediate consequence of the above, called the union bound:

Pr[A or B] ≤ Pr[A] + Pr[B].

Or in English: if you're unlikely to drown and you're unlikely to get struck by lightning, then chances are you'll neither drown nor get struck by lightning, regardless of whether getting struck by lightning makes you more or less likely to drown. One of the few causes for optimism in this life. Despite its triviality, the union bound is probably the most useful fact in all of theoretical computer science. I use it maybe 200 times in every paper I write.

What else? Given a random variable X, the expectation of X, or E[X], is defined to be Σ_k Pr[X=k] k. Then given any two random variables X and Y, we have

E[X + Y] = E[X] + E[Y].

This is called linearity of expectation, and is probably the second most useful fact in all of theoretical computer science, after the union bound. Again, the key point is that any dependencies between X and Y are irrelevant. Do we also have

E[XY] = E[X] E[Y]?

Right: we don't! Or rather, we do if X and Y are independent, but not in general.

Another important fact is Markov's inequality (or rather, one of his many inequalities): if X ≥ 0 is a nonnegative random variable, then for all k,

Pr[X ≥ k E[X]] ≤ 1/k.

Markov's inequality leads immediately to the third most useful fact in theoretical computer science, called the Chernoff bound. The Chernoff bound says that if you flip a coin 1,000 times, and you get heads 900 times, then chances are the coin was crooked.
This is the theorem that casino managers implicitly use when they decide whether to send goons to break someone's legs. Formally, let h be the number of times you get heads if you flip a fair coin n times. Then one way to state the Chernoff bound is

$\Pr[|h-n/2| \geq \alpha ] \leq 2 e^{-c \alpha^{2} / n}$,

where c is a constant that you look up since you don't remember it. (Oh, all right: c=2 will work.)

How can we prove the Chernoff bound? Well, there's a simple trick: let x[i]=1 if the i^th coin flip comes up heads, and let x[i]=0 if tails. Then consider the expectation, not of x[1]+...+x[n] itself, but of exp(x[1]+...+x[n]). Since the coin flips had better be uncorrelated with each other, we have

\begin{align*}
E\left[ e^{x_{1}+\cdots +x_{n}}\right] &= E\left[ e^{x_{1}}\cdots e^{x_{n}}\right] \\
&= E\left[ e^{x_{1}}\right] \cdots E\left[ e^{x_{n}}\right] \\
&= \left( \frac{1+e}{2}\right)^{n}.
\end{align*}

Now we can just use Markov's inequality, and then take logs on both sides to get the Chernoff bound. I'll spare you the calculation (or rather, spare myself).

What do we need randomness for? Even the ancients -- Turing, Shannon, and von Neumann -- understood that a random number source might be useful for writing programs. So for example, back in the forties and fifties, physicists invented (or rather re-invented) a technique called Monte Carlo simulation, to study some weird question they were interested in at the time involving the implosion of hollow plutonium spheres. Statistical sampling -- say, of the different ways a hollow plutonium sphere might go kaboom! -- is one perfectly legitimate use of randomness. There are many, many reasons you might want randomness -- for foiling an eavesdropper in cryptography, for avoiding deadlocks in communication protocols, and so on. But within complexity theory, the usual purpose of randomness is to "smear out ignorance": that is, to take an algorithm that works on most inputs, and turn it into an algorithm that works on all inputs most of the time.

Let's see an example of a randomized algorithm. Suppose I describe a number to you by starting from 1, and then repeatedly adding, subtracting, or multiplying two numbers that were previously described (as in the card game "24"). Like so:
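The listing that followed did not survive here, so the block below is a stand-in of our own in the same spirit: start from 1, build each new number only by adding, subtracting, or multiplying previously described ones, and end with an output j that happens to equal 0 (here because (b+c)² = b² + 2bc + c²):

```python
# Each line defines a new number from 1 or from previously defined numbers,
# using only +, -, and *. (Illustrative stand-in, not the lecture's listing.)
a = 1
b = a + a    # 2
c = b * b    # 4
d = b + c    # 6
e = d * d    # 36
f = b * b    # 4
g = c * c    # 16
h = b * c    # 8
i = h + h    # 16
k = f + g    # 20
l = k + i    # 36
j = e - l    # output: 0

print(j)
```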
But within complexity theory, the usual purpose of randomness is to "smear out ignorance": that is, to take an algorithm that works on most inputs, and turn it into an algorithm that works on all inputs most of the time. Let's see an example of a randomized algorithm. Suppose I describe a number to you by starting from 1, and then repeatedly adding, subtracting, or multiplying two numbers that were previously described (as in the card game "24"). Like so: You can verify (if you're so inclined) that j, the "output" of the above program, equals zero. Now consider the following general problem: given such a program, does it output 0 or not? How could you Well, one way would just be to run the program, and see what it outputs! What's the problem with that? Right: Even if the program is very short, the numbers it produces at intermediate steps might be enormous -- that is, you might need exponentially many digits even to write them down. This can happen, for example, if the program repeatedly generates a new number by squaring the previous one. So a straightforward simulation isn't going to be efficient. What can you do instead? Well, suppose the program has n operations. Then here's the trick: first pick a random prime number p with n^2 digits. Then simulate the program, but doing all the arithmetic modulo p. This algorithm will certainly be efficient: that is, it will run in time polynomial in n. Also, if the output isn't zero modulo p, then you certainly conclude that isn't zero. However, this still leaves two questions unanswered: 1. Supposing the output is 0 modulo p, how confident can you be that it wasn't just a lucky fluke, and that the output is actually 0? 2. How do you pick a random prime number? For the first question, let x be the program's output. Then |x| can be at most $2^{2^{n}}$, where n is the number of operations -- since the fastest way to get big numbers is by repeated squaring. This immediately implies that x can have at most 2^n prime factors. 
On the other hand, how many prime numbers are there with n^2 digits? The famous Prime Number Theorem tells us the answer: about $2^{n^{2}}/n^{2}$. Since $2^{n^{2}}/n^{2}$ is a lot bigger than 2^n, most of those primes can't possibly divide x (unless of course x=0). So if we pick a random prime and it does divide x, then we can be very, very confident (but admittedly not certain) that x=0. So much for the first question.

Now on to the second: how do you pick a random prime with n^2 digits? Well, our old friend the Prime Number Theorem tells us that, if you pick a random number with n^2 digits, then it has about a one in n^2 chance of being prime. So all you have to do is keep picking random numbers; after about n^2 tries you'll probably hit a prime!

Question: Instead of repeatedly picking a random number, why couldn't you just start at a fixed number, and then keep adding 1 until you hit a prime?

Answer: Sure, that would work -- assuming a far-reaching extension of the Riemann Hypothesis! What you need is that the n^2-digit prime numbers are more-or-less evenly spaced, so that you can't get unlucky and hit some exponentially-long stretch where everything's composite. Not even the Extended Riemann Hypothesis would give you that, but there is something called Cramér's Conjecture that would.

Of course, we've merely reduced the problem of picking a random prime to a different problem: namely, once you've picked a random number, how do you tell if it's prime? As I mentioned in the last lecture, figuring out if a number is prime or composite turns out to be much easier than actually factoring the number. Until recently, this primality-testing problem was another example where it seemed like you needed to use randomness -- indeed, it was the granddaddy of all such examples.

The idea was this. Fermat's Little Theorem (not to be confused with his Last Theorem!) tells us that, if p is a prime, then x^p=x (mod p) for every integer x.
So if you found an x for which x^p≠x (mod p), that would immediately tell you that p was composite -- even though you'd still know nothing about what its divisors were. The hope would be that, if you couldn't find an x for which x^p≠x (mod p), then you could say with high confidence that p was prime.

Alas, 'twas not to be. It turns out that there are composite numbers p that "pretend" to be prime, in the sense that x^p=x (mod p) for every x. The first few of these pretenders (called the Carmichael numbers) are 561, 1105, 1729, 2465, and 2821. Of course, if there were only finitely many pretenders, and we knew what they were, everything would be fine. But Alford, Granville, and Pomerance showed in 1994 that there are infinitely many pretenders.

But already in 1976, Miller and Rabin had figured out how to unmask the pretenders by tweaking the test a little bit. In other words, they found a modification of the Fermat test that always passes if p is prime, and that fails with high probability if p is composite. So, this gave a polynomial-time randomized algorithm for primality testing. Then, in a breakthrough a few years ago that you've probably heard about, Agrawal, Kayal, and Saxena found a deterministic polynomial-time algorithm to decide whether a number is prime. This breakthrough has no practical application whatsoever, since we've long known of randomized algorithms that are faster, and whose error probability can easily be made smaller than the probability of an asteroid hitting your computer in mid-calculation. But it's wonderful to know.

To summarize, we wanted an efficient algorithm that would examine a program consisting entirely of additions, subtractions, and multiplications, and decide whether or not it output 0. I gave you such an algorithm, but it needed randomness in two places: first, in picking a random number; and second, in testing whether the random number was prime.
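Both halves of this story are easy to watch in action. Below, the plain Fermat test is fooled by the Carmichael number 561, while the Miller-Rabin tweak (which also watches for nontrivial square roots of 1 along the way) unmasks it. This is a textbook sketch of the test, not code from the lecture:

```python
import random

def fermat_pretends_prime(p, trials=50):
    """Plain Fermat test: does x^p = x (mod p) for random x?"""
    return all(pow(x, p, p) == x % p
               for x in (random.randrange(p) for _ in range(trials)))

def miller_rabin(n, rounds=20):
    """Returns False if n is definitely composite, True if n is prime
    (or composite with probability at most 4^-rounds)."""
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # found a witness: n is composite
    return True

# 561 = 3 * 11 * 17 satisfies x^561 = x (mod 561) for EVERY x...
print(fermat_pretends_prime(561))   # a convincing "pretender"
print(miller_rabin(561))            # ...but Miller-Rabin unmasks it
```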
The second use of randomness turned out to be inessential -- since we now have a deterministic polynomial-time algorithm for primality testing. But what about the first use of randomness? Was that use also inessential? As of 2006, no one knows! But large theoretical cruise-missiles have been pummeling this very problem, and the situation on the ground is volatile. Consult your local STOC proceedings for more on this developing story.

Alright, it's time to define some complexity classes. (Then again, when isn't it time?) When we talk about probabilistic computation, chances are we're talking about one of the following four complexity classes, which were defined in a 1977 paper of John Gill.

• PP (Probabilistic Polynomial-Time): Yeah, apparently even Gill himself recently admitted that it's a lousy name. But this is a serious course, and I will not tolerate any seventh-grade humor. Basically, PP is the class of all decision problems for which there exists a polynomial-time randomized algorithm that accepts with probability greater than 1/2 if the answer is yes, or less than 1/2 if the answer is no. In other words, we imagine a Turing machine M that receives both an n-bit input string x, and an unlimited source of random bits. If x is a yes-input, then more than half of the random bit settings should cause M to accept; while if x is a no-input, then more than half of the random bit settings should cause M to reject. Furthermore, M needs to halt after a number of steps bounded by a polynomial in n.

Here's the standard example of a PP problem: given a Boolean formula φ with n variables, do at least half of the 2^n possible settings of the variables make the formula evaluate to TRUE? (Incidentally, just like deciding whether there exists a satisfying assignment is NP-complete, so this majority-vote variant can be shown to be PP-complete: that is, any other PP problem is efficiently reducible to it.)
Now, why might PP not capture our intuitive notion of problems solvable by randomized algorithms? Right: because we want to avoid "Florida recount" situations! As far as PP is concerned, an algorithm is free to accept with probability 1/2+2^-n if the answer is yes, and probability 1/2-2^-n if the answer is no. But how would a mortal actually distinguish those two cases? If n was (say) 5000, then we'd have to gather statistics for longer than the age of the universe!

And indeed, PP is an extremely big class: for example, it certainly contains the NP-complete problems. Why? Well, given a Boolean formula φ with n variables, what you can do is accept right away with probability 1/2-2^-2n, and otherwise choose a random truth assignment and accept if and only if it satisfies φ. Then your total acceptance probability will be more than 1/2 if there's at least one satisfying assignment for φ, or less than 1/2 if there isn't. Indeed, complexity theorists believe that PP is strictly larger than NP -- although, as usual, we can't prove it.

The above considerations led Gill to define a more "reasonable" variant of PP:

• BPP (Bounded-Error Probabilistic Polynomial-Time): This is the class of decision problems for which there exists a polynomial-time randomized algorithm that accepts with probability greater than 2/3 if the answer is yes, or less than 1/3 if the answer is no. In other words: given any input, the algorithm can be wrong with probability at most 1/3.

What's important about 1/3 is just that it's some constant smaller than 1/2. Any such constant would be as good as any other. Why? Well, suppose we're given a BPP algorithm that errs with probability 1/3. If we're so inclined, we can easily modify the algorithm to err with probability at most (say) 2^-100. How? Right: just rerun the algorithm a few hundred times; then output the majority answer!
If we take the majority answer out of T independent trials, then our good friend the Chernoff bound tells us we'll be wrong with a probability that decreases exponentially in T. Indeed, not only could we replace 1/3 by any constant smaller than 1/2; we could even replace it by 1/2-1/p(n) where p is any polynomial. So, that was BPP: if you like, the class of all problems that are feasibly solvable by computer in a universe governed by classical physics. • RP (Randomized Polynomial-Time): As I said before, the error probability of a BPP algorithm can easily be made smaller than the probability of an asteroid hitting the computer. And that's good enough for most applications: say, administering radiation doses in a hospital, or encrypting multibillion-dollar bank transactions, or controlling the launch of nuclear missiles. But what about proving theorems? For certain applications, you really can't take chances. And that leads us to RP: the class of problems for which there exists a polynomial-time randomized algorithm that accepts with probability greater than 1/2 if the answer is yes, or probability zero if the answer is no. To put it another way: if the algorithm accepts even once, then you can be certain that the answer is yes. If the algorithm keeps rejecting, then you can be extremely confident (but never certain) that the answer is no. RP has an obvious "complement," called coRP. This is just the class of problems for which there's a polynomial-time randomized algorithm that accepts with probability 1 if the answer is yes, or less than 1/2 if the answer is no. • ZPP (Zero-Error Probabilistic Polynomial-Time): This class can be defined as the intersection of RP and coRP -- the class of problems in both of them. Equivalently, ZPP is the class of problems solvable by a polynomial-time randomized algorithm that has to be correct whenever it does output an answer, but can output "don't know" up to half the time. 
Again equivalently, ZPP is the class of problems solvable by an algorithm that never errs, but that only runs in expected polynomial time. Sometimes you see BPP algorithms called "Monte Carlo algorithms," and ZPP algorithms called "Las Vegas algorithms." I've even seen RP algorithms called "Atlantic City algorithms." This always struck me as stupid terminology. (Are there also Indian reservation algorithms?) Here are the known relationships among the basic complexity classes that we've seen so far in this course. The relationships I didn't discuss explicitly are left as exercises for the reader (i.e., you). It might surprise you that we still don't know whether BPP is contained in NP. But think about it: even if a BPP machine accepted with probability close to 1, how would you prove that to a deterministic polynomial-time verifier who didn't believe you? Sure, you could show the verifier some random runs of the machine, but then she'd always suspect you of skewing your samples to get a favorable outcome. Fortunately, the situation isn't quite as pathetic as it seems: we at least know that BPP is contained in NP^NP (that is, NP with NP oracle), and hence in the second level of the polynomial hierarchy PH. Sipser, Gács, and Lautemann proved that in 1983. I went through the proof in class, but I'm actually going to skip it in these notes, because it's a bit technical. Incidentally, while we know that BPP is contained in NP^NP, we don't know anything similar for BQP, the class of problems solvable in polynomial time on a quantum computer. BQP hasn't yet made its official entrance in this course -- you'll have to wait a couple more lectures! -- but I'm trying to foreshadow it by telling you what it apparently isn't. In other words, what do we know to be true of BPP that we don't know to be true of BQP? Containment in PH is only the first of three examples we'll see in this lecture.
In complexity theory, it's hard to talk about randomness without also talking about a closely-related concept called nonuniformity. Nonuniformity basically means that you get to choose a different algorithm for each input length n. Now, why would you want such a stupid thing? Well, remember in Lecture 5 I showed you the Blum Speedup Theorem -- which says that it's possible to construct weird problems that admit no fastest algorithm, but only an infinite sequence of algorithms, with each one faster than the last on sufficiently large inputs? In such a case, nonuniformity would let you pick and choose from all algorithms, and thereby achieve the optimal performance. In other words, given an input of length n, you could simply pick the algorithm that's fastest for inputs of that particular length! But even in a world with nonuniformity, complexity theorists believe there would still be strong limits on what could efficiently be computed. When we want to talk about those limits, we use a terminology invented by Karp and Lipton in 1982. Karp and Lipton defined the complexity class P/f(n), or P with f(n)-size advice, to consist of all problems solvable in deterministic polynomial time on a Turing machine, with help from an f(n)-bit "advice string" a[n] that depends only on the input length n. You can think of the polynomial-time Turing machine as a grad student, and the advice string a[n] as wisdom from the student's advisor. Like most advisors, this one is infinitely wise, benevolent, and trustworthy. He wants nothing more than to help his students solve their respective thesis problems: that is, to decide whether their respective inputs x in {0,1}^n are yes-inputs or no-inputs. But also like most advisors, he's too busy to find out what specific problems his students are working on. He therefore just doles out the same advice a[n] to all of them, trusting them to apply it to their particular inputs x. 
We'll be particularly interested in the class P/poly, which consists of all problems solvable in polynomial time using polynomial-size advice. In other words, P/poly is the union of P/n^k over all positive integers k. Now, is it possible that P = P/poly? As a first (trivial) observation, I claim the answer is no: P is strictly contained in P/poly, and indeed in P/1. In other words, even with a single bit of advice, you really can do more than with no advice. Why? Right! Consider the following problem: Given an input of length n, decide whether the n^th Turing machine halts. Not only is this problem not in P, it's not even computable -- for it's nothing other than a slow, "unary" encoding of the halting problem. On the other hand, it's easy to solve with a single advice bit a[n] that depends only on the input length n. For that advice bit could just tell you what the answer is! Here's another way to understand the power of advice: while the number of problems in P is only countably infinite, the number of problems in P/1 is uncountably infinite. (Why?) On the other hand, just because you can solve vastly more problems with advice than you can without, that doesn't mean advice will help you solve any particular problem you might be interested in. Indeed, a second easy observation is that advice doesn't let you do everything: there exist problems not in P/poly. Why? Well, here's a simple diagonalization argument. I'll actually show a stronger result, that there exist problems not in P/n^log n. Let M[1],M[2],M[3],... be a list of polynomial-time Turing machines. Also, fix an input length n. Then I claim that there exists a Boolean function f:{0,1}^n→{0,1} that the first n machines (M[1],...,M[n]) all fail to compute, even given any n^log n-bit advice string. Why? Just a counting argument: there are $2^{2^{n}}$ Boolean functions, but only n Turing machines and $2^{n^{\log n}}$ advice strings -- so for sufficiently large n, the machines and advice strings together can account for at most $n \cdot 2^{n^{\log n}} < 2^{2^{n}}$ functions, and some function escapes them all.
So choose such a function f for every n; you'll then cause each machine M[i] to fail on all but finitely many input lengths. Indeed, we didn't even need the assumption that the M[i]'s run in polynomial time. Of course, all this time we've been dancing around the real question: can advice help us solve problems that we actually care about, like the NP-complete problems? In particular, is NP contained in P/poly? Intuitively, it seems unlikely: there are exponentially many Boolean formulas of size n, so even if you somehow received a polynomial-size advice string from God, how would that help you to decide satisfiability for more than a tiny fraction of those formulas? But -- and I'm sure this will come as a complete shock to you -- we can't prove it's impossible. Well, at least in this case we have a good excuse for our ignorance, since if P=NP, then obviously NP would be in P/poly as well. But here's a question: if we did succeed in proving P≠NP, then would we also have proved that NP is not in P/poly? In other words, would NP in P/poly imply P=NP? Alas, we don't even know the answer to that. But as with BPP and NP, the situation isn't quite as pathetic as it seems. Karp and Lipton did manage to prove in 1982 that, if NP were contained in P/poly, then the polynomial hierarchy PH would collapse to the second level (that is, to NP^NP). In other words, if you believe the polynomial hierarchy is infinite, then you must also believe that NP-complete problems are not efficiently solvable by a nonuniform algorithm. This "Karp-Lipton Theorem" is the most famous example of a very large class of complexity results, a class that's been characterized as "if donkeys could whistle, then pigs could fly." In other words, if one thing no one really believes is true were true, then another thing no one really believes is true would be true! Intellectual onanism, you say? Nonsense!
What makes it interesting is that the two things that no one really believes are true would've previously seemed completely unrelated to each other. It's a bit of a digression, but the proof of the Karp-Lipton Theorem is more fun than a barrel full of carp. So let's see the proof right now. We assume NP is contained in P/poly; what we need to prove is that the polynomial hierarchy collapses to the second level -- or equivalently, that coNP^NP = NP^NP. So let's consider an arbitrary problem in coNP^NP, like so: For all n-bit strings x, does there exist an n-bit string y such that φ(x,y) evaluates to TRUE? (Here φ is some arbitrary polynomial-size Boolean formula.) We need to find an NP^NP question -- that is, a question where the existential quantifier comes before the universal quantifier -- that has the same answer as the question above. But what could such a question possibly be? Here's the trick: we'll first use the existential quantifier to guess a polynomial-size advice string a[n]. We'll then use the universal quantifier to guess the string x. Finally, we'll use the advice string a[n] -- together with the assumption that NP is in P/poly -- to guess y on our own. Thus: Does there exist an advice string a[n] such that for all n-bit strings x, φ(x,M(x,a[n])) evaluates to TRUE? Here M is a polynomial-time Turing machine that, given x as input and a[n] as advice, outputs an n-bit string y such that φ(x,y) evaluates to TRUE whenever such a y exists. By one of your homework problems from last week, we can easily construct such an M provided we can solve NP-complete problems in P/poly. Alright, I told you before that nonuniformity was closely related to randomness -- so much so that it's hard to talk about one without talking about the other.
So in the rest of this lecture, I want to tell you about two connections between randomness and nonuniformity: a simple one that was discovered by Adleman in the 70's, and a deep one that was discovered by Impagliazzo, Nisan, and Wigderson in the 90's. The simple connection is that BPP is contained in P/poly: in other words, nonuniformity is at least as powerful as randomness. Why do you think that is? Well, let's see why it is. Given a BPP computation, the first thing we'll do is amplify the computation to exponentially small error. In other words, we'll repeat the computation (say) n^2 times and then output the majority answer, so that the probability of making a mistake drops from 1/3 to roughly $2^{-n^{2}}$. (If you're trying to prove something about BPP, amplifying to exponentially small error is almost always a good first step!) Now, how many inputs are there of length n? Right: 2^n. And for each input, only a $2^{-n^{2}}$ fraction of random strings cause us to err. By the union bound (the most useful fact in all of theoretical computer science), this implies that at most a $2^{n-n^{2}}$ fraction of random strings can ever cause us to err on inputs of length n. Since $2^{n-n^{2}}$ < 1, this means there exists a random string, call it r, that never causes us to err on inputs of length n. So fix such an r, feed it as advice to the P/poly machine, and we're done! So that was the simple connection between randomness and nonuniformity. Before moving on to the deep connection, let me make two remarks. 1. Even if P≠NP, you might wonder whether NP-complete problems can be solved in probabilistic polynomial time. In other words, is NP in BPP? Well, we can already say something concrete about that question. If NP is in BPP, then certainly NP is also in P/poly (since BPP is in P/poly). But that means PH collapses by the Karp-Lipton Theorem. 
So if you believe the polynomial hierarchy is infinite, then you also believe NP-complete problems are not efficiently solvable by randomized algorithms. 2. If nonuniformity can simulate randomness, then can it also simulate quantumness? In other words, is BQP in P/poly? Well, we don't know, but it isn't considered likely. Certainly Adleman's proof that BPP is in P/poly completely breaks down if we replace the BPP by BQP. But this raises an interesting question: why does it break down? What's the crucial difference between quantum theory and classical probability theory, which causes the proof to work in the one case but not the other? I'll leave the answer as an exercise for you. Alright, now for the deep connection. Do you remember the primality-testing problem from earlier in the lecture? Over the years, this problem crept steadily down the complexity hierarchy, like a monkey from branch to branch:

• It's obvious that primality-testing is in coNP.
• In 1975, Pratt showed it was in NP.
• In 1977, Solovay, Strassen, and Rabin showed it was in coRP.
• In 1992, Adleman and Huang showed it was in ZPP.
• In 2002, Agrawal, Kayal, and Saxena showed it was in P.

The general project of taking randomized algorithms and converting them to deterministic ones is called derandomization (a name only a theoretical computer scientist could love). The history of the primality-testing problem can only be seen as a spectacular success of this project. But with such success comes an obvious question: can every randomized algorithm be derandomized? In other words, does P equal BPP? Once again the answer is that we don't know. Usually, if we don't know if two complexity classes are equal, the "default conjecture" is that they're different. And so it was with P and BPP -- (ominous music) -- until now. Over the last decade and a half, mounting evidence has convinced almost all of us that in fact P=BPP.
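For a flavor of the 1977-era randomized tests, here is a compact Miller-Rabin sketch in Python (an illustration in the same one-sided spirit as the Solovay-Strassen and Rabin tests, not a transcription of either). A "composite" verdict is always correct; a composite survives all rounds, and is thus wrongly accepted, with probability at most 4^-rounds:

```python
import random

def miller_rabin(n, rounds=20, rng=random):
    # One-sided error in the coRP style: primes are always accepted, while
    # a composite slips through with probability at most 4**(-rounds).
    if n < 2:
        return False
    for p in (2, 3, 5, 7):           # handle small factors directly
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                  # write n - 1 = d * 2**s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)  # random base
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False             # a is a witness: n is certainly composite
    return True                      # probably prime
```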
In the remaining ten minutes of this lecture, we certainly won't be able to review this evidence in any depth. But let me quote one theorem, just to give you a flavor of it: Theorem (Impagliazzo-Wigderson 1997): Suppose there exists a problem that's solvable in exponential time, and that's not solvable in subexponential time even with the help of a subexponential-size advice string. Then P=BPP. Notice how this theorem relates derandomization to nonuniformity -- and in particular, to proving that certain problems are hard for nonuniform algorithms. The premise certainly seems plausible. From our current perspective, the conclusion (P=BPP) also seems plausible. And yet the two seem to have nothing to do with each other. So, this theorem might be characterized as "If donkeys can bray, then pigs can oink." Where does this connection between randomness and nonuniformity come from? It comes from the theory of pseudorandom generators. We're gonna see a lot more about pseudorandom generators in the next lecture, when we talk about cryptography. But basically, a pseudorandom generator is just a function that takes as input a short string (called the seed), and produces as output a long string, in such a way that, if the seed is random, then the output looks random. Obviously the output can't be random, since it doesn't have enough entropy: if the seed is k bits long, then there are only 2^k possible output strings, regardless of how long those output strings are. What we ask, instead, is that no polynomial-time algorithm can successfully distinguish the output of the pseudorandom generator from "true" randomness. Of course, we'd also like for the function mapping the seed to the output to be computable in polynomial time. Already in 1982, Andy Yao realized that, if you could create a "good enough" pseudorandom generator, then you could prove P=BPP. Why? 
Well, suppose that for any integer k, you had a way of stretching an O(log n)-bit seed to an n-bit output in polynomial time, in such a way that no algorithm running in n^k time could successfully distinguish the output from true randomness. And suppose you had a BPP machine that ran in n^k time. In that case, you could simply loop over all possible seeds (of which there are only polynomially many), feed the corresponding outputs to the BPP machine, and then output the majority answer. The probability that the BPP machine accepts given a pseudorandom string has to be about the same as the probability that it accepts given a truly random string -- since otherwise the machine would be distinguishing random strings from pseudorandom ones, contrary to assumption! But what's the role of nonuniformity in all this? Well, here's the point: in addition to a random (or pseudorandom) string, a BPP machine also receives an input, x. And we need the derandomization to work for every x. But that means that, for the purposes of derandomization, we must think of x as an advice string provided by some superintelligent adversary for the sole purpose of foiling the pseudorandom generator. You see, this is why we had to assume a problem that was hard even in the presence of advice: because we need to construct a pseudorandom generator that's indistinguishable from random even in the presence of the "adversary," x. (That reminds me of something: why are there so many Israelis in complexity, and particularly in the more cryptographic kinds of complexity? I have a theory about this: it's because complexity is basically mathematicized paranoia. It's that field where, whenever anyone else has any choice in what to do, you immediately assume that person will do the worst possible thing to you and proceed accordingly.) To summarize: if we could prove that certain problems are sufficiently hard for nonuniform algorithms, then we would prove P=BPP.
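Here is a toy version of Yao's loop-over-seeds recipe in Python. Everything in it is illustrative and mine: the "generator" just seeds an ordinary PRNG (so it is not a real pseudorandom generator in the complexity-theoretic sense), and the "BPP machine" is a sampling-based majority tester. The shape of the derandomization is the point: enumerate all short seeds, run the machine on each stretched output, and take the majority answer, deterministically:

```python
import itertools
import random

def prg(seed_bits, out_len):
    # Toy "generator": stretch a short seed into a long bit string by
    # seeding an ordinary PRNG.  Illustrative only, nothing pseudorandom
    # is being proved here.
    rng = random.Random(int(''.join(map(str, seed_bits)), 2))
    return [rng.randint(0, 1) for _ in range(out_len)]

def randomized_majority_test(x, r_bits, k=15):
    # Toy randomized machine: sample k positions of x named by chunks of
    # the random string r_bits, and guess "majority ones" from the sample.
    chunk = len(x).bit_length()
    ones = 0
    for i in range(k):
        bits = r_bits[i * chunk:(i + 1) * chunk]
        ones += x[int(''.join(map(str, bits)), 2) % len(x)]
    return ones * 2 > k

def derandomized_majority_test(x, seed_len=8, k=15):
    # Yao's recipe: loop over every seed, feed each stretched output to
    # the machine, and output the majority answer.  Deterministic.
    chunk = len(x).bit_length()
    yes = sum(randomized_majority_test(x, prg(seed, k * chunk), k)
              for seed in itertools.product([0, 1], repeat=seed_len))
    return yes * 2 > 2 ** seed_len

print(derandomized_majority_test([1] * 90 + [0] * 10))  # True
print(derandomized_majority_test([0] * 90 + [1] * 10))  # False
```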
This leads to my third difference between BPP and BQP: while most of us believe that P=BPP, most of us certainly don't believe that P=BQP. (Indeed we can't believe that, if we believe factoring is hard for classical computers.) We don't have any "dequantization" program that's been remotely as successful as the derandomization program. Once again, it would seem there's a crucial difference between quantum theory and classical probability theory, which allows certain ideas (like those of Sipser-Gács-Lautemann, Adleman, and Impagliazzo-Wigderson) to work for the latter but not for the former. Incidentally, over the last few years, Kabanets, Impagliazzo, and others managed to obtain a sort of converse to the derandomization theorems. What they've shown is that, if we want to prove P=BPP, then we'll have to prove that certain problems are hard for nonuniform algorithms. This could be taken as providing some sort of explanation for why, assuming P=BPP, no one has yet managed to prove it. Namely, it's because if you want to prove P=BPP, then you'll have to prove certain problems are hard -- and if you could prove those problems were hard, then you would be (at least indirectly) attacking questions like P versus NP. In complexity theory, pretty much everything eventually comes back to P versus NP.

Puzzles for Thursday

1. You and a friend want to flip a coin, but the only coin you have is crooked: it lands heads with some fixed but unknown probability p. Can you use this coin to simulate a fair coin flip? (I mean perfectly fair, not just approximately fair.)

2. n people are standing in a circle. They're each wearing either a red hat or a blue hat, assigned uniformly and independently at random. They can each see everyone else's hats but not their own. They want to vote on whether the number of red hats is even or odd. Each person votes at the same time, so that no one's vote depends on anyone else's. What's the maximum probability with which the people can win this game?
(By "win," I mean that their vote corresponds to the truth.) Assume for simplicity that n is odd.
Raindrops and the Doppler Effect Editor's note: As part of the preparations for the upcoming Marine ARM GPCI Investigations of Clouds (MAGIC) field campaign, principal investigator Ernie Lewis discusses how radars use the Doppler effect to determine raindrop sizes and speeds. Most of us know that the Doppler effect pertains to the change in frequency of a wave emitted by or scattered from a moving object. Our familiarity with this phenomenon is predominantly with sound waves, but the effect is the same for any wave. When a siren, for example, is moving toward us, the pitch (i.e., the frequency of the sound) is greater, whereas when it is moving away from us the pitch is lower—this is the Doppler effect in a nutshell. The amount by which the pitch is greater or lower, called the Doppler shift, is related to the speed of the object and to the speed of sound. Similarly, for radars, the amount by which the frequency of the radio waves reflected from a moving object changes depends on the speed of the object and the speed of propagation of radio waves, which is the speed of light. Radio waves consist of oscillations that occur a given number of times every second, which by definition is the frequency of the wave. Each of these oscillations propagates at the speed of light toward the receiver, where they will be detected at a later time that is determined by the distance to the object and the speed of light. Because all oscillations travel the same distance and at the same speed from the object to the receiver, the receiver detects the same number of oscillations every second as are being created by the object. In other words, it detects the wave at the same frequency at which it was emitted. For the situation in which the object is moving toward the radar receiver, the same number of oscillations is being created every second, but each successive oscillation occurs closer to the receiver, and takes less time to travel to the receiver than the previous one. 
As the motion of the object toward the radar results in more oscillations being received by the radar every second, the frequency is higher. If the object is moving away from the radar the oscillations will be received less often, and the frequency will be lower.

How big are raindrops?

The Doppler effect is employed by the ARM radars to determine the sizes of raindrops. This may at first seem puzzling, as the magnitude of the Doppler effect depends on the speed of an object, not its size. The speed at which a drop is moving toward or away from the radar might not be the same as the speed at which it would normally fall because of updrafts and downdrafts in clouds (and in the atmosphere in general). In the simplest case, the Doppler signal measured by a vertically pointing radar consists of frequency shifts, each shift corresponding to a given speed. By employing the relation between this speed and raindrop size, the Doppler signal can be related to the raindrop sizes.

How fast do water drops fall?

For drops near the surface of the Earth, the following approximate values will give an idea of the speeds involved.

• The terminal velocity of a cloud drop, with typical diameter 20 millionths of a meter (approximately one thousandth of an inch), is one centimeter (~1/2 inch) per second.
• For drops comprising drizzle, which are perhaps ten times as large, it is 3/4 of a meter (2 feet) per second.
• Small raindrops, with diameters of one millimeter, fall at 4 meters (13 feet) per second, and large raindrops, with diameters of 5 millimeters, fall at 9 meters (30 feet) per second (20 mph).

Another way to look at this is to consider the times required to fall (in still air) a distance of ten meters, the height of a three-story building. Approximate values are fifteen minutes for cloud drops, fifteen seconds for drizzle drops, two seconds for small raindrops, and one second for large raindrops.
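To put numbers on the shifts involved: the standard non-relativistic relation for a radar's two-way (transmit plus reflect) Doppler shift is Δf = 2·v·f0/c. A quick sketch follows; the 35 GHz operating frequency below is an assumed example, not a value taken from this article:

```python
C = 299_792_458.0  # speed of light, m/s

def radar_doppler_shift_hz(f0_hz, radial_speed_m_s):
    # Two-way Doppler shift for a radar target at the given radial speed.
    # Positive speed means "toward the radar"; non-relativistic
    # approximation, more than adequate for raindrops.
    return 2.0 * radial_speed_m_s * f0_hz / C

# a large raindrop falling at 9 m/s away from a vertically pointing
# 35 GHz radar shifts the returned frequency by about 2.1 kHz
print(radar_doppler_shift_hz(35e9, -9.0))  # about -2101 Hz
```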
Not only do we know the relation between raindrop size and terminal velocity, we also know how strongly raindrops of a given size reflect radio waves back to the radar. This information means that from the strength of the Doppler signal at a given frequency shift we can determine how many raindrops of the corresponding size are in the volume of air sampled by the radar. The sizes of the raindrops, plus the number of drops of each size, comprise an important quantity in meteorology known as the drop size distribution (DSD). If the DSD is known, we can calculate the rainfall rate, as we know how much water is in each size of raindrop, how many raindrops of each size there are, and how fast drops of each size are falling. --Ernie Lewis, MAGIC principal investigator
A centrifuge, r = 0.06 cm, revolves at 24000 rpm. If its tensile strength is 120000 N, what is the maximum mass that it can hold?

The centrifuge rotates at 24000 rpm. The radius of the centrifuge is 0.06 cm = 0.0006 m. A mass is held with a tensile strength of 120000 N. The maximum mass that can be held has to be determined.

The speed of rotation of the centrifuge in terms of rad/s is (24000*2*pi)/60 = 2513.27412 rad/s. The centrifugal force acting on a mass M is given by M*0.0006*(2513.27412)^2 = 3789.92*M N. Equating the centrifugal force acting on the mass to the tensile strength gives:

M*0.0006*(2513.27412)^2 = 120000
=> M = 31.66 kg

The maximum mass that can be held in the centrifuge is 31.66 kg.
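The same computation as a small Python function (a sketch of the steps above; the function name is mine):

```python
import math

def max_mass_kg(radius_m, rpm, max_tension_n):
    # The tension supplies the centripetal force F = m * r * omega**2,
    # so the largest mass is m = F_max / (r * omega**2).
    omega = rpm * 2.0 * math.pi / 60.0   # revolutions per minute -> rad/s
    return max_tension_n / (radius_m * omega ** 2)

print(max_mass_kg(0.0006, 24000, 120000))  # about 31.66 kg, as derived above
```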
Covering points with a convex hull

Consider a set of $n$ points $x_1,\ldots,x_n \in \mathbb{R}^d$, for some $n \gg d$. Suppose $\{x_1,\ldots,x_n\} \subset C \subset \mathbb{R}^d$. Say that a set of points $y_1,\ldots,y_m \in C$ covers $x_1,\ldots,x_n$ if $x_1,\ldots,x_n$ lie inside the convex hull of $y_1,\ldots,y_m$. Are there known conditions under which a set of $n$ points has a cover of size $m$ in some set $C$, for $m$ much smaller than $n$? Clearly, if $C$ contains the $\ell_\infty$ ball of radius $\max_i ||x_i||_1$, then there is a cover of size at most $d$ (i.e. a scaling of the simplex) -- but what if $C$ is smaller? e.g. what if $C$ only contains the $\ell_\infty$ ball of radius $\sqrt{\max_i ||x_i||}$? More generally, is there a literature that studies these kinds of problems?

One term that will lead you to related literature is the notion of a core set. E.g., P. K. Agarwal, S. Har-Peled, and K. R. Varadarajan. "Geometric approximation via core-sets." In J. E. Goodman, J. Pach, and E. Welzl, editors, Combinatorial and Computational Geometry, volume 52 of MSRI Publications. Cambridge University Press, 2005. – Joseph O'Rourke Apr 24 '13 at 13:53
finding smallest and largest number given n binary bits

December 10th 2011

Given n bits that can hold 0s or 1s, what are the largest and smallest integers they can hold? I know the formula $min=-2^{n-1}, max=2^{n-1}-1$, but is that only for numbers stored using two's complement? Is there a formula for one's complement and signed magnitude? If a number is unsigned then it would be 2^n, right?

Re: finding smallest and largest number given n binary bits

December 11th 2011

"An n-bit ones' complement numeral system can represent integers in the range $-(2^{n-1}-1)$ to $2^{n-1}-1$" (Wikipedia). For the sign-and-magnitude method, the interval is the same because n - 1 bits represent the absolute value. The largest representable unsigned integer is not $2^n$. Consider the example when n = 3: the largest integer is $111_2 = 2^3 - 1 = 7$.
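The ranges discussed in this thread can be collected into a small Python function (the names are mine; n >= 1 is assumed):

```python
def n_bit_ranges(n):
    # Smallest and largest integers representable with n bits under the
    # common encodings; assumes n >= 1.
    return {
        "unsigned": (0, 2 ** n - 1),
        "two's complement": (-(2 ** (n - 1)), 2 ** (n - 1) - 1),
        "ones' complement": (-(2 ** (n - 1) - 1), 2 ** (n - 1) - 1),
        "sign-magnitude": (-(2 ** (n - 1) - 1), 2 ** (n - 1) - 1),
    }

print(n_bit_ranges(8))
# unsigned 0..255, two's complement -128..127, the other two -127..127
```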
Play Basel II Accord with SAS (2): portfolio simulation

Although Basel II relies largely on probability rather than the generalized linear models that SAS is especially good at, SAS' excellent data manipulation and visualization features still make it one of the finest tools for exploring and implementing this accord. Paragraphs 403 to 409 of Basel II - Pillar 1 list the requirements for a functioning grading structure; for example, at least seven grading levels are needed. The internal rating-based (IRB) approaches utilize a bank's own rating structure to estimate the risk weights. To discover the impact of an arbitrary grading structure on the portfolio-wise capital requirement, I simulated a credit portfolio of 2000 borrowers. Other settings are 45% loss given default (LGD), 2.5-year maturity and a 7-level grading structure 0-0.05-0.08-0.15-0.5-2-15, exactly as used by Gunter and Peter. The histogram of this simulated portfolio over probability of default (PD) and exposure at default (EAD) is displayed above.

data simu;
   format lgd pd percent8.2 ead dollar8.;
   lgd = 0.45;
   m = 2.5;
   do i = 1 to 2000;
      pd = rand('EXPO') * 1.8 / 100;
      ead = rand('UNIFORM') * 300 + 700;
      output;
   end;
   drop i;
   label lgd = 'Loss given default' pd = 'Probability of default'
         ead = 'Exposure at default' m = 'Maturity';
run;

ods html gpath = 'c:\tmp\' style = money;
ods graphics on;
proc kde data = simu;
   bivar pd ead / plots = histsurface;
run;
ods graphics off;
ods html close;

data grdstr01;
   grdstr = '0-0.05-0.08-0.15-0.5-2-15';
   informat lowbound 8.4;
   do i = 1 to 7;
      lowbound = scan(grdstr, i, '-') / 100;
      output;
   end;
   call symput('grdstr', grdstr);
   keep lowbound;
run;

data grdstr02;
   merge grdstr01 grdstr01(firstobs=2 rename=(lowbound=uppbound));
   if missing(uppbound) = 1 then uppbound = 1;
   grade = _n_;
   ratio = uppbound - lowbound;
run;

proc sql noprint;
   select cats(lowbound, '-<', uppbound, '=', grade) into :fmtvalue separated by ' '
   from grdstr02;
quit;

A grading format and the capital requirement function were built.
Then the portfolio was graded and the by-grade required capitals were calculated. The classified results are shown above. Overall, the weighted portfolio-wise capital requirement is 7.95%.

proc format;
   value gradefmt &fmtvalue;
run;

proc fcmp outlib = work.myfunclib.finance;
   function reqcap(pd, lgd, m);
      corr = 0.12*(1-exp(-50*pd))/(1-exp(-50)) + 0.24*(1-(1-exp(-50*pd))/(1-exp(-50)));
      mtradj = (0.11852 - 0.05478 * log(pd))**2;
      return( (lgd * probnorm((probit(pd) + corr**0.5 * probit(0.999)) / (1-corr)**0.5)
              - pd*lgd) * (1 + (m-2.5)*mtradj) / (1-1.5*mtradj) );
   endsub;
run;

options cmplib = (work.myfunclib);

proc sql noprint;
   create table _tmp01 as
      select *, put(pd, gradefmt.) as grade from simu;
   select count(*) into: totalno from _tmp01;
   select avg(pd) into: totalpd from _tmp01;
   create table _tmp02 as
      select distinct grade, count(grade) / &totalno as proppd, avg(pd) as grouppd,
             reqcap(calculated grouppd, lgd, m) as groupcr,
             (calculated proppd * calculated grouppd) / &totalpd as propdft
      from _tmp01 group by grade;
   create table _tmp03 as
      select a.*, b.groupcr as cr,
             b.groupcr*a.ead as expcr 'EAD*Required Capital' format = dollar8.2
      from _tmp01 as a left join _tmp02 as b on a.grade = b.grade;
   select sum(expcr) / sum(ead) into: totalcr from _tmp03;
quit;

ods html gpath = 'c:\tmp\' style = money;
proc sgpanel data = _tmp03 noautolegend;
   title "The portfolio-wise required capital is %sysfunc(putn(&totalcr, percent8.2))";
   title2 "By a grading structure &grdstr";
   panelby grade / columns = 4 rows = 2;
   needle x = pd y = expcr;
   rowaxis grid;
run;
ods html close;

The required capital corresponding to each grade was demonstrated in a tile plot. The size of the subplots indicates the counts of borrowers at individual grades. Obviously, borrowers are concentrated in a few grades, such as 5 and 6, which contradicts the requirement of Paragraph 406.
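The PROC FCMP step above implements the Basel II IRB capital-requirement formula. As a cross-check, here is a sketch of the same formula in plain Python; it uses only the standard library (NormalDist supplies probnorm/probit equivalents), and the PD/LGD/maturity values in the example call are illustrative, not taken from the post.

```python
from math import exp, log
from statistics import NormalDist

_N = NormalDist()  # standard normal: _N.cdf ~ SAS probnorm, _N.inv_cdf ~ SAS probit

def reqcap(pd: float, lgd: float, m: float) -> float:
    """Basel II IRB capital requirement K (as a fraction of EAD),
    mirroring the SAS reqcap() function above."""
    # Asset correlation, interpolated between 0.12 and 0.24 by PD
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    corr = 0.12 * w + 0.24 * (1 - w)
    # Maturity adjustment b(PD)
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    # Conditional expected loss at the 99.9% systemic quantile, minus expected loss
    k = lgd * _N.cdf((_N.inv_cdf(pd) + corr ** 0.5 * _N.inv_cdf(0.999))
                     / (1 - corr) ** 0.5) - pd * lgd
    return k * (1 + (m - 2.5) * b) / (1 - 1.5 * b)

# Illustrative borrower: PD = 1%, LGD = 45%, 2.5-year maturity
print(f"K = {reqcap(0.01, 0.45, 2.5):.4f}")
```

For such a borrower this gives K of roughly 7.4% of EAD, the same order as the 7.95% portfolio-wise figure reported above.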
proc sql;
   create table _tmp04 as
      select distinct grade, cr 'Required capital', count(grade) as count
      from _tmp03 group by grade;
quit;

ods html gpath = 'c:\tmp\' style = harvest;
goptions device=javaimg ftitle="arial/bold" ftext="arial"
         htitle=.15in htext=.2in xpixels=600 ypixels=500;
proc gtile data = _tmp04;
   tile count tileby = (grade, count) / colorvar = cr;
run;
quit;
ods html close;

Furthermore, we can use the area under the curve (AUC), or the Gini coefficient, to evaluate this grading structure. For this portfolio, the Gini coefficient under such a grading structure is 0.4296, which is pretty low and may suggest that the portfolio and the grading structure do not match each other.

data _tmp05;
   if _n_ = 1 then do;
      sum_proppd = 0; sum_propdft = 0; dif_dft = 1; dif_pd = 1;
   end;
   set _tmp02;
   retain sum_proppd sum_propdft;
   sum_proppd = sum_proppd + proppd;
   sum_propdft = sum_propdft + propdft;
   dif_dft = max(0, 1 - sum_propdft);
   dif_pd = max(0, 1 - sum_proppd);
run;

proc iml;
   use _tmp05;
   read all var{dif_pd dif_dft};
   start TrapIntegral(x,y);
      call sort(x,1);
      call sort(y,1);
      N = nrow(x);
      dx = x[2:N] - x[1:N-1];
      meanY = (y[2:N] + y[1:N-1])/2;
      return( dx` * meanY );
   finish;
   area = TrapIntegral(dif_pd, dif_dft);
   gini = (area - 0.5) / (&totalpd / 2 + (1 - &totalpd) - 0.5);
   call symput('area', left(char(area)));
   call symput('gini', left(char(gini)));
quit;

ods html gpath = 'c:\tmp\' style = money;
title;
proc sgplot data = _tmp05;
   series x = dif_pd y = dif_dft;
   scatter x = dif_pd y = dif_dft;
   band upper = dif_dft lower = 0 x = dif_pd / transparency=.5;
   xaxis grid label = 'Ratio of observations';
   yaxis grid label = 'Ratio of default';
   inset "GINI Coefficient is %sysfunc(putn(&gini, 8.4))" / position = topleft border;
   keylegend "scatter";
run;
ods html close;

In conclusion, the grading structure has a significant impact on the required capital. Optimization of the grading structure can be realized by the method of random walk, which will be discussed next.
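The TrapIntegral module in the IML step is just the trapezoidal rule. A minimal Python equivalent (standard library only; the sample points below are made up for illustration, and the Gini normalisation copies the post's formula):

```python
def trap_integral(xs, ys):
    """Trapezoidal-rule area under the piecewise-linear curve through the
    (x, y) points, sorted by x -- the same idea as the SAS/IML TrapIntegral."""
    pts = sorted(zip(xs, ys))
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

def gini(area, total_pd):
    """Gini coefficient from the area under the CAP-style curve,
    normalised as in the post: (area - 0.5) / (pd/2 + (1 - pd) - 0.5)."""
    return (area - 0.5) / (total_pd / 2 + (1 - total_pd) - 0.5)

# Illustrative curve: the diagonal y = x on [0, 1] has area 0.5,
# so the resulting Gini coefficient is 0 (no discriminatory power).
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [0.0, 0.25, 0.5, 0.75, 1.0]
print(trap_integral(xs, ys))  # 0.5
```

A perfect grading structure would push the curve toward the top-left corner, raising the area above 0.5 and the Gini toward 1.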
{"url":"http://www.sasanalysis.com/2011/07/play-basel-ii-accord-with-sas-2.html","timestamp":"2014-04-21T15:13:09Z","content_type":null,"content_length":"89562","record_id":"<urn:uuid:f4612237-dacc-4c3c-967f-bef0f3ddeaa7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Reinhold Blumel: Teaching and Research

Topics: Paul Trap | Dynamic Kingdon Trap | Ray Splitting | Quantum Graphs | Microwave Ionization | Chaotic Scattering | Chaos in Atomic Physics | Finite Quantum Square Well

The quest for manipulating matter at the atomic scale is currently one of the central directions in research and technology. Not so long ago the founders of quantum mechanics thought that experimenting with atoms and ions was fundamentally impossible. Well, they were wrong. But it was only fairly recently, well after the final formulation of nonrelativistic quantum theory in the mid-1920s, that first individual electrons and then individual ions and atoms were trapped in electrodynamical devices called atom and ion traps. My research focuses on (i) the design of novel electrostatic and electrodynamic particle traps and (ii) the investigation of the dynamics of trapped neutral and charged particles.

Paul Trap

• R. Blümel, J. M. Chen, E. Peik, W. Quint, W. Schleich, Y. R. Shen and H. Walther, Phase Transitions of Stored Laser-Cooled Ions, Nature 334, 309–313 (1988).
• R. Blümel, C. Kappler, W. Quint and H. Walther, Chaos and Order of Laser Cooled Ions in a Paul-Trap, Phys. Rev. A 40, 808–823 (1989).
• R. Blümel, Hetero Charged Ion Clusters in a Paul Trap, Z. Phys. D 16, 293–297 (1990).
• R. Blümel, E. Peik, W. Quint, and H. Walther, Phase Transitions of Stored Laser-Cooled Ions, Acta Physica Polonica A 78, 419–432 (1990).
• R. Blümel, On the Integrability of the Two-ion Paul-Trap in the Pseudo Potential Approximation, Phys. Lett. A 174, 174–175 (1993).
• R. Blümel, Comment on Regular and Chaotic Motions in Ion Traps: A Nonlinear Analysis of Trap Equations, Phys. Rev. A 48, 854–855 (1993).
• M. Moore and R. Blümel, Quantum Manifestations of Order and Chaos in the Paul-Trap, Phys. Rev. A 48, 3082–3091 (1993).
• J. W. Emmert, M. Moore and R. Blümel, Prediction of a deterministic melting transition of two-ion crystals in a Paul trap, Phys. Rev. A 48, 1757–1760 (1993).
• M. G. Moore and R.
Blümel, Prediction of an Alignment Transition Region of Two-ion Crystals in a Paul Trap, Phys. Rev. A 50, R4453–R4456 (1994).
• R. Blümel, Cooling-induced melting of ion crystals in a Paul trap, Phys. Rev. A 51, 620–624 (1995).
• R. Blümel, An introduction to chaos in dynamic ion traps, Physica Scripta T 59, 126–130 (1995).
• R. Blümel, Nonlinear Dynamics of Trapped Ions, Physica Scripta T 59, 369–379 (1995).
• M. G. Moore and R. Blümel, An Improved Pseudo Potential for the Two-Ion Paul Trap, Physica Scripta T 59, 429–433 (1995).
• M. G. Moore and R. Blümel, Prediction of Deterministic Melting Regions of Two and Three Laser-cooled Ions in a Paul Trap, Physica Scripta T 59, 434–437 (1995).
• R. Alheit, X. Z. Chu, M. Hoefer, M. Holzki, G. Werth, and R. Blümel, Nonlinear Collective Oscillations of an Ion Cloud in a Paul Trap, Phys. Rev. A 56, 4023–4031 (1997).
• M. A. N. Razvi, X. Z. Chu, R. Alheit, G. Werth and R. Blümel, Fractional frequency collective parametric resonances of an ion cloud in a Paul trap, Phys. Rev. A 58, R34–R37 (1998).
• B. Reusch and R. Blümel, Crystallized Vortex Crystals, Eur. Phys. J. D 3, 123–127 (1998).
• V. I. Savichev and R. Blümel, Squeezing close to the stability boundaries of the Paul trap, Phys. Lett. A 309, 211–214 (2003).
• I. Garrick-Bethell, Th. Clausen, and R. Blümel, Universal instabilities of radio-frequency traps, Phys. Rev. E 69, 056222 (2004), pp. 1–15.

Dynamic Kingdon Trap

Ray Splitting

Quantum Graphs

Microwave Ionization

Chaotic Scattering

• Y. Dabaghian, R. V. Jensen and R. Blümel, Exact trace formulas for a class of one-dimensional ray-splitting systems, Phys. Rev. E 63, 066201, pp. 1–6 (2001).
• R. Blümel and Y. Dabaghian, Combinatorial identities for binary necklaces from exact ray-splitting trace formulas, J. Math. Phys. 42, 5832–5839 (2001).
• Yu. Dabaghian, R. V. Jensen and R. Blümel, One-dimensional quantum chaos: Explicitly solvable cases, Pis’ma Zh. Éksp. Teor. Fiz. 74, 258–262 (2001); JETP Lett.
74, 235–239 (2001).
• R. Blümel, Yu. Dabaghian and R. V. Jensen, Explicitly solvable cases of one-dimensional quantum chaos, Phys. Rev. Lett. 88, 044101 (2002).
• R. Blümel, Yu. Dabaghian and R. V. Jensen, Exact, convergent periodic-orbit expansions of individual energy levels of regular quantum graphs, Phys. Rev. E 65, 046222, 1–10 (2002).
• Yu. Dabaghian, R. V. Jensen and R. Blümel, Spectra of regular quantum graphs, J. Exp. Theor. Phys. 94, 1201–1215 (2002); Zh. Exp. Teor. Fiz. 121, 1399–1414 (2002).
• Yu. Dabaghian and R. Blümel, Solution of scaling quantum networks, Pis’ma v ZhETF 77, 629–632 (2003); JETP Lett. 77, 530–533 (2003).
• Yu. Dabaghian and R. Blümel, Explicit analytical solution for scaling quantum graphs, Phys. Rev. E 68, 055201(R) (2003), pp. 1–4.
• Yu. Dabaghian and R. Blümel, Explicit spectral formulas for scaling quantum graphs, Phys. Rev. E 70, 046206 (2004), pp. 1–16.
• A. S. Bhullar, R. Blümel, and P. M. Koch, Ray splitting with ghost orbits: explicit, analytical and exact solution for spectra of scaling step potentials with tunneling, J. Phys. A: Math. Gen. 38, L563–L569 (2005).
• A. S. Bhullar, R. Blümel, and P. M. Koch, Ghost orbit spectroscopy, Phys. Rev. E (2006), in press.
• R. Blümel, Comment on ‘Quantum chaos in elementary quantum mechanics’ [Eur. J. Phys. 26 (2005) 423–439] by Yu Dabaghian and R. Jensen, Eur. J. Phys. 27, L1–L4 (2006).

Finite Quantum Square Well

SUPER COMPUTING WITH RECYCLED PC’s
{"url":"http://rblumel.faculty.wesleyan.edu/","timestamp":"2014-04-17T18:23:49Z","content_type":null,"content_length":"18216","record_id":"<urn:uuid:e93d95a3-9155-4ba4-9e11-a0c87e7222a1>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Organizations

American Mathematical Society
Founded in 1888 to further mathematical research and scholarship, the American Mathematical Society fulfills its mission through programs and services that promote mathematical research and its uses, strengthen mathematical education, and foster awareness and appreciation of mathematics and its connections to other disciplines and to everyday life.

The American Statistical Association
The American Statistical Association (ASA) is a scientific and educational society founded in 1839 with the following mission: to promote excellence in the application of statistical science across the wealth of human endeavor.

The Association for Women in Mathematics
The purpose of the Association for Women in Mathematics is to encourage women and girls to study and to have active careers in the mathematical sciences, and to promote equal opportunity and the equal treatment of women and girls in the mathematical sciences. Have fun investigating our web pages!

Institute of Mathematical Statistics
The IMS is an international professional and scholarly society devoted to the development, dissemination, and application of statistics and probability. The Institute currently has about 4,000 members in all parts of the world.

The Mathematical Association of America
The Mathematical Association of America is the largest professional society that focuses on mathematics accessible at the undergraduate level. Our members include university, college, and high school teachers; graduate and undergraduate students; pure and applied mathematicians; computer scientists; statisticians; and many others in academia, government, business, and industry. We welcome all who are interested in the mathematical sciences.

Mathjobs.org
Mathjobs.org is an automated job application system, sponsored by the American Mathematical Society. We welcome all job applicants with advanced degrees in Mathematics.
The system is free for job applicants.

National Institute of Statistical Sciences
NISS was established in 1991 by the national statistics societies and the Research Triangle universities and organizations, with the mission to identify, catalyze and foster high-impact, cross-disciplinary research involving the statistical sciences.

National Council of Teachers of Mathematics
NCTM is the world’s largest organization dedicated to improving math education, serving over 100,000 members and more than 240 Affiliates.

Society for Industrial and Applied Mathematics
Whether you are a student considering a career in mathematics, or an established mathematician, you will find the job-search and career information resources in this site invaluable.

The Society of Actuaries
The Society of Actuaries is an educational, research and professional organization dedicated to serving the public and Society members. The Society’s vision is for actuaries to be recognized as the leading professionals in the modeling and management of financial risk and contingent events.
{"url":"http://www.csuohio.edu/sciences/dept/mathematics/undergraduate/career_links.html","timestamp":"2014-04-18T01:50:49Z","content_type":null,"content_length":"11814","record_id":"<urn:uuid:03a2cf1b-c5c1-412a-99da-0a7a197b819d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Killer Sudoku

Killer Sudoku Solving Strategies

There are three basic methods used to solve killer sudoku puzzles. The first is to use the strategies for solving regular sudoku puzzles. The second is to consider the different ways that a sum can be created. The third is to consider the total value of a region. Here we outline the basic strategies and then show how they are applied in a sample puzzle. At a later date we will post more complex strategies. (The terminology used on this page is defined on the rules page.)

Basic Solving Strategies

The following are the basic rules used to solve killer sudokus.

Rule of 1
This comes directly from the definition of sudoku. No region can contain any duplicate digits. In a sudoku region each digit appears exactly once. For example, if a digit appears in a row, it cannot be in any other cell in the row. Likewise, each digit can appear in a cage only once. If a digit is in a cage, it cannot appear in that cage again.

Rule of Necessity
This rule can be applied to sudoku regions (i.e., row, column, or nonet) or to a cage. In the former case, each region must contain all the digits one to nine. Thus, if all the digits but one appear in a row, the missing digit must appear in the empty cell.

Rule of 45
Each sudoku region (i.e., row, column, or nonet) contains the digits one through nine, so each sudoku region has a total value of 45. If S is the sum of all the cages contained entirely in a region, then the cells not covered must sum to 45 - S.

Rule of K
The Rule-of-k is an extension of the Rule-of-1. If there are k cells contained entirely in a region that contain exactly k different possible values, then no other cell in that region can contain any of those k values.

Sum Elimination
This strategy examines the different possible ways of making the sum of a cage. Reducing the number of different possible ways of making a sum can often lead to a potential solution.
There are many ways of reducing the number of sums. For example, if a 2-cage has a total of 3, 4, 16, or 17, there is only one combination of values that can be used (3 = 2+1, 4 = 3+1, 16 = 9+7, and 17 = 9+8). The 3-cage totals with only one combination are: 6 = 1+2+3, 7 = 1+2+4, 23 = 9+8+6, and 24 = 9+8+7. The sum calculator found on the online player page can be very handy. More to come...

Applying the Basic Strategies: An Example

Here we will use the above strategies to solve a puzzle. You might want to print out the puzzle so you can follow along all the steps. (The step-by-step board diagrams are not reproduced here; the solution applies the following strategies in sequence: Rule of 45; Rule of necessity; Rule of 1; Unique sums; Rule of K; Rule of K; Limited possible sums; Rule of necessity; Limited possible sums; Rule of K; Limited possible sums; Rule of 1; Rule of K; Rule of necessity; Rule of 1; All possible sums; Rule of necessity; Limited possible sums; Rule of K; Rule of 1; Limited possible sums; Remaining sum; Rule of 1; Rule of K; Rule of necessity; Sum elimination; Sum elimination; Rule of 1; Range of totals; Sum elimination; Rule of 1; Rule of necessity; Remaining sum; Limited possible sums; Rule of 1; Rule of necessity; Rule of necessity; Rule of 1.)
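The unique-combination lists above are easy to verify by brute force. A short Python sketch (the helper name cage_combos is my own, not from the site):

```python
from itertools import combinations

def cage_combos(size, total):
    """All sets of `size` distinct digits 1-9 that sum to `total`."""
    return [c for c in combinations(range(1, 10), size) if sum(c) == total]

# Cage totals with exactly one possible combination force the cage's digits.
unique_2 = [t for t in range(3, 18) if len(cage_combos(2, t)) == 1]
unique_3 = [t for t in range(6, 25) if len(cage_combos(3, t)) == 1]
print(unique_2)          # [3, 4, 16, 17]
print(unique_3)          # [6, 7, 23, 24]
print(cage_combos(2, 3))  # [(1, 2)]
```

This matches the lists quoted in the text, and the same enumeration is what a "sum calculator" does for larger cages.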
{"url":"http://killersudokuonline.com/tips.html","timestamp":"2014-04-20T00:54:36Z","content_type":null,"content_length":"13459","record_id":"<urn:uuid:c481b1b5-570a-4ac5-ba52-7bb26899179a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Video Library

Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all available on demand from this Video Library and on the Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org.

It is a standard axiom of quantum mechanics that the Hamiltonian H must be Hermitian because Hermiticity guarantees that the energy spectrum is real and that time evolution is unitary. In this talk we examine an alternative formulation of quantum mechanics in which the conventional requirement of Hermiticity is replaced by the more general and physical condition of space-time reflection (PT) symmetry. We show that if the PT symmetry of H is unbroken, then the spectrum of H is real. Examples of PT-symmetric non-Hermitian Hamiltonians are $H=p^2+ix^3$ and $H=p^2-x^4$.

This is an introduction to background independent quantum theories of gravity, with a focus on loop quantum gravity and related approaches.
- Quantum Gravity, by Carlo Rovelli, Cambridge University Press 2005
- Quantum gravity with a positive cosmological constant, Lee Smolin, hep-th/0209079
- Invitation to loop quantum gravity, Lee Smolin, hep-th/0408048
- Gauge fields, knots and gravity, JC Baez, JP Muniain

Globular proteins, which act as enzymes, are a key component of the network of life. Over many decades, much experimental data has been accumulated yet theoretical progress has been somewhat limited.
We argue that the key results accumulated over the years inexorably lead to a unified framework for understanding proteins. Our framework yields predictions on the existence of a fixed menu of folds determined by geometry, the role of the amino acid sequence in selecting the native-state structure from this menu, and the propensity for amyloid formation.

I discuss the backreaction of inhomogeneities on the expansion of the universe. The average behaviour of an inhomogeneous spacetime is not given by the Friedmann-Robertson-Walker equations. The new terms in the exact equations hold the possibility of explaining the observed acceleration without a cosmological constant or new physics. In particular, the coincidence problem may be solved by a connection with structure formation.

We express the total equation-of-state parameter of a spatially flat Friedmann-Robertson-Walker universe in terms of derivatives of the redshift-dependent spin-weighted angular moments of the two-point correlation function of the three-dimensional cosmic shear. In the talk I will explain all the technical terms in the first sentence, explain how such an expression is obtained, and highlight its relevance for determining the expansion history of the universe.
{"url":"http://perimeterinstitute.ca/video-library?title=&page=658&qt-videos=0","timestamp":"2014-04-19T13:04:23Z","content_type":null,"content_length":"66865","record_id":"<urn:uuid:41038f65-b673-4470-9bbf-3dbca10ea7f4>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Identical Champions League Draw: What Were the Odds?

A number of news outlets have reported a peculiar quirk that arose during Friday’s Champions League draw. Apparently, the sport’s European governing body, UEFA, ran a trial run the day before the main event, and the schedule chosen during this event was identical to that of the actual draw on Friday. Given this strange coincidence, a number of people have been quoting the odds of this occurrence. For example, the author of this newspaper article claimed that ‘bookies’ calculated the odds at 5,000 to 1. In other words, the probability of this event was 0.0002. The same article also says that the probability of this event (two random draws being identical) occurring is not as low as one might think. However, this article does not give the probability or odds of this event occurring. The obvious reason for this is that such a calculation is difficult. Since teams from the same domestic league and teams from the same country cannot play each other, such a calculation involves using conditional probabilities over a variety of scenarios. Despite my training in mathematics and interest in quantitative pursuits, I have always struggled to calculate the probability of multiple conditional events. Given that there are many different ways in which two identical draws can be made, such a calculation is, unfortunately, beyond my admittedly limited ability. Thankfully, there’s a cheat’s way to get a rough answer: use Monte Carlo simulation. The code below shows how to write a function in R that performs synthetic draws for the Champions League under the aforementioned conditions. With this function, I performed two draws 200,000 times, and calculated that the probability of the identical draws is about 0.00011, so the odds are around 1 in 9,090.
This probability is subject to some sampling error; getting a more accurate measure via simulation would require more computing power like that enabled by Rcpp (which I really need to learn). Nevertheless, the answer is clearly lower than that proposed either by the ‘bookies’ or the newspaper article’s author.

# cl draw
dat <- read.csv("cldraw.csv")
> dat
               team iso pos group
1       Galatasaray TUR  RU     H
2           Schalke GER  WI     B
3            Celtic SCO  RU     G
4          Juventus ITA  WI     E
5           Arsenal ENG  RU     B
6            Bayern GER  WI     F
7  Shakhtar Donetsk UKR  RU     E
8          Dortmund GER  WI     D
9             Milan ITA  RU     C
10        Barcelona ESP  WI     G
11      Real Madrid ESP  RU     D
12      Man. United ENG  WI     H
13         Valencia ESP  RU     F
14              PSG FRA  WI     A
15            Porto POR  RU     A
16           Malaga ESP  WI     C

draw <- function(x){
  x0 <- x
  p <- 0
  while(p == 0){
    x <- x0
    fixtures <- matrix(NA, nrow = 8, ncol = 2)
    for(j in 1:8){
      k <- 0
      n <- 0
      while(k == 0 & n <= 50){
        n <- n + 1
        aa <- x[x[, "pos"] == "RU", ]
        t1 <- aa[sample(1:dim(aa)[1], 1), ]
        bb <- x[x[, "pos"] == "WI", ]
        t2 <- bb[sample(1:dim(bb)[1], 1), ]
        k <- ifelse(t1[, "iso"] != t2[, "iso"] & t1[, "group"] != t2[, "group"], 1, 0)
      }
      if(k == 1){
        fixtures[j, 1] <- as.character(t1[, "team"])
        fixtures[j, 2] <- as.character(t2[, "team"])
        x <- x[!(x[, "team"] %in% c(as.character(t1[, "team"]),
                                    as.character(t2[, "team"]))), ]
      }
    }
    # restart the whole draw if the greedy pairing got stuck
    p <- ifelse(sum(as.numeric(is.na(fixtures))) == 0, 1, 0)
  }
  fixtures
}

drawtwo <- function(x){
  f1 <- as.vector(unlist(x))
  joinup <- data.frame(team = f1[1:16], iso = f1[17:32],
                       pos = f1[33:48], group = f1[49:64])
  check1 <- data.frame(draw(joinup))
  check2 <- data.frame(draw(joinup))
  rightdraw <- ifelse(sum(na.omit(check1[order(check1), 2]) ==
                          na.omit(check2[order(check2), 2])) == 8, 1, 0)
  rightdraw
}

dat2 <- rbind(as.vector(unlist(dat)))
dat3 <- dat2[rep(1, 1000), ]
vals <- 0
for(i in 1:200){
  yy <- apply(dat3, 1, drawtwo)
  vals <- sum(yy) + vals
}

# Probability
> vals/200000
[1] 0.00011
# Odds
> 1/(vals/200000)-1
[1] 9089.909

7 thoughts on “Identical Champions League Draw: What Were the Odds?”

1. Well, I can’t help myself looking at your work.
I just have to provide the theoretical results to complement your simulation results and to perhaps illuminate how seemingly complex systems like this can be broken down into theoretical probabilities. There are really several answers to the odds question, and here are the summarized results:

1) If the ORDER of the draw means anything, then drawing each team in a specific order is just a serial set of independent probabilities. Just calculate the probability of each draw in order and multiply the results. These odds are extremely low (about 1 in 169,344,000 – 254,016,000, depending on whether you draw runners-up or winners first).

2) If the order is completely irrelevant, the problem becomes one of counting the allowable permutations of the matches. The odds will be 1 in [number of permutations]. The odds here are very close to the simulation results in the post (1 in 10,926). I used an inelegant, brute-force method, since the brute-force method is easier to understand for anyone reading this.

Now for the curious, hungry, and/or masochistic, here is an R script to actually explain and calculate all these probabilities:

# Load data frame of team data.
draw_Data <- read.csv("UEFA Draw.csv")

# This is what the resulting data looks like:
#   Team             Country RU_WI Group
#1  Galatasaray      TUR     RU    H
#2  Schalke          GER     WI    B
#3  Celtic           SCO     RU    G
#4  Juventus         ITA     WI    E
#5  Arsenal          ENG     RU    B
#6  Bayern           GER     WI    F
#7  Shakhtar Donetsk UKR     RU    E
#8  Dortmund         GER     WI    D
#9  Milan            ITA     RU    C
#10 Barcelona        ESP     WI    G
#11 Real Madrid      ESP     RU    D
#12 Man. United      ENG     WI    H
#13 Valencia         ESP     RU    F
#14 PSG              FRA     WI    A
#15 Porto            POR     RU    A
#16 Malaga           ESP     WI    C

get_Draw_Odds <- function(draw_Data, first_Set, second_Set){
  # Initialize array of odds for each draw.
  odds_Array <- integer(16)
  # Initialize final odds variable.
  final_Odds <- 1
  # Calculate odds for each first set draw,
  # which have independent and decreasing odds:
  # 1/8, 1/7, 1/6, etc.
  odds_Array[first_Set] <- 1 / 8:1
  # Calculate odds for each second set draw,
  # restricted to non-matching group
  # and non-matching country compared to
  # the first set draw.
  for (i in 1:8){
    RU_Country <- draw_Data$Country[first_Set[i]]
    RU_Group <- draw_Data$Group[first_Set[i]]
    # Filter list of second set to meet the criteria.
    valid_Set <- subset(draw_Data[second_Set[i:8],],
                        Country != RU_Country & Group != RU_Group)
    # Calculate odds (1/n).
    odds_Array[second_Set[i]] <- 1 / nrow(valid_Set)
  }
  # Calculate theoretical odds of the draw
  # happening in exact order.
  # This is assuming independent, serial
  # probabilities, where
  # Pr(A followed by B) = Pr(A) * Pr(B).
  for (i in 1:16){
    final_Odds <- final_Odds * odds_Array[i]
  }
  final_Odds
}

# Initialize arrays of odd and even numbers
# for use in referring to runners-up and winners.
# This is really just for my convenience.
runner_Up <- c(1,3,5,7,9,11,13,15)
winner <- c(2,4,6,8,10,12,14,16)

# Here are the results if we draw a runner-up
# and then draw a suitable group winner:
get_Draw_Odds(draw_Data, runner_Up, winner)
# final odds = 3.93676e-09
# or 1 in 254,016,000

# If we draw winners first and then match,
# we get a slightly different result:
get_Draw_Odds(draw_Data, winner, runner_Up)
# final odds = 5.90514e-09
# or 1 in 169,344,000

# Now, if we assume the draw can be performed in any
# order and we are only concerned with the final set
# of match-ups, then this actually becomes fairly simple.
# We just find the number of final permutations.
# Below is a brute-force method.
# For my own sanity, I am breaking the data
# up into runners-up and winners, since every
# runner-up must be matched with a winner.
# This is how the new data is envisioned:

# Runners-up
#   Team             Country RU_WI Group
#1  Galatasaray      TUR     RU    H
#2  Celtic           SCO     RU    G
#3  Arsenal          ENG     RU    B
#4  Shakhtar Donetsk UKR     RU    E
#5  Milan            ITA     RU    C
#6  Real Madrid      ESP     RU    D
#7  Valencia         ESP     RU    F
#8  Porto            POR     RU    A

# Winners
#   Team             Country RU_WI Group
#1  Schalke          GER     WI    B
#2  Juventus         ITA     WI    E
#3  Bayern           GER     WI    F
#4  Dortmund         GER     WI    D
#5  Barcelona        ESP     WI    G
#6  Man. United      ENG     WI    H
#7  PSG              FRA     WI    A
#8  Malaga           ESP     WI    C

# Convert all of this into a matrix of
# teams allowed to be matched.
match_Allowed <- matrix(0,8,8)
match_Allowed[1,] <- c(1,1,1,1,1,0,1,1)
match_Allowed[2,] <- c(1,1,1,1,0,1,1,1)
match_Allowed[3,] <- c(0,1,1,1,1,0,1,1)
match_Allowed[4,] <- c(1,0,1,1,1,1,1,1)
match_Allowed[5,] <- c(1,0,1,1,1,1,1,0)
match_Allowed[6,] <- c(1,1,1,0,0,1,1,0)
match_Allowed[7,] <- c(1,1,0,1,0,1,1,0)
match_Allowed[8,] <- c(1,1,1,1,1,1,0,1)

# With runners-up as the rows and winners
# as the columns, the matrix looks like this:
#     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#[1,]    1    1    1    1    1    0    1    1
#[2,]    1    1    1    1    0    1    1    1
#[3,]    0    1    1    1    1    0    1    1
#[4,]    1    0    1    1    1    1    1    1
#[5,]    1    0    1    1    1    1    1    0
#[6,]    1    1    1    0    0    1    1    0
#[7,]    1    1    0    1    0    1    1    0
#[8,]    1    1    1    1    1    1    0    1

# Create a matrix of all possible permutations.
all_Matches <- allPerms(1:8, observed=TRUE, max=50000)

# I'm going to use vector math, because it
# makes more sense to me. I'm going to convert
# the all_Matches data into 8 different identity
# matrices to represent each of the 8 runners-up.
# Then, I will use each row of the match_Allowed
# matrix as a "mask" on the matching identity
# matrix. I will use the resulting identity
# matrices as an aggregate mask on the original
# all_Matches data. Then, we will just count
# the allowable rows.

# This is a function that checks to see if
# one number is the same as another. If
# true, it returns a 1, and if false, a 0.
identity_Select <- function(identity_Number, data_Number){
  if (identity_Number == data_Number){
    return(1)
  } else {
    return(0)
  }
}

# Calculate the total number of items in
# the all_Matches matrix.
# We don't want to keep doing this.
total_Items <- length(all_Matches)

# Transfer the all_Matches data into
# individual matrices.
matches_1 <- all_Matches
matches_2 <- all_Matches
matches_3 <- all_Matches
matches_4 <- all_Matches
matches_5 <- all_Matches
matches_6 <- all_Matches
matches_7 <- all_Matches
matches_8 <- all_Matches

# Loop through every item in each matrix,
# converting it into an identity for each
# number 1 through 8. This takes a little bit
# and really isn't efficient.
for (i in 1:total_Items){
  matches_1[i] <- identity_Select(1,matches_1[i])
  matches_2[i] <- identity_Select(2,matches_2[i])
  matches_3[i] <- identity_Select(3,matches_3[i])
  matches_4[i] <- identity_Select(4,matches_4[i])
  matches_5[i] <- identity_Select(5,matches_5[i])
  matches_6[i] <- identity_Select(6,matches_6[i])
  matches_7[i] <- identity_Select(7,matches_7[i])
  matches_8[i] <- identity_Select(8,matches_8[i])
}

# Use the rows in the matches allowed matrix
# to mask the appropriate identity matrices.
for (i in 1:nrow(all_Matches)){
  matches_1[i,] <- matches_1[i,] * match_Allowed[1,]
  matches_2[i,] <- matches_2[i,] * match_Allowed[2,]
  matches_3[i,] <- matches_3[i,] * match_Allowed[3,]
  matches_4[i,] <- matches_4[i,] * match_Allowed[4,]
  matches_5[i,] <- matches_5[i,] * match_Allowed[5,]
  matches_6[i,] <- matches_6[i,] * match_Allowed[6,]
  matches_7[i,] <- matches_7[i,] * match_Allowed[7,]
  matches_8[i,] <- matches_8[i,] * match_Allowed[8,]
}

# Merge all of the identity matrices, so we don't have
# to keep track of them all.
matches_Merged <- matches_1 + matches_2 + matches_3 + matches_4 +
                  matches_5 + matches_6 + matches_7 + matches_8

# Filter disallowed matches from the all_Matches matrix.
filtered_Matches <- all_Matches * matches_Merged

# Initialize counter for the number of allowable
# permutations.
allowed_Permutations <- 0
for (i in 1:nrow(filtered_Matches)){
  if (min(filtered_Matches[i,]) > 0){
    allowed_Permutations <- allowed_Permutations + 1
  }
}

# Here is the result:
# allowed_Permutations = 10,926
# so the odds are 9.15248e-05
# or 1 in 10,926

Thanks for the comment Dinre.

2. A small correction for posterity. I noticed that the end of my R script didn’t paste in correctly and should read like so:

# Initialize counter for the number of allowable
# permutations.
allowed_Permutations <- 0
for (i in 1:nrow(filtered_Matches)){
  if (min(filtered_Matches[i,]) > 0){
    allowed_Permutations <- allowed_Permutations + 1
  }
}

Also, I re-ran the script and received a new result! I don't know what I did the first time, but something was apparently off in the permutations file. The "real" answer is half the originally posted answer: 1 in 5463. I guess the bookies were actually pretty close.

3. You may wonder why your MC simulation result differs from the analytic result (1:5463). Dinre is correct, except that there is an easier way to compute the result if you know that what you are looking for is the value of the permanent of the constraint matrix Dinre showed. Your code is correct, except that it doesn’t sample draws uniformly from the possible draw space. When you “build” a random draw you try to assign teams randomly until you find a legitimate draw, but some legitimate draws are more likely to be found than others; in other words, some initial values are more likely to later fail than others. This results in a distorted distribution of the sample.

Thanks for the comment but I’m confused. Do you mean that certain random seed numbers are more likely to fail than others, or that my function is not replicating the draw correctly?

Let’s consider an example. Suppose your objects are not as complex as the UEFA draw (the matching of 16 teams into 8 pairs under constraints): for example, increasing series of 10 integers, each chosen from between 1 and 100. Suppose you are interested in estimating the probability that the number 5 appears in a series, using the MC method. So you start building an increasing series.
You first choose the smallest, and to avoid any future conflict you draw it from 1:91; let's call it x1. Then you use your first choice to help you draw the second number from the range (x1+1):92, and x3 is chosen from (x2+1):93, and so on. Surely you'll get a random draw of an increasing series, but does this draw have the same distribution as the distribution you are trying to estimate? Because you build your object of interest sequentially, and because a choice made at step X might affect the choices that remain available at step Y (Y>X), your random draws might have a distribution that depends on how you build your objects, and not only on the kind and number of "legit" objects. In the case of increasing series, the way I built it will result in extremely high probabilities for numbers greater than 90, while in reality the probability of any number is equal. So might be the case with your drawing algorithm that builds the match sequentially. You could argue that the way you built it is exactly how the real draw goes, in which case I'll admit that your simulation is better than my (and others') formal mathematical solution.

4. I did simple direct Monte Carlo simulations on this before the draws. Here is my post dated 7.12.2012: This is a direct simulation that repeatedly creates possible pairs randomly; I run this 20M times.
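The sampling bias described above can be sidestepped with rejection sampling: draw a *complete* pairing uniformly at random, then throw it away if it breaks any constraint. Here is a sketch in Python on a made-up 4x4 constraint matrix (the real draw has 8 winners and 8 runners-up, and its actual constraint matrix is not reproduced here):

```python
import random

# Toy example: pair 4 "group winners" with 4 "runners-up" under a
# constraint matrix (1 = pairing allowed). This matrix is a made-up
# stand-in, not the actual UEFA constraints: it only forbids meeting
# the runner-up of your own group, so legal draws are derangements.
allowed = [
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]

def random_draw_uniform(allowed, rng):
    """Draw a complete pairing uniformly among *legal* pairings by
    rejection: shuffle the runners-up, accept only if every pair is
    allowed. Every legal draw is produced with equal probability."""
    n = len(allowed)
    while True:
        perm = list(range(n))
        rng.shuffle(perm)
        if all(allowed[w][perm[w]] for w in range(n)):
            return tuple(perm)

rng = random.Random(1)
counts = {}
for _ in range(20000):
    d = random_draw_uniform(allowed, rng)
    counts[d] = counts.get(d, 0) + 1

# Derangements of 4 elements: exactly 9 legal draws, each ~1/9 of samples.
print(len(counts))
```

Because every complete pairing is equally likely before rejection, every legal pairing is equally likely after it, unlike building the draw team by team.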
Ncert Solutions Class 6th Chapter 3 Playing with Numbers Exercise 3.5 Question 12 I am the smallest number, having four different prime factors. Can you find me? Ncert Solutions Class 6th Chapter 3 Playing with Numbers Exercise 3.5 Question 11 18 is divisible by both 2 and 3. It is also divisible by 2 × 3 = 6. Similarly, a number is divisible by both 4 and 6. Can we say that the number must also be divisible by 4 × 6 = 24? If not, give an example to justify your answer. This website has helped me a lot to get good marks. Thanks I like the content of this site. It helps me a lot
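Both questions can be checked with a few lines of code (Python here, purely as illustration):

```python
# Q11: divisibility by 4 and by 6 does not imply divisibility by 4*6 = 24,
# because 4 and 6 share the factor 2. n = 12 is a counterexample.
n = 12
assert n % 4 == 0 and n % 6 == 0 and n % 24 != 0

# Q12: the smallest number with four different prime factors is the
# product of the four smallest primes, 2 * 3 * 5 * 7.
def distinct_prime_factors(n):
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

smallest = next(n for n in range(2, 1000) if len(distinct_prime_factors(n)) == 4)
print(smallest)  # 210 = 2 * 3 * 5 * 7
```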
Tricky Substitution Equation Trick brain teasers appear difficult at first, but they have a trick that makes them really easy. Category: Trick Submitted By: eighsse Karl and his friend Larry are always pulling trick brain teasers on each other. Larry has been getting the best of Karl all too often lately, so Karl really wants to get him back. He comes up with a good one, and writes the following on a piece of paper: " YXZ - YXY = Y" He shows it to Larry, and says, "Each different letter in this equation stands for a different digit. All instances of a given letter stand for the same digit. There are multiple true solutions, but what is the greatest digit that Y can stand for in a true solution?" Larry scans the equation over and over. He can't come up with any way that the answer could not be 4. "The answer must be four," he says, with confidence. Karl smiles and replies, "Sorry, you're incorrect." What is the correct answer?
North Aurora Algebra Tutor ...I am currently pursuing my Teaching Certification from North Central College. I have assisted in Pre-Algebra, Algebra, and Pre-Calculus classes. I have also tutored Geometry and Calculus 7 Subjects: including algebra 1, algebra 2, geometry, trigonometry ...I have been teaching college composition, which includes a reading and literary analysis component, for six years. I continue to write and read as hobbies. I earned a bachelor's degree in English and a master's degree in Literature and Rhetoric. 17 Subjects: including algebra 1, algebra 2, reading, English ...My favorite subjects are Maths and Science(Physics preferably) and I love to teach. I would be willing to teach students to get ahead with their Microsoft Skills(Word, Xl).I have experience taking variety of Aptitude and Multiple Choice exams. I am from India and came to this country 10 years back. 6 Subjects: including algebra 1, algebra 2, chemistry, geometry ...However, I have spent the last two-and-a-half years at Troy Middle School in Plainfield as a substitute teacher and home bound tutor. I come highly recommended by my administrators and fellow mathematics teachers, one of whom is the accelerated mathematics teacher for the school. In addition to... 5 Subjects: including algebra 2, algebra 1, geometry, prealgebra ...Students tell me I am quite good at counseling -- better than their actual school counselors in some cases. I have a minor in Sociology, which is intricately related to Anthropology. I took several anthropology courses to fulfill a requirement for the Sociology minor. 41 Subjects: including algebra 1, reading, chemistry, English
Named after Rudolf Lipschitz.

Lipschitz (not comparable)

1. (mathematics) (Of a real-valued real function $f$) Such that there exists a constant $K$ such that whenever $x_1$ and $x_2$ are in the domain of $f$, $|f(x_1)-f(x_2)|\leq K|x_1-x_2|$.

Derived terms
• Lipschitz condition
• Lipschitz constant
• Lipschitz continuity
• Lipschitz continuous

Last modified on 19 June 2013, at 20:45
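The definition is easy to probe numerically. A sampled check (not a proof, just an illustration of the inequality): sin is 1-Lipschitz because its slope never exceeds 1 in absolute value, while sqrt fails near 0, where its slope blows up.

```python
import math

def is_lipschitz_on_samples(f, K, xs):
    """Check |f(x1) - f(x2)| <= K |x1 - x2| on all sample pairs.
    A small tolerance absorbs floating-point rounding."""
    return all(abs(f(a) - f(b)) <= K * abs(a - b) + 1e-12
               for a in xs for b in xs)

xs = [i / 10 for i in range(-50, 51)]
print(is_lipschitz_on_samples(math.sin, 1.0, xs))   # True: |cos x| <= 1
# sqrt violates the condition with K = 1 near zero:
print(is_lipschitz_on_samples(math.sqrt, 1.0, [i / 100 for i in range(1, 101)]))
```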
Averages of singular series, or: when Poisson is everywhere

Here is another post where the mediocre mathematical abilities of HTML will require inserting images with some TeX-produced text…

I have recently posted on my web page a preprint concerning some averages of "singular series" (another example of pretty bad mathematical terminology…) arising in the prime k-tuple conjecture, and its generalization the Bateman-Horn conjecture. The reason for looking at this is a result of Gallagher which is important in the original version of the proof by Goldston-Pintz-Yildirim that there are infinitely many primes p for which the gap q-p between p and the next prime q is smaller than ε times the average gap, for arbitrarily small ε>0. This result refers to the behavior, on average over h=(h_1,…,h_k), of the constant S(h) which is supposed to be the leading coefficient in the conjectured asymptotic

|{n < X : n+h_i is prime for i=1,…,k}| ~ S(h) X (log X)^{-k}

Gallagher showed that the average value of S(h) is equal to 1, and I've extended this in two ways…

One Response to "Averages of singular series, or: when Poisson is everywhere"

1. Dear Emmanuel, an almost-solution to the latex problem is including the following HTML

<script src='http://math.etsu.edu/LaTeXMathML/LaTeXMathML.js' type='text/javascript'/>
<link href='http://math.etsu.edu/LaTeXMathML/LaTeXMathML.standardarticle.css' rel='stylesheet' type='text/css'/>

in the template (possibly in the head); this will automatically load a javascript file that will automatically convert $\latex$ expressions to MathML. Or maybe even better you may put the .js and .css file on your site instead of directly loading them from http://math.etsu.edu/LaTeXMathML/ (I could not do this because it is disallowed by blogspot.com). Of course this is not a complete solution, and the script does not support all latex expressions, but at least you won't have to include a png image just for a formula. Best regards!

Post a Comment
Archive for October, 2007 For example, to find the zenith angle (angle to overhead) and azimuth (angle from North) of the sun at any day and time of the year for any location on Earth, the laws of spherical trigonometry produce the formulas below. Here the solar declination δ is a function of the solar longitude λ and ecliptic angle ε as shown in the figure to the left. These calculations can be automated today—but did I mention that these solutions were found before electronic calculators? … or slide rules, or logarithms? … or trigonometric formulas? … or even algebra?? In fact, Vitruvius (ca. 50) and Ptolemy (ca. 150) provided mathematical and instrumental means of calculating the sun’s position for any hour, day, and observer location by the use of geometric constructions called analemmas (only indirectly related to the figure-8 analemma on globes). An important application of analemmas was the design of accurate horizontal and vertical direct and declining sundials for any observer location. These analemmas are awe-inspiring even today, and as the study of “Descriptive Geometry” has disappeared from our schools they can strike us as mysterious and wondrous inventions! 6 Comments » Oct 17 2007 Posted by: Ron D. in administrative Posts here are brief or not-so-brief essays of unusual things of this nature that I read or hear about, supplemented with references and some amount of research I typically do on these topics. Any longer papers that emerge (particularly on mental calculation and antique scientific instruments) will be placed in my main website area http://www.myreckonings.com. To avoid printing difficulties with this wide format, there will be a link to a PDF version at the end of each entry. Comments on the posts are appreciated! A forum has also been added for discussing anything related to lost art in the mathematical sciences at http://www.myreckonings.com/forum. 
Also, feel free to use the Contact link to send me general comments or any ideas (or text!) for new topics. Ron Doerfler (The figure above is from Oronce Fine’s Second Book of Solar Horology, translated with interpretation by Peter Drinkwater) 5 Comments »
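The solar-position formulas referenced in the first post appear there as images, but they reduce to a standard spherical-trigonometry identity for the zenith angle. A minimal Python sketch, using the modern formula rather than the analemma constructions discussed above:

```python
import math

def solar_zenith_deg(lat_deg, decl_deg, hour_angle_deg):
    """Solar zenith angle from the standard spherical-trigonometry
    relation cos(z) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(h),
    where h is the hour angle: 0 at local solar noon, 15 deg per hour."""
    lat, decl, h = (math.radians(v) for v in (lat_deg, decl_deg, hour_angle_deg))
    cz = (math.sin(lat) * math.sin(decl)
          + math.cos(lat) * math.cos(decl) * math.cos(h))
    return math.degrees(math.acos(max(-1.0, min(1.0, cz))))

# Sun directly overhead at solar noon when declination equals latitude:
print(round(solar_zenith_deg(23.44, 23.44, 0.0), 6))   # 0.0
# Equator at an equinox (decl = 0), 3 hours after noon (h = 45 deg):
print(round(solar_zenith_deg(0.0, 0.0, 45.0), 6))      # 45.0
```

The azimuth follows from a companion identity; only the zenith is shown here.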
steady state heat conduction Steady state means that nothing changes with time. In your case, it means that the temperature is constant in all points of the rod. Let's begin an experiment. Say, you put a hot body in contact with a rod extremity. This extremity begins to heat, but nothing happens at the other extremity. You must wait some time (theoretically infinite in this case) for all the points in the rod to attain their equilibrium temperature. This is the transient state. Practical steady state means that the temperatures of all points are so near their final temperatures, that you can assume that the temperatures are the equilibrium ones. It depends, of course, on the precision you want to attain.
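The transient-versus-steady-state point can be seen numerically. A toy 1-D finite-difference relaxation (an illustration, not taken from the thread): hold the rod's ends at fixed temperatures and iterate; the interior creeps toward the equilibrium profile, which for a uniform rod is a straight line between the end temperatures.

```python
# A 1-D rod with its ends held at 0 and 100 degrees. Repeatedly applying
# the explicit heat-equation update drives the interior toward the
# equilibrium (linear) profile; early iterations are the transient state.

n = 11                      # grid points along the rod
T = [0.0] * n               # initial temperature: everything cold
T[-1] = 100.0               # right end suddenly held at 100
alpha = 0.4                 # dimensionless step, must be <= 0.5 for stability

for step in range(5000):    # "wait a long time"
    new = T[:]
    for i in range(1, n - 1):
        new[i] = T[i] + alpha * (T[i - 1] - 2 * T[i] + T[i + 1])
    T = new

steady = [100.0 * i / (n - 1) for i in range(n)]   # exact steady state
print(max(abs(a - b) for a, b in zip(T, steady)))  # ~0: practically steady
```

Stopping after only a few dozen steps instead of 5000 shows the "practical steady state" idea: the profile is already nearly linear to within whatever precision you care about.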
Compound Interest Q Help!

June 10th 2008, 01:06 AM
Compound Interest Q Help!
Can you guys help me with this question? I'm so confused on how I should approach it:

Tom's father paid $500 into an account on the day Tom was born. After that, he paid $500 into the account on Tom's bday until Tom's 18th bday. If the account accrued interest at 8% p.a. compounded monthly, calculate how much Tom would receive on his 18th bday.
A: 20880.97
Thanks in advance!

June 10th 2008, 06:31 AM
Just think it through, one payment at a time. From the start, I'm not absolutely certain there was a payment ON the 18th birthday. Let's assume that there was. If we get $500 too much, we'll discard this payment.

Starting from the last payment and working backwards, we have:
18th: $500 and no accumulation for interest
17th: $500(1+i) -- 1 year's accumulation for interest
16th: $500(1+i)^2 -- 2 years' accumulation for interest
15th: $500(1+i)^3 -- 3 years' accumulation for interest
Birth: $500(1+i)^18 -- 18 years' accumulation for interest

With any luck, one should notice this is a Geometric Sequence and we should be able to add them all up.

$500 + $500(1+i) + $500(1+i)^2 + $500(1+i)^3 + ... + $500(1+i)^18
= $500(1 + (1+i) + (1+i)^2 + (1+i)^3 + ... + (1+i)^18)
= $500((1+i)^19 - 1)/i

Our only remaining concern is 'i'. What is it? The formula above uses 'i' as an annual effective interest rate. We need to find one of those. We are given 8% Nominal Interest and Monthly Compounding. This gives:

$\left(1 + \frac{0.08}{12}\right)^{12} - 1 = 0.082999511 = i$

which makes the sum $21,380.97. As can be seen, $21,380.97 - $500.00 = $20,880.97, so I guess there was not a payment made ON the 18th Birthday Anniversary.

This leaves us with a bit of a dilemma. On a written exam or a homework assignment, I would state my assumptions and provide both answers, citing the ambiguity of the word "until". Anything marked wrong would get a vigorous challenge. On a multiple-choice exam, I would be prepared to find either answer.
In my view, if both appear on the multiple-choice exam, the question probably should be discarded as accepting either answer will not necessarily provide any information about a student's knowledge. One may simply have done it badly. In any case, questions should be clear. If you have discussed the word "until" in class, and it has been defined clearly to mean "NOT on the end date", then you can be expected to get the unique value. There may also be a diagram explaining the intent. It is a very hard thing to write perfectly clear questions. It is up to the student to explain any point of ambiguity. The exam writer cannot be expected to think of every possible translation, but I'm sure the exam writer tries to do Well, enough of exam philosophy...
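The working above is easy to verify numerically (Python; numbers as in the post):

```python
# 19 deposits of $500 (at birth and on birthdays 1 through 18),
# 8% nominal interest compounded monthly.

i = (1 + 0.08 / 12) ** 12 - 1                           # effective annual rate
total_19 = sum(500 * (1 + i) ** k for k in range(19))   # payment ON 18th bday too
total_18 = total_19 - 500                               # no payment on the 18th

print(i)         # ~0.082999511, matching the figure above
print(total_19)  # ~21380.97
print(total_18)  # ~20880.97, the book's answer
```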
Here's the question you clicked on:

Can someone help me? I have to write the equation of the line using the point-slope formula (0, 4) and (2, 3).
0 – 2
4 – 3
Change the equation to slope-intercept form. Can someone explain how to do this?

Calculate slope by (y2-y1)/(x2-x1) = -1/2

slope = (3-4)/(2-0)
intercept => at point x=0 you see that y is 4.
This means you now can create the formula:
y = slope*x + intercept
y = -1/2 x + 4

Thank you.
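The two answers above amount to a three-line computation:

```python
# Slope from two points, then the slope-intercept form y = m*x + b.

def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # slope
    b = y1 - m * x1             # intercept
    return m, b

m, b = line_through((0, 4), (2, 3))
print(m, b)   # -0.5 4.0  ->  y = -1/2 x + 4
```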
Free Search Engine for Rapidshare Files. Type what you are looking for in the box bellow, hit search and download it from RapidShare.com! a first course in finite element method daryl l logan solution manual rapidshare 500 results found, page 1 from 50 for "a first course in finite element method daryl l logan solution manual" A First Course in Linear Algebra Robert A Beezer p982 (7.23 MB) File name: A First Course in Linear Algebra Robert A Beezer p982 Source title: Rapidshare A First Course In Probability Sheldon Ross - RapidShareMix - Search for Shared Files A First Course in Differential Equations with Modeling Applications (14.91 MB) File name: A First Course in Differential Equations with Modeling Applications Source title: download A First Course in Differential Equations with Model - Pastebin.com a first course in probability part1 (5.2 MB) File name: a first course in probability part1 Source title: Rapidshare A First Course In Probability Sheldon Ross - RapidShareMix - Search for Shared Files a first course in probability part2 (5.2 MB) File name: a first course in probability part2 Source title: Rapidshare A First Course In Probability Sheldon Ross - RapidShareMix - Search for Shared Files A First Course In Abstract Algebra J Rotman p629 (4.21 MB) File name: A First Course In Abstract Algebra J Rotman p629 Source title: Rapidshare A First Course In Probability Sheldon Ross - RapidShareMix - Search for Shared Files A First Course In Partial Differential Equations with complex variables and transform methods H F (13.42 MB) File name: A First Course In Partial Differential Equations with complex variables and transform methods H F Source title: Rapidshare A First Course In Probability Sheldon Ross - RapidShareMix - Search for Shared Files A First Course in Applied Statistics (19.67 MB) File name: A First Course in Applied Statistics Source title: -Download A First Course in Applied Statistics, A: with applications in biology, business and the social 
sciences 2th Edition (PDF) RapidShare | Free eBooks Download - EBOOKEE! A First Course in Fuzzy Logic (19.44 MB) File name: A First Course in Fuzzy Logic Source title: A First Course in Fuzzy Logic, Third Edition by Hung T. Nguyen and Elbert A. Walker (PDF) RapidShare Download | Free eBooks Download - EBOOKEE!
Don't use Scatterplots

In a series of blog posts, Gary Rubinstein attempts to show that the Value Added Modelling scores recently released by the NYC Department of Education prove that VAM (Value Added Modelling) is not accurate. However, whatever flaws VAM may have, Gary Rubinstein (henceforth GR) hasn't demonstrated them. All he has demonstrated is why you shouldn't use scatterplots.

In his series of blog posts, he claims that intuition predicts certain correlations should be present between different sorts of VAM scores. He then uses visual inspection of scatterplots to assert that this correlation is not present. For example, in his first post, he examines the relationship between a teacher's VAM score in one year and their VAM score in the next year:

He also plotted the VAM score of students of the same teacher, but from two separate classes:

Looks pretty messy, hard to see much correlation there. What's also very strange is that in this picture, all the data points seem to line up. It turns out that when we inspect the data file, all the percentiles were truncated to the nearest integer. So it's actually possible that we might have multiple teachers with data points occupying the same pixel! This creates the visual artifact called truncation. A scatterplot can only visually represent density up to a certain threshold - the threshold of "points everywhere". In at least some parts of this picture, specifically the bottom left and top right corners, we seem dangerously close to that point.

GR also measured the VAM scores of the same teacher, but teaching the same class to two different grade levels:

Still messy, no obvious relationship. Again we see the phenomenon of visual truncation, this time caused by the large tick size. At many points of the graph, multiple ticks overlap each other, which makes the visual density appear lower than it really is.

Plot density, not points

The solution is to plot the binned point density rather than the points themselves.
We already know this method in one dimension as the histogram. In two dimensions, there are multiple ways of doing it. The bin shapes can be taken from any method of uniformly tiling the plane, such as squares or hexagons. For each tile, the number of data points inside the tile are counted. The tile is then assigned a color according to the number of points. I’ll demonstrate a density plot using the hexagonal tiling, since matplotlib has the hexbin function. I applied this hexbin function to GR’s data comparing teachers across multiple school years: Looks like a pretty clear relationship. It’s noisy, but present. More importantly, it’s far easier to see in a hexbin plot than it is in a scatterplot. (Side note: GR also plotted a strange data set. He compared the same teacher, but across different subjects and grade levels. Based on this, he got a correlation of 0.3. The relationship strengthens to 0.4 when you compare the same teacher teaching the same subject at the same grade level. But I’ll ignore this - this is a blog post about why scatterplots suck.) The left two columns are GR’s data, but redone with a density plot instead of a scatterplot. The right two columns are the same plots, but for 2008-2009 instead of 2009-2010. The top two plots display the density of VAM scores for teachers who taught different classes to the same grades. I chose a gridsize of 20 since we have 5319 and 5553 data points (respectively). In the density plot, it’s pretty easy to see the data clustering along the line y=x. The correlation coefficient of 0.50 suggests the relationship probably is pretty strong. So teachers who are good at teaching math are also good at teaching english. VAM passes this common sense check with flying colors. The bottom two plots display the density of VAM scores for teachers who taught the same class but in different grades. I chose a gridsize of 7 since there were very few data points (only 742 in 2008-2009 and 769 in 2009-2010). 
GR’s plot displays 2009-2010, so it is directly comparable to my bottom left plot. It’s a bit messy, but there is certainly a lot more data near (0,0) and (100, 100) than there is near (0,100) or (100, 0). The correlation coefficient of 0.22 suggests there is a relationship, albeit weaker (but this could be an artifact of having little data). It looks like teachers who are good at teaching one grade are also good at teaching another. The correlation is weak (0.22), but present. It’s tough to say whether VAM passes this check. The correlation is certainly there, but it’s not that clear. On the other hand, it could just be due to having too few data points. So all told, it looks like Gary Rubinstein was wrong about NYC’s teacher evaluation method. Value Added Modelling holds up to his high standards, he just didn’t realize it because he made some bad Don’t use scatterplots. Use a density plot such as a hexbin instead. Also, go read the hacker news comments, some of which are excellent. Edit: Some people seem to be interpreting me as making a stronger claim than I intend. There are obviously a few cases when a scatterplot truly is the right tool. My claim is that they are sufficiently uncommon that you should make a density plot your default tool, and use the scatterplot only in the rare cases when you truly aren’t looking to demonstrate a density. Source code Only source for my plots, I think his were made in Excel.
Second Order Differential Equation

November 16th 2008, 01:08 AM
Second Order Differential Equation
How do I determine the particular integral for second order differential equations with mixed f(x)s? Like, it's neither strictly polynomial, trigonometric, nor exponential. Examples of such questions:
(i) $y'' - 4y' + 5y = (16x + 4)e^{3x}$
(ii) $y'' + 3y' = (10x + 6)sin x$
(iii) $y'' - 2y' + 4y = 541e^{2x}cos 5x$
Thank you! (:

November 16th 2008, 08:14 AM
Hey, each of those has a right member which is a particular solution to some homogeneous differential equation. For example, the first one has a right member which is a solution to the equation:
$(D-3)^2 y=0$
So the operator $(D-3)^2$ becomes an "annihilation" operator that we can apply to both sides of the equation to convert it to a homogeneous equation:
$(D-3)^2 (D^2-4D+5)y=0$
This is the method of undetermined coefficients. Are you familiar with that method? Try it first on some simple ones. Any DE book should have a section on this subject.

November 16th 2008, 08:49 PM
Sorry but I do not understand what you are talking about at all. :(

November 16th 2008, 08:56 PM
Chris L T521

Hey, each of those has a right member which is a particular solution to some homogeneous differential equation. For example, the first one has a right member which is a solution to the equation:
$(D-3)^2 y=0$
So the operator $(D-3)^2$ becomes an "annihilation" operator that we can apply to both sides of the equation to convert it to a homogeneous equation:
$(D-3)^2 (D^2-4D+5)y=0$
This is the method of undetermined coefficients. Are you familiar with that method? Try it first on some simple ones. Any DE book should have a section on this subject.

Aha!! I'm not the only one to know about the Annihilator approach :D Read post #6 and #7 here to see how to tackle equations like these.

November 16th 2008, 09:05 PM
First you must solve the homogeneous equations (i) y''-4y-+5y=0 (ii) y''+3y'=0 (iii) y''-2y'+4y=0 and you will find the general solutions for them. 2. For non homogeneous equations you have to find the partial solutions according to what is on the right side of the equations (i) http://www.mathhelpforum.com/math-he...9a1005f4-1.gif (ii) http://www.mathhelpforum.com/math-he...abc676a5-1.gif (iii) http://www.mathhelpforum.com/math-he...fa9942ab-1.gif 3. The final solution of your equations is the sum of the general solutions from 1st point and partial solutions from the 2nd point. November 17th 2008, 12:36 AM Thank you for referrals of posts and helps. But I still don't really get it after I read the posts :( Generally, I do know how to handle second order differential equations if f(x) was one of the normal terms like, purely exponential, or trig or polynomials. But when it is multiplied together like that, I can't immediately identify the pattern of the particular integral. For example, the particular integral form for an exponential function of $f(x) = e^{kx}$ would be $y_p = pe^{2x}$ and we differentiate and sub it back in to compare coefficients in order to obtain p. How do I deal when f(x) is not purely of those forms but multiplied together? Thank you. November 17th 2008, 02:53 AM I'll work the first one using undetermined coefficients; differential equations open a unique window into the universe such that all her secrets are revealed: The general solution from the auxiliary equation is then: $y=c_1 e^{3x}+c_2 e^{3x}+c_4 e^{2x}\cos(x)+c_5 e^{2x}\sin(x)$ with the desired solution (I'm taking this right out of Rainville almost word for word): where $y_c=c_4 e^{2x}\cos(x)+c_5 e^{2x}\sin(x)$. Then there must be a particular solution of the original equation containing at most the remaining terms: $y_p=Ae^{3x}+Bxe^{3x}$. 
That's the undetermined coefficients, which can be determined by substituting this $y_p$ into the original DE. When I do that I get:

$(2A+2B)e^{3x}+2Bxe^{3x}=(16x+4)e^{3x}$

Equating coefficients, I get $B=8$ and $A=-6$. Then the general solution of $y''-4y'+5y=(16x+4)e^{3x}$ is:

$y=c_4 e^{2x}\cos(x)+c_5 e^{2x}\sin(x)+(8x-6)e^{3x}$

November 17th 2008, 06:57 PM
Although I still do not understand the method you recommended, I'm still thankful for your generous help and time! I figured out a different approach to do it already :) But still thanks

November 18th 2008, 04:23 AM
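The particular solution found above, y_p = (8x - 6)e^{3x}, can be sanity-checked numerically without redoing the algebra (Python, central differences):

```python
import math

# Substitute y_p = (8x - 6)e^(3x) into y'' - 4y' + 5y and check that the
# residual against (16x + 4)e^(3x) is ~0 at a few sample points.

def y_p(x):
    return (8 * x - 6) * math.exp(3 * x)

def residual(x, h=1e-4):
    d1 = (y_p(x + h) - y_p(x - h)) / (2 * h)              # y_p'
    d2 = (y_p(x + h) - 2 * y_p(x) + y_p(x - h)) / (h * h)  # y_p''
    return d2 - 4 * d1 + 5 * y_p(x) - (16 * x + 4) * math.exp(3 * x)

print(max(abs(residual(x)) for x in (-1.0, 0.0, 0.5, 1.0)))  # ~0
```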
Princeton Junction ACT Tutor Find a Princeton Junction ACT Tutor ...For the most part this is usually a "stand-alone" course, though it may be integrated (so to speak) into physics, chemistry, and statistics courses. I have studied all this in my scientific schooling, and applied it very occasionally, mostly for recreational engineering-type calculations. I try... 23 Subjects: including ACT Math, chemistry, physics, reading ...Rose High School. I have experience tutoring and familiarity the following high school curriculum/tests: Algebra 2, Algebra 2 Honors, Trigonometry, Trigonometry Honors, PreCalc, PreCalc Honors, Calculus AB/BC, Physics, Physics Honors, Physics AP B, Physics AP C, SAT Math/Verbal/Writing, SAT 2 Ph... 9 Subjects: including ACT Math, calculus, physics, algebra 1 ...My teaching expertise lies in helping students develop strong communication skills. I have successfully taught speech, phonics, reading comprehension, grammar, rhetoric, literature (poetry and fiction), and writing. I encourage students to incorporate their own personal interests into our discu... 17 Subjects: including ACT Math, reading, English, grammar ...I breezed through my Bachelor's in Engineering and minor in mathematics, graduating Summa Cum Laude in 3 years and continued to pursue a a graduate degree. My expertise was in tissue engineering and I was doing front of the line research in nerve regeneration, but I realized I was becoming an ex... 15 Subjects: including ACT Math, English, calculus, GRE ...My students achieve scores in the 700 range for verbal scores as well as their essays. I have tutored students and adults who needed help in passing the ASVAB in order to enter the military. I proudly served in the US Army for 13 years so I am familiar with what to expect and have taken the ASVAB several times as well as given my students practice tests. 73 Subjects: including ACT Math, Spanish, reading, English
{"url":"http://www.purplemath.com/princeton_junction_act_tutors.php","timestamp":"2014-04-21T07:51:39Z","content_type":null,"content_length":"24217","record_id":"<urn:uuid:3221d731-712e-43ee-9cc6-a1518d6cce12>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
UBASIC/Scripts: yet another DOF stacker

Talk:UBASIC/Scripts: yet another DOF stacker

Can someone write exactly what needs to be pressed and when to get this script to work? Every time I try to use this it only takes one picture and says finished. I can't figure out what's wrong. Thanks

I *really* like the way this is implemented. I was hoping to do something similar one day, but I like your method much better than what I had in mind. I notice that your script relies on tables for the aperture and zoom values. If I was to convert this for the S-series cameras would it be easy? Where would I find those tables, and would it require 129 entries for each and every zoom step? The aperture values table should be just as short, yes? Thanks for doing this. I hope I can make it useable.

If someone (=you?) gets this working on S3, please do share your knowledge. Don't be shy. :] Perhaps the focal length table could be (again) based on what's found in grand/platform/s3is/main.c. One could easily(?) calculate all 122 "missing" focal length values. Or just write the method for calculating them (for-loop?) into the script.
(from S3's main.c)

    static const struct { int zp, fl; } fl_tbl[] = {
        {   0,  6000 },
        {  11,  6400 },
        {  41, 12100 },
        {  64, 21300 },
        {  86, 41600 },
        { 105, 61400 },
        { 128, 72000 },
    };
    #define NUM_FL (sizeof(fl_tbl)/sizeof(fl_tbl[0]))

    int get_focal_length(int zp) {
        int i;
        if (zp < fl_tbl[0].zp) return fl_tbl[0].fl;
        else if (zp > fl_tbl[NUM_FL-1].zp) return fl_tbl[NUM_FL-1].fl;
        for (i = 1; i < NUM_FL; ++i) {
            if (zp == fl_tbl[i-1].zp) return fl_tbl[i-1].fl;
            else if (zp == fl_tbl[i].zp) return fl_tbl[i].fl;
            else if (zp < fl_tbl[i].zp)
                return fl_tbl[i-1].fl + (zp - fl_tbl[i-1].zp) * (fl_tbl[i].fl - fl_tbl[i-1].fl) / (fl_tbl[i].zp - fl_tbl[i-1].zp);
        }
        return fl_tbl[NUM_FL-1].fl;
    }

In A710 this table was much easier, the code reads just:

    static const int fl_tbl[] = {5800, 6420, 7060, 7700, 8340, 9950, 11550, 13160, 14750, 17150, 19570, 22760, 26750, 30750, 34800};
    #define NUM_FL (sizeof(fl_tbl)/sizeof(fl_tbl[0]))

    int get_focal_length(int zp) {
        if (zp < 0) return fl_tbl[0];
        else if (zp > NUM_FL-1) return fl_tbl[NUM_FL-1];
        else return fl_tbl[zp];
    }

Thanks MUCH for this further info. But that programming language is beyond my comprehension, and beyond my desire to learn a whole new one just for this. :) What I did do is plot the known values on a graph and saw the unique S-curve it makes. When I have the time I will have to do it manually on a graph and pick out the values for all 129 spots. I tried using a program called "FindGraph" that plots a curve to known sample points, but I can't get it to do what I need. It has a pretty steep learning curve too. It plots a nice curve to approximate those sample points on one of the available plotting types, but some of the curve is outside of the known values. So I may have to do it manually using some vector curve plotting on a graph grid. Just how important is the accuracy of those values? If I pick them off of a graph of just straight lines from point to point, would that be just as accurate as the function that is already being used for the S-series cameras?
If that's the case it would be much easier than trying to fit bezier vector curves to those data points. I could pick off all 129 values in just one evening from straight lines from point to point instead of trying to plot a complex curve to precisely fit them all.

re: S3 Table -- I just found that you added the table of S3 values!! Thanks!! It might have been another month before I would have drummed up the patience to find all the values. :-) Ooops, just noticed something, shouldn't that table have all if z= instead of if v= commands?

Oopsy, yes indeed. :I FL table should now be correct. Also should have checked earlier whether aperture values are better smaller or bigger. Smaller values produce shorter DOF, thus being "safer". And now that I think of it, bigger Av = longer DOF...

And yet another ooops, I realized now in seeing your correction that I left all the then let v= in error too. :-) (I noticed the original problem when I was dead tired and missed the full correction.) Thanks again for the addition and the correction! I'm sure I'll make good use of this! Now to find someone with an A6x0 to see what their aperture values are and it should be good to go for ALL supported cameras. Way cool! I think if people see this and realize how it is being done that your method should and will become the default focus-bracket script. ~Keo~

Can someone explain how the Av tables were derived, in particular for the S3? From platform/generic/shooting.c, what's happening is that the following formula gets applied to the table in platform/s3is/

    >> round( 100*sqrt(2).^([283, 320, 352, 384, 416, 448, 480, 512, 544, 576]/96))
    ans =

There's a missing factor of 10 that comes in later, in the calculation of the hyperfocal distance in core/gui_osd.c -- but the numbers still don't agree with the table provided by Keo. Any idea what's going on here?
More generally, what *are* the units for the focal length (zoom setting, z), aperture Av setting (v), circle of confusion (c=5), and the focus setting (f,t,l,h)? They don't agree! It seems that focus setting is in mm -- but that focal length and circle of confusion are in um=mm/1000, and that Av (after including the factor of 10) is similarly larger by a factor of 1000x. Can anyone verify this? What about the mysterious factor of 96?? 128.100.4.23 02:03, 7 February 2008 (UTC)swh
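For anyone wanting to reproduce the numbers, the quoted formula is easy to evaluate directly. A quick sketch in Python (my illustration only; it applies the formula exactly as quoted from platform/generic/shooting.c, deliberately without the later factor of 10, and it does not settle the units question):

```python
import math

# Evaluate the quoted formula: round(100 * sqrt(2)^(Av96/96)) for each
# table entry. The later factor of 10 from core/gui_osd.c is not applied.
av96_table = [283, 320, 352, 384, 416, 448, 480, 512, 544, 576]
apertures = [round(100 * math.sqrt(2) ** (av / 96)) for av in av96_table]
print(apertures)
```

The first entry works out to 278, which looks like an f-number (f/2.8) scaled by 100, so the intent of the formula at least seems plausible.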
{"url":"http://chdk.wikia.com/wiki/Talk:UBASIC/Scripts:_yet_another_DOF_stacker","timestamp":"2014-04-18T03:04:16Z","content_type":null,"content_length":"47212","record_id":"<urn:uuid:8cdb0789-76ab-4b22-ba5c-0365bc2e2a5d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: binary search with arbitrary or random split
Replies: 2   Last Post: Jul 26, 2012 3:50 PM

Re: binary search with arbitrary or random split
Posted: Jul 26, 2012 12:56 PM

pam <pamelafluente@libero.it> writes:
> On Jul 26, 3:55 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> You did not answer my key question (maybe I forgot to ask it!). Do you
>> agree that the cost function (in terms of compares) of your uniform
>> splitting algorithm is C(n) = n? I can say that this is not in
>> O(log(n)) (but it is in O(n)) without taking any limits, and without any
>> reference to probability.
> Sure it is.
> You have understood perfectly my point, though.

Did you mean "not understood"?

> Considering the formulation with random splitting point n/k ( not n-
> k ! ) (again: "divide", not subtract!)

Do you see why that makes no difference? uniform(1,n), n-uniform(1,n) and n/uniform(1,n) all produce splits in [1,n] but with different distributions.

> IF the only cases where i get cost n are those "degenerate", where by
> mere (insignificant, for n large) chance
> the binary search happens to be a linear search, i would still object
> that in a probabilistic
> setting, using sure convergence has little meaning.

What would you object to? The (worst case) cost function is n. That n occurs rarely does not alter the big-O class to which this function belongs.

Look at it another way: do you agree that "f(x) = x is in O(n)" is a theorem of mathematics? Do you think the truth or falsehood of this theorem depends on where f(x) comes from -- if it's the distance travelled by a car it's true, but if it's the cost of an algorithm with some random component it's false?
If you mean that the complexity class of this algorithm is not interesting to you, that you find other measures more intuitive and practical, then we can stop this discussion now. I can't object to such a preference, but you seem to agree with everything except this last point. Unfortunately it's not a matter of opinion or intuition; the identity function is O(n).

> So considering that O() is defined using limits, as far i can see
> everywhere,
> *in a probabilistic setting * (such is what we are considering here)
> i would consider *probabilistic forms* of convergence.

O() can be defined in terms of limits but it usually is not. Even when it is defined in terms of limits, these are the limits from analysis, not from probability theory. Membership of the equivalence class O(n) does not depend on the frequency with which a function takes certain values, but on whether there is a point beyond which f(n) <= k*n for some constant k.

> You say that this is not the way this is done in CS and they always
> consider "sure" convergence.
> I can and have to accept that for sure. But i also feel to say it
> makes no sense, in a probabilistic setting, nor intuitively.

We may be nearly done here. I don't think there is any point in trying to change your mind about what makes sense. It makes sense to me because I want an algorithm with cost function n to be in O(n) regardless of any further knowledge of the algorithm, but we can safely disagree about that.

> I have in mind large problems, like n bigger than 5000 digits. In this
> case i only care of asymptotics
> and to me if the cost n reduces to O(logn) asymptotically, because the
> events causing cost n are
> practically impossible, i think it's right to see it as O(logn), no
> matter what happens for small n.

All fine except for the notation. You can't "see it as O(log(n))" if it isn't. You can see it as Opr(log(n)) for some new measure, Opr, that uses some kind of almost sure convergence.
You'd still have to define Opr, and prove that your chosen algorithm is in Opr(log(n)) though.

> Clearly, IF there are other situations which causes the cost n (apart
> the linear searches and having
> a probability which does not tend to 0 then ok, i could not,
> obviously, object anything).

It is so much more complicated than that. The relationship between split choices and cost is an almost smooth one. Choosing n (always) *reduces* the cost in many cases but increases it in others. Capturing that in your new Opr measure will be hard (but I am sure it's possible). The trouble with thinking about "cases" is that most cases have a probability that tends to zero. For example, the set of executions where the random algorithm chooses n/2 every time is vanishingly small in the long run. There are also an unbounded number of cases that give pathological results (always choosing 1, always choosing 2, etc...). Don't get me wrong, I know that probability theory can resolve all these issues, but I think the details will be hard for an actual algorithm.

>> O(log(n)) (but it is in O(n)) without taking any limits

> This part is not completely clear to me yet. I always understood O()
> as a short hand
> for an event involving a limit. A description of an asymptotic
> behavior.
> But maybe you are referring to complexity classes definition and they
> don't use limits (?).

Discussed above. You agree that the cost function is n. n is in O(n) and not in O(log(n)); these are theorems of mathematics. In those cases where O() is defined using limits, these are not probabilistic limits, they are normal analytic limits.

Date      Subject                                           Author
7/26/12   Re: binary search with arbitrary or random split  Ben Bacarisse
7/26/12   Re: binary search with arbitrary or random split  pamela fluente
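To make the disagreement concrete, here is a sketch (mine, not from the thread) of a binary-style search whose split point is drawn uniformly from the current range. Its worst-case comparison count is n, so the cost function is in O(n), even though the average count over random runs stays close to logarithmic:

```python
import random

# Binary-style search with a uniformly random split point instead of the
# midpoint -- an illustration of the algorithm debated above.
def random_split_search(sorted_list, target, rng):
    lo, hi = 0, len(sorted_list) - 1
    comparisons = 0
    while lo <= hi:
        mid = rng.randint(lo, hi)  # uniform split instead of (lo + hi) // 2
        comparisons += 1
        if sorted_list[mid] == target:
            return mid, comparisons
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

if __name__ == "__main__":
    rng = random.Random(0)
    data = list(range(10000))
    counts = [random_split_search(data, rng.randrange(len(data)), rng)[1]
              for _ in range(1000)]
    # Average grows like log n, but any single run can cost up to n comparisons.
    print(max(counts), sum(counts) / len(counts))
```

The worst case — the random pivot repeatedly landing at an end of the range — degenerates to a linear scan, which is exactly why the worst-case cost is n while the typical cost stays logarithmic.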
{"url":"http://mathforum.org/kb/message.jspa?messageID=7853805","timestamp":"2014-04-17T16:55:53Z","content_type":null,"content_length":"22966","record_id":"<urn:uuid:71325f30-a3bb-45d6-b3f0-31140a2a5c50>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
I have been experimenting with using Yesod to throw together a web application or two. My experience so far has been broadly positive—if you like computers to check things for you, I recommend it.^1 That said, watching the full chain of dependencies fly past was moderately entertaining:^2 An excellent parser-combinator library, widely imitated. This wouldn’t be funny, except… Another excellent parser-combinator library, inspired by parsec. This defines a bunch of Unicode aliases for standard functions with boring ASCII names. Why write: when you could write: Two UTF-8 encoding libraries! “In mathematics, a semigroup is an algebraic structure consisting of a set together with an associative binary operation. A semigroup generalizes a monoid in that there might not exist an identity element. It also (originally) generalized a group (a monoid with all inverses) to a type where every element did not have to have an inverse, thus the name semigroup.” 1. assuming you like deciphering compiler error messages when the computer says no, that is [↩] 2. for a quiet Wednesday morning… [↩]
{"url":"http://blogs.gnome.org/wjjt/2012/01/04/","timestamp":"2014-04-24T20:08:42Z","content_type":null,"content_length":"11648","record_id":"<urn:uuid:720e9dfe-a4c5-4934-b671-04c03c2744d4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
• It's the only way to access our downloadable files;
• You can use our search box tool;
• Registered users see fewer adverts;
• You will receive our 'irregular' newsletters;
• It's free.

Unless specified otherwise in the individual descriptions, MathSticks resources are licensed under a Creative Commons Licence. You are free to use, share, copy, distribute and transmit the work, provided that you give mathsticks.com credit for the work and logos remain intact. You may not alter, transform, or build upon the work, nor may you use it in any form for commercial purposes.
{"url":"http://mathsticks.com/taxotouch/38","timestamp":"2014-04-16T04:44:00Z","content_type":null,"content_length":"51678","record_id":"<urn:uuid:41635fcc-704e-46c3-b141-f2f919a762e9>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: This is False. 0/0 {x | x ~e x} e {x | x ~e x} A single Principle to Resolve Several Paradoxes
Replies: 53   Last Post: Feb 13, 2013 3:53 PM

Re: This is False. 0/0 {x | x ~e x} e {x | x ~e x} A single Principle to Resolve Several Paradoxes
Posted: Feb 9, 2013 8:21 PM

On Feb 5, 5:43 am, "Lord Androcles, Zeroth Earl of Medway" <LordAndroc...@Januaryr2013.edu> wrote:
> > In this case, because primitives of logical expressions must be
> > relations and ~e is not a relation.
> I (1) don't make the assumption that primitives of logical expressions must
> be relations. I (2) assume you mean the relation "~e" to be the set of
> ordered pairs (x, y) such that x ~e y.
> Since I (3) don't take logical expressions to be sets, I (4) certainly don't
> take logical expressions to be relations. I (5) would prefer to say that a
> logical expression may sometimes determine a set. But sometimes a
> logical expression won't determine a set (e.g., the logical expression
> "x ~e x" won't determine a set.)

not ( e(x x) ) <=> e(x,x) e not

NOT is the SET/RELATION/ATOMIC-PREDICATE it's even PREFIX!

> Thus, I (6) say that "x ~e x" is a wff, but "x ~e x" cannot be used to
> define a relation that corresponds to it.

WFF are blind syntactic *CONSTRUCTIONS*

That they are *CONSTRUCTED* to be TRUE(wff) or NOT(wff) does not make them predicates (ACTUALLY TRUE OR NOT) or "In-The-Language".
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2432271&messageID=8290344","timestamp":"2014-04-19T23:22:30Z","content_type":null,"content_length":"81394","record_id":"<urn:uuid:cbab34cb-89fd-42c9-bba2-2d294d63af7c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Edgewater, NJ Math Tutor Find an Edgewater, NJ Math Tutor ...I study college level courses at home consistently and have been doing that for years for continuing education. I love to learn! I am a passionate teacher and will do more than my best to see my students be the best at what they seek to learn. 81 Subjects: including algebra 1, algebra 2, probability, prealgebra ...I like to give clear explanations for each important concept and do examples right after. Most importantly, I am personable and easy to talk to; Lessons are thorough but generally informal. I also make myself available by phone and e-mail outside of lessons--My goal is for you to succeed on your tests. 10 Subjects: including algebra 1, algebra 2, calculus, geometry ...If you answered yes to any of these questions, I can help. I have four years experience tutoring and giving piano lessons in both the New York City Public School system and privately as well as a degree in Education from Indiana University. Whether a student needs to learn addition or upper level algebra, basic reading skills or SAT level English, I can help. 30 Subjects: including prealgebra, geometry, reading, statistics ...For the past 10 years I have been working at Washingtonville High School in Orange County New York. Among the events I have experienced there were the transitions from New York State's Sequential Math Program to New York State's Math A and Math B program in my first year and the more rec... 10 Subjects: including geometry, algebra 1, algebra 2, American history ...Using a combination of these programs plus strategies gleaned from my long professional career, I work with students who struggle with decoding, phonemic awareness, reading comprehension, and dyslexia. I have experience working with ADD/ADHD students and Autism spectrum children and have develop... 
39 Subjects: including algebra 1, algebra 2, geometry, prealgebra
{"url":"http://www.purplemath.com/edgewater_nj_math_tutors.php","timestamp":"2014-04-20T13:19:33Z","content_type":null,"content_length":"23992","record_id":"<urn:uuid:ffdc1e40-87a0-4229-bdf8-0e938938da3b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Lynwood Algebra 2 Tutor

...I don't want to rub it in anybody's face or anything, but not only did I get a perfect SAT score AND get nominated as a Presidential Scholar, but I've also coached hundreds of students to an average point increase of 463 points! You can find students of mine on the campuses of every Ivy League univ...
26 Subjects: including algebra 2, reading, English, writing

...I appreciate the time you've taken to read a little bit about me and my approach. If there is any demystifying I can do--about WyzAnt or tutoring in general--ask! Cheers, Kathryn P.S.
43 Subjects: including algebra 2, English, reading, chemistry

...I am a CRLA Level I regular tutor, certified by the University standard. I have taught Human Anatomy and Physiology in physical therapy school.
33 Subjects: including algebra 2, chemistry, geometry, statistics

...I have taught 6th grade math and science, 8th grade Honors Algebra, CAHSEE prep for high schoolers and, for the last three years, Geometry and Algebra for Special Education students. I will retire June 10, 2013. During my teaching career I have been very successful in teaching students to understand basic concepts in Math.
7 Subjects: including algebra 2, algebra 1, special needs, autism

...I find it imperative to always maintain a positive and encouraging attitude when tutoring to maintain an engaging and safe environment for someone to learn. Building relational capacity is also incredibly useful for me to help promote a culture of learning and reinforce positive learning habits....
7 Subjects: including algebra 2, geometry, precalculus, prealgebra
{"url":"http://www.purplemath.com/Lynwood_algebra_2_tutors.php","timestamp":"2014-04-19T17:05:14Z","content_type":null,"content_length":"23757","record_id":"<urn:uuid:950e8e4b-309c-49e9-afd5-5510f3d56fc8>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Impossible SAT Math Problem for XIGGI and those who purchas

Discus: SAT/ACT Tests and Test Preparation: October 2003 Archive: Impossible SAT Math Problem for XIGGI and those who purchas

I have another one for you. It has a figure, so I can't really write it out. It is from the January 2000 Form Code VQ, TL administration, #25 in Section 1, about a quadrilateral with three equal sides and another pair of equal sides. It asks for the value of x. PLEASE EXPLAIN! I just can't get it.

I do not have the books here. I can do it later.

Take a picture of it

"... a quadrilateral with three equal sides and another pair of equal sides..." does anyone remember

I think it was a June 2003 question... it had a cube with like 5 triangle possibilities in the cube that branched from a common side/corner of the cube. One of the triangles went diagonally through the cube, another was exactly on its face, etc., and the question asked which triangle was longest... that one hurt
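On the cube question: in a unit cube the candidate triangle sides are an edge, a face diagonal, and the space diagonal, with lengths 1, √2 and √3, so the triangle built on the space diagonal has the longest sides. A quick check (my illustration only — the actual exam figure isn't reproduced here):

```python
import math

# Side lengths available to a triangle drawn inside a unit cube: an edge,
# a diagonal of one face, and the space (body) diagonal.
edge = 1.0
face_diagonal = math.sqrt(1**2 + 1**2)          # across one face
space_diagonal = math.sqrt(1**2 + 1**2 + 1**2)  # corner to opposite corner

print(edge, face_diagonal, space_diagonal)
```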
{"url":"http://www.collegeconfidential.com/discus/messages/69/30473.html","timestamp":"2014-04-21T04:33:19Z","content_type":null,"content_length":"11575","record_id":"<urn:uuid:f6b281d1-bbe9-4995-8bf5-e3919a0fec37>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Large nearly regular induced subgraphs
Noga Alon, Michael Krivelevich, Benny Sudakov

For a real c ≥ 1 and an integer n, let f(n, c) denote the maximum integer f so that every graph on n vertices contains an induced subgraph on at least f vertices in which the maximum degree is at most c times the minimum degree. Thus, in particular, every graph on n vertices contains a regular induced subgraph on at least f(n, 1) vertices. The problem of estimating f(n, 1) was posed a long time ago by Erdős, Fajtlowicz and Staton. In this note we obtain the following upper and lower bounds for the asymptotic behavior of f(n, c): (i) For fixed c > 2.1, n^(1-O(1/c)) ≤ f(n, c) ≤ O(cn/log n). (ii) For fixed c = 1 + ε with ε > 0 sufficiently small, f(n, c) ≥ n^(ε²/ln(1/ε)). (iii) Ω(ln n) ≤ f(n, 1) ≤ O(n^(1/2)). An analogous problem for not necessarily induced subgraphs is briefly considered as well.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/564/2970696.html","timestamp":"2014-04-17T08:02:23Z","content_type":null,"content_length":"7973","record_id":"<urn:uuid:f3de553f-b7b7-45d4-ba09-b16900e2888f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Back to Tutorials.

Tutorial 3: Solving a CSP

For this tutorial, we will refer to the CSP created in Tutorial 1 shown below: To start solving the CSP, switch over to 'Solve' mode by clicking on the 'Solve' tab. Some menu items which were previously unavailable in 'Create' mode are now enabled. Also, a 'Domain-splitting History' panel will appear at the bottom of the window. The toolbar buttons will change to give you solving options. The solve toolbar will initially look like the toolbar below:

Generally, with CSPs, you would first make a CSP arc-consistent and then check for solutions or, if necessary, recursively split a variable's domain and make the CSP arc-consistent again. An 'arc' between variables refers to the constraint relation between the variables. An arc is arc consistent if for each value in the domain of one variable, there exists a value in the domain of the other variable such that the constraint between the variables is satisfied. A CSP is 'arc consistent' if all of its arcs are arc consistent. A 'solution' to a CSP is an assignment of a unique value to each variable such that all of the constraints are satisfied.

This applet provides many ways to make a CSP arc-consistent:

• Clicking on the 'Fine Step' button will randomly pick arcs in the CSP and make them arc consistent. Blue edges mean that an arc has not yet been made arc consistent. The first fine step will highlight one of these blue edges and then proceed to make it arc consistent. If the domain of the variable connected to the edge is inconsistent with the constraint relation that it is connected to, then the second fine step will make the arc red. The third fine step will remove values from the domain of the variable that made the arc inconsistent if necessary. When an arc is consistent it will appear green. Note that a green arc may turn blue again if it becomes inconsistent as you are solving the graph and removing domain values from other variables.
• Clicking on the 'Step' button will also randomly pick arcs in the CSP and make them arc consistent. One 'Step' is equivalent to three 'Fine Steps'.
• Clicking directly on a blue arc will carry out a 'Step' on that arc to make it consistent.
• Clicking on the 'Auto Arc-Consistency' button will fine step through the entire CSP for you, until the CSP is arc consistent or has no solution.
• Clicking on the 'AutoSolve' button will recursively make the CSP arc-consistent and split domains until a solution is found. Clicking on this button again will find another solution, and so on until there are no more solutions. You can specify how you want AutoSolve to split domains by opening the 'AutoSolve Options' dialog under the CSP Options menu. Here you can specify how AutoSolve selects variables to split and how to split them. By default, AutoSolve selects the variable with the smallest number of domain values left and splits the domain in half.

You can stop Auto Arc-Consistency and AutoSolve at any time by clicking the 'Stop' button. To start solving again from the state in which you stopped, simply carry out any of the solving methods mentioned above. You can also change the speed of arc-consistency and select whether you want each fine step to be shown as the CSP is being solved or not. Both of these options are available in the 'CSP Options' menu. You can reset the CSP to its initial state at any time by clicking on the 'Reset' button.

Once a CSP has been made arc consistent, there are three possibilities:

• There are no domain values left in any variable, which means the CSP has no solution, or no value assignment to each variable such that each constraint is satisfied. In this case the message panel will say the CSP has no solution.
• Each variable in the CSP has exactly one value left in its domain. This is a solution to the CSP, and the final variable value assignment will appear in the message panel.
• If each variable in the CSP has at least one value in its domain, but at least one variable has more than one value, then there exists a solution, but domain splitting is required to find it.

After making our example CSP arc consistent it looks like this: This CSP needs domain splitting to find a solution. Click on any variable that has more than one value in its domain to split it. The "Split the Domain..." dialog will be shown as below: The domain values for this variable are displayed. You can manually select which domain values you would like to keep, to solve the CSP with, by clicking on the corresponding value checkboxes. You can also allow the applet to select the first half of the values to keep, or randomly select values to keep.

Once you have a reduced domain, some of the arcs in the CSP that were green will have turned blue, meaning that the CSP has to be made arc consistent again. After each split, the reduced domain values of a variable will appear in the 'Domain-splitting History' panel at the bottom of the applet window. A solution to the CSP may not exist given a certain split. In this case a failure for that variable assignment will appear in the 'Domain-splitting History' panel. You can then recover the variable domain values that you discarded when you split a domain by clicking on the 'Backtrack' button and then try solving again. You may have to split, solve, and backtrack through a CSP recursively until a solution is found. The history of this process is kept for you in the 'Domain-splitting History' panel.
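The arc-consistency procedure the applet animates is essentially the classic AC-3 algorithm. A minimal sketch in Python (an illustration of the idea, not the applet's actual code):

```python
from collections import deque

# Minimal AC-3 sketch: domains is {var: set(values)}; constraints maps each
# directed arc (x, y) to a predicate pred(vx, vy) over a value of x and y.
def ac3(domains, constraints):
    queue = deque(constraints)  # every arc starts unchecked ("blue")
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove values of x with no supporting value of y ("making the arc green").
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False  # empty domain: the CSP has no solution
            # Other arcs into x may have become inconsistent again ("turn blue").
            for (a, b) in constraints:
                if b == x and a != y:
                    queue.append((a, b))
    return True

if __name__ == "__main__":
    doms = {"A": {1, 2, 3}, "B": {1, 2, 3}}
    cons = {("A", "B"): lambda a, b: a < b,
            ("B", "A"): lambda b, a: a < b}
    ac3(doms, cons)
    print(doms)
```

Running it on two variables A, B with domains {1, 2, 3} and the constraint A < B prunes the domains to A ∈ {1, 2}, B ∈ {2, 3}, mirroring the pruning you watch happen in the applet.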
{"url":"http://aispace.org/constraint/help/tutorial3.shtml","timestamp":"2014-04-16T07:21:21Z","content_type":null,"content_length":"13361","record_id":"<urn:uuid:9f352450-8aad-44b8-b44d-a985ad8f9e5a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Please help. 3(y-2)/5 = 1 - 3y

First distribute the 3 into (y-2) to get rid of the parentheses. What do you get?

3y - 6

OK, so now, since 3y-6 divided by 5 doesn't come out evenly, you can multiply both sides by 5 instead. Now what do you have?

3y - 6 = 1 - 3y * 5

.....ok um, let's focus on the right side. To get rid of /5 from the left you have to multiply everything on the right by 5. What do you get?

What's 3x5?

Oh lol, 15. Wasn't sure if I multiply something w. a variable next to it.

Lol, yeah you do.... so do you get how to do the rest of it?

Now divide both sides by 18.

My study guide says the answer should be 11/18. Not sure how though.

Nv Ty :)

uhhuh ^^

With equations like this you have to follow your order of operations. First parentheses: 3(y-2)/5 = 1-3y, therefore you have (3y-6)/5 = 1-3y.
Next you have division: multiply both sides by 5 to cancel out the 5 on the left, 5((3y-6)/5) = (1-3y)5, which gives you 3y-6 = 5-15y. Now we are ready for the last step of the order of operations before we start over: add/subtract from both sides, and it will look like this: 18y = 11. We now have to start back from the top to get y by itself; divide both sides by 18 to cancel. Final answer: y = 11/18.
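The final answer is easy to machine-check by substituting y = 11/18 back into both sides with exact rational arithmetic (a quick sketch, not part of the thread):

```python
from fractions import Fraction

# Substitute y = 11/18 back into both sides of 3(y-2)/5 = 1 - 3y.
y = Fraction(11, 18)
lhs = 3 * (y - 2) / 5
rhs = 1 - 3 * y
print(lhs, rhs)  # both sides come out to -5/6
```

Both sides equal -5/6, confirming y = 11/18.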
Using for loop to calculate sine function (factorial issue)

March 11th, 2011, 02:17 PM, post #1 (Join Date: Mar 2011, Thanked 4 Times in 4 Posts)

Hi again, I was working on another homework assignment and I got stuck trying to figure out why my program wasn't computing the right answer. The assignment was basically to write a program that computes the sine function (as closely as you possibly can) without using Math.sin(). I believe that the for loop in my original code is not computing the factorial of the right variable. I'm pretty sure the equation is right as well. I figured out the series needed to compute sine:

    sin(x) = sum over n = 0, 1, 2, ... of ((-1)^n * x^(2n+1)) / (2n+1)!

Here's my original code:

    /* This program is supposed to compute the Sine function without using Math.sin */
    import java.io.*;

    public class HW5 {
        public static void main(String args[]) throws IOException {
            BufferedReader keybd = new BufferedReader(new InputStreamReader(System.in));
            String s;
            double Rad, Sine;
            int MaxE, i;
            Rad = 0.0;

            System.out.println("Degrees (d)? or Radians (r)? Please enter (d) or (r): ");
            s = keybd.readLine();
            s = s.toLowerCase();
            s = s.trim();
            if (s.equals("d")) {
                System.out.println("Enter a value (Degrees): ");
                Rad = Double.parseDouble(keybd.readLine());
                Rad = (Math.PI * Rad) / 180;
            } else if (s.equals("r")) {
                System.out.println("Enter a value (Radians): ");
                Rad = Double.parseDouble(keybd.readLine());
            }

            System.out.println("Enter the desired maximum exponent: ");
            MaxE = Integer.parseInt(keybd.readLine());

            i = (2 * MaxE) + 1;
            // This is where I think I messed up and confused myself
            for (int a = 0; a <= MaxE; a++) {
                i = a * i;
            }
            // I put the ! to indicate where I wanted the factorial to go
            Sine = ((Math.pow(-1.0, MaxE)) * (Math.pow(Rad, i)) / i!);
            System.out.println("The value of Sine is " + Sine);
        }
    }

My first problem was trying to figure out how to compute a factorial. I looked through the tutorials and found some code for recursive factorials but I couldn't get it to work.
I ended up finding code that I got to work, but the problem is that I'm not sure how I could incorporate it into my original code.

Factorial program:

    import java.io.*;

    public class s {
        public static void main(String args[]) {
            try {
                BufferedReader object = new BufferedReader(new InputStreamReader(System.in));
                System.out.println("Enter the number");
                int a = Integer.parseInt(object.readLine());
                int fact = 1;
                for (int i = 1; i <= a; i++) {
                    fact = fact * i;
                }
                System.out.println("Factorial of " + a + ": " + fact);
            } catch (Exception e) {}
        }
    }

I apologize for this being so long, but I'm not sure if my logic is right or not. Also, does Java have a built-in factorial method, or did I have to make a loop for it? Any help is appreciated.

Reply from copeg: If you don't know, you first should figure out how the code you found to calculate the factorial works. How does it calculate the factorial? Knowing this will put you in a much better position to be able to reproduce its functionality in your code. What variable(s) are necessary, and what variable(s) does it produce as a result? Break the problem down, and perhaps to help you do so, just isolate this functionality by writing a method that computes the factorial of a parameter and returns the result - this will let you focus primarily on this portion of the code independent of the rest.

Code Tags | Java Tutorials | SSCCE | Getting Help | What Not To Do

The Following User Says Thank You to copeg For This Useful Post: Actinistia (March 16th, 2011)

Quoting copeg: If you don't know, you first should figure out how the code you found to calculate the factorial works. How does it calculate the factorial? Knowing this will put you in a much better position to be able to reproduce its functionality in your code. What variable(s) are necessary, and what variable(s) does it produce as a result?
Break the problem down, and perhaps to help you do so, just isolate this functionality by writing a method that computes the factorial of a parameter and returns the result - this will let you focus primarily on this portion of the code independent of the rest.

Thanks for the suggestion. I was just about to go through that, in fact. I felt that just sitting around and waiting isn't going to help, so I'm going through all my code and writing down what each part does and what I intended it to do. I should have done this first instead of asking for help right away. Hopefully this will be solved soon. Thanks again.

(Post #2: March 11th, 2011, 02:33 PM, Super Moderator, Join Date Oct 2009, Thanked 779 Times in 725 Posts. Post #3: March 11th, 2011, 02:38 PM, Join Date Mar 2011, Thanked 4 Times in 4 Posts.)
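For what it's worth, copeg's advice (pull the factorial out into its own method) might end up looking something like the sketch below. The class and method names are my own, and this is one possible fix rather than the thread's final answer. It also shows a second variant that derives each term of the series from the previous one, which avoids computing large factorials at all:

```java
public class SineDemo {
    // Factorial as its own method, as suggested in the thread.
    // long overflows past 20!, so keep n small.
    static long factorial(int n) {
        long fact = 1;
        for (int i = 2; i <= n; i++) {
            fact = fact * i;
        }
        return fact;
    }

    // Direct translation of the series: sum of (-1)^n * x^(2n+1) / (2n+1)!
    // for n = 0..maxN. Safe only while (2*maxN + 1)! fits in a long.
    static double sineDirect(double x, int maxN) {
        double sum = 0.0;
        for (int n = 0; n <= maxN; n++) {
            sum += Math.pow(-1.0, n) * Math.pow(x, 2 * n + 1) / factorial(2 * n + 1);
        }
        return sum;
    }

    // Sturdier variant: each term comes from the previous one via
    //   term_n = -term_(n-1) * x^2 / ((2n) * (2n+1))
    // so there are no explicit powers or factorials, and no overflow.
    static double sineIncremental(double x, int maxN) {
        double term = x;   // n = 0 term: x^1 / 1!
        double sum = term;
        for (int n = 1; n <= maxN; n++) {
            term = -term * x * x / ((2 * n) * (2 * n + 1));
            sum += term;
        }
        return sum;
    }

    public static void main(String[] args) {
        double x = Math.PI / 6; // 30 degrees in radians
        System.out.println("direct:      " + sineDirect(x, 9));
        System.out.println("incremental: " + sineIncremental(x, 9));
        System.out.println("Math.sin:    " + Math.sin(x));
    }
}
```

With maxN = 9 the direct version already matches Math.sin to many decimal places for small x, but its factorial(19) is close to the limit of long; the incremental version can take maxN as high as you like.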