content: string (lengths 86 to 994k); meta: string (lengths 288 to 619)
I need help with a graph theory question - Homework Help - eNotes.com

Let T be a tree with more than one vertex. Prove that T must have at least two vertices of degree 1.

Proof by induction.
I. Initial condition. Let T be a tree with 2 vertices. Then T has two vertices of degree 1, by the definition of a tree (a connected graph without cycles).
II. Rule of induction. Let U be a tree with n vertices, and assume that U has at least 2 vertices of degree 1. Adding another vertex, joined by a single edge so that the result is still a tree, will either (i) maintain the number of vertices of degree 1 (by connecting to a vertex of degree one) or (ii) increase the number of vertices of degree 1 (by connecting to a vertex of degree greater than 1). Since every tree with n + 1 vertices arises this way from a tree with n vertices (remove an endpoint of a longest path, which must have degree 1), taking I and II together means that any tree with 2 or more vertices will contain at least two vertices of degree 1. QED
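A shorter, non-inductive argument (standard, added here only for comparison; it is not part of the original answer) uses the degree-sum formula. A tree with $n \ge 2$ vertices is connected and has exactly $n - 1$ edges, so every vertex has degree at least 1 and $\sum_{v} \deg(v) = 2(n-1)$. If at most one vertex had degree 1, then at least $n - 1$ vertices would have degree 2 or more, giving $\sum_{v} \deg(v) \ge 2(n-1) + 1 = 2n - 1 > 2n - 2$, a contradiction. Hence T has at least two vertices of degree 1.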
{"url":"http://www.enotes.com/homework-help/need-help-witha-graph-theory-question-293099","timestamp":"2014-04-17T21:29:29Z","content_type":null,"content_length":"25332","record_id":"<urn:uuid:c76b2bbc-59eb-4f7a-a152-ed741e222630>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Adventure in two colors - Implementing an std::map-like structure in C#

As a programmer who came from C++, I miss some aspects of the STL containers which were not implemented in the standard .NET collections. Specifically, the ability to search for an item that is not necessarily in the collection and get the closest match. These are the lower_bound and upper_bound of the std::map. The need for this functionality (lower_bound and upper_bound) emerged while trying to display tags along the time axis. When the view was zoomed, the subset of tags within the zoomed range needed to be fetched. If the set of tags were static, then sorting once and then using the generic List<> could do the trick, since it provides the BinarySearch method. However, the set was dynamic, so another approach was needed. Here SortedMap<TKey, TValue> is presented: a generic collection based on the Red Black tree data structure. It provides lower_bound and upper_bound in addition to the standard collection interface. If you need the closest-match functionality, you are welcome to read ahead and use the code. Even if you don't care about lower/upper bound, you might find the discussion about testing useful, since it presents some of the primary testing principles that, in my opinion, are important for creating robust and (relatively) bug-free software.

Red Black Trees
The Red Black tree is described in many places, so we'll only present it briefly here. It is a binary tree, with each node painted either black or red (perhaps inspired by the roulette colors). A Red Black tree satisfies the following properties:
1. Every leaf is black.
2. If a node is red, then its children are black.
3. Every path from a leaf to the root contains the same number of black nodes.
It is not hard to prove using the properties above that the height of a Red Black tree is O(log n). Hence searching for an element is O(log n). Searching for a closest match, rather than an exact match, is easy enough: just search for the element, and if it is not found, return the node where the search stopped. Inserting an element into a Red Black tree is composed of two steps:
• First, perform a standard binary tree insert.
• Second, fix the tree so that its properties are preserved.
Removing an element is similar: a standard remove followed by steps to preserve the Red Black properties. It follows that insertion and removal are also O(log n).

Using the code
SortedMap is composed of two layers:
• RedBlackTree<T> (in RedBlackTree.cs) - implementation of the Red Black data structure.
• SortedMap<TKey, TValue> (in SortedMap.cs) - a wrapper of the Red Black tree as a standard collection implementing IDictionary<>, ICollection<>, and IEnumerable<>.
Other classes in the project:
• GenericEnumerator<T> (in GenericEnumerator.cs) - helper for the SortedMap.
• PermutationGenerator (in PermutationGenerator.cs) - creates all permutations of a specific size; used for testing.
• TreeVisualizer<T> (in TreeVisualizer.cs) - displays the tree in a graphical way; used for testing.
• RedBlackTreeTester (in RedBlackTreeTester.cs) - tests the RedBlackTree.
• SortedMapTester (in SortedMapTester.cs) - tests the SortedMap.
If you need a standard collection, use the SortedMap class; you will need SortedMap.cs, RedBlackTree.cs, and GenericEnumerator.cs. If you do not care about standard interfaces, use the RedBlackTree class directly - only RedBlackTree.cs is needed.
RedBlackTree<T> provides the following methods (note that it has only one type argument):
• Clear() - empties the tree
• void Add(T item) - adds a new item to the tree
• void Remove(T item) - removes an existing item
• TreeNode Find(T item) - finds an exact match
• TreeNode FindGreaterEqual(T item) - finds either an exact match or the next item
• TreeNode First() - returns the first node in the tree
• TreeNode Next(T item) - returns the next node
• bool IsValid() - checks whether the tree is a valid Red Black tree (for testing)
• bool TravelTree(TreeVisitor visitor) - travels the tree and applies the visitor to each node (for testing)

SortedMap<TKey, TValue> provides the closest-match functionality in addition to the standard collection methods:
• TValue LowerBound(TKey key) - returns the first element whose key is no less than the provided key
• TValue UpperBound(TKey key) - returns the first element whose key is greater than the provided key
• IEnumerable<KeyValuePair<TKey, TValue>> LowerBoundItems(TKey key) - returns an enumerator starting with the lower-bound item
• IEnumerable<KeyValuePair<TKey, TValue>> UpperBoundItems(TKey key) - returns an enumerator starting with the upper-bound item

It is surprising how much code is needed in order to create a standard collection. The SortedMap is almost 1000 lines, although it does not contain any logic - it just implements standard interfaces. The following code snippet demonstrates how to retrieve all items with keys that are greater than a particular key:

SortedMap<int, double> map = new SortedMap<int, double>();
int key = someValue;
foreach (KeyValuePair<int, double> pair in map.UpperBoundItems(key))
{
    // Do something
}

As mentioned above, insertion, removal, and search for a key (both exact and closest match) cost O(log n) operations. Moving to the next element is O(log n) worst case, O(1) amortized. This means that a single Next operation may cost O(log n), but moving along the whole tree costs O(n) operations, so the average cost for a single operation is O(1). The following table summarizes the complexity of the various operations:

Operation      Complexity
Insertion      O(log n)
Removal        O(log n)
Search         O(log n)
Next element   O(log n) (amortized O(1))

Testing the Tree
The first question that comes to mind when creating this kind of data structure is why this particular choice of colors (red and black). The second question is how to make sure that the structure fulfills its requirements. One answer is a validity check. I added a method IsValid to the RedBlackTree class that checks the validity of the structure and makes sure, in particular, that the count of black nodes from every leaf is the same. During testing, this method is called after each operation in order to locate the exact point when the tree became invalid. However, this is not enough, since the tree may be valid but contain wrong data. For this I used the "compare to a simpler data structure" technique. Each operation on the tree was also applied to a List<>. The content of the tree was compared to the content of the list after each modification, to verify that the content of the tree is correct. Two automatic tests were used to test the tree: the first was adding to the tree all permutations of the numbers 1 to 9. The second was to randomly add and remove values while checking the tree's validity. I often use this kind of random test since in most cases it will reveal bugs in unexpected situations. However, this was not enough: the IsValid check failed and the tree was too complex to understand the problem.
Here the TreeVisualizer came to the rescue. It is a simple class that displays the tree in a text window (you can see a snapshot at the top of the article). Using it within the tree implementation greatly helped to pinpoint the problems. So, to summarize:
• Validation - Write a method that tests the requirements and invoke it during testing.
• Compare to a simpler structure - If possible, compare the new data structure to a simpler structure (preferably a standard one).
• Automatic Random Testing - It will reveal situations that you didn't foresee.
• Visualization - It helps to visualize the data structure. Sometimes it's a simple dump and other times a graphical image.
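To make the "compare to a simpler data structure" and random-testing ideas concrete, here is a rough sketch of what such a test loop could look like. It uses the RedBlackTree<T> methods listed above (Add, Remove, Find, IsValid); the List<int> bookkeeping, the value range, and the assumption that Find returns null for a missing item are illustrative choices of mine, not code from the article.

// Illustrative sketch only: every operation is mirrored in a plain List<int>,
// and the tree is validated and compared against it after each step.
Random rng = new Random();
RedBlackTree<int> tree = new RedBlackTree<int>();
List<int> reference = new List<int>();

for (int step = 0; step < 10000; step++)
{
    int value = rng.Next(1000);
    if (reference.Contains(value))
    {
        tree.Remove(value);       // mirror the removal in both structures
        reference.Remove(value);
    }
    else
    {
        tree.Add(value);          // mirror the insertion in both structures
        reference.Add(value);
    }

    if (!tree.IsValid())          // the Red Black invariants must hold after every step
        throw new Exception("Invalid tree at step " + step);

    foreach (int v in reference)  // everything in the simple structure must be in the tree
        if (tree.Find(v) == null)
            throw new Exception("Value " + v + " missing from tree at step " + step);
}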
{"url":"http://www.codeproject.com/Articles/408248/Adventure-in-two-colors-Implementing-an-std-map-li","timestamp":"2014-04-20T16:26:14Z","content_type":null,"content_length":"69302","record_id":"<urn:uuid:42124073-6a2c-41a0-bef6-bcde0555d336>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Semisimple-ish rings!

Let S be the class of all rings R which have 1 and satisfy this condition: for every "non-zero" right ideal I of R there exists a "proper" right ideal J of R such that I + J = R. (The + here is not necessarily direct.) All semisimple rings are in S, and (commutative) local rings which are not fields are not in S. The ring of integers Z is also in S, and so S properly contains the class of semisimple rings. My questions: Will this condition by itself force an element of S to have any (known, interesting) structure? A more important question: What about simple rings which are in S? For example, do they have to be semisimple? (Unlikely!)

1 Answer

By Zorn's lemma, each right ideal is contained in a maximal right ideal, therefore if $I+J = R$ then $I+M = R$ where $M$ is a maximal right ideal. If $I+M\ne R$ for all maximal right ideals $M$ then $I\subseteq M$ for all maximal right ideals $M$. Thus $I\subseteq J(R)$, the Jacobson radical of $R$, which is the intersection of all maximal right ideals of $R$. Hence condition $S$ is equivalent to $J(R)=0$. A ring with vanishing Jacobson radical is called semiprimitive. As $J(R)$ is also the intersection of the maximal left ideals of $R$, the property of semiprimitivity is left-right symmetric. There are plenty of examples of semiprimitive rings which are not semisimple. For instance, every simple ring is semiprimitive and every subdirect product of semiprimitive rings is semiprimitive ($\mathbb{Z}$ is a subdirect product of finite fields). As a reference see Section 10.4 of P. M. Cohn, Algebra (2nd ed., vol. 3), Wiley, 1991.

That's great! Thank you very much. – carlos Jun 28 '10 at 7:06

The type of right ideals which do not have such a complement are exactly the superfluous (or small) right ideals. As the excellent answer above shows, rings with $J(R)=0$ are the rings without small right or left ideals. Going one step further, semisimple rings (right Artinian + $J(R)=0$) are the rings without essential (or large) right ideals. It's interesting that "no essential right ideals" implies its dual, "no superfluous right ideals". It's somewhat akin to right Artinian implying right Noetherian in rings. – rschwieb Dec 17 '11 at 12:58
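As a concrete illustration of the criterion in the answer above (a standard fact, not part of the original thread): for $R = \mathbb{Z}$ the maximal ideals are the $p\mathbb{Z}$ for primes $p$, so $J(\mathbb{Z}) = \bigcap_p p\mathbb{Z} = 0$. Hence $\mathbb{Z}$ is semiprimitive and lies in $S$, exactly as the questioner observed, yet it is not semisimple, since it is not Artinian: the chain $\mathbb{Z} \supsetneq 2\mathbb{Z} \supsetneq 4\mathbb{Z} \supsetneq \cdots$ never stabilizes.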
{"url":"http://mathoverflow.net/questions/29745/semisimple-ish-rings","timestamp":"2014-04-17T12:57:59Z","content_type":null,"content_length":"54240","record_id":"<urn:uuid:070629b8-a78c-4d3f-9e49-b043b20d7eb3>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Strawberry, AZ Math Tutor Find a Strawberry, AZ Math Tutor ...I believe that everyone has the ability to learn and make good grades. This belief is what attracted me to tutoring 10 years ago. Since then, I have prospered as a professional tutor and helped many people achieve the grades they have desired. 11 Subjects: including prealgebra, algebra 1, algebra 2, calculus ...One very important thing to understand is that there are multiple ways to solve a problem. If you reach a solution that satisfies the criteria given in the problem, and can explain how you arrived at that solution, the response should never be "But you can't do it that way!" Any engineer will t... 10 Subjects: including calculus, algebra 1, algebra 2, chemistry ...To assist students for our STEM majors, we must help them understand the basics. Although I do not have formal tutoring experience, I have helped classmates with homework and in understanding concepts during my entire academic experience. I can have patience for those learning mathematics and the sciences, as well as for those interested in learning Spanish. 12 Subjects: including algebra 2, differential equations, linear algebra, electrical engineering ...Soon he was doing them just fine. After about a month he was pulling an A in his class. Everybody has there strong points in life I believe, this is just one of my strongest. 8 Subjects: including prealgebra, reading, elementary math, music theory ...I have worked with computers for over 20 years and strictly in Windows. From Windows 3.11, Windows98SE, and Windows XP (both Home and Professional) editions. I have a major in the Social Sciences as an undergrad at Indiana University and have taught sociology. 38 Subjects: including ACT Math, algebra 1, reading, English Related Strawberry, AZ Tutors Strawberry, AZ Accounting Tutors Strawberry, AZ ACT Tutors Strawberry, AZ Algebra Tutors Strawberry, AZ Algebra 2 Tutors Strawberry, AZ Calculus Tutors Strawberry, AZ Geometry Tutors Strawberry, AZ Math Tutors Strawberry, AZ Prealgebra Tutors Strawberry, AZ Precalculus Tutors Strawberry, AZ SAT Tutors Strawberry, AZ SAT Math Tutors Strawberry, AZ Science Tutors Strawberry, AZ Statistics Tutors Strawberry, AZ Trigonometry Tutors Nearby Cities With Math Tutor Bensch Ranch, AZ Math Tutors Bitahochee, AZ Math Tutors Black Mesa, AZ Math Tutors Circle City, AZ Math Tutors Dilkon, AZ Math Tutors Groom Creek, AZ Math Tutors Iron Springs Math Tutors Leupp Corner, AZ Math Tutors Litchfield, AZ Math Tutors Peeples Valley, AZ Math Tutors Pine Math Tutors Red Lake, AZ Math Tutors Shumway, AZ Math Tutors Tolani Lakes, AZ Math Tutors Tolani, AZ Math Tutors
{"url":"http://www.purplemath.com/Strawberry_AZ_Math_tutors.php","timestamp":"2014-04-19T12:42:48Z","content_type":null,"content_length":"23864","record_id":"<urn:uuid:163a53d6-657a-4ed3-a320-69990104d1b0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Zariski Geometries: Geometry from the Logician's Point of View In the last two decades or so a bridge has been built between algebraic geometry and the model theory of stable algebraic structures. An important part of this bridge is an axiomatization of the geometry underlying algebraic varieties and complex-analytic manifolds, the so-called Zariski geometries of E. Hrushovski and B. Zilber (see for example “Zariski Geometries”, J. Amer. Math. Soc. 9 (1996), 1–56 and Bull. Amer. Math. Soc. 28 (1993), 315–323). This interaction has brought several important applications of model theory to Diophantine geometry. One such application is Hrushovski’s proof of the Mordell-Lang conjecture for function fields (J. Amer. Math. Soc. 9 (1996), 667–690). Perhaps the starting point of these developments is Tarski’s theorem on elimination of quantifiers for the theory of algebraically closed fields: It is known that this is equivalent to Chevalley’s theorem in algebraic geometry (the image of a constructible set is constructible). But the turning point was Morley’s theorem, the beginning of stability theory. Morley’s theorem characterizes the isomorphism type of a structure by its model-theory description and its cardinality. Stability gives rise to a hierarchy of theories, with theories of finite Morley rank and categorical for uncountable cardinals at the top of this hierarchy. Classifying such theories is a central goal of model theory. It was when working towards this classification that certain algebro-geometric ideas have proved to be essential, since purely logical conditions were proved to be insufficient. This is when the notion of a Zariski structure came along. Roughly speaking, a Zariski geometry specifies a family of relations on the structure M, requiring that for each nonnegative integer n, the subsets of the Cartesian product M^n satisfy the axioms for a Noetherian topology; a notion of dimension for M is also required. Examples of Zariski geometries are, of course, the Zariski topology on an algebraic variety over an algebraically closed field, where the dimension is the classical Krull dimension. Another class of examples is given by compact complex manifolds with the topology given by the analytic subsets. One further example corresponds to the rigid varieties over non-archimedean fields. For a finite group G acting on an algebraic variety M, the set of orbits M/G with the natural topology is a Zariski structure: an orbifold. Notice that, in general, the orbifold M/G is not an algebraic variety. A central, and still open, problem is the classification of Zariski geometries. This has already been done in the one-dimensional case where it is shown that one-dimensional Zariski geometries are essentially algebraic curves over an algebraic closed field. The book under review devotes Chapters 3 and 4 to proving this theorem. Some facts from model theory are recalled in Chapter 1, and Chapter 2 sets the basic notions of topological structures (compact Noetherian topologies, irreducibility, constructible sets) into the formal language of mathematical logic. Chapter 5 deals with some examples of non classical Zariski structures, e.g., Zariski structures that are not interpretable as algebraic varieties over algebraically closed fields. Chapter 6 is devoted to some generalized Zariski structures that are obtained by weakening or dropping some conditions, for instance by dropping the Noetherian condition. The book has two appendices. 
The first one collects some general facts on formal languages and interpretations, a few results from model theory (e.g., the compactness theorem and the Löwenheim-Skolem theorem), model completeness and categoricity, illustrating these concepts with examples that are used throughout the book. For the important example of the theory of algebraically closed fields on p. 174, the notions of transcendence basis and transcendence degree should be amended to take into account that these are relative notions, and so Steinitz's theorem on the same page should also be corrected. The second appendix collects some useful results on geometric stability theory. Felipe Zaldivar is Professor of Mathematics at the Universidad Autonoma Metropolitana-I, in Mexico City. His e-mail address is fzc@oso.izt.uam.mx
{"url":"http://www.maa.org/publications/maa-reviews/zariski-geometries-geometry-from-the-logicians-point-of-view","timestamp":"2014-04-20T03:45:29Z","content_type":null,"content_length":"99197","record_id":"<urn:uuid:6c5c3672-615e-452a-8425-e2a0e207477b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
fpower: Power computations for ANOVA designs
SAS Macro Programs: fpower
Version: 1.2 (24 Mar 1995)
Michael Friendly, York University

The fpower macro - Power computations for ANOVA designs

The fpower macro computes the power of an F-test for one effect in a one- or n-way design, with or without repeated measures, assuming main effects are fixed. The alternative used is a minimum power alternative. Actually, the program can be used for ANY fixed effect in ANY crossed factorial design, by designating the levels of the effect of interest as A, and the levels of all other crossed factors as B. If the design has repeated measures, the intraclass correlation (RHO) is assumed to be positive and constant across all repeated measures. The macro can calculate power for a range of sample sizes (N) and a range of effect sizes (DELTA). Ordinarily, the program produces printed output in the form of a Power Table, listing the power value for each combination of sample size and effect size. In addition, the program can rearrange this information into a Sample Size Table, showing the sample size required for given effect size and power values. An output dataset is also created for plotting or saving, and it contains an observation for each entry.

Effect Size
Effect size (delta) is specified by the difference between the largest mean and the smallest mean, in units of the within-cell standard deviation (sigma = the square root of the MSE):

    delta = (largest mean - smallest mean) / sigma

The minimum power specification corresponds to the alternative hypothesis that all means other than the two extreme ones are equal to the grand mean, and the two extreme factor-level means are

    T1 = GM - DELTA/2
    Tk = GM + DELTA/2

where DELTA is specified in units of SIGMA = SQRT(MSE). The computations assume: (a) fixed effects, and (b) equal sample sizes in all treatments. Under these assumptions, the non-centrality parameter of the F-distribution can be calculated as N*(delta^2)/2, where N is the sample size per treatment. Effect size delta values are typically in the range of 0 - 3. In social science applications, values of delta = 0.25, 0.75, and 1.25 or greater correspond to "small", "medium", and "large" effects, according to Cohen, Statistical Power Analysis for the Behavioral Sciences.

fpower is a macro program. Only the A= parameter (number of levels of the effect of interest) need be specified. The arguments may be listed within parentheses in any order, separated by commas. For example: %fpower(A=4, ..., )

A= - Number of levels of the effect for which power is to be calculated. Ordinarily, this will be the number of levels of a main effect. However, to calculate the power for an interaction of two factors in a 2 x 3 design, set A=2*3=6.
B= - Number of levels of a factor B crossed with A (default=1).
C= - Levels of crossed factor C (default=1). For >3 factors, make C = the product of the numbers of levels of factors D, E, etc.
Number of levels of a repeated-measure factor crossed with effect A.
Significance level of the test of effect A.
N= %str( 2 to 10 by 1, 12 to 18 by 2, 20 to 40 by 5, 50) - List of sample sizes for which power is to be calculated. A separate computation is performed for each value specified. You may specify a single value, a list of values separated by commas, a range of the form x TO y BY z, or a combination of these. However, you must surround the N= value with %STR() if any commas appear in it.
For example,
    n=10 to 30 by 5
    n=%str(2, 5, 6, 8)
    n=%str( 2 to 10 by 1, 12 to 18 by 2, 20 to 40 by 5, 50)
DELTA= .50 to 2.5 by 0.25 - List of DELTA values for which power is to be calculated. A separate computation is performed for each value specified. You may specify a single value, a list of values separated by commas, a range of the form x TO y BY z, or a combination of these. However, you must surround the DELTA= value with %STR() if any commas appear in it.
Intraclass correlation for repeated measures (a list of values, like N= and DELTA=).
Print a power table?
Plot power*delta=N ?
Print a sample-size table?
The name of the output dataset.

Examples
To determine power or sample size for 5 groups in a one-way design, using the default DELTA and N= options:
    %include macros(fpower);   *-- or include in an autocall library;
    %fpower(a=5);
To determine the power or sample size for the BxC interaction in a 4x3x2 design, specify a=6 (the combinations of factors B and C) and b=4 (levels of factor A for each BC combination). The delta values here refer to the BC treatment means:
    %fpower(a=6, b=4);

See also
WWW form interface for the fpower macro
mpower - Retrospective power analysis for multivariate GLMs
rpower - Retrospective power analysis for univariate GLMs
{"url":"http://www.datavis.ca/sasmac/fpower.html","timestamp":"2014-04-17T19:17:39Z","content_type":null,"content_length":"7547","record_id":"<urn:uuid:253f650f-1c3d-4083-8432-24579de76646>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
A package is dropped at time t = 0 from a helicopter that is descending steadily at a speed vi.

I need help with this physics problem. A package is dropped at time t = 0 from a helicopter that is descending steadily at a speed vi.
(a) What is the speed of the package in terms of vi, g, and t? (Use the following variables as necessary: v_i for vi, g and t.) |vp| =
(b) What vertical distance d is it from the helicopter in terms of g and t? d =
(c) What are the answers in parts (a) and (b) if the helicopter is rising steadily at the same speed? |vp| = , d =

1 Answer

(a) v = u + at, so Vf = Vi + gt. Taking downward as positive, the package is released already moving downward at Vi, so |vp| = Vi + gt <== part (a) answer

(b) D = ut + (1/2)at^2, so Dp = Vi*t + (1/2)g*t^2, where Dp is the distance the package has dropped. However, you have to subtract out the distance that the helicopter has travelled downwards, Dh = Vi*t. So the total distance between the helicopter and the package can be expressed as D = Dp - Dh = Vi*t + (1/2)g*t^2 - Vi*t, which simplifies to D = (1/2)g*t^2 <== part (b) answer

(c) If the helicopter is rising steadily at Vi, the package is released moving upward at Vi and then accelerates downward at g, so its speed is |vp| = |g*t - Vi| (it slows, momentarily stops at t = Vi/g, then falls). The separation, however, does not change: the package and the helicopter start with the same velocity and their relative acceleration is g, so d = (1/2)g*t^2, the same as in part (b). <== part (c) answer
{"url":"http://www.anyanswer.org/question/ae96818dbde8dd432ede6d4cb5912f6c/a-package-is-dropped-at-time-t-0-from-a-helicopter-that-is/","timestamp":"2014-04-18T10:35:02Z","content_type":null,"content_length":"12541","record_id":"<urn:uuid:b7c3fb36-8c26-4c39-b4eb-17ddf50c0856>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Fluid Dynamics Fluid dynamics is a rich field which can be used to, e.g., model the behavior of gases and liquids or to calculate forces on airplanes or even to study the weather. Fluid dynamics is a very complex subject and many of its aspects are still not fully understood, turbulence being a prime example. There are many approaches to model fluid dynamics which vary according to their goals. For my project I have used cellular automata to model a two-dimensional fluid travelling between 2 walls. Cellular automata are arrays of cells which contain discrete (often boolean) values. If it is large enough, it is possible to coarse-grain the array and to then observe continuous macroscopic behavior. This makes it a good candidate to simulate fluids. When it is used for this purpose, an array is populated by particles whose velocities may be one of a few choices and Wolfram has shown that, for certain choices of lattices in two and three dimensions, the hydrodynamical equations derived from cellular automata yield slightly modified Navier-Stokes equations [2]. The Model I studied a two-dimensional fluid flowing between two walls. I used a hexagonal array (to preserve isotropy [2]) in which each site can contain up to 6 particles. Each particle can be uniquely described by its position in the array and its velocity (i.e. no two particles may occupy the same site and have the same velocity). This allows us to use a boolean array, which will simplify all of the calculations considerably. Velocities will then be v = (cos(60*a), sin(60*a)) where a = 0, 1, ..., 5. The figure below, taken from Wolfram's paper on fluid dynamics [2], shows what such a lattice could look like at a given time. At the beginning of the simulation, 52000 particles are randomly placed in 24000 cells, close to the suggested particle density of 2.1 particles per cell [3]. All of the particles initially have a equal to 0, 1 or 5 to simulate a flow. At each time-step (of which there were 10000 in my simulation), all of the particles move one lattice site in a direction according to v and, if they meet another particle at their new lattice site, they collide in a way that conserves momentum. Also, if the particle hits a smooth wall it bounces off it elastically whereas it bounces randomly off a rough wall. After all of these steps happen, we can calculate macroscopic quantities such as total velocity and particle density. Finally, as there will clearly be some particles leaving the studied region in the direction of the flow, we must add new particles each time-step to simulate the incoming flow. The figure below, also taken from Wolfram's paper on fluid dynamics [2], shows some possible "in states" and some of their corresponding "out states". As the fluid is constructed randomly, we cannot expect it to be in equilibrium immediately. Therefore, a first 1000 iterations occur before any macroscopic quantities are recorded. After those initial iterations, I averaged all recorded macroscopic quantities over a large number of iterations. I was interested in seeing how the average fluid velocity depends upon the fluid's distance to the walls. I studied this for both smooth (code) and rough (code) walls and obtained quite different Here Vx is in the direction of the fluid flow and Vy is transverse to it. As should be expected, Vy averages out to zero, except at the walls (as the particles can't travel into them). 
What is interesting is that, for smooth walls, Vx only drops right at the walls but rough walls have a larger effect and noticeably slow the flow to a significant distance away from them. I also looked at what happened during the initial evolution from inequilibrium to a stable configuration. Below are the graphs showing what the fluid flow looked like, averaged over the first hundred iterations (#1), iterations 101 to 200 (#2), 201 to 300 (#3) and 301 to 400 (#4). We can see that initially the fluid flow was more homogeneous and, as the effect of the rough walls began to spread, the fluid's velocity began to depend more importantly on its distance from the After studying how a fluid's velocity depended on its distance from the walls, I saw that smooth walls have a minimal effect whereas rough walls slow down the flow in their vicinity. An interesting (though difficult) extension to this project would be to do the same study in three dimensions and to incorporate gravity into the model. This model would be particularly difficult to implement as lattices which preserve isotropy in three dimensions are quite exotic. However, the results of such a model would most likely be very interesting. [1] T. Pang, An Introduction to Computational Physics, 2nd Ed., Cambridge University Press (2006). [2] S. Wolfram, J. Stat. Phys. 45, 471 (1986). [3] G. R. McNamara and G. Zanetti, Phys. Rev. Lett. 61, 20 (1988). Further reading: [4] G. Gallavotti, Foundations of Fluid Dynamics, Springer (2002). [5] T. J. Chung, Computational Fluid Dynamics, Cambridge University Press (2002).
{"url":"http://www.personal.psu.edu/euw122/PHYS_527/project/","timestamp":"2014-04-17T15:27:24Z","content_type":null,"content_length":"6023","record_id":"<urn:uuid:f6416d84-a8cd-4f1f-84cc-83d7a19ffe7a>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
The driver of a car moving on a straight road applies brakes to come to rest, with a constant retardation 'a'. Assuming that the time of motion is more than 2 s, the distance covered by the car in the second-last second of its motion is
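One standard way to work this out (added here; the original thread contains no posted answer): let the total braking time be $T$, so the speed at time $t$ is $v(t) = a(T - t)$ and the car stops at $t = T$. Two seconds before stopping, $v(T-2) = 2a$, so the distance covered between $t = T-2$ and $t = T-1$ (the second-last second) is $s = 2a \cdot 1 - \tfrac{1}{2} a \cdot 1^2 = \tfrac{3a}{2}$. Equivalently, running the motion backwards from rest, the car covers $\tfrac{1}{2}a(2^2 - 1^2) = \tfrac{3a}{2}$ in that second. So the answer is $3a/2$ (in metres, if $a$ is in m/s$^2$).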
{"url":"http://openstudy.com/updates/50f04a13e4b0d4a537ce3822","timestamp":"2014-04-20T16:12:36Z","content_type":null,"content_length":"51370","record_id":"<urn:uuid:358b9a77-48b9-4af2-93b1-a43ccff05164>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
La Bocca della Verità

A friend of mine who plays go a lot found this on her go server. Mathematicians post stuff there, she said, and, being mathematicians, they post problems, not answers. "You have a collection of 11 balls with the property that if you remove any one of the balls, the other 10 can be split into two groups of 5 that have the same weight. If you assume that all the balls have rational weight, there is a cute proof that they all must weigh the same. Can you find a proof? Can you find a way to extend the result to the general case where the balls have real weights?"

Solving the implied system of 11 equations in 11 unknowns is fairly trivial and gives a proof, but it isn't "cute" by any stretch. I have been unable to come up with either a cute proof, or a proof, cute or otherwise, which depends on the unknowns being rational. Does anyone see what I missed?

9 comments:

1. Sydney 1:49 PM
Yeah, the rational part.

2. Hmm as I was reading it I thought, "Huh? They must all be the same weight then..." even before I got to the problem part. So either I am more of a genius than I realized, or I am skipping a step in my head. I will ponder for a few more moments but right now I have to catch up on my email accounts... There might be an offshore drilling emergency requiring my attention.

3. Wabulon, I have to run to the store. I still "see" that it has to be that way, but I'm having trouble pinning down the proof, partly because my son just got back in town and he is climbing all over me. I think you should be able to do it pretty easily with a proof by contradiction. I will think about it in the car ride...

4. OK Wabulon it is taking a ridiculously long time for me to try to do this for the case of 11, so let me do it for 5 and then you can see if it extends. (And yes, feel free to say, "Ha ha, Bob couldn't solve his problem of little balls.") On a piece of paper draw two columns of two balls each, and then put the fifth ball by itself below them. Label the bottom ball L, denoting that it is the lightest ball of the 5. Now we know that it must be possible to arrange the other four balls such that the two columns weigh the same. Without loss of generality, we will put the second lightest ball into the right column. (It might be tied in weight with the lightest ball.)
CASE 1: Both the balls in the right column are L. ====In this case we're obviously done, because then the two balls in the left column must also weigh L.
CASE 2: Only one of the balls in the right column is L. ====For convenience, label the bottom-right ball L. Then label the top-right ball H1, the bottom-left ball H2, and the top-left ball H3. These are not sorted, by the way; it just means 3 different balls that are all at least as heavy as L. Now, suppose I swap my bottom ball L with H2. This might make the columns no longer the same weight, because H2 might be greater than L. But we know that I must be able to do some swapping to restore the equality (by assumption).
CASE 2a. I don't need to do any further swapping; the two columns are still the same. =====In this case, L=H2, and so now we are looking at three Ls with an H3 in the top left and an H1 in the top right. Then do one more swap of the bottom L with either H3 or H1, and then it's obvious they all equal L.
CASE 2b. After the initial swap of L for H2, the left column is lighter than the right, and so I need to do a further swap between the columns to restore balance. =====You can quickly rule this case out as impossible.
Swapping H3 for H1 doesn't help, because it just reverses the imbalance. And if we swap one of the Hs with one of the Ls, then the only way those two columns can be equal is if they all weigh L. But that contradicts our assumption that initially there was an imbalance.
Case 3. The L ball is strictly the lightest. ======Shoot, I am getting bogged down in all the subcases on this one. I'll do part of it at least: Number the four balls from H1 to H4, where you know that each is strictly heavier than L. Without loss of generality, say H1 is the 2nd lightest ball (might be tied for second with others), and put it in the right column. Now swap L with a ball from the left column. If no further swaps are needed--i.e. if left and right columns are still the same weight--then that means L must have equaled the weight of the ball you swapped out, but that contradicts our assumption in this case. So, that means you need one more swap to restore the balance. But this is the part where I'm getting bogged down. I can't see how to force that all must weigh L.
Anyway, I think something like this would work for the case of 11 balls, but it would be a much bigger pain. Presumably this is not the "cute" proof that exists. When I had my immediate flash of intuition that said they must be the same, I was thinking along these lines of swapping the balls, and I quickly kept painting myself into a corner, by switching which ball I pulled out (and then had to be able to swap balls to restore balance). But like I said, I might have been skipping some steps, since I obviously didn't run through all these subcases, I kinda just thought, "This can't work, unless they all weigh the same." So I cannot give myself credit for guessing the answer, much like Fermat couldn't possibly have done the modern proof of his last theorem on his deathbed in his head.

5. wabulon 12:04 PM
Bob, nice work. The problem seems to have had an effect on you similar to its effect on me: I couldn't let go of it, and my thoughts seemed to verge on a solution, but never got there. And, no, your approach seems no more likely to end up "cute" than my actual proof (which I take it is the way an AI problem-solving algorithm would have gone about it). We shall no doubt keep trying.

6. Gene, bless his crooked little heart, suggested that we simply cheat. Google ("11 balls" rational "same weight") yielded:

"Let's have a problem solving marathon: post a problem and its difficulty level;…" Submitted 13 March 2008

It suffices to prove this statement when the ball weights are assumed to be positive integers, as we can just multiply through by denominators. Suppose that some example exists with the balls not all the same weight. Take an example with minimum total weight W. Say the weights are w1, w2, ..., w11. Note that they are positive integers, and not all identical, so W >= 12. Considering the condition (mod 2), we find that the sum of any 10 of the weights must be even. Hence any two weights must be congruent (mod 2).
Case a) Every wi is even. Then w1/2, w2/2, ... is another set of differing weights with the same property, but total weight W/2, contradicting the minimality of our example.
Case b) Every wi is odd. Then (w1 + 1)/2, (w2 + 1)/2, ... is another set of differing weights with the same property, and total weight (W+11)/2 = (W-1+12)/2 <= (W-1+W)/2 = W - 1/2 < W, a contradiction.
We conclude that no such example can exist.
This follows essentially my line of reasoning; mine petered out when I failed to "take an example with minimum total weight W." Bob was correct in intuiting proof by contradiction!

7. Anonymous 11:29 AM
"Yeah, the rational part." You're trying too hard, Sydney.
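On the second half of the original question (real weights), here is one standard way to finish; it is sketched here, not taken from the thread. Suppose real weights w1, ..., w11 have the property. For each i, fix one balancing split of the other ten balls into two groups of 5; this gives 11 homogeneous linear equations with coefficients 0, +1 and -1. Let V be the set of real solutions of those 11 equations; V contains both our weight vector w and the all-ones vector (1, 1, ..., 1). Because the equations have rational coefficients, V has a basis of rational vectors, so its rational points are dense in V. If w were not a multiple of (1, ..., 1), then V would have dimension at least 2, and we could pick a rational point of V arbitrarily close to (1, ..., 1) but off that line. Its entries would be positive, rational, and not all equal, and they would still satisfy the 11 balancing equations, giving a rational counterexample and contradicting the integer argument above. Hence w is a multiple of (1, ..., 1): all the weights must be equal.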
{"url":"http://gene-callahan.blogspot.com/2008/10/little-balls.html","timestamp":"2014-04-20T13:24:27Z","content_type":null,"content_length":"195663","record_id":"<urn:uuid:c26b912a-580b-46bd-ad8e-f9bbe89010b6>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
common tangent line? find the two points on the curve y=x4-2x2-x that have a common tangent line The equation of the line to a function at $(x_n,f(x_n)$ is given in slope intercept form as: $y=f'(x_n)x+f(x_n)-x_nf(x_n)$ Two lines are the same if they have the same slope and y-intercept. Using the points $(x_1,f(x_1))$ and $(x_2,f(x_2))$ will give you two equations and two unknowns, from which you can find two distinct points. The slope of the tangent line is: $m = \frac{dy}{dx} = 4x^3-4x-1$ $f'(a) = 4 a^3 -4a -1$ $f'(b) = 4 b^3 -4b -1$ $f(a) = a^4 -2a^2 -a$ $f(b) = b^4 -2 b^2 -b$ Now: $f'(a) =\frac{f(b) - f(a)}{b -a}$ And: $f'(b) =\frac{f(b) - f(a)}{b -a}$ So now the equation becomes: $(4 a^3 -4a -1)(b - a) = (b^4 -2 b^2 -b) - (a^4 -2a^2 -a)..................(1)$ $(4 b^3 -4b -1)(b - a) = (b^4 -2 b^2 -b) - (a^4 -2a ^2 -a)..................(2)$ Solving these two equations by maple we find $a = -1 \text{ and } b = 1$ or $a = 1 \text{ and } b = -1$ Maple is a program that does mathematical calculations. Here's a link: Maple 16 by Maplesoft - Hollywood
{"url":"http://mathhelpforum.com/calculus/205056-common-tangent-line.html","timestamp":"2014-04-24T16:49:41Z","content_type":null,"content_length":"52696","record_id":"<urn:uuid:bab12eda-acc7-40bb-aad5-77beb6bcee14>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Fermat's Last Theorem/Paul Wolfskehl From Wikibooks, open books for an open world Paul Wolfskehl[edit] The works of Kummer on the factorisation of complex numbers threw a general mistrust on the possibility of finding a proof of Fermat’s theorem in a reasonable time. The researches also halted because of the birth of new branches of mathematics that drew the studious away from the theory of numbers. In 1908 Paul Wolfskehl gave a new impulse to the researches. Wolfskehl was a German industrialist from Darmstadt who came from a very rich family dedicated to patronage of the arts. Paul had studied at mathematical university and, although he had greater success in business than in mathematics, his contribution was decisive in reawakening interest in the theorem. Wolfskehl at that period was in love with a woman who refused his every attention. Driven by despondency Wolfskehl had decided to commit suicide at the stroke of midnight, but being a meticulous and precise person he had planned everything and had provided an adequate arrangement of his affairs and a salutation of his closest friends by means of letters. Wolfskehl had finished the preparations before midnight and in order to pass the time began to thumb through some texts on mathematics. In particular thumbing through the work of Kummer he noted an unproved assumption. If that assumption revealed itself in reality false perhaps it would have reopened the possibility of proving Fermat’s theorem with the method of Lamé or of Cauchy. Wolfskehl worked all night and finally succeeded in proving that the assumption was true and therefore the proof was correct. This was bad news for the mathematician but Wolfskehl was so happy to have been able to correct the great Kummer that he regained faith in himself. He abandoned the proposal of suicide and instead wrote a second will according to which he would have left a good part of his patrimony to whoever was able to prove Fermat’s theorem. The will became operative in 1908 and the Royal Society of Science of Göttingen became the organisation proposed for the verification of proofs that aspired to the prize. The prize was announced by all the European mathematical publications and thousands of aspiring mathematicians choked the university of Göttingen with presumed proofs of Fermat’s theorem. Unfortunately the prize did not attract many serious mathematicians, given that these were well aware of the extreme difficulty of the problem and therefore it did not produce a real turn-about in the field of mathematics, but it had the merit of rendering the problem of Fermat’s last theorem famous to the public at large.
{"url":"http://en.wikibooks.org/wiki/Fermat's_Last_Theorem/Paul_Wolfskehl","timestamp":"2014-04-19T13:09:22Z","content_type":null,"content_length":"26360","record_id":"<urn:uuid:6567d7e7-1d65-43ce-9283-78c57ee4fcc9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Surface Area of Cylinders ( Read ) | Geometry Did you have tinker toys when you were little? Trevor is working on wrapping a container of tinker toys. It is a bit complicated because it is a cylinder. Trevor is on his third attempt, and the woman who bought the tinker toys is getting a bit “I’m sorry maam,” Trevor said smiling. Twice, Trevor did not cut his wrapping paper long enough. The third time, he decides to figure out the surface area of the cylinder first and then cut the wrapping paper. “I should have done that to begin with,” Trevor thought to himself as he looked at the ruler on the table. Measuring the paper would have been easy had he known the dimensions. The height of the canister is 18" and the diameter of the cylinder is 6 inches. Trevor isn’t sure that he has enough information to find the surface area of the cylinder. He stops to think about this for a moment. Does Trevor have what he needs? How can he find the surface area of the cylinder? This Concept will teach you how to find the surface area of a cylinder. Make some notes and pay attention, you will see this problem again at the end of the Concept. In this Concept, we will learn to find the surface area of cylinders. A cylinder is a solid figure that exists in three-dimensional space. A cylinder has two faces that are circles. We do not call the side of a cylinder a face because it is curved. We still have to include its area in the total surface area of the cylinder, however. The surface area of a cylinder is the total of the area of each circular face and the side of the cylinder. Imagine a can of soup. The top, bottom, and label around the can would make up the surface area of the can. To find the surface area, we must be able to calculate the area of each face and the side and then add these areas together. We will look at two different ways to calculate the surface area of cylinders. One way is to use a net. As we’ve said, surface area is the total area for the faces and the side of a cylinder. That means we need to find the area of each face of the cylinder, and then the area of the side. One way to do this is to use a net. A net is a two-dimensional diagram of a three-dimensional solid. Imagine you could unroll the soup can so that it is completely flat. You would have something that looks like The shaded circles show the top and bottom faces of the cylinder, and the unshaded rectangle shows the side, as if it were unrolled. Can you see how to fold the net back up to make the cylinder? With the net, we can see each face of the cylinder more clearly. To find the surface area, we need to calculate the area for each circle in the net. We use the formula $A = \pi r^2$ $&\text{bottom face} && \text{top face}\\A &= \pi r^2 && A = \pi r^2\\A &= \pi (4)^2 && A = \pi (4)^2\\A &= 16 \pi && A = 16 \pi\\A &= 50.24 \ cm^2 && A = 50.24 \ cm^2$ The area of each circular face is 50.24 square centimeters. Now we need to find the area of the side. The net shows us that, when we “unroll” the cylinder, the side is actually a rectangle. Recall that the formula we use to find the area of a rectangle is $A = lw$the width of the rectangle is the same as the height of the cylinder. In this case, the height of the cylinder is 8 centimeters. What about the length? The length is actually the same as the perimeter of the circle, which we call its circumference. When we “roll” up the side, it fits exactly once around the circle. 
To find the area of the cylinder’s side, then, we multiply the circumference of the circle by the height of the cylinder. We find the circumference of a circle with the formula $C = 2 \pi r$ Let’s try it. $C & = 2 \pi r\\C & = 2 \pi 4\\C & = 8 \pi\\C & = 25.12 \times 6 = 150.72 \ cm^2$ Now we know the area of both circular faces and the side. Let’s add them together to find the surface area of the cylinder. $& \text{bottom face} \qquad \text{top face} \qquad \quad \text{side} \qquad \qquad \quad \text{surface area}\\& 50.24 \ cm^2 \quad + \ \ 50.24 \ cm^2 \ + \ 150.72 \ cm^2 \ = \ 251.2 \ cm^2$ The total surface area of the cylinder is 251.72 square centimeters. What is the surface area of the figure below? The first thing we need to do is draw a net. Get ready to exercise your imagination! It may help to color the top and bottom faces to keep you on track. Begin by drawing the bottom face. It is a circle with a radius of 7 inches. What shape is the side when we “unroll” the cylinder? It is a rectangle, so we draw a rectangle above the circular base. Lastly, we draw the top face, which is also a circle with a radius of 7 inches. Here is the net. Next let’s fill in the measurements for the side and radius of each face so that we can calculate the area of each component. Now we can calculate their areas. Remember to use the correct area and circumference formulas for circles. $& \text{bottom face} && \text{top face} && \text{side}\\A &= \pi r^2 && A = \pi r^2 && C = 2 \pi r\\A &= \pi (7)^2 && A = \pi (7)^2 && C = 2 \pi (7)\\A &= 49 \pi && A = 49 \pi && C = 14 \pi\\A &= 153.86 \ in.^2 && A = 153.86 \ in.^2 && C = 43.96 \ in.^2 \times 14 = 615.44 \ in.^2$ Now we add these areas together to find the surface area of the cylinder. $153.86 \quad + \quad 153.86 \quad + \quad 615.44 \quad = \quad 923.16 \ in.^2$ Let’s look at what we actually did to find the surface area. We used the formula $A = \pi r^2$$C = 2 \pi r$ We can always draw a net to help us organize information in order to find the surface area of a cylinder. A net helps us see and understand each face of the cylinder. Nets let us see each face so that we can calculate its area. However, we can also use formulas to represent the faces and the side as we find their area. You may have noticed in the previous section that the two circular faces always had the same area. This is because they have the same radius. We can therefore calculate the area of the pair of circular faces at once. We simply double the area formula, which gives us $2 \pi r^2$ We can also combine the measurements for the side into a simpler equation. We need to find the circumference by using the formula $2 \pi r$we can just write $2 \pi rh$ When we combine the formula for the faces and for the side we get this formula. $SA = 2 \pi r^2 + 2 \pi rh$ This formula may look long and intimidating, but all we need to do is put in the values for the radius of the circular faces and the height of the cylinder and solve. Write this formula down in your notebook. Now let’s apply this formula to the example from the previous section. In this cylinder, $r = 7$$h = 14$ $SA & = 2 \pi r^2 + 2 \pi rh\\SA & = 2 \pi (7)^2 + 2 \pi (7)(14)\\SA & = 2 \pi (49) + 2 \pi (98)\\SA & = 98 \pi + 196 \pi\\SA & = 294 \pi\\SA & = 923.16 \ in.^2$ As we have already seen, the surface area of this cylinder is 923.16 square inches. This formula just saves us a little time. Let’s try another. What is the surface area of the figure below? We have all of the measurements we need. 
Let’s put them into the formula and solve for surface area, $SA$ $SA & = 2 \pi r^2 + 2 \pi rh\\SA & = 2 \pi (3.5)^2 + 2 \pi (3.5) (28)\\SA & = 2 \pi (12.25) + 2 \pi (98)\\SA & = 24.5 \pi + 196 \pi\\SA & = 220.5 \pi\\SA & = 692.37 \ cm^2$ This cylinder has a surface area of 692.37 square centimeters. Try a few of these on your own. Find the surface area of each cylinder. Example A A cylinder with a radius of 5 ft and a height of 10 ft Solution: $471$ Example B A cylinder with a radius of 7 in and a height of 12 in Solution: $835.24$ Example C A cylinder with a diameter of 4 m and a height of 5 m Solution: $87.92$ Here is the original problem again. Then find the surface area of the cylinder. Trevor is working on wrapping a container of tinker toys. It is a bit complicated because it is a cylinder. Trevor is on his third attempt, and the woman who bought the tinker toys is getting a bit “I’m sorry maam,” Trevor said smiling. Twice, Trevor did not cut his wrapping paper long enough. The third time, he decides to figure out the surface area of the cylinder first and then cut the wrapping paper. “I should have done that to begin with,” Trevor thought to himself as he looked at the ruler on the table. Measuring the paper would have been easy had he known the dimensions. The height of the canister is 18" and the diameter of the cylinder is 6 inches. Trevor isn’t sure that he has enough information to find the surface area of the cylinder. He stops to think about this for a moment. We can use the formula for finding the surface area of a cylinder to help us to find the surface area of this cylinder. $SA=2\pi rh+2 \pi r^2$ Now we can take the dimensions and then substitute them into the formula. The first thing to notice is that the diameter of the canister has been given and we need the radius of the cylinder. The radius is one-half of the diameter. $6 \div 2 = 3 \ inches$ Now we can substitute them into the formula. $SA & = 2(3.14)(3)(18)+2(3.14)(3^2)\\SA & = 339.12+56.52 \\SA & = 395.64 \ sq.inches$ We can divide this measurement by 12 and we will know how many square feet of wrapping paper will be needed. $395.64 \div 12 = 32.97 \ \text{or} \ 33 \ sq. feet$ a three-dimensional figure with two circular bases. Surface Area the measurement of the outside of a three-dimensional figure. a two-dimensional representation of a three-dimensional figure. Guided Practice Here is one for you to try on your own. Mrs. Johnson is wrapping a cylindrical package in brown paper so that she can mail it to her son. The package is 22 centimeters tall and 11 centimeters across. How much paper will she need to cover the package? The picture clearly shows us the height and diameter of the cylinder, so let’s use the formula for finding the surface area. But be careful—we have been given the diameter, not the radius. We need to divide it by 2 to find the radius: $11 \div 2 = 5.5$ $SA & = 2 \pi r^2 + 2 \pi rh\\SA & = 2 \pi (5.5^2) + 2 \pi (5.5) (22)\\SA & = 2 \pi (30.25) + 2 \pi (121)\\SA & = 60.5 \pi + 242 \pi\\SA & = 302.5 \pi\\SA & = 949.85 \ cm^2$ Mrs. Johnson will need 949.85 square centimeters of brown paper in order to wrap the entire package. Video Review - This is a video on surface area of cylinders. Directions: Find the surface area of each cylinder given its height and radius. 1. $r = 1 \ m, \ h = 3 \ m$ 2. $r = 2 \ cm, \ h = 4 \ cm$ 3. $r = 6 \ in, \ h = 10 \ in$ 4. $r = 4 \ in, \ h = 6 \ in$ 5. $r = 5 \ in, \ h = 10 \ in$ 6. $r = 8 \ ft, \ h = 6 \ ft$ 7. $r = 10 \ m, \ h = 15 \ m$ 8. 
$r = 9 \ cm, \ h = 12 \ cm$ 9. $r = 6 \ m, \ h = 8 \ m$ 10. $r = 2 \ cm, \ h = cm$ Directions: Find the surface area of each cylinder given its height and diameter. 11. $d = 8 \ m, \ h = 11 \ m$ 12. $d = 10 \ in, \ h = 14 \ in$ 13. $d = 8 \ cm, \ h = 10 \ cm$ 14. $d = 12 \ m, \ h = 15 \ m$ 15. $d = 15 \ in, \ h = 20 \ in$
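The arithmetic in this lesson is easy to check with a short program. The following Python sketch is not part of the original lesson; it simply applies the same formula, $SA = 2 \pi r^2 + 2 \pi rh$, with $\pi$ rounded to 3.14 as in the worked examples, and the function name is my own choice.

```python
def cylinder_surface_area(radius, height, pi=3.14):
    """Surface area of a cylinder: two circular faces plus the unrolled side."""
    faces = 2 * pi * radius ** 2      # the top and bottom circles together
    side = 2 * pi * radius * height   # circumference of the base times the height
    return faces + side

# Checking the worked examples from the lesson:
print(round(cylinder_surface_area(4, 6), 2))     # 251.2  (cm^2)
print(round(cylinder_surface_area(7, 14), 2))    # 923.16 (in^2)
print(round(cylinder_surface_area(3.5, 28), 2))  # 692.37 (cm^2)
print(round(cylinder_surface_area(5.5, 22), 2))  # 949.85 (cm^2)
```

The same function can be used to check answers to the practice problems above; just remember to halve the diameter first when that is what is given.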
{"url":"http://www.ck12.org/geometry/Surface-Area-of-Cylinders/lesson/Surface-Area-of-Cylinders-Grade-7/","timestamp":"2014-04-20T08:42:03Z","content_type":null,"content_length":"132212","record_id":"<urn:uuid:11dddf3d-7e56-4812-a227-de9860383db7>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
An atomic model for message-passing Results 1 - 10 of 21 - JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING , 1995 "... Cilk (pronounced "silk") is a C-based runtime system for multithreaded parallel programming. In this paper, we document the efficiency of the Cilk work-stealing scheduler, both empirically and analytically. We show that on real and synthetic applications, the "work" and "critical-path length" of a C ..." Cited by 534 (39 self) Add to MetaCart Cilk (pronounced "silk") is a C-based runtime system for multithreaded parallel programming. In this paper, we document the efficiency of the Cilk work-stealing scheduler, both empirically and analytically. We show that on real and synthetic applications, the "work" and "critical-path length" of a Cilk computation can be used to model performance accurately. Consequently, a Cilk programmer can focus on reducing the computation's work and critical-path length, insulated from load balancing and other runtime scheduling issues. We also prove that for the class of "fully strict" (well-structured) programs, the Cilk scheduler achieves space, time, and communication bounds all within a constant factor of optimal. The Cilk "... This paper studies the problem of efficiently scheduling fully strict (i.e., well-structured) multithreaded computations on parallel computers. A popular and practical method of scheduling this kind of dynamic MIMD-style computation is "work stealing," in which processors needing work steal computa ..." Cited by 398 (38 self) Add to MetaCart This paper studies the problem of efficiently scheduling fully strict (i.e., well-structured) multithreaded computations on parallel computers. A popular and practical method of scheduling this kind of dynamic MIMD-style computation is "work stealing," in which processors needing work steal computational threads from other processors. In this paper, we give the first provably good work-stealing scheduler for multithreaded computations with dependencies. Specifically, - IEEE Transactions on Parallel and Distributed Systems , 1996 "... In this paper, the LogP model is used to analyze four parallel sorting algorithms (bitonic, column, radix, and sample sort). LogP characterizes the performance of modern parallel machines with a small set of parameters: the communication latency (L), overhead (o), bandwidth (g), and the number of pr ..." Cited by 49 (10 self) Add to MetaCart In this paper, the LogP model is used to analyze four parallel sorting algorithms (bitonic, column, radix, and sample sort). LogP characterizes the performance of modern parallel machines with a small set of parameters: the communication latency (L), overhead (o), bandwidth (g), and the number of processors (P). We develop implementations of these algorithms in Split-C, a parallel extension to C, and compare the performance predicted by LogP to actual performance on a CM-5 of 32 to 512 processors for a range of problem sizes and input sets. The sensitivity of the algorithms is evaluated by varying the distribution of key values and the rank ordering of the input. The LogP model is shown to be a valuable guide in the development of parallel algorithms and a good predictor of implementation performance. The model encourages the use of data layouts which minimize communication and balanced communication schedules which avoid contention. 
Using an empirical model of local processor performance, LogP predictions closely match observed execution times on uniformly distributed keys across a broad range of problem and machine sizes for all four algorithms. Communication performance is oblivious to the distribution of the keys values, whereas the local sort performance is not. The communication phases in radix and sample sort are sensitive to the ordering of keys, because certain layouts result in contention. 1 , 1997 "... Parallel algorithm designers need computational models that take first order system costs into account, but are also simple enough to use in practice. This paper introduces the LoPC model, which is inspired by the LogP model but accounts for contention for message processing resources in parallel al ..." Cited by 45 (9 self) Add to MetaCart Parallel algorithm designers need computational models that take first order system costs into account, but are also simple enough to use in practice. This paper introduces the LoPC model, which is inspired by the LogP model but accounts for contention for message processing resources in parallel algorithms on a multiprocessor or network of workstations. LoPC takes the , and parameters directly from the LogP model and uses them to predict the cost of contention, . , 1999 "... There has been a great deal of interest recently in the development of general-purpose bridging models for parallel computation. Models such as the BSP and LogP have been proposed as more realistic alternatives to the widely used PRAM model. The BSP and LogP models imply a rather different style fo ..." Cited by 42 (11 self) Add to MetaCart There has been a great deal of interest recently in the development of general-purpose bridging models for parallel computation. Models such as the BSP and LogP have been proposed as more realistic alternatives to the widely used PRAM model. The BSP and LogP models imply a rather different style for designing algorithms when compared with the PRAM model. Indeed, while many consider data parallelism as a convenient style, and the shared-memory abstraction as an easyto-use platform, the bandwidth limitations of current machines have diverted much attention to message-passing and distributed-memory models (such as the BSP and LogP) that account more properly for these limitations. In this paper we consider the question of whether a shared-memory model can serve as an effective bridging model for parallel computation. In particular, can a shared-memory model be as effective as, say, the BSP? As a candidate for a bridging model, we introduce the Queuing Shared-Memory (QSM) model, which accounts for limited communication bandwidth while still providing a simple shared-memory abstraction. We substantiate the ability of the QSM to serve as a bridging model by providing a simple work-preserving emulation of the QSM on both the BSP, and on a related model, the (d, x)-BSP. We present evidence that the features of the QSM are essential to its effectiveness as a bridging model. In addition, we describe scenarios , 1996 "... Although cost-effective parallel machines are now commercially available, the widespread use of parallel processing is still being held back, due mainly to the troublesome nature of parallel programming. In particular, it is still diiticult to build eiticient implementations of parallel applications ..." 
Cited by 42 (2 self) Add to MetaCart Although cost-effective parallel machines are now commercially available, the widespread use of parallel processing is still being held back, due mainly to the troublesome nature of parallel programming. In particular, it is still diiticult to build eiticient implementations of parallel applications whose communication patterns are either highly irregular or dependent upon dynamic information. Multithreading has become an increasingly popular way to implement these dynamic, asynchronous, concurrent programs. Cilk (pronounced "silk") is our C-based multithreaded computing system that provides provably good performance guarantees. This thesis describes the evolution of the Cilk language and runtime system, and describes applications which affected the evolution of the - In Proc. 8th ACM Symp. on Parallel Algorithms and Architectures , 1996 "... The Bulk-Synchronous Parallel (BSP) model was proposed by Valiant as a model for general-purpose parallel computation. The objective of the model is to allow the design of parallel programs that can be executed efficiently on a variety of architectures. While many theoretical arguments in support of ..." Cited by 34 (3 self) Add to MetaCart The Bulk-Synchronous Parallel (BSP) model was proposed by Valiant as a model for general-purpose parallel computation. The objective of the model is to allow the design of parallel programs that can be executed efficiently on a variety of architectures. While many theoretical arguments in support of the BSP model have been presented, the degree to which the model can be efficiently utilized on existing parallel machines remains unclear. To explore this question, we implemented a small library of BSP functions, called the Green BSP library, on several parallel platforms. We also created a number of parallel applications based on this library. Here, we report on the performance of six of these applications on three different parallel platforms. Our preliminary results suggest that the BSP model can be used to develop efficient and portable programs for a range of machines and applications. 1 , 1994 "... The LogP model characterizes the performance of modern parallel machines with a small set of parameters: the communication latency (L), overhead (o), bandwidth (g), and the number of processors (P ). In this paper, we analyze four parallel sorting algorithms (bitonic, column, radix, and sample sort) ..." Cited by 27 (4 self) Add to MetaCart The LogP model characterizes the performance of modern parallel machines with a small set of parameters: the communication latency (L), overhead (o), bandwidth (g), and the number of processors (P ). In this paper, we analyze four parallel sorting algorithms (bitonic, column, radix, and sample sort) under LogP. We develop implementations of these algorithms in a parallel extension to C and compare the actual performance on a CM-5 of 32 to 512 processors with that predicted by LogP using parameter values for this machine. Our experience was that the model served as a valuable guide throughout the development of the fast parallel sorts and revealed subtle defects in the implementations. The final observed performance matches closely with the prediction across a broad range of problem and machine sizes. 1.2 INTRODUCTION Fast sorting is important in a wide variety of practical applications, is interesting to study from a theoretical viewpoint, and offers a wealth of novel parallel solutio... - Proc. 5th ACM-SIAM Symp. 
on Discrete Algorithms , 1997 "... Abstract. This paper introduces the queue-read queue-write (qrqw) parallel random access machine (pram) model, which permits concurrent reading and writing to shared-memory locations, but at a cost proportional to the number of readers/writers to any one memory location in a given step. Prior to thi ..." Cited by 23 (10 self) Add to MetaCart Abstract. This paper introduces the queue-read queue-write (qrqw) parallel random access machine (pram) model, which permits concurrent reading and writing to shared-memory locations, but at a cost proportional to the number of readers/writers to any one memory location in a given step. Prior to this work there were no formal complexity models that accounted for the contention to memory locations, despite its large impact on the performance of parallel programs. The qrqw pram model reflects the contention properties of most commercially available parallel machines more accurately than either the well-studied crcw pram or erew pram models: the crcw model does not adequately penalize algorithms with high contention to shared-memory locations, while the erew model is too strict in its insistence on zero contention at each step. The�qrqw pram is strictly more powerful than the erew pram. This paper shows a separation of log n between the two models, and presents faster and more efficient qrqw algorithms for several basic problems, such as linear compaction, leader election, and processor allocation. Furthermore, we present a work-preserving emulation of the qrqw pram with only logarithmic slowdown on Valiant’s bsp model, and hence on hypercube-type noncombining networks, even when latency, synchronization, and memory granularity overheads are taken into account. This matches the bestknown emulation result for the erew pram, and considerably improves upon the best-known efficient emulation for the crcw pram on such networks. Finally, the paper presents several lower bound results for this model, including lower bounds on the time required for broadcasting and for leader election.
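The abstracts above lean heavily on the four LogP parameters: latency L, overhead o, gap g, and processor count P. As a rough illustration only (a simplified reading of the model, not code from any of the cited papers), a back-of-the-envelope estimate in Python of the time for one processor to push out n small messages might look like this:

```python
def logp_send_time(n, L, o, g):
    """Rough LogP-style estimate of the time until the last of n small messages,
    sent back-to-back by one processor, has been received.

    Simplifying assumptions: consecutive sends are spaced by at least the gap g,
    each send and each receive costs overhead o, and each message spends
    latency L in the network.
    """
    if n == 0:
        return 0.0
    last_injected = o + (n - 1) * max(g, o)  # when the last send finishes
    return last_injected + L + o             # plus network latency and receive overhead

# Example with arbitrary units: 10 messages, L=6, o=2, g=4
print(logp_send_time(10, L=6, o=2, g=4))  # 46
```

Estimates of this kind are what make it possible to compare, say, communication-heavy and communication-light sorting algorithms before implementing them, which is the use the papers above put the model to.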
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1644887","timestamp":"2014-04-19T03:10:24Z","content_type":null,"content_length":"39508","record_id":"<urn:uuid:6e2223e2-f059-4b39-b980-562c7af92e7b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: Write algebraic expressions for two numbers with a sum of -7. Let one of the numbers be represented by x.
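The page above does not record an answer, so here is one possible completion (my own, not from the thread): if the first number is represented by x, the second number can be written as -7 - x, because x + (-7 - x) = -7. The two algebraic expressions are therefore x and -7 - x.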
{"url":"http://openstudy.com/updates/4ff088d8e4b03c0c4887ef7f","timestamp":"2014-04-17T06:53:21Z","content_type":null,"content_length":"39293","record_id":"<urn:uuid:08428a17-a939-4ca4-85d7-381dd5afe63f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Isomorphism Proof - Graph Theory

August 5th 2008, 01:44 PM #1
If there are two graphs, how would I go about proving that they are isomorphic? Also, how would I go about proving that two other graphs are not isomorphic? Thanks!

August 5th 2008, 02:13 PM #2
There are several free ‘graph theory programs’ available on the web. Several of these have the capacity to compare two graphs for an isomorphic relation. Basically they are programmed to compare the two adjacency matrices to see if one matrix can be transformed by elementary operations into the other matrix. (That programming is beyond me, of course.) There is a simple way to see that two graphs are not isomorphic: if they have different degree sequences. Note that they can have the same degree sequence and still not be isomorphic. That is, it is necessary that two graphs have the same degree sequence in order to be isomorphic, but it is not sufficient.
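As a small illustration of the necessary condition mentioned in the reply (my own sketch in Python, not something from the thread), degree sequences are easy to compare when the graphs are given as adjacency lists; different sequences rule out an isomorphism, while matching sequences prove nothing on their own:

```python
def degree_sequence(adjacency):
    """Sorted vertex degrees of a graph given as {vertex: set_of_neighbours}."""
    return sorted(len(neighbours) for neighbours in adjacency.values())

def passes_degree_test(g1, g2):
    """Necessary (but not sufficient) condition for isomorphism."""
    return len(g1) == len(g2) and degree_sequence(g1) == degree_sequence(g2)

# A path on 3 vertices versus a triangle: same vertex count, different degrees.
path = {1: {2}, 2: {1, 3}, 3: {2}}
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(passes_degree_test(path, triangle))  # False, so they cannot be isomorphic
```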
{"url":"http://mathhelpforum.com/discrete-math/45364-isomorphism-proof-graph-theory.html","timestamp":"2014-04-20T20:40:29Z","content_type":null,"content_length":"37495","record_id":"<urn:uuid:23146919-53ce-4cd0-a74d-30950d7c5521>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2012 [00507] [Date Index] [Thread Index] [Author Index] Re: Plotting colorfunctions over multiple parametric curves • To: mathgroup at smc.vnet.net • Subject: [mg125200] Re: Plotting colorfunctions over multiple parametric curves • From: "djmpark" <djmpark at comcast.net> • Date: Tue, 28 Feb 2012 00:46:34 -0500 (EST) • Delivered-to: l-mathgroup@mail-archive0.wolfram.com • References: <5868125.5962.1330344098277.JavaMail.root@m06> You will obtain better responses if you supply a completely evaluable example and not leave it up to responders to make up part of the input. David Park djmpark at comcast.net From: Andrew Green [mailto:kiwibooga at googlemail.com] Hi there I am a bit stuck with this one. I have several curves (circles in this case) I want to plot at the same time. Each circle also has its own function defining a unique potential as a function of theta around the curve. I want to take advantage of Mathematica's colorfunction color scaling and show the potential around each curve in a color scaled to the min and max values for the entire set of circle potentials. I get about this far.... n = 5; (*number of circles*) potential[theta_, i_] := Sin[2*i*theta]; circle[i_, theta_] := {x[i] + r[i]*Cos[theta], y[i] + r[i]*Sin[theta]}; (*x,y, and r are earlier defined vectors of coordinates, radius*) Evaluate@Table[circle[i, theta], {i, 1, n}], {theta, 0, 2 Pi}, PlotStyle -> Thick, Axes -> False, ColorFunction -> Function[{x, y, theta}, ColorData["TemperatureMap"][theta]]]; I can get a nice rainbow color around each circle but it is the same for all because I can only seem to define one colorfunction and cannot incorporate the potential function which varies for each circle "i". Any help would be most appreciated. Andrew Green
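I cannot offer the Mathematica-specific fix here, but the intended result, several circles each coloured by its own potential on one shared colour scale, is easy to sketch in Python/matplotlib, which may help make the goal concrete. The centres and radii below are made-up placeholders standing in for the x[i], y[i] and r[i] vectors mentioned in the post.

```python
import numpy as np
import matplotlib.pyplot as plt

n = 5
theta = np.linspace(0, 2 * np.pi, 400)
centers = [(2.5 * i, 0.0) for i in range(n)]    # placeholders for x[i], y[i]
radii = [1.0] * n                               # placeholder for r[i]

# potential[theta, i] = Sin[2 i theta] for i = 1..n, as in the post
potentials = [np.sin(2 * (i + 1) * theta) for i in range(n)]
vmin = min(p.min() for p in potentials)         # one colour scale for all circles
vmax = max(p.max() for p in potentials)

fig, ax = plt.subplots()
for (cx, cy), r, pot in zip(centers, radii, potentials):
    sc = ax.scatter(cx + r * np.cos(theta), cy + r * np.sin(theta),
                    c=pot, cmap="coolwarm", vmin=vmin, vmax=vmax, s=4)

ax.set_aspect("equal")
fig.colorbar(sc, ax=ax, label="potential")
plt.show()
```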
{"url":"http://forums.wolfram.com/mathgroup/archive/2012/Feb/msg00507.html","timestamp":"2014-04-19T12:14:44Z","content_type":null,"content_length":"26832","record_id":"<urn:uuid:42912143-e0c5-4679-8ad6-371df7990758>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Bonds futures: Delta? No gamma! Henrard, Marc (2006): Bonds futures: Delta? No gamma! Download (175Kb) | Preview Bond futures are liquid but complex instruments. Here they are analysed in a one-factor Gaussian HJM model. The in-the-model delta and out-of-the-model delta and gamma are studied. An explicit formula is provided for in-the-model delta. The out-of-the-model delta and gamma are equivalent to partial derivatives with respect to discount factors. In particular cases the derivative can not be obtained by standard techniques. The same situations lead to cases where the gammas (second order partial derivatives) do not exists. Item Type: MPRA Paper Institution: BIS Original Title: Bonds futures: Delta? No gamma! Language: English Keywords: Bond future; delivery option; delta; gamma; HJM gaussian model; in-the-model; out-of-the-model Subjects: G - Financial Economics > G1 - General Financial Markets > G13 - Contingent Pricing; Futures Pricing E - Macroeconomics and Monetary Economics > E4 - Money and Interest Rates > E43 - Interest Rates: Determination, Term Structure, and Effects Item ID: 2249 Depositing Marc Henrard Date Deposited: 14. Mar 2007 Last Modified: 20. Feb 2013 18:33 Brody, D.~C. and Hughston, L.~P. (2004). Chaos and coherence: a new framework for interest-rate modelling. Proc. R. Soc. Lond. A., 460:85--110. Carverhill, A. (1994). When is the short rate Markovian. Mathematical Finance, 4(4):305--312. Heath, D., Jarrow, R., and Morton, A. (1992). Bond pricing and the term structure of interest rates: a new methodology for contingent claims valuation. Econometrica, 60(1):77--105. Henrard, M. (2003). Explicit bond option and swaption formula in Heath-Jarrow-Morton one-factor model. International Journal of Theoretical and Applied Finance, 6(1):57--72. References: Henrard, M. (2006a). Bonds futures and their options: more than the cheapest-to-deliver; quality option and marginning. Technical report, SSRN. Henrard, M. (2006b). A semi-explicit approach to Canary swaptions in HJM one-factor model. Applied Mathematical Finance, 13(1):1--18. Hunt, P.~J. and Kennedy, J.~E. (2004). Financial Derivatives in Theory and Practice. Wiley series in probability and statistics. Wiley, second edition. Lamberton, D. and Lapeyre, B. (2000). Introduction to stochastic calculus applied to finance. Capman \& Hall / CRC. Nunes, J. and de~Oliveira, L. (2004). Quasi-analytical multi-factor valuation of treasury bond futures with and embedded quality option. Technical Report 2493, EFA 2004 Maastricht URI: http://mpra.ub.uni-muenchen.de/id/eprint/2249
{"url":"http://mpra.ub.uni-muenchen.de/2249/","timestamp":"2014-04-19T13:08:49Z","content_type":null,"content_length":"19578","record_id":"<urn:uuid:361bdadd-1e23-4d1b-bba4-f117e259fa94>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Roulette Analysis App Our roulette analysis App is a professional tool. The software analyses the results of spins in a roulette game. In the case of real wheels the software exposes any sector bias that may be present. In the case of animations, based upon pseudo random number generators, the software identifies repeating patterns. The object being to gain an edge over the casino. New in Version 2.0 This Version of our Roulette Analysis App builds on version 1.0 with a few new features. Version 1.0 concentrated on the analysis of real wheels and identified sector bias. Version 2.0 does the same but now includes powerful statistical analysis applicable to on-line animated games which use computer generated pseudo random number generators. You can now record up to 2000 spins in one session. The analysis algorithm continues to be developed and improved. In the case of the Iphone/Ipad, the user interface has been changed. Errors in the last 10 spins can now be corrected. You no longer need to be connected to the internet to display wheel bias graphically. The bias graph is generated on the Iphone so the app is now completely portable to any casino. The Mac/Pc version has been changed in the same way but includes the archiving feature whereas the Iphone/Ipad version does not. Roulette Wheel Analysis - What's it all about ? Traditional roulette games are played in real casinos using real wheels and real croupiers. Real wheels are mechanical devices that are supposed to be "fair". A fair wheel will, over time, result in every number from 0 to 36 appearing with a probability of 1 in 36. This means that over time the odds that a single number will appear is 36 to 1. The problem for the punter is that even given a fair wheel, the payout is actually 35 to 1. This gives the casino an edge and in the long run the casino can't lose. These odds are made even worse for a punter using an American wheel. American wheels have 00 as well as 0, so the odds are even more in favour of the casino. In recent times casinos, and other organisations, have seized the opportunity to take the roulette game on-line. The transition started with the development of computer software that mimics a roulette table as an animation. They even created sounds that mimic the noise of a spinning ball and the noise made when it dropped into a slot. This is nonsense of course. The roulette table animation could simply by replaced with a box. The random number just appears in the box. This has nothing to do with the real game of roulette. A variation of the online game came when pneumatic wheels were developed. The wheel is spun automatically at regular intervals and the ball propelled by a blast of air. This mechanism could be on-line 24/7 without the expense of a real person. Yet another variation was the web cam in a real casino. The web cam, with sound, gives a sense of the atmosphere in the casino with real croupiers and punters placing bets. Online bets are synchronised with the game at the table. The best example of this is dublinbet.com. Some organisations broadcast on terrestrial TV. A real person is on hand to spin the wheel but bets are placed online. The table is located in a TV studio rather than a real casino. To sum up you have 3 choices. Either a real wheel spun automatically A real wheel in a studio or real casino spun by a real person or an animation and computer generated pseudo random number. Roulette Systems The only guaranteed system is the Martindale system. This system is very simple. 
You make an evens bet and if you lose you double your bet. So if you bet red and you win, you double your money. If you lose you repeat the bet with double the stake. If you win you get back your original stake and you are back to square one. If you lose you keep doubling up until you are guaranteed to eventually win and get your stake back. The only draw back is that you have to keep doubling your bet. The stake can become astronomical very quickly. On a run of reds for example you could be staking thousands of dollars in order to get back where you started. However you are guaranteed to get back where you started but only in a game which allows unlimited bets. The Martindale system was shut down a long time ago by casinos by placing betting limits. If you are on a table with a limit of say $500 and play Martnidale you are guaranteed to lose all your money sooner rather than later. The layout of the roulette table is designed quite specifically to deceive the punter. The layout of numbers on the roulette table bares no relation to the layout on the actual wheel. The casino gives the impression that there is any meaning to the term 1st dozen, odds, streets etc. Any bets based upon these groupings are no more that a random selection of numbers. Any system that's based upon such groupings has no meaning whatsoever. THERE IS NO SUCH THING AS A ROULETTE SYSTEM. How can the punter win in the long run ? The punter can only get an edge over the casino by identifying weaknesses in the apparatus and then exploiting that weakness. Lets consider firstly a real wheel. The only way to win is to get an edge over the casino. In other words beat the fixed odds over the long term. This can only be done by analysing the wheel in order to predict with accuracy which slot the ball is going to land in next. Slots on a roulette wheel have nothing whatever to do with numbers. They are positions in space on a wheel. The wheel has fixed mechanical characteristics. For example: the diameter of the wheel is known, the distance between slots is known. If every mechanical parameter about the wheel is known, the application of Newton's laws of motion would allow you to calculate, with certainty, where the ball will land next. Just as Newton's laws predict the motion of the planets. Such wheel analysis goes back many years. There are stories of clever individuals timing the speed of the wheel and doing calculations based upon the wheels dimensions etc. Of course this was in real casinos and it didn't take long for the casino owners to become wise to this practice. They ejected anyone suspected of being a wheel analyser. The one thing wheel analysers were looking for was a fixed wheel. A fixed wheel is one that displays a bias towards one sector of the wheel. If a wheel analyser could identify a biased wheel they simply bet on the sector and thats how they gained an edge on the casino and won in the long run. It may be wrong to suggest that casinos deliberately fixed wheels but in the early days mechanical engineering was not as accurate as now. With the technology available then it would have been common for a wheel to be naturally biased. 
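To put a number on the point just made about betting on a biased sector, here is a small Python sketch (my own illustration, not anything from the app). It assumes a European wheel with 37 slots and the standard 35-to-1 payout on a single number; the 18% figure used for the biased case is an invented example.

```python
def sector_bet_ev(sector_size, sector_probability, payout=35):
    """Expected profit per spin when staking 1 unit on each number in a sector.

    sector_probability is the (estimated) chance that the winning number falls
    inside the sector; on a fair 37-slot wheel it would be sector_size / 37.
    """
    win_profit = payout + 1 - sector_size   # the winning number returns 36 units; the rest lose
    lose_loss = -sector_size
    return sector_probability * win_profit + (1 - sector_probability) * lose_loss

fair = sector_bet_ev(5, 5 / 37)    # about -0.135 units per spin: the casino's edge
biased = sector_bet_ev(5, 0.18)    # a sector observed to hit 18% of the time
print(round(fair, 3), round(biased, 3))  # the biased case is positive: the player has the edge
```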
Casinos deploy a number of techniques to thwart real wheel analysers: hazards are a placed in the path of the ball on the wheel surface, croupiers are changed at regular intervals, the ball is spun clockwise then anti-clockwise, the frame that displays the numbers on the wheel is moved around, wheels are moved from one table to another when the casino is We must emphasis that analysing a real roulette wheel is a complex task. You will need to record many sessions at a particular wheel in order to identify a definite fixed biased there. If you do find such a wheel then you will make some winnings. A repeating bias pattern is the holy grail of real wheel analysis and is hard to find. Casinos go to great lengths to ensure that the result of the next spin is a truly random event. What about on-line games that use animations ? As we have said before, these games have nothing to do with real roulette. They simply display the results from a pseudo random number generator. Pseudo random number generators are pieces of computer program that generate random numbers. The key word here is 'pseudo' because they are not pure random number generators at all but are inevitably based upon deterministic algorithms. Of course there is no concept of sector bias because there is no wheel. However the weakness lies in repeating patterns. If you can identify repeating patterns of behaviour then these can be exploited. Pseudo random number generators can get stuck in patterns. The most common one is repeating numbers. On-line games using auto wheels The auto wheel presents a good opportunity for the punter. Unlike a real wheel, operated by a human being, the auto wheel has a predictable pattern of behaviour. If you watch an auto wheel carefully you will notice that the wheel spins at the same speed, the ball always starts from the last winning slot and is spun alternately clockwise then anti-clockwise. These are predictable initial conditions. The only randomness in the system is created by the hazards in the ball's path. Which game should you play ? This is difficult as each format has it's own weaknesses. One thing has become clear though. The initial enthusiasm for on-line computer animation and pseudo random numbers has given way recently to real wheels with real croupiers. There is a reason for this. The traditional roulette wheel (assuming it's fair) with a human operator, is the best random number generator. We have analysed hundreds of sessions from different online casinos and we have come to the same conclusion. The casino has the best chance of winning if the wheel is truly random. Having said that if our software reveals that a real wheel is very random then a statistical bet based upon vacant numbers could give you an edge. If the weel is displaying transient sector bias it's probably not a good idea to bet on it. There is also another reason for not choosing to bet on real wheels - it's a slow process. This is especially true if the online casino is broadcasting on television at the same time. These television shows are a waste of time. However it's worthwhile spending time analysing the wheels as you may find one that does have a bias. If you prefer to bet on real wheels the auto wheel may be the best choice because they are more deterministic and the game moves more quickly. One thing to keep in mind though is the location of the wheel. If the wheel is in a studio it's easy for the casino to switch wheels. You would not be aware of this. 
Your analysis of the wheel is useless if the wheel was switched. The only way to guarantee that you are analysing the same wheel is to be in a real casino or to be watching a web cam of a real casino. We have analysed hundreds of sessions and our conclusion is that the online game which uses a computer animation and a pseudo random number generator is the best option. We have come to this conclusion for a number of reasons. You have to think about your objective. The objective is to win as much money as possible in the shortest possible time - not to play the game of roulette. The rapid turn around of spins in these games works in your favour because you have to record the results of many spins before you can identify any pattern. In a real casino with a real wheel this could take hours. This is a factor we haven't mentioned yet - the time involved in analysing a wheel. Wheel analysis is only practical using a computer and in order to identify any bias or patterns you need to analyse many spins and many sessions. A real casino would not allow you to use a computer at a roulette table but there is nothing to stop you writing down results and using a laptop in a quiet corner. Our software is ideal for this and the Iphone version could be used at a table without raising any suspicion. In reality it's most likely that you would be playing an online game in the comfort of your own home. How can the punter win in the long run ? Assuming that you have recorded many sessions at the same wheel you may find that a regular pattern becomes clear. In the case of a real wheel, our software identifies the same wining sector in each session. Certain numbers appear more often than others in the case of an animation. This is called a permanent bias and is caused by a fixed or faulty wheel or a random number generator with a software bug. In this case you have found a sure bet and could win a lot of money. Our research has clearly demonstrated that real wheels and pseudo random number generators can get stuck in transient patterns. This is not uncommon in random systems in nature. These patterns can last quite a long time in a session. Our software identifies these transient effects. We use a mathematical technique known as 'linear regression'. Linear regression is a way of predicting trends in apparently random data. How does our software help ? Set Analysis The software makes three sets of predictions. Repeating numbers, Vacant numbers and Sector bias. The sector bias is calculated from the beginning of the session so every result is included in the calculation. If the bias remains fixed on a particular sector for hundreds of spins and is always the same every time you analyse the same wheel then you have a real winner – no doubt a doctored wheel or in the case of an animation, hacked software. Repeating Numbers This set of numbers contains all those numbers that don't appear in the list of vacant numbers. It's impossible to predict repeating numbers directly because there is no clear reason for picking one number that has appeared 10 times over one that has appeared 6 times. Vacant Numbers This set of numbers contains all those numbers that have not appeared at all. This calculation cannot be done from the start of the session because a point will be reached when every number has appeared at least once. The calculation therefore is based upon a moving window. When all the numbers have appeared, the window shifts in the result history to the point where 18 vacant numbers can be identified. 
The reset point is determined by the number in the “Reset at” box. We have found that the optimum reset point is when there are 5 numbers left in the prediction. This is especially true if you are analysing an animation. As you enter more and more results you will see the bank increasing or decreasing as each prediction yields either a win or a lose. This value will fluctuate but over time a trend will occur. This trend can be used to help you chose whether to bet or not. The trend is calculated by a mathematical process known as “linear regression”. This is a powerful tool used extensively to predict future outcomes based upon seemingly random data. The software does two trend calculations. The left hand one is the trend from the start of the session. The right hand one is the trend over the last 10 results. A strong and sustained positive trend in both values is a good indicator. Bias Graphs To help you visualise any bias in the wheel we present the data graphically. The graph displays data in different ways which depend upon the “wheel type”. If this is set to “real wheel” the graph shows the predicted sector. If it is set to “animation” the graph shows the actual number of times each number has appeared. If the wheel is perfectly random and a large number of spins have been recorded the graph will look irregular and will not display any significant peaks or troughs. If your graph looks like this after about 100 spins then walk away. There is no point betting on a truly random wheel as the statistics are always in favour of the casino. If on the other hand the graph shows a distinct peak then the wheel is exhibiting a bias towards that sector or a particular number. You can enter up to 2000 results for a given session. We have analysed hundreds of sessions from different wheels and different online games. Most real wheels display a bias towards one sector or another and animations based on computer pseudo random generators tend to produce more repeated numbers. In order to maximise your chance of identifying a likely candidate the software lets you archive each session. Each archived session can be associated with a particular casino and or wheel. When you have amassed a large number of sessions you can analyse all the sessions. If you then view the bias you may see a definite trend either towards a particular number in the case of an animation or a sector in the case of a real wheel. WE MAKE NO CLAIMS THAT THE PREDICTIONS WILL WIN YOU MONEY. ITS UP TO YOU TO DECIDE WHETHER TO BET BASED UPON THEM OR NOT. OUR SOFTWARE IS A TOOL TO HELP YOU MAKE UP YOUR OWN MIND. THERE ARE NO
{"url":"http://www.roulette-analysis.com/","timestamp":"2014-04-19T15:37:36Z","content_type":null,"content_length":"38156","record_id":"<urn:uuid:824bf082-6c1f-4673-8fd1-61ea4692b663>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
critical point

Yeah, I got that to work. Now how do I show that it's not a local min, max, or inflection point? Would I look at the second derivative? If that's not defined, does that mean it's not anything?

The standard second-derivative test fails, since the first derivative is discontinuous at x=0 (the second derivative is not defined there). It remains to be shown that f(0) is not a local maximum/minimum. This should be fairly easy to show. Use, for example, the following definition of local maximum: we say that a function f has a local maximum at [tex]x_{0}[/tex] iff there exists a [tex]\delta>0[/tex] so that for all [tex]x\in{D}(x_{0},\delta),f(x)\leq{f}(x_{0})[/tex]. I've assumed that the x's in the open [tex]\delta[/tex]-disk are in the domain of f, as is the case in your problem. Note that this definition makes no assumption of differentiability or continuity of f.
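The function under discussion is not quoted in the thread above, so the sketch below uses a stand-in of my own choosing: f(x) = x for x >= 0 and f(x) = 2x for x < 0, which is continuous at 0 but has a jump in its first derivative there. It simply samples the definition just quoted: in every small interval around 0 the function takes values both below and above f(0), so 0 is neither a local maximum nor a local minimum. A numerical check like this is only a sanity check, not a proof; the proof is the two-line argument of picking x = delta/2 and x = -delta/2.

```python
def f(x):
    # stand-in example: f is continuous at 0 but f' jumps from 2 to 1 there
    return x if x >= 0 else 2 * x

for delta in (1.0, 0.1, 0.01, 0.001):
    below, above = f(-delta / 2), f(delta / 2)
    print(delta, below < f(0) < above)  # True: values below AND above f(0) arbitrarily close to 0
```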
{"url":"http://www.physicsforums.com/showthread.php?t=52525","timestamp":"2014-04-19T09:35:04Z","content_type":null,"content_length":"38170","record_id":"<urn:uuid:3972795f-43d5-48d7-bce7-f8e12f4faa68>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Void Fraction Measurement and Analysis at Normal Gravity and Microgravity Conditions Abstract (Summary) As the frequency and length of space flights increase, a better understanding of the physical phenomena associated with reduced gravity is needed. One of these processes is two-phase flow. Two-phase gas-liquid flows are encountered in many space applications such as; boiling and condensation, thermal-hydraulic power cycles for space stations and satellites, and in the transfer and storage of cryogenics. An experimental approach is needed to provide the background for accurate modeling and designing of equipment. Measurements of pressure, temperature, flow and void fraction at μ-g were made during several microgravity flights onboard NASA's KC-135 aircraft. High-speed video images were also recorded. The results were later compared to those obtained at 1-g. This study is focused on the measurement and analysis of the volumetric void fraction in water-air, two-phase flow at μ-g and 1-g. The water used for the tests was either de-ionized and distilled, or filtered through activated carbon and distilled. The volumetric void fraction can be found from the ratio of the volume occupied by the gas to the total volume of the gas and liquid. Two capacitance void fraction sensors were used. In the early stages of void fraction measurement, a helical-wound-electrode void fraction sensor was designed and tested in February 1994. Over the course of this research, some of the problems associated with this sensor were identified and a new concave-plate-electrode capacitance sensor was developed having a linear response over the flow settings and 10 times the sensitivity of the helical wound sensor. Data was collected covering a wide range of void fraction, from approximately 0.1 to 0.9 at both 1-g and μ-g conditions. The flow regimes encountered included bubble, slug, transitional flow, and annular flow. Void fraction values for slug flow appear to be slightly higher at μ-g. The average void fraction values for the remaining flow regimes do not appear to show any discernible difference. The development of the void fraction and flow profiles conducted by Zuber and Findlay (1965), was used to compare the profiles found at 1-g and μ-g. These results indicate that the void fraction profile is slightly flatter at 1-g for slug flow. Using this model, the results for bubble flow at 1-g agree with the results reported by other researchers where the well known "saddle" shape profile was found. A statistical approach was used by plotting probability density functions for the 1-g and μ-g void fraction data. A wider fluctuation in void fraction was found for bubble and slug flows at 1-g compared to μ-g. The probability density functions for the highly inertia dominant transition flow and annular flow regimes at 1-g and μ-g were comparable. Bibliographical Information: Advisor:Jeffrey, K. D.; Bolton, R. J.; Krause, Arnold Edwin; Wilson, J. N.; Rezkallah, Kamiel S.; Bugg, James D. School:University of Saskatchewan School Location:Canada - Saskatchewan Source Type:Master's Thesis Date of Publication:05/13/2009
{"url":"http://www.openthesis.org/documents/Void-Fraction-Measurement-Analysis-at-382666.html","timestamp":"2014-04-16T22:05:20Z","content_type":null,"content_length":"10775","record_id":"<urn:uuid:24af7d27-baab-48cf-be41-e884a3339144>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
The Should I Become a Mathematician? Thread Sci Advisor HW Helper P: 9,421 2) Basic Preparation, High school: In high school, it is usual nowadays to take AP calculus. More important for mathematical background, is to get a good course in polynomial algebra and Euclidean geometry, with thorough treatment of proofs. A course in logic would help as well if it is available. Again, one must make do with what is available, but be aware that courses like AP calculus are more designed to please parents and impress admissions officials than to train mathematicians. Most of the people making decisions about what to offer are completely ignorant of the needs of future scientists, and are only concerned with entrance to prestigious schools. Again one must play the game successfully, so even though these people have no idea what you need to become mathematician, they still are able to make decisions on who gets into top schools, so it is prudent to impress them, while also trying to actually learn something on the side. So what I am saying is this: in order to succeed in college calculus, one absolutely MUST have a solid grasp of high school algebra and geometry, although most high schools shortcut these subjects to offer the more prestigious but less useful AP calculus. Thus it is wise to work through an old fashioned high school algebra book like Welchons and Krickenberger (my old book), or an even older one you may run across. A wonderful geometry book is the newer one by Millman and Parker, Geometry: a metric approach with models, designed for high school teacher candidates in college. If you can find them, the SMSG books from Yale University Press, published in the 1960’s are ideal high school preparation for mathematicians. These were produced by the movement to reform high school math in the early 1960’s but the movement foundered on the propensity to put profit before all else, the lack of trained teachers, and the unwillingness to pay for training them. e.g. here is a copy of a precalculus book from that era: Bookseller: Lexington Books Inc (Garfield, WA, U.S.A.) Price: US$ 35.00 [Convert Currency] Shipping within U.S.A.: US$ 4.75 [Rates & Speeds] Book Description: Yale University Press., 1961. Good+ with no dust jacket; Contents are tight and clean; Ex-Library. Binding is Softcover. Bookseller Inventory # 41816 and an algebra book: Mathematics for High School First Course in Algebra Part I Student's Text Bookseller: Bank of Books (Ventura, CA, U.S.A.) Price: US$ 14.25 [Convert Currency] Shipping within U.S.A.: US$ 3.50 [Rates & Speeds] Add Book to Shopping Basket Book Description: Yale University Press. Soft Cover. Book Condition: ACCEPTABLE. Dust Jacket Condition: ACCEPTABLE. USED " :-:Fair:-:Writing on first page, covers bent and creased, a little water damage, corners bumped, covers dirty, page edges dirty, spine torn.:-:" Is less than good. Bookseller Inventory # 19620 another algebra book: Clarkson, Donald R. Et. Bookseller: Becker's Books (Houston, TX, U.S.A.) Price: US$ 15.00 [Convert Currency] Shipping within U.S.A.: US$ 4.50 [Rates & Speeds] Add Book to Shopping Basket Book Description: Yale University, 1961. Book Condition: GOOD+. wraps School Mathematics Study Group Studies in Mathematics Volum V111. Bookseller Inventory # W040215 and one on linear algebra: Introduction to Matrix Algebra. Student's Text. Unit 23. School Mathematics Study Group Bookseller: Get Used Books (Hyde Park, MA, U.S.A.) 
Price: US$ 25.00 [Convert Currency] Shipping within U.S.A.: US$ 3.70 [Rates & Speeds] Add Book to Shopping Basket Book Description: Yale University Press. Paperback. Book Condition: VERY GOOD. USED 4to, yellow wraps. Slightly skewed; wraps sunned and a little worn at spine; text fine. Bookseller Inventory # Here are some books I use currently in teaching math ed majors, which would be bettter used in high school: An Introduction to mathematical thinking, by William J. Gilbert and Scott A. Vanstone. paperback, ISBN 0-13-184868-2, Pearson and Prentice Hall. also: (better) Courant and Robbins, What is Mathematics? After mastering basic algebra and geometry, there is no harm in beginning to study calculus or (better) linear algebra, and probability. A good beginning calculus book is Calculus made easy, by Silvanus P. Thompson, (ISBN: 0312114109) Bookseller: Great Buy Books (Lakewood, WA, U.S.A.) Price: US$ 1.00 [Convert Currency] Shipping within U.S.A.: US$ 3.75 [Rates & Speeds] Add Book to Shopping Basket Book Description: St. Martin's Press, 1970. Paperback. Book Condition: GOOD. USED Ships Within 24 Hours - Satisfaction Guaranteed!. Bookseller Inventory # 2397224 . I love his motto: “what one fool can do, another can” Do not laugh, this is a good book. And therefore his book on electricity and magnetism is probably also good (he was a fellow of the Royal Society of Engineers). Elementary Lessons in Electricity & Magnetism. New Edition, Revised Throughout with Additions Thompson, Silvanus P. Bookseller: Science Book Service (St. Paul, MN, U.S.A.) Price: US$ 4.94 [Convert Currency] Shipping within U.S.A.: US$ 3.50 [Rates & Speeds] Add Book to Shopping Basket Book Description: MacMillan Company, New York, NY, 1897. Hard Cover. GOOD PLUS/NO DUST JACKET. Red cloth covers are clean and bright with some wear at the tips and the head and foot of the spine; gilt lettering on spine is bright and easy to read; institutional lib book plate on inside front cover and lib stamp on copyright page; owner's signature inked on front flyleaf; binding cracked between front and rear endpapers and has been reinforced with clear tape; inside pages clean, bright and tight throuhgout. Overall, still a very useful, solid and clean working or reading copy. Bookseller Inventory # 008802. Learn right now: the price of a book is unrelated to the value of the book as a learning tool, only to the scarcity of the book, and its popularity. [Notice how cheap these wonderful books are compared to the **&^%%$$!!! books that sell for $125. and up, that are required for college courses.] Finally, if you are a very precocious high school student, and have learnt algebra and geometry, you may profitably study calculus. In fact, to play the game of college admissions, you may need to take AP calculus, even if the teacher is an idiot, just so the admissions officials will believe you have “challenged yourself”. There are many good calculus books, beyond the humorous (but valuable) Silvanus P. Thompson, although that may already suffice for an AP course. The delightful math book I had as a high school senior, was a combination of logic, algebra, set theory, analytic geometry, calculus, and probability, called Principles of Mathematics, by Carl Allendoerfer and Cletus Oakley. This was a wonderful book, and opened my eyes to what was possible after a long period of boring mathematics courses at the dull high school level. here is a copy: Allendoerfer, Carl B. & Cletus O. 
Oakley Bookseller: Adams & Adams - Booksellers (Guthrie, OK, U.S.A.) Price: US$ 7.00 [Convert Currency] Shipping within U.S.A.: US$ 3.00 [Rates & Speeds] Add Book to Shopping Basket Book Description: McGraw-Hill, N.Y., 1963. Hard Cover. Book Condition: Very Good. No Jacket. 8vo - over 73⁄4" - 93⁄4" tall. xii + 540pp. name on front endpaper. Bookseller Inventory # 014846. I still have a copy of this book on my shelf. A lovely calculus book, for beginners, with delightful motivation, is Lectures on Freshman Calculus, by Cruse and Granberg. They motivate integration by the “Buffon’s problem” of computing the likelihood of a needle dropped at random, falling across a crack in the floor. (I reviewed it in 1970, and criticized the flawed discussion of Descartes’ solution of the problem of tangents, but I wish now I hadn’t, as it might have survived longer.) here is a copy: Lectures on Freshman Calculus Cruse, Allan B. & Granberg, Millianne Bookseller: Hammonds Antiques & Books (St. Louis, MO, U.S.A.) Price: US$ 18.00 [Convert Currency] Shipping within U.S.A.: US$ 3.00 [Rates & Speeds] Add Book to Shopping Basket Book Description: Addison Wesley 1971, 1971. Hardcover good condition with minor soiling, no dustjacket xlibrary with usual markings ISBN:none. Bookseller Inventory # LIB2958010770. As before, participate in the math team, and practice your vocabulary, to pass high on the verbal SAT. And read lots of books. Mathematicians have to also describe what they do to literate folk, and of course also need to “woo women” (or your choice), as observed in dead poets society. more later.
{"url":"http://www.physicsforums.com/showthread.php?p=1006099","timestamp":"2014-04-18T15:51:24Z","content_type":null,"content_length":"97342","record_id":"<urn:uuid:cd2d466b-1dcd-444f-b9b6-6c2b6921bf7b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Departments -Maths Key Stage 4 Maths Key Stage 4 Year 10 - Curriculum Content All students follow a Scheme of Work which builds on work covered in KS3 and covers a range of aspects from each of the broad content headings listed below: Shape, Space and Measures Handling Data Each of these aspects are re-visited, consolidated and extended throughout the year and in Year 11. Functional Mathematics (Real Life problems) and Mathematical Processes and Applications (Problem Solving skills) are an integral part of the content areas listed above. Year 11 - Curriculum Content All students follow a Scheme of Work which builds on work covered in Year 10 and covers a range of aspects from each of the broad content headings listed below: Shape, Space and Measures Handling Data Each of these aspects are re-visited, consolidated and extended throughout the year. Functional Mathematics (Real Life problems) and Mathematical Processes and Applications (Problem Solving skills) are an integral part of the content areas listed above. Key Stage 4 Approach The Mathematics GCSE does not just begin in Year 10. Students are building up their knowledge and skills throughout Key Stage 3 and the work then continues with consolidation and further development of these aspects. Students follow a Linear GCSE course with two examinations in the summer of Year 11, one calculator and one non-calculator. Students will be entered for either Foundation or Higher tier, the decision on which tier to enter is governed only by a student’s strengths in the subject. Being in a particular Mathematics set does not necessarily mean entry at a particular tier of the final exam. At Higher Tier, grades A* to D are available. At Foundation Tier, grades C to G are available. Homework is generally set weekly and may take a variety of forms, e.g. worksheets or activities to consolidate or assess understanding, preparation for future lessons, revision for exams, practice exam papers and online tasks. The department has its own dedicated homework website where parents can check the work which has been set. The URL for this is https://sites.google.com/site/ stbernardsmaths. Students also have individual passwords for the MyMaths.co.uk website and can use this to help them review classwork, revise for exams or complete assigned homework.
{"url":"http://www.st-bernards.cumbria.sch.uk/maths_ks4.php","timestamp":"2014-04-19T06:51:48Z","content_type":null,"content_length":"27459","record_id":"<urn:uuid:fd678bac-f54a-4db2-8140-fccc92918f7f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Bloomingdale, IL Geometry Tutor Find a Bloomingdale, IL Geometry Tutor ...Everyone learns and studies differently. I can help students explore what their style is, in order to benefit them academically. English was my second language and so I understand what is required and necessary for a student to learn another language. 20 Subjects: including geometry, English, writing, ESL/ESOL ...I have both lived and taught (English as foreign language) in France, and enjoy helping students from all different backgrounds. I graduated from Missouri State University with a 3.62 overall (4 point scale), so I feel that I am qualified to tutor in many areas. I look forward to helping you achieve success in your subjects, so please contact me as soon as you are ready to learn! 16 Subjects: including geometry, English, chemistry, French ...I am fully qualified to teach high school level math. As a mechanical engineer master's degree holder, I work as a product development and analysis engineer for an aerospace company in Rockford and Skokie, Il. Part of the analysis aspect involves a high level of statistics and probability. 20 Subjects: including geometry, calculus, physics, statistics ...Trigonometric Functions OF ANGLES. Angle Measure. Right Triangle Trigonometry. 17 Subjects: including geometry, reading, discrete math, GRE ...I have also taken A.P. Music Theory and received college level music course credit for it. Currently, I am engaged in composing music and developing my own skills and repertoire. 16 Subjects: including geometry, chemistry, English, algebra 1 Related Bloomingdale, IL Tutors Bloomingdale, IL Accounting Tutors Bloomingdale, IL ACT Tutors Bloomingdale, IL Algebra Tutors Bloomingdale, IL Algebra 2 Tutors Bloomingdale, IL Calculus Tutors Bloomingdale, IL Geometry Tutors Bloomingdale, IL Math Tutors Bloomingdale, IL Prealgebra Tutors Bloomingdale, IL Precalculus Tutors Bloomingdale, IL SAT Tutors Bloomingdale, IL SAT Math Tutors Bloomingdale, IL Science Tutors Bloomingdale, IL Statistics Tutors Bloomingdale, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Bloomingdale_IL_geometry_tutors.php","timestamp":"2014-04-16T13:32:41Z","content_type":null,"content_length":"24001","record_id":"<urn:uuid:8aec2c09-2ec1-4d78-92df-e161f4f36213>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
NOAA/National Severe Storms Laboratory Norman, Oklahoma Weather and Forecasting To appear in September 1996 issue The raw data used in the paper are available for your use. Note that there are 596 lines of data so that persistence forecasts can be calculated, leading to the 590 forecast days discussed in the We have carried out verification of 590 12-24 hour high temperature forecasts from numerical guidance products and human forecasters for Oklahoma City, Oklahoma using both a measures-oriented verification scheme and a distributions-oriented scheme. The latter captures the richness associated with the relationship of forecasts and observations, providing insight into strengths and weaknesses of the forecasting systems, and showing areas in which improvement in accuracy can be obtained. The analysis of this single forecast element at one lead time shows the amount of information available from a distributions-oriented verification scheme. In order to obtain a complete picture of the overall state of forecasting, it would be necessary to verify all elements at all lead times. We urge the development of such a national verification scheme as soon as possible since without it, it will be impossible to monitor changes in the quality of forecasts and forecasting systems in the future. The verification of weather forecasts is an essential part of any forecasting system. Producing forecasts without verifying them systematically is an implicit admission that the quality of the forecasts is a low priority. Verification provides a method for choosing between forecasting procedures and measuring improvement. It can also identifystrengths and weaknesses of forecasters, thus forming a crucial element in any systematic program of forecast improvement. As Murphy (1991) points out, however, "failure to take account of the complexity and dimensionality of verification problems may lead to ... erroneous conclusions regarding the absolute and relative quality and/or value of forecasting systems." In particular, Murphy argues that the reduction of the vast amount of information from a set of forecasts and observations into a single measure (or a limited set of measures), a measures-oriented approach to verification, can lead to misinterpretation of the verification results. Brier (1948) pointed out that "the search for and insistence upon a single index" can lead to confusion. Moreover, a measures-oriented approach fails to identify the situations in which forecast performance may be weak or strong. An alternative approach to verification involves the use of the joint distribution of forecasts and observations, hence leading to the name distributions-oriented verification (Murphy and Winkler 1987). A major difficulty in taking this approach to verification is that the dimensionality of the problem can be very large and, hence, the data sets required for a complete verification must be very large, particularly if two forecast strategies are being compared (Murphy 1991). For a joint comparison of two forecast strategies and observations, the dimensionality, D, of the problem is given by, D = IJK - 1, where I is the number of distinct forecasts from one strategy, J is the number from the second strategy and K is the number of distinct observations, respectively. Thus, if each forecast strategy produces 11 distinct forecasts and 11 distinct observations (e.g., cloud cover in intervals of 0.1 from 0 to 1), the dimensionality is given by D = (11)(11)(11) - 1 = 1320. 
Clearly, the data sets needed for complete verification and the description of the joint distribution can become prohibitively large. In practice, therefore, persons making evaluations of forecasts have to make compromises between the size of the data set and the completeness of the verification. In this paper, we show the richness of information that can be obtained from simple verification techniques using a relatively small forecast sample. We believe that the insights available from even this modest work show the importance of considering a broad range of descriptions of the forecasts and observations, in an effort to retain as much information as possible. Murphy (1993) described three types of "goodness" for forecasts. We summarize those types here in order to show where the present work fits. The three types are: 1) Consistency: How well does a forecast correspond to the forecaster's best judgments about the weather? 2) Value: What are the benefits (or losses) to users of the forecasts? 3) Quality: How well do forecasts and observations correspond to each other? We cannot say anything about "consistency," since we have no access to forecasters' judgments. This is typically true. Consistency is the only type of goodness that is completely under the control of the forecaster, but it is difficult for others to verify. We also cannot say anything quantitative about "value," since we have not done a study of the forecast's user community. We will make some general remarks about temperature forecasting, based on the premise that improvements of a few degrees in a forecast are unimportant to many users in most cases.[1] Almost all of our attention will be focused on the "quality" of the forecasts. Murphy (1993) defines ten different aspects of quality (see his Table 2 for more details). Traditional measures such as the mean absolute error and the root mean-square error are related to aspects such as accuracy and skill. By using a distributions-oriented approach, the complete relationship between forecasts and observations can be examined. Forecasts can be high in quality in one aspect although being low in another. For example, forecasting the high temperatures by simply using the annually-averaged high temperature every day would be an unbiased temperature forecast, but it would clearly not be very accurate over a long period. Overforecasting the high temperature by 10 ° every day might be more accurate than using the annually-averaged high temperature, but would be biased. A perfect forecast would perform equally well at all of the various aspects of quality. An important distributions-oriented study of temperature forecasts was done by Murphy et al. (1989), in which high temperature forecasts and observations for Minneapolis, Minnesota were compared. They concluded that the different measures of forecast quality gave different impressions about the quality of forecast systems. They also pointed out that the joint distribution approach highlights areas in which forecasting performance is especially weak or strong. In this paper, we will carry out a related study on a data set for Oklahoma City, Oklahoma. Our focus will be to show the vast wealth of additional information available that can be obtained through a distributions-based verification over a "traditional" measures-based approach. 
We will point out some particularly interesting aspects of forecasting performance that, in a forecasting system that encouraged continuous verification and training, could lead to improvements in forecast quality.

2. Forecast and verification data set

The data set consists of 590 high-temperature forecasts from 1993 and 1994 made by the National Weather Service (NWS) Forecast Office at Norman, Oklahoma (NWSFO OUN), and verified at Oklahoma City (OKC)[2]. The basic forecast systems are from the Limited-Area Fine Mesh (LFM)-based Model Output Statistics (MOS), the Nested-Grid Model (NGM)-based MOS, the NWSFO OUN human forecast, and persistence (PER). In addition, an average or consensus MOS forecast (CON) was created by averaging the LFM MOS and NGM MOS forecasts. Vislocky and Fritsch (1995) have shown that the simple averaging of the LFM and NGM MOS forecasts produced a significantly better forecast system over the long run than either of the individual MOS forecasts. The MOS forecasts are all based on the 0000 UTC model runs, verifying 12-24 hours later, although the NWSFO forecast is taken from the area forecast made at approximately 0800-0900 UTC, verifying later that day. PER is the observed high temperature from the previous day. All days for which all four basic forecasts are available, as well as verifying observations, are included in the data set.

3. A measures-oriented verification scheme

It is possible to develop simple measures that convey some information about the forecast performance. In particular, the bias or mean error (ME) is given by

ME = (1/N) * sum_{i=1..N} (f_i - x_i), (1)

where f_i is the i-th forecast, x_i is the i-th observation, and there are a total of N forecasts. It says nothing about the accuracy of forecasts since a forecaster making 5 forecasts that are 20° too warm and 5 forecasts that are 20° too cold will get the same ME as a forecaster making 10 forecasts that match the observations exactly. In order to correct that problem, the errors need to be nonnegative. There are two common ways of doing this. The mean absolute error (MAE) takes the absolute value of each forecast error and is given by

MAE = (1/N) * sum_{i=1..N} |f_i - x_i|. (2)

The root mean square error (RMSE) squares each error and is given by

RMSE = [(1/N) * sum_{i=1..N} (f_i - x_i)^2]^(1/2). (3)

Because of its formulation, the RMSE is much more sensitive to large errors than MAE. For instance, suppose a forecaster makes 10 forecasts, each of which is in error by 1°, while another forecaster makes 9 forecasts with 0° error and one with 10° error. In both cases, the MAE is 1°. The RMSE for the first forecaster is 1°, although it is 3.16° for the second forecaster. Thus, the RMSE rewards the more consistent forecaster, even though the two have the same MAE. For both MAE and RMSE, it is possible to compare the errors to those generated by some reference forecast system (e.g., climatology, persistence, MOS) by calculating the percentage improvement, IMP. IMP is given by

IMP = 100 * (E_R - E_F) / E_R, (4)

where E_R is the error statistic generated by the reference forecast system and E_F is the error statistic from the other forecast system. This is often described as a skill score.

The relative performance of the various forecast systems using the simple measures described above is summarized in Table 1. NGM MOS is seen to have a cold bias (-0.62 °F), although the NWSFO has a warm bias (0.49 °F). Although LFM MOS has a lower MAE than NGM MOS, it has a higher RMSE. The CON forecast represents a greater improvement in the MAE and RMSE over either the LFM MOS or NGM MOS than the human forecasters improve over CON, according to these measures. 
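For concreteness, the summary measures above can be sketched in a few lines of code. The snippet below is a minimal illustration, not part of the paper; the forecast and observation values are hypothetical placeholders, and it simply reproduces the two-forecaster example from the text.

    import math

    def mean_error(forecasts, observations):
        # Bias (ME), eq. (1): mean of forecast minus observation.
        return sum(f - x for f, x in zip(forecasts, observations)) / len(forecasts)

    def mean_absolute_error(forecasts, observations):
        # MAE, eq. (2).
        return sum(abs(f - x) for f, x in zip(forecasts, observations)) / len(forecasts)

    def root_mean_square_error(forecasts, observations):
        # RMSE, eq. (3).
        n = len(forecasts)
        return math.sqrt(sum((f - x) ** 2 for f, x in zip(forecasts, observations)) / n)

    def percent_improvement(e_forecast, e_reference):
        # IMP, eq. (4): percentage improvement over a reference system.
        return 100.0 * (e_reference - e_forecast) / e_reference

    # The text's example: ten 1-degree errors versus nine perfect forecasts and one 10-degree miss.
    obs = [70.0] * 10
    fcst_a = [71.0] * 10
    fcst_b = [70.0] * 9 + [80.0]
    print(mean_absolute_error(fcst_a, obs), root_mean_square_error(fcst_a, obs))  # 1.0 1.0
    print(mean_absolute_error(fcst_b, obs), root_mean_square_error(fcst_b, obs))  # 1.0 ~3.16

Both forecasters have the same MAE, but the RMSE penalizes the single large miss, which is the point made above.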
This leaves open the question of the value (in Murphy's context) of a decrease of 0.24 °F in MAE or RMSE by NWSFO over the numerical guidance. By using these simple measures, we are unable to determine the distribution of the errors leading to the statistics and their dependence upon the actual forecast or observation. Therefore, it is not possible to use these measures alone to determine the nature of the forecast errors. In the hypothetical case of the two forecasters discussed above, it is likely that for most users, the forecast with ten errors of 1 °F would provide more value than the forecast with 1 error of 10 °F. From that view, even though any single measure is clearly inadequate, the MAE may be potentially even more misleading about forecast performance than RMSE, depending upon the assumptions about the needs of the users of the forecasts. The MAE is one of the two temperature verification tools required by the NWS Operations Manual (NWS Southern Region Headquarters 1984; NOAA 1984). The other is the production of a table of forecast errors in 5 °F bins[3]. We have generated this table for the various forecast systems (Table 2). As expected, PER produces the highest number of large errors. Other than PER, one of the striking aspects of the table is the frequency with which forecast temperatures have small errors. All of the forecasts are within 5 °F more than 80% of the time. By using CON, the forecast was correct to within 5 °F 86.1% of the time. Thus, the numerical guidance produced forecast errors exceeding 5 °F approximately once per week, although the NWSFO reduced the number of such errors from 82 to 75, a decrease of 8.5%. Very large forecast errors are, of course, even less frequent. The worst forecast system by this measure (other than persistence), LFM MOS, is correct within 10 °F 96.6% of the forecasts (approximately once per month), although the best, NWSFO, is within 10 °F 98.6% of the time. Compared to the most accurate MOS forecast, CON, the NWSFO reduced the errors exceeding 10 °F from 12 to 8 (33.3%). Observe that there is an important difference in the distribution of errors for NWSFO and the various MOS forecasts. In all of those cases, forecast errors greater than 10 °F are much more likely to be positive (too warm) than negative (too cold). However, although the NWSFO distribution is skewed towards the overforecast (i.e., too warm) side at all bins, small MOS forecast errors are more likely to be cold than warm. This is particularly true for the NGM MOS, where 11 of the 14 (79%) errors larger than 10 °F are too warm, and 276 of the 429 (64%) errors of less than 6 °F are too cold. Knowledge of this asymmetry could be employed by forecasters to improve their use of numerical guidance products, and could be used by modellers to improve the statistically-based guidance as well. These two tables represent all the verification knowledge of temperature forecasts that is required of the forecast offices. This by no means exhausts the available information, however. The table of forecast errors (Table 2) represents one "level" at which a distributions-based approach to verification can be applied and is a step above the summary measures in sophistication. It gives the univariate distribution of forecast errors p(e) = p(f - x). However, this approach implicitly assumes that all errors of magnitude f - x are the same. A more useful approach, which we will explore in the next section, is to consider the joint (i.e., bivariate) distribution of p(f,x). 
This latter method allows us to consider the possibility that certain values of f or x are more important than others, or that forecast performance varies with f or x.

4. A distributions-oriented verification scheme

A more complete treatment of verification demands consideration of the relationship between forecasts and observations (see Murphy 1996 for a description of the early history of this issue). For 12-24 hour temperature forecasting, an appropriate method is to consider changes from the previous day's temperature. In a qualitative sense, persistence represents an appropriate no-skill forecast for most forecast users, particularly for forecasts on this time scale. As seen in section 2, that would lead to an error of 10 °F or less for almost 80% of the data set. Thus, we have chosen to verify forecasts and observations in the context of day-to-day temperature change. Persistence is then reduced to a single category in the joint distribution of forecast and observed temperature changes.

The range of forecast and observed changes is 72 °F (-39 °F to +33 °F). The dimensionality of doing a complete verification comparing two forecast systems over that range of temperatures is 73^3 - 1 = 389016. Clearly, the data set is much too small to span that space[4]. As a result, we have chosen to count forecasts and observations in 5 °F bins in order to reduce the dimensionality considerably. This also has the appeal of taking some account of the uncertainty in the observations and the variability of temperature over a standard forecast area. The bins are centered on 0 °F, going in intervals of 5 °F. Therefore, forecasts or observed changes of +/- 2 °F are counted in the 0 °F bin. We have chosen to collect all changes greater than or equal to 23 °F into a bin labelled +/- 25 °F. This is due to the sparseness of the data set even with 5 °F bins. In addition, we have chosen to evaluate each forecast system individually. The dimensionality of the verification problem has been reduced significantly by these processes. Since there are now 11 forecast and observation bins for each forecast system, the dimensionality of the binned problem for each system is 11^2 - 1 = 120.

The joint distribution of the forecasts (f) and observations (x), p(f,x), contains all of the non-time-dependent information relevant to evaluating the quality of the forecasts (Murphy and Winkler 1987). These distributions for the LFM MOS, NGM MOS, CON, and NWSFO are given in Tables 3a-d. Note that numbers above the bold diagonal indicate forecasts that were too cold and numbers below the bold diagonal indicate forecasts that were too warm. Extreme temperature changes are, in general, underforecast, particularly by the numerical guidance, most especially by the LFM MOS. In the bins associated with 20 °F (or more) temperature changes (of either sign), there are only 21 LFM MOS forecasts, in comparison with 34 NGM MOS, 24 CON, 30 NWSFO, and 42 observations. The extent of this becomes clear when the ratio of forecasts to observations is plotted against the forecast temperature change (Fig. 1). Ideally, this ratio should be close to unity for all forecast values. Instead, the ratio is well below unity for large temperature changes and, for the most part, slightly above one for small changes. In comparison with the numerical guidance, the NWSFO forecast is, in fact, better in this respect, with large departures from unity occurring only for forecasts of cooling of 15 °F and warming of 25 °F, which only had one forecast. 
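A rough sketch of how such a binned joint distribution might be tabulated is given below. It is not from the paper; the input day-to-day changes are made-up values, and only the 5 °F bin centers and the +/- 25 °F collection bins follow the description above.

    from collections import Counter

    def to_bin(change):
        # Map a temperature change (deg F) to the nearest 5-degree bin center,
        # collecting everything at or beyond +/- 23 deg F into the +/- 25 bins.
        if change <= -23:
            return -25
        if change >= 23:
            return 25
        return 5 * round(change / 5)

    def joint_distribution(forecast_changes, observed_changes):
        # Relative frequency p(f, x) over binned (forecast, observed) pairs.
        counts = Counter((to_bin(f), to_bin(x))
                         for f, x in zip(forecast_changes, observed_changes))
        total = sum(counts.values())
        return {fx: c / total for fx, c in counts.items()}

    # Hypothetical forecast and observed day-to-day temperature changes (deg F).
    fcst = [3, -8, 12, 0, -24, 17]
    obs = [5, -11, 14, -2, -30, 25]
    p_fx = joint_distribution(fcst, obs)

The marginal distributions p(f) and p(x), and the conditional distributions used in the factorizations discussed next, can all be read off such a table.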
Murphy and Winkler (1987) point out that much of the information in the joint distribution is more easily understood by factoring p(f,x) into conditional and marginal distributions. In particular, we want to look at two complementary factorizations of the joint distribution following Murphy and Winkler (1987). The first is the calibration-refinement factorization, involving the conditional distribution of the observations given the forecasts, denoted by p(x|f), and the marginal distribution of the forecasts, p(f) (Tables 4a-d). The factorization is given by

p(f,x) = p(x|f)p(f). (5)

The second factorization is the likelihood-base rate factorization, involving the conditional distribution of the forecasts given the observations, p(f|x), and the marginal distribution of the observations, p(x) (Tables 5a-d), given by

p(f,x) = p(f|x)p(x). (6)

Although we present both factorizations, we will make only brief comments about the contents. A number of important aspects about the quality of the forecasts are apparent from the tables. The values of p(x|f) and p(f|x) are dominated by the diagonal of the matrix in both Tables 4 and 5, almost without exception[5]. The significant exception is related to the cold bias of the NGM MOS. Over half of the forecasts of a 5 °F cooling are associated with no change in the observed temperature (Table 4b). As a result, the CON forecasts are also too cold at that range.

Reliability (also known as conditional bias or calibration) is one of the aspects of forecast quality that can be derived from the calibration-refinement factorization. It represents the correspondence between the mean of the observations associated with a particular forecast (denoted <x_f>) and that forecast (f) (Murphy 1993). It can be viewed as the difference between those quantities. For perfectly reliable forecasts, the value would be zero for all forecasts, f. In the case of our four systems producing forecasts of temperature change, the differences are typically less than a degree, indicating fairly reliable forecasts (Fig. 2). However, it is worth noting that there are potentially meaningful biases of 2-3 °F at certain ranges of temperature changes. Operationally, the identification of these could be used to improve future forecasts.

Consideration of p(f|x) has not received as much attention as p(x|f) in forecast verification (Murphy and Winkler 1987). This is perhaps due to the standard view of verification as one of seeing what happens after a forecast has been made. Consideration of the conditional probability of forecasts given the observations requires a view of verification as an effort to understand the relationship between forecasts and observations, rather than just looking at what happened after a forecast was made. As an example of something that appears much clearer from the perspective of p(f|x), we turn to the question of overforecasting and underforecasting the magnitude of temperature changes. It is not obvious that there is any reason to prefer one or the other and, given that errors will occur, one would like to have overforecasts and underforecasts be equally likely. The magnitude of the asymmetry between the two appears different from an inspection of the two tables of conditional probability. Accurate forecasts are associated with the bins along the main diagonal. Underforecasting of temperature changes is associated with bins to the left (right) of the main diagonal in the upper left (lower right) quarter of Table 4. 
Underforecasting of temperature changes is associated with bins below (above) the main diagonal in the upper left (lower right) quarter of Table 5. Underforecasting of changes in temperature appears to be a much more serious problem when viewed from the context of p(f|x) instead of p(x|f) (Fig. 3). This paradox can be seen upon close inspection of Table 3 where the distributions appear more skewed along columns than along rows, but it is more dramatically evident when the conditional probabilities are considered. By using p(f|x), the underforecasting of extreme temperature changes becomes more apparent. In passing, we note the asymmetry in the overforecasting by the NWSFO between forecasts and observations of warming and cooling. Warming is much more likely to be associated with overforecasting than cooling is. We will return to this point in the next section.

The relationship between f and x can also be examined by creating linear regression models between the two to describe the conditional distributions, p(x|f) and p(f|x). The process is described in detail in Appendix A of Murphy et al. (1989). To summarize, the expected value of the observations given a particular forecast, E(x|f), is expressed as a linear function of the forecast[6], by

E(x|f) = a + bf, (7)

where a = <x> - b<f> and b = (s_x/s_f)r_fx. Now, <x> and <f> are the sample means of the observations and forecasts, respectively, s_x and s_f are the sample standard deviations of the observations and forecasts, respectively, and r_fx is the sample correlation coefficient between the forecasts and the observations (Table 6). By plotting the departure of the expected values from the forecast (i.e., E(x|f) - f, rather than E(x|f)), the behavior of the models becomes more apparent (Fig. 4). The slope of the lines is related to the conditional bias of the forecasts. For example, the NGM MOS is high (low) for forecasts of cooling (warming). The conditional biases of the other forecasts are all of the other sign. Assuming that the bias varies linearly with the temperature forecast range, a user with that information might be able to adjust the forecasts in order to make better use of the forecasts. Over most of the forecast temperature range, the expected value of the observations associated with NWSFO forecasts departs less from the forecast than the expected value associated with the MOS products. Thus, the conditional bias of the NWSFO forecasts is less than that of the guidance products.

5. Points of Interest

a) The asymmetry in forecasting warming and cooling

As mentioned earlier, there is an asymmetry in the forecasting of temperature changes by the NWSFO. Cooling is more likely to be underforecast than warming. To illustrate some facets of this asymmetry, we have considered the subset of the data related to observed moderate temperature changes of 3-17 °F (associated with the +/-5, 10 and 15 °F bins in the joint distribution tables). A cursory examination of some of the summary measures of the forecast performance reveals both the underforecasting and the asymmetry (Table 7). Positive (negative) values of ME for forecasts of cooling (warming) indicate underforecasting. The NWSFO forecasts have the largest ME for cases of cooling and the smallest ME for warming. In terms of MAE and RMSE, the NGM and CON forecasts outperform the NWSFO for cooling, although NWSFO does much better on warming. The asymmetry appears to result, for the most part, from the warm bias of the NWSFO forecasts. 
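The kind of stratified comparison summarized in Table 7, and the bias adjustment examined next, can be sketched as follows. This is only an illustration with invented numbers, not the paper's data, and it splits cases simply by the sign of the observed change rather than by the 3-17 °F window used above.

    def me(pairs):
        # Mean error over (forecast, observation) pairs.
        return sum(f - x for f, x in pairs) / len(pairs)

    def mae(pairs):
        # Mean absolute error over (forecast, observation) pairs.
        return sum(abs(f - x) for f, x in pairs) / len(pairs)

    def stratified_summary(forecast_changes, observed_changes, bias=0.0):
        # ME and MAE for observed cooling and warming cases, after removing
        # an assumed constant bias from the forecasts.
        pairs = [(f - bias, x) for f, x in zip(forecast_changes, observed_changes)]
        cooling = [(f, x) for f, x in pairs if x < 0]
        warming = [(f, x) for f, x in pairs if x > 0]
        return {"cooling": (me(cooling), mae(cooling)),
                "warming": (me(warming), mae(warming))}

    # Hypothetical forecast and observed temperature changes (deg F).
    fcst = [4, -6, 12, -15, 9, -3]
    obs = [6, -10, 14, -20, 8, -7]
    print(stratified_summary(fcst, obs))             # raw forecasts
    print(stratified_summary(fcst, obs, bias=0.5))   # after removing a 0.5-degree warm bias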
As seen in Table 1, NWSFO is 0.49 °F warmer than the observations. If we subtract 0.49 °F from each of the NWSFO forecast temperature changes in an effort to correct for the bias, we can recompute the summary measures and compare the adjusted NWSFO forecasts to the guidance (Table 8). The adjusted NWSFO performance is much less asymmetric than the unadjusted performance. Although the adjusted NWSFO still performs better in these summary measures for warming events than for cooling, the asymmetry is much less pronounced. The bias of the forecasts was a large part of the signal. This makes intuitive sense, since a warm bias will help in underforecasting of warm events, although hurting in the underforecasting of cool events. The forecasting of extreme temperature changes gives a different picture than that of moderate temperature changes. For observed changes of more than 17 °F, NWSFO improves more on guidance for cooling than for warming (Table 9). The large difference in performance of the LFM and NGM is particularly striking. It is the poor performance of the LFM in these extreme events that led to the difference seen in the overall MAE and RMSE noted in section 3. It also means that, unlike for smaller temperature changes, CON is outperformed by the NGM MOS in this case. The NGM MOS is the most accurate forecast for the warming events. This is interesting in light of the overall cold bias of the NGM. Sample sizes are much smaller, of course, so that this may be an artifact. It is likely that these very large day-to-day changes in temperature have the most impact on the public and for which value can be added by providing accurate forecasts. A histogram of forecast errors highlights the difference in the various forecasting systems (Fig. 5). Despite a bias towards underforecasting changes, the NWSFO has the fewest very large errors with only one forecast more than 12 °F too low compared to 5 or 6 for the guidance. In a sense, for these very large changes, the NWSFO forecast adds a great deal of potential value for users on this small number of days by avoiding extremely large forecast errors. b) The relationship of NWSFO to guidance A typical question considered in verification studies involving human forecasters is that of how much "value" the humans add to numerically generated guidance.[7] Here, we will touch briefly on this question, comparing the NWSFO to CON, which was the best of the objective guidance products discussed here. There are several possible approaches for considering the situations in which humans could add value. The first is to look at the kinds of errors associated with the spread between the LFM MOS and NGM MOS, used to generate CON. In this data set, the two MOS values never disagree by more than 12 °F. Combining the ends of the distribution of the spread of MOS differences, we have calculated the improvement in RMSE over CON by NWSFO as a function of the difference between the input MOS values (Fig. 6). Although the RMSE for CON is fairly constant (between approximately 3.5 °F and 4.5 °F) the relative performance of NWSFO varies markedly. In cases where the NGM MOS is 2-4 °F cooler than the LFM MOS, the NWSFO improves over CON by approximately 20% in RMSE. On the other hand, when NGM MOS is 1-4 °F warmer, the NWSFO does approximately 5-10% worse than CON. This latter feature is curious and we can offer no explanation for it, although it certainly warrants further study. A second approach is to look at the cases where the NWSFO disagreed with CON. 
In general, this did not happen very often during the period of study. There were 26 times when the NWSFO disagreed by more than 5 °F with CON, 13 on each side of the CON forecast. The RMSE plotted by the difference in forecasts shows that the NWSFO, in general, slightly outperforms CON (Fig. 7). It also shows that when the two forecasts are in close agreement, they are both more accurate, in terms of the RMSE. (Note that this is in contrast to the rather flat nature of the RMSE of CON as a function of the difference in NGM and LFM MOS, as seen in Fig. 6). There is approximately 2 °F lower RMSE when the NWSFO is 1 °F warmer than the CON than the RMSE when the NWSFO is either 5 °F warmer or 3 °F cooler than CON. An average forecast of the NWSFO and CON can be computed ("NWSCON") and, over most of the range, it adds little value to NWSFO and CON from the standpoint of the RMSE. This implies that, at least in some statistical respects, the NWSFO and CON forecasts are not very different. A final important step in verification is to look back at the cases that lead to some of the interesting results. As mentioned above, there were 26 times when the NWSFO and CON forecasts disagreed by more than 5 °F. These cases are listed in Table 10 in order of increasing improvement by the NWSFO over CON. As would be expected, most of the cases are from the winter or transition seasons, with only one being in the summer. Seven cases have errors of opposite sign from NWSFO and CON, where the errors are large enough that the average of the two forecasts (NWSCON) beats both NWSFO and CON. In the remaining 19 cases, NWSFO is more accurate in 11 (42% of the total). Of the five disagreements of 10 °F or more, the NWSFO is more accurate than CON in the two cases where the forecast errors are of the same sign. These cases of large disagreement between NWSFO and CON provide an opportunity for improvement in temperature forecasting. Their identification means that they can be studied more closely in an effort to understand the reasons why the NWSFO disagreed with CON and, of particular importance, it may be possible to discern when it is advantageous to disagree with the guidance products in the future. It would be hoped then, that forecasters could learn (a) when they have a better opportunity to improve upon MOS forecasts significantly and (b) when MOS is an adequate forecast and can be used without change. We have looked at the verification of 12-24 hour high temperature forecasts for Oklahoma City from a distributions-oriented approach. The impression one gets of the performance of the various forecast systems depends on how complete a set of descriptors one uses. If the approach to verification is limited to simple summary measures, the richness of the relationship between forecasts and observations is lost. What appear as issues of fundamental importance when considering a distributions-oriented approach to verification cannot even be asked with a measures-oriented approach, since the presentation of the data does not allow the issues to be identified. Simple summary measures of overall performance offer almost no information about the relationship between forecasts and errors and, as a result, it is difficult to learn about the occasions on which human forecasters can improve significantly on numerical guidance. 
If one believes that the point of human intervention in weather forecasting is to provide information that will allow users to gain value from forecasts, and that small improvements in accuracy (say 1-2 °F) have little significant impact on the large majority of users, then it is imperative to consider the distribution of errors. In particular, overall summary measures can confuse the potential value added in a small, but highly significant, set of cases by being swamped by information from the very large number of "less important" forecast situations. One interpretation of the errors in forecasting extreme temperature changes here is that the NWSFO adds significant value to the numerical guidance on about 5 days in the data set (as measured by the reduction in very large underforecasts of large temperature changes). In comparison to the 590 days in the data set, that number seems very small, but in comparison to the 42 days on which large changes took place, it becomes a much more significant contribution. This final point adds a cautionary note to the use of distributions-based verification systems associated with the large dimensionality of the verification problem. The use of distribution-based approaches means that the "impressions" of the forecast system will necessarily be based on smaller sample sizes. Thus, while the distributions-oriented verification potentially offers a more complete picture of forecast system performance, it must be used with care and adequate sample sizes collected. We also identified two interesting features in the NWSFO forecasts. The first is a pair of asymmetries in the forecasting of temperature changes. For moderate changes (3-17 °F), NWSFO forecasts warming events more accurately than cooling. In fact, the NGM MOS and CON forecasts outperform NWSFO on the cooling events over this range. The asymmetry appears in large part due to a bias towards higher temperatures in the NWSFO forecasts. For extreme events (>=18 °F), however, the NWSFO forecasts of cooling are much more accurate than those of warming and outperform the numerical guidance. The second feature is an improvement over guidance by NWSFO for those cases where the NGM MOS is a few degrees cooler than the LFM, although doing worse when NGM MOS is slightly warmer than the LFM. These two features suggest that it should be possible to improve the accuracy of temperature forecasts by using some fairly simple strategies taking into account the performance of the various guidance forecast systems. We have looked at only one forecast element at one forecast lead time. A complete verification would necessitate looking at all forecast elements at all lead times. In the absence of that, it is impossible to know what the current state of forecasting is. As a result, it will be impossible to monitor the impacts of future changes in forecasting techniques and in the forecasting environment, such as those associated with the modernization of the NWS. A fundamental question facing the NWS in the future is the allocation of scarce resources. An on-going comprehensive verification system has the potential to identify needs and opportunities for improving forecasts through entry-level training, on-going training, and improved forecast techniques. If small improvements leading to small value for users cost large sums of money, it is economically unwise to pursue them. If, on the other hand, opportunities exist for adding large potential value to forecasts, it is economically unwise to ignore them. 
Unfortunately, at this time, the verification system within the NWS is inadequate to provide decision makers enough information to make choices about the potential value of forecasts. Forecast verification is, of course, of importance to more than just the NWS. Private forecasters need to show that users get increased value from their products over those freely available from the NWS. As a result, the issue of the proper approach to forecast verification goes beyond the public sector. It is of importance to anyone who makes or uses forecasts on a regular basis. It is in the interest of both parties to move towards a complete distributions-oriented approach to verification. Failing to do so will limit the value of weather forecasting in the future. Acknowledgments We wish to thank the staff at NWSFO OUN for their willingness to share the data we have used. Allan Murphy provided inspiration for the project through ongoing conversations over a period of several years, as well as commenting on the draft manuscript. We also thank Arthur Witt of NSSL and an anonymous reviewer for their constructive comments on the manuscript. Brier, G. W., 1948: Review of "The verification of weather forecasts" by W. Bleecker. Bull. Amer. Meteor. Soc., 29, 475. Murphy, A. H., 1991: Forecast verification: Its complexity and dimensionality. Mon. Wea. Rev., 119, 1590-1601. _____, 1993: What is a good forecast? An essay on the nature of goodness in weather forecasting. Wea. Forecasting, 8, 281-293. _____, 1996: The Finley affair: A signal event in the history of forecast verification. Wea. Forecasting, 11, in press. _____, and R. L. Winkler, 1987: A general framework for forecast verification. Mon. Wea. Rev., 115, 1330-1338. _____, B. G. Brown, and Y.-S. Chen, 1989: Diagnostic verification of temperature forecasts. Wea. Forecasting, 4, 485-501. National Weather Service Southern Region Headquarters, 1984: Public weather verification. [Available from NWS Southern Region, Fort Worth, Texas], 4 pp. NOAA, 1984: Chapter C-43. National Weather Service Operations Manual. [Available from National Weather Service, Office of Meteorology, Silver Spring, Maryland.] Sanders, F., 1979: Trends in skill of daily forecasts of temperature and precipitation, 1966-78. Bull. Amer. Meteor. Soc., 60, 763-769. Vislocky, R. L., and J. M. Fritsch, 1995: Improved model output statistics forecasts through model consensus. Bull. Amer. Meteor. Soc., 76, 1157-1164. Fig. 1. Ratio of forecast to observed temperature changes by 5 °F bins for four forecast systems. Abscissa is center of temperature bin. Ordinate is ratio. Unity (horizontal dashed line) indicates same number of forecasts and observations. Values greater (less) than unity indicate more (fewer) forecasts than observations in a given temperature bin. Fig. 2. Departures from perfect reliability of various temperature forecasts. Abscissa is forecast temperature change in °F. Ordinate is difference between average temperature of observations associated with forecasts and the forecasts in each bin. Positive (negative) values indicate that observations are warmer (cooler) than the forecasts. Fig. 3. Percentage of overforecasts of temperature changes by a) forecast temperature change and b) observed temperature change. Abscissa is temperature bin and ordinate is percentage. Fig. 4. Lines associated with linear regression models of the expected value of observations given forecasts. Plotted lines are E(x |f ) - f. 
Abscissa is forecast temperature in °F and ordinate is difference in °F between the expected value of the observations from the linear regression model and the actual forecast. Positive (negative) values indicate expected value of observation is warmer (cooler) than the forecast. Fig. 5 Histogram of errors for forecast change for cases of observed changes more than 17 °F. Errors are binned in 5 °F bins centered on -20 °F, -15 °F, -10 °F, etc. Negative (positive) values indicate that the temperature change was underforecast (overforecast). Fig. 6. RMSE of CON forecast (light line) and percentage improvement by NWSFO over CON (heavy line) as a function of the disagreement between NGM MOS and LFM MOS. Light dashed line is zero improvement. Abscissa is difference between NGM MOS and LFM MOS such that positive values indicate that NGM MOS is warmer than LFM MOS. Left vertical scale indicates percentage improvement in RMSE by NWSFO compared to CON. Right vertical scale indicates RMSE of CON, multiplied by 10 and number of cases in each category (vertical bars). Fig. 7. RMSE of CON (heavy dashed line) and NWSFO (solid line) as function of the difference in the two forecasts. The RMSE of an average of CON and NWSFO (NWSCON) is plotted as the light dashed line. Vertical bars indicate number of cases in each category. Abscissa is the difference between NWSFO and CON in °F with positive values indicating NWSFO forecast is warmer. Left vertical scale is RMSE in °F. Right vertical scale is number of cases (vertical bars).
{"url":"http://www.nssl.noaa.gov/users/brooks/public_html/ntv/notable.html","timestamp":"2014-04-20T03:33:35Z","content_type":null,"content_length":"44240","record_id":"<urn:uuid:6c701547-9ab6-4d7e-b15c-38153c5dda8f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Number Types: Where Vocabulary Meets Numbers Lesson Question: How can students use the Visual Thesaurus to evaluate different numbers according to type? Applicable Grades: Lesson Overview: Who knew there were so many words to describe numbers? In this math lesson, students use the VT to define vocabulary words that are often used to label numbers. Then, students will evaluate different numbers according to their features and decide how they can categorize them according to number type. Length of Lesson: One hour to one hour and a half Instructional Objectives: Students will: • define different number types using the Visual Thesaurus • learn how to complete a Semantic Feature Analysis grid • work collaboratively to classify different numbers according to type • analyze patterns from a Semantic Feature Analysis grid regarding number type • student notebooks • white board • computers with Internet access • "Analyzing Numbers According to Type" sheets (one per student) [click here to download] *Note: This lesson models the vocabulary instruction technique of using Semantic Feature Analysis grids. Traditionally, SFA grids are used to help students distinguish between closely related words by considering which features they share or don't share, but this lesson uses the technique to extend to students evaluating numbers according to number type categories. Introducing the concept of "natural numbers": • Ask students to reflect back on their first math lessons – at home, in preschool or in kindergarten. What did they learn? What did they "do" with numbers back then? • Elicit students' responses to the reflection prompt. Students most likely remember learning to count, beginning with the number one. They might also remember counting physical objects like blocks or beads. • Explain to students that the numbers children are first introduced to are considered "natural numbers." Defining natural number with the Visual Thesaurus: • On the white board, display the Visual Thesaurus word map for "natural number" and point out its definition as "the number 1 and any other number obtained by adding 1 to it repeatedly." • Explain that natural numbers make up the most simplistic category of numbers; that is why young children are first introduced to this concept of numbers. As students get older, they learn about more complicated types of numbers – numbers like zero, negative numbers, fractions, etc. • In the VT word map display for "natural number," follow the dashed "type of " line toward the word "number" and hover the cursor over the definition of number as "a concept of quantity involving zero and units." Click on that red meaning bubble in order to make that meaning the center of a map that reveals all the types of numbers the VT contains in its database: Introducing a sample Semantic Feature Analysis grid (see Note above): • If students are unfamiliar with a Semantic Feature Analysis grid, model creating a simple one based on a set of related math terms. For example, you could analyze the features of different geometric shapes. • Ask students to brainstorm a list of features or attributes that are commonly used to classify or describe geometric shapes (e.g., round, symmetrical, acute angles, parallel sides, number of sides, right angles, two-dimensional, three-dimensional, number of vertices, etc.). • On the board, choose three or four of shape features to act as column titles for a Semantic Feature Analysis Grid for shapes. 
Then, create three or four rows in the grid, each listing a different shape.
• For example, here is a sample Semantic Feature Analysis grid that has students distinguish between different types of shapes based on four different features.

Shapes  | Contains right angles | Two-dimensional | Three-dimensional | Contains curved lines
cube    |                       |                 |                   |
square  |                       |                 |                   |
cone    |                       |                 |                   |
circle  |                       |                 |                   |

• Explain that each empty box should be filled in with either a plus sign (to indicate that the shape has that particular feature) or a minus sign (to indicate that the shape does not have that attribute or feature).

Sample completed SFA grid:

Shapes  | Contains right angles | Two-dimensional | Three-dimensional | Contains curved lines
cube    |           +           |        -        |         +         |           -
square  |           +           |        +        |         -         |           -
cone    |           -           |        -        |         +         |           +
circle  |           -           |        +        |         -         |           +

Defining Number Types:
• Explain that just as shapes can be analyzed according to different features, numbers can also be analyzed in a similar way – according to type.
• Provide each student with an "Analyzing Numbers According to Type" worksheet (download here).
• Have students define each number type (i.e., natural number, integer, rational number, irrational number, real number) using the Visual Thesaurus. Note: students should enter two words into the search box as they define each two-part term – e.g., search for "real number" not just for "real."

Completing Number Type grids:
• Organize the class in partners or small groups, depending on the availability of computers.
• Ask students to analyze each number listed in the far left column of the grid to determine how many Number Type categories it fits in. (If it fits in a number type category, students should draw a plus sign (+) in the box that corresponds to the number's row and that Number Type column. If the number does not fit into a Number Type category, students should draw a minus sign (-) in the appropriate box.)
• If students are unsure about whether or not a particular number fits a number type, they can consult the VT definition for the number type and/or right-click on the number type in the word map display to do an Internet search to find additional information.

Discussing patterns on the Number Analysis grid:
• Display the "Analyzing Numbers According to Type" grid on the board and discuss how students completed each box.
• What patterns do students see in the grid? What conclusions can they make based on these patterns? For example, each number is either marked with a + as being "rational" or "irrational," but never both. (Therefore, students can conclude that numbers cannot be both rational and irrational.)

Extending the Lesson:
• Challenge students to create original Semantic Feature Analysis grids to distinguish between another set of related math terms.
• For advanced students, you could extend this lesson to include complex numbers.

Assessment:
• Check groups' completed "Analyzing Numbers According to Type" sheets to assess whether or not students accurately identified the different number types for each number listed on the grid. (A short answer-key sketch appears at the end of this lesson.)

Standard 2. Understands and applies basic and advanced properties of the concepts of numbers
Level III (Grades 6-8)
1. Understands the relationships among equivalent number representations (e.g., whole numbers, positive and negative integers, fractions, ratios, decimals, percents, scientific notation, exponentials) and the advantages and disadvantages of each type of representation
2. Understands the characteristics and properties (e.g., order relations, relative magnitude, base-ten place values) of the set of rational numbers and its subsets (e.g., whole numbers, fractions, decimals, integers)
3. Understands the role of positive and negative integers in the number system
7. Understands the concepts of ratio, proportion, and percent and the relationships among them

Level IV (Grades 9-12)
1. Understands the properties (e.g., relative magnitude, density, absolute value) of the real number system, its subsystems (e.g., irrational numbers, natural numbers, integers, rational numbers), and complex numbers (e.g., imaginary numbers, conjugate numbers)
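As an optional aid for checking the completed grids (not part of the original lesson), here is a small answer-key sketch in Python. The sample numbers are arbitrary, and since rationality cannot be decided from a decimal approximation, the irrational examples are simply labeled by hand.

    from fractions import Fraction
    import math

    # (label, value, is_rational): rationality is recorded by hand for pi and sqrt(2).
    samples = [
        ("7", 7, True),
        ("-3", -3, True),
        ("0", 0, True),
        ("2/3", Fraction(2, 3), True),
        ("-1.25", Fraction(-5, 4), True),
        ("pi", math.pi, False),
        ("sqrt(2)", math.sqrt(2), False),
    ]

    def classify(value, is_rational):
        integer = is_rational and Fraction(value).denominator == 1
        natural = integer and value >= 1   # naturals start at 1, as in the VT definition above
        return {"natural number": natural,
                "integer": integer,
                "rational number": is_rational,
                "irrational number": not is_rational,
                "real number": True}       # every number on the worksheet is real

    for label, value, is_rational in samples:
        row = classify(value, is_rational)
        marks = "  ".join("+" if row[t] else "-" for t in row)
        print(f"{label:>8}: {marks}")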
{"url":"http://www.visualthesaurus.com/cm/lessons/number-types-where-vocabulary-meets-numbers/","timestamp":"2014-04-20T21:53:57Z","content_type":null,"content_length":"26171","record_id":"<urn:uuid:ea90f477-a976-4af5-898e-56c80045b9d8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Men Of Mathematics

line l' through P such that l' never meets l no matter how far l' and l are extended (in either direction). Merely as a nominal definition, we say that two straight lines lying in one plane which never meet are parallel. Thus the fifth postulate of Euclid asserts that through P there is precisely one straight line parallel to l. Euclid's penetrating insight into the nature of geometry convinced him that this postulate had not, in his time, been deduced from the others, although there had been many attempts to prove the postulate. Being unable to deduce the postulate himself from his other assumptions, and wishing to use it in the proofs of many of his theorems, Euclid honestly set it out with his other postulates. There are one or two simple matters to be disposed of before we come to Lobatchewsky's Copernican part in the extension of geometry. We have alluded to 'equivalents' of the parallel postulate. One of these, 'the hypothesis of the right angle', as it is called, will suggest two possibilities, neither equivalent to Euclid's assumption, one of which introduces Lobatchewsky's geometry, the other, Riemann's. Consider a figure AXYB which 'looks like' a rectangle, consisting of four straight lines AX, XY, YB, BA, in which BA (or AB) is the base, AX and YB (or BY) are drawn equal and perpendicular to AB, and on the same side of AB. The essential things to be remembered about this figure are that each of the angles XAB, YBA (at the base) is a right angle, and that the sides AX, BY are equal in length. Without using the parallel postulate, it can be proved that the angles AXY, BYX, are
{"url":"http://archive.org/stream/MenOfMathematics/TXT/00000017.txt","timestamp":"2014-04-19T12:57:19Z","content_type":null,"content_length":"12154","record_id":"<urn:uuid:78ecdd98-1893-420d-9e33-d2b4e8eff90b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Why do planets revolve around the sun in elliptical orbits?

Best Response: There is no special requirement that planetary and lunar orbits must be non-circular. There is one lunar orbit in our solar system that is about as close to circular as you can get. Think of it this way. Absolutely every stable orbit in the universe is elliptical (at least if our physics applies everywhere). It happens that some of those ellipses have both foci occupying the same point in space (defining a circle). When you consider all the possibilities in all orbits (distances between foci) it is no surprise that so few orbits are 'circular'.

Best Response: Because of space warp. Space warp is like a casino roulette: when you spin a roulette, the thingy inside it will keep on revolving around the center in an elliptical way. That is a physical picture of gravity. Another reason is that gravitational force is inverse square with distance, so as the distance increases, the gravity lessens. This affects the orbit of the planets one way or the other.

Best Response: Simple: if not, then they would have fallen into the sun... and you wouldn't be asking questions now...
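As an aside, not part of the original thread: the inverse-square point can be made concrete with a tiny numerical experiment. The sketch below uses arbitrary values in units where the gravitational parameter G*M = 1; starting with less than circular speed, the orbit closes into an ellipse, which shows up as different minimum and maximum distances from the sun.

    import math

    # Planet moving under an inverse-square pull toward a fixed sun at the origin.
    GM = 1.0
    x, y = 1.0, 0.0        # start one unit from the sun
    vx, vy = 0.0, 0.8      # below circular speed (1.0), so the orbit is an ellipse
    dt = 0.001

    radii = []
    for _ in range(20000):                       # roughly five orbits
        r = math.hypot(x, y)
        ax, ay = -GM * x / r**3, -GM * y / r**3  # inverse-square acceleration
        vx += ax * dt                            # semi-implicit Euler: update velocity first,
        vy += ay * dt                            # then position, which keeps the orbit stable
        x += vx * dt
        y += vy * dt
        radii.append(math.hypot(x, y))

    print(min(radii), max(radii))  # closest and farthest distances differ, so the path is not a circle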
{"url":"http://openstudy.com/updates/5034ef4ae4b02ee68c7a27f0","timestamp":"2014-04-20T18:59:31Z","content_type":null,"content_length":"36532","record_id":"<urn:uuid:332d129e-56fe-46fd-b1a2-d53e99d2683f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
A Guide to Simulation. 2nd ed Results 11 - 20 of 27 "... A combination of experimental and simulation approaches was used to analyze clonal growth of glutathione-S-transferase � (GST-P) enzyme-altered foci during liver carcinogenesis in an initiation-promotion regimen for 1,4-dichlorobenzene (DCB), 1,2,4,5-tetrachlorobenzene (TECB), pentachlorobenzene (PE ..." Cited by 2 (0 self) Add to MetaCart A combination of experimental and simulation approaches was used to analyze clonal growth of glutathione-S-transferase � (GST-P) enzyme-altered foci during liver carcinogenesis in an initiation-promotion regimen for 1,4-dichlorobenzene (DCB), 1,2,4,5-tetrachlorobenzene (TECB), pentachlorobenzene (PECB), and hexachlorobenzene (HCB). Male Fisher 344 rats, eight weeks of age, were initiated with a single dose (200 mg/kg, ip) of diethylnitrosamine (DEN). Two weeks later, daily dosing of 0.1 mol/kg chlorobenzene was maintained for six weeks. Partial hepatectomy was performed three weeks after initiation. Liver weight, normal hepatocyte division rates, and the number and volume of GST-P positive foci were obtained at 23, 26, 28, 47, and 56 days after initiation. A clonal growth stochastic model separating the initiated cell population into two distinct subtypes (referred to as A and B cells) was successfully used to describe the foci development - Measures in Generalized Semi-Markov Processes. Management Science , 1997 "... This paper investigates the likelihood ratio method for estimating derivatives of finite-time performance measures in generalized semi-Markov processes (GSMPs). We develop readily verifiable conditions for the applicability of this method. Our conditions mainly place restrictions on the basic buildi ..." Cited by 1 (0 self) Add to MetaCart This paper investigates the likelihood ratio method for estimating derivatives of finite-time performance measures in generalized semi-Markov processes (GSMPs). We develop readily verifiable conditions for the applicability of this method. Our conditions mainly place restrictions on the basic building blocks (i.e., the transition probabilities, the distribution and density functions of the event lifetimes, and the initial distribution) of the GSMP, which is in contrast to the structural conditions needed for infinitesimal perturbation analysis. We explicitly show that our conditions hold in many practical settings, and in particular, for large classes of queueing and reliability models. One intermediate result which we obtain in this study, which is of independent value, is to formally show that the random variable representing the number of occurring events in a GSMP in a finite time horizon, has finite exponential moments in a neighborhood of zero. 1 Introduction When running a si... - Journal of Service Research , 2000 "... This article formulates a model for finding the optimal delivery time performance guarantee. The expected profit model is solved to find a closed-form expression for the optimal delivery time promise. The simple, yet powerful model gives new insights into performance service guarantees in general an ..." Cited by 1 (0 self) Add to MetaCart This article formulates a model for finding the optimal delivery time performance guarantee. The expected profit model is solved to find a closed-form expression for the optimal delivery time promise. The simple, yet powerful model gives new insights into performance service guarantees in general and delivery time guarantees in particular. 
Many manufacturing and distribution firms guarantee to meet a standard delivery time promise and pay significant compensation to customers when deliveries are late. If the firm promises its customers a delivery time that is too short, it will frequently not make the promise, have to pay significant compensation, and possibly lose market share over time. If the delivery time guarantee is too long, customers will find the delivery time unattractive and will buy elsewhere. Hart (1993) and others (Hill 1995) call this a “performance service guarantee.” Although a delivery time performance-guarantee scenario will be used as the context for this article, other performance service guarantee contexts could have been used as well. Other similar performance service guarantees include no-stockout guarantees (Hart 1993), waiting time guarantees (Friedman and Friedman 1997; Kumar, Kalwani, and Dada 1997), and up-time maintenance guarantees (Hill 1992). It should be noted, however, that an unconditional satisfaction guarantee (Hart 1988) is more complex and is not addressed in this article. According to data collected by the Center for Advanced Purchasing Studies, delivery promises are far from perfect in many industries in the United States. On-time delivery benchmarks for several industries are summarized in "... The purpose of this paper is twofold. First, it serves to describe a new strategy, called Structured Database Monte Carlo (SDMC), for efficient Monte Carlo simulation. Its second aim is to show how this approach can be used for efficient pricing of path-dependent options via simulation. We use effic ..." Cited by 1 (0 self) Add to MetaCart The purpose of this paper is twofold. First, it serves to describe a new strategy, called Structured Database Monte Carlo (SDMC), for efficient Monte Carlo simulation. Its second aim is to show how this approach can be used for efficient pricing of path-dependent options via simulation. We use efficient simulation of a sample of path-dependent options to illustrate the application of SDMC. Extensions to other path-dependent options are straightforward. 1 , 2007 "... Batch means are sample means of subsets of consecutive subsamples from a simulation output sequence. Independent and normally distributed batch means are not only the requirement for constructing a confidence interval for the mean of the steady-state distribution of a stochastic process, but are als ..." Cited by 1 (1 self) Add to MetaCart Batch means are sample means of subsets of consecutive subsamples from a simulation output sequence. Independent and normally distributed batch means are not only the requirement for constructing a confidence interval for the mean of the steady-state distribution of a stochastic process, but are also the prerequisite for other simulation procedures such as ranking and selection (R&S). We propose a procedure to generate approximately independent and normally distributed batch means, as determined by the von Neumman test of independence and the chi-square test of normality, and then to construct a confidence interval for the mean of a steady-state expected simulation response. It is our intention for the batch means to play the role of the independent and identically normally distributed observations that confidence intervals and the original versions of R&S procedures require. We perform an empirical study for several stochastic processes to evaluate the performance of the procedure and to investigate the problem of determining valid batch sizes. , 1997 "... 
INTRODUCTION Simulation is a technique for numerically estimating the performance of a complex stochastic system when analytic solution is not feasible [LaKe91, Sc90]. This section discusses both discrete-event and Monte Carlo simulation techniques. In discrete-event simulation models, the passage ..." Add to MetaCart INTRODUCTION Simulation is a technique for numerically estimating the performance of a complex stochastic system when analytic solution is not feasible [LaKe91, Sc90]. This section discusses both discrete-event and Monte Carlo simulation techniques. In discrete-event simulation models, the passage of time plays a key role, as changes to the state of the system occur only at certain points in simulated time. Queueing and inventory systems can be studied by discrete-event simulation models. Monte Carlo simulation models, on the other hand, do not require the passage of time. Monte Carlo simulation models have been used in estimating eigenvalues, estimating ß, and estimating the quantiles of a mathematically intractable test statistic in hypothesis testing. Simulation has been described [BrEtal87] as "driving a model of a system with suitable inputs and observing the corresponding outputs." Accordingly, the following three subsections d - Management Science 43:1288–1295 , 1996 "... In analyzing the output process generated by a steady-state simulation, we often seek to estimate the expected value of the output. The sample mean based on a finite sample of size n is usually the estimator of choice for the steady-state mean; and a measure of the sample mean's precision is the var ..." Add to MetaCart In analyzing the output process generated by a steady-state simulation, we often seek to estimate the expected value of the output. The sample mean based on a finite sample of size n is usually the estimator of choice for the steady-state mean; and a measure of the sample mean's precision is the variance parameter, i.e., the limiting value of the sample size multiplied by the variance of the sample mean as n becomes large. This paper establishes asymptotic properties of the conventional batch-means (BM) estimator of the variance parameter as both the batch size and the number of batches become large. In particular, we show that the BM variance estimator is asymptotically unbiased and convergent in mean square. We also provide asymptotic expressions for the variance of the BM variance estimator. Exact and empirical examples illustrate our findings. Authors' addresses: Chiahon Chien, Synopsys, Inc., Mountainview, CA 94043, U.S.A., chien@Synopsys.com; David Goldsman, School of Industrial... - Journal of Statistical Computation and Simulation , 1998 "... this paper is the development of statistical estimation procedures for fitting the rate and mean-value functions of an NHPP. In particular, we have developed a least squares procedure for estimating the parameters of an NHPP with an EPTMP-type rate function. This procedure uses a least squares metho ..." Add to MetaCart this paper is the development of statistical estimation procedures for fitting the rate and mean-value functions of an NHPP. In particular, we have developed a least squares procedure for estimating the parameters of an NHPP with an EPTMP-type rate function. This procedure uses a least squares method to fit the mean-value function. 
In addition, we have developed a weighted least squares formulation of this problem, along with a diagnostic analysis of why weighted least squares fails in problems with certain first- and second-order moment structures such as that arising in the estimation of the mean-value function of an NHPP. Furthermore, we have conducted a comprehensive experimental performance evaluation of the least squares procedure. The results of these experiments, as well as our experience in using these procedures to fit the mean-value function of an NHPP to actual data such as the arrival processes of organ donors and patients, indicate that these procedures are capable of adequately modeling a large class of arrival processes encountered in practice.

A simulation model is successful if it leads to policy action, i.e., if it is implemented. Studies show that for a model to be implemented, it must have good correspondence with the mental model of the system held by the user of the model. The user must feel confident that the simulation model corresponds to this mental model. An understanding of how the model works is required. Simulation models for implementation must be developed step by step, starting with a simple model, the simulation prototype. After this has been explained to the user, a more detailed model can be developed on the basis of feedback from the user. Software for simulation prototyping is discussed, e.g., with regard to the ease with which models and output can be explained and the speed with which small models can be written.

The model used in this report focuses on the analysis of ship waiting statistics and stock fluctuations under different arrival processes. However, the basic outline is the same: central to both models are a jetty and accompanying tankfarm facilities belonging to a new chemical plant in the Port of Rotterdam. Both the supply of raw materials and the export of finished products occur through ships loading and unloading at the jetty. Since disruptions in the plant's production process are very expensive, buffer stock is needed to allow for variations in ship arrivals and overseas exports through large ships. Ports provide jetty facilities for ships to load and unload their cargo. Since ship delays are costly, terminal operators attempt to minimize their number and duration. Here, simulation has proved to be a very suitable tool. However, in port simulation models, the impact of the arrival process of ships on the model outcomes tends to be underestimated. This article considers three arrival processes: stock-controlled, equidistant per ship type, and Poisson. We assess how their deployment in a port simulation model, based on data from a real case study, affects the efficiency of the loading and unloading process.
Poisson, which is the chosen arrival process in many client-oriented simulations, actually performs worst in terms of both ship delays and required storage capacity. Stock-controlled arrivals perform best with regard to ship delays and required storage capacity. In the case study two types of arrival processes were considered. The first type are the so-called stock-controlled arrivals, i.e., ship arrivals are scheduled in such a way, that a base stock level is maintained in the tanks. Given a base stock level of a raw material or
Perfect Forward Secrecy SSL/TLS & Perfect Forward Secrecy Vincent Bernat Once the private key of some HTTPS web site is compromised, an attacker is able to build a man-in-the-middle attack to intercept and decrypt any communication with the web site. The first step against such an attack is the revocation of the associated certificate through a CRL or a protocol like OCSP. Unfortunately, the attacker could also have recorded past communications protected by this private key and therefore decrypt them. Forward secrecy allows today information to be kept secret even if the private key is compromised in the future. Achieving this property is usually costly and therefore, most web servers do not enable it on purpose. Google recently announced support of forward secrecy on their HTTPS sites. Adam Langley wrote a post with more details on what was achieved to increase efficiency of such a mechanism: with a few fellow people, he wrote an efficient implementation of some elliptic curve cryptography for OpenSSL. Without forward secrecy To understand the problem when forward secrecy is absent, let’s look at the classic TLS handshake when using a cipher suite like AES128-SHA. During this handshake, the server will present its certificate and both the client and the server will agree on a master secret. This secret is built from a 48byte premaster secret generated and encrypted by the client with the public key of the server. It is then sent in a Client Key Exchange message to the server during the third step of the TLS handshake. The master secret is derived from this premaster secret and random values sent in clear-text with Client Hello and Server Hello messages. This scheme is secure as long as only the server is able to decrypt the premaster secret (with its private key) sent by the client. Let’s suppose that an attacker records all exchanges between the server and clients during a year. Two years later, the server is decommissioned and sent for recycling. The attacker is able to recover the hard drive with the private key. They can now decrypt any session they recorded: the encrypted premaster secret sent by a client is decrypted with the private key and the master secret is derived from it. The attacker can now recover passwords and other sensitive information that can still be valuable today. The main problem lies in the fact that the private key is used for two purposes: authentication of the server and encryption of a shared secret. Authentication only matters while the communication is established, but encryption is expected to last for years. Diffie-Hellman with discrete logarithm One way to solve the problem is to keep using the private key for authentication but uses an independent mechanism to agree on a shared secret. Hopefully, there exists a well-known protocol for this: the Diffie-Hellman key exchange. It is a method of exchanging keys without any prior knowledge. Here is how it works in TLS: 1. The server needs to generate once (for example, with openssl dhparam command): 2. The server picks a random integer ·a· and compute ·g^a \mod p·. After sending its regular Certificate message, it will also send a Server Key Exchange message (not included in the handshake depicted above) containing, unencrypted but signed with its private key for authentication purpose: □ random value from the Client Hello message, □ random value from the Server Hello message, □ ·p·, ·g·, □ ·g^a\mod p=A·. 3. The client checks that the signature is correct. 
It also picks a random integer ·b· and sends ·g^b \mod p=B· in a Client Key Exchange message. It will also compute ·A^b\mod p=g^{ab}\mod p· which is the premaster secret from which the master secret is derived. 4. The server will receive ·B· and compute ·B^a\mod p=g^{ab}\mod p· which is the same premaster secret known by the client. Again, the private key is only used for authentication purpose. An eavesdropper will only know ·p·, ·g·, ·g^a\mod p· and ·g^b\mod p·. Computing ·g^{ab}\mod p· from those values is the discrete logarithm problem for which there is no known efficient solution. Because the Diffie-Hellman exchange described above always uses new random values ·a· and ·b·, it is called Ephemeral Diffie-Hellman (EDH or DHE). Cipher suites like DHE-RSA-AES128-SHA use this protocol to achieve perfect forward secrecy^1. To achieve a good level of security, parameters of the same size as the key are usually used (the security provided by the discrete logarithm problem is about the same as the security provided by factorisation of two large prime numbers) and therefore, the exponentiation operations are pretty slow as we can see in the benchmark below: Diffie-Hellman with elliptic curves Fortunately, there exists another way to achieve a Diffie-Hellman key exchange with the help of elliptic curve cryptography which is based on the algebraic structure of elliptic curves over finite fields. To get some background on this, be sure to check first Wikipedia article on elliptic curves. Elliptic curve cryptography allows one to achieve the same level of security than RSA with smaller keys. For example, a 224bit elliptic curve is believed to be as secure as a 2048bit RSA key. Some theory Diffie-Hellman key exchange described above can easily be translated to elliptic curves. Instead of defining ·p· and ·g·, you get some elliptic curve, ·y^2=x^3+\alpha x+\beta·, a prime ·p· and a base point ·G·. All those parameters are public. In fact, while they can be generated by the server, this is a difficult operation and they are usually chosen among a set of published ones. The use of elliptic curves is an extension of TLS described in RFC 4492. Unlike with the classic Diffie-Hellman key exchange, the client and the server need to agree on the various paremeters. Most of this agreement is done inside Client Hello and Server Hello messages. While it is possible to define some arbitrary parameters, web browsers will only support a handful of predefined curves, usually NIST P-256, P-384 and P-521. From here, the key exchange with elliptic curves is pretty similar to the classic Diffie-Hellman one: 1. The server picks a random integer ·a· and compute ·aG· which will be sent, unencrypted but signed with its private key for authentication purpose, in a Server Key Exchange message. 2. The client checks that the signature is correct. It also picks a random integer ·b· and sends ·bG· in a Client Key Exchange message. It will also compute ·b\cdot aG=abG· which is the premaster secret from which the master secret is derived. 3. The server will receive ·bG· and compute ·a\cdot bG=abG· which is the same premaster secret known by the client. An eavesdropper will only see ·aG· and ·bG· and won’t be able to compute efficiently ·abG·. Using ECDHE-RSA-AES128-SHA cipher suite (with P-256 for example) is already a huge speed improvement over DHE-RSA-AES128-SHA thanks to the reduced size of the various parameters involved. 
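The finite-field exchange described earlier (and mirrored by its elliptic-curve variant) can be reproduced in a few lines of Python. This is only a toy sketch: the 64-bit prime and the small generator are illustrative stand-ins, orders of magnitude too small for real use, where the parameters would come from something like openssl dhparam.

import secrets

# Toy public parameters, far too small to be secure.
p = 0xFFFFFFFFFFFFFFC5   # a 64-bit prime, illustration only
g = 5                    # small generator, illustration only

a = secrets.randbelow(p - 2) + 1        # server's ephemeral secret exponent
b = secrets.randbelow(p - 2) + 1        # client's ephemeral secret exponent

A = pow(g, a, p)    # g^a mod p, sent (signed) in the Server Key Exchange
B = pow(g, b, p)    # g^b mod p, sent back in the Client Key Exchange

assert pow(B, a, p) == pow(A, b, p)     # both sides now hold g^(ab) mod p
print(hex(pow(B, a, p)))                # the shared premaster secret

The point of the exercise is that neither exponent ever leaves its side of the connection, so compromising the server's long-term signing key later reveals nothing about this session's secret.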
Web browsers only support a handful of well-defined elliptic curves, chosen to ease an efficient implementation. Bodo Möller, Emilia Käsper and Adam Langley have provided 64bit optimized versions of NIST P-224, P-256 and P-521 for OpenSSL. To get even more details on the matter, you can read the end of the introduction on elliptic curves from Adam Langley, then a short paper from Emilia Käsper which presents a 64bit optimized implementation of the NIST elliptic curve NIST P-224. In practice First, keep in mind that elliptic curve cryptography is not supported by all browsers. Recent versions of Firefox and Chrome should handle NIST P-256, P-384 and P-521 but for most versions Internet Explorer, you are currently out of luck. Therefore, you need to keep accepting other cipher suites. You need a recent version of OpenSSL. Support for ECDHE cipher suites has been added in OpenSSL 1.0.0. Check with openssl ciphers ECDH that your version supports them. If you want to use the 64bit optimized version, you need to run a snapshot of OpenSSL 1.0.1, configured with enable-ec_nistp_64_gcc_128 option. A recent GCC is also required in this case. Next, you need to choose the appropriate cipher suites. If forward secrecy is an option for you, you can opt for ECDHE-RSA-AES128-SHA:AES128-SHA:RC4-SHA cipher suites which should be compatible with most browsers. If you really need forward secrecy, you may opt for ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:EDH-DSS-DES-CBC3-SHA instead. Then, you need to ensure the order of cipher suites is respected. On nginx, this is done with ssl_prefer_server_ciphers on. On Apache, this is SSLHonorCipherOrder on. UPDATED: You need to check ECDHE support for your web server. For nginx, the support has been added in 1.0.6 and 1.1.0. The curve selected defaults to NIST P-256. You can specify another one with ssl_ecdh_curve directive. For Apache, it has been added in 2.3.3 and does not exist in the current stable branch. Adding support for ECDHE is quite easy. You can check how I added it in stud. This issue also exists for DHE cipher suites, in which case you also might have to specify DH parameters to use (generated with openssl dhparam) using some special directive or by appending the parameters to the certificate. Check Immerda Techblog article for more background about this point. The implementation of TLS session tickets may be incompatible with forward secrecy, depending on how they are implemented. When they are protected by a random key generated at the start of the server, the same key could be used for months. Some implementations^2 may derive the key from the private key. In this case, forward secrecy is broken. If forward secrecy is a requirement for you, you need to either disable tickets or ensure that key rotation happens often. Check with openssl s_client -tls1 -cipher ECDH -connect 127.0.0.1:443 that everything works as expected. Some benchmarks With the help of the micro-benchmark tool that I developed for my previous article, we can compare the efficiency of cipher suites providing forward secrecy: I have used a snapshot of OpenSSL 1.0.1 (2011/11/25). The optimized version of ECDHE is the one you get by using enable-ec_nistp_64_gcc_128 option when configuring OpenSSL. Let’s focus on the server part. Enabling DHE-RSA-AES128-SHA cipher suite hinders the performance of TLS handshakes by a factor of 3. Using ECDHE-RSA-AES128-SHA instead only adds an overhead of 27%. However, if we use the 64bit optimized version, the cost is only 15%. 
The overhead is only per full TLS handshake. If 3 out of 4 of your handshakes are resumed, you need to adjust the numbers. Your mileage may vary but the computational cost for enabling perfect forward secrecy with an ECDHE cipher suite seems a small sacrifice for better security. 1. Perfect forward secrecy is an enhanced version of forward secrecy. It assumes each exchanged key are independent and therefore a compromised key cannot be used to compromise another one. ↩ 2. For example, this is the case of the implementation I have proposed for stud to enable sharing of tickets between multiple hosts. ↩
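As a programmatic counterpart to the openssl s_client check mentioned earlier, a sketch like the following can confirm that a server is willing to negotiate an ECDHE suite; the host name is a placeholder, and the "ECDHE" cipher-string alias is assumed to be understood by the local OpenSSL build.

import socket
import ssl

# Connect and report which cipher suite was actually negotiated.
host = "example.com"                        # placeholder: use your own server

context = ssl.create_default_context()
context.set_ciphers("ECDHE")                # refuse anything but ECDHE suites

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version(), tls.cipher())  # cipher() returns (name, protocol, bits)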
Cranston Precalculus Tutor Find a Cranston Precalculus Tutor ...I have had success preparing students for the ASVAB, and I especially enjoy helping them tackle the word problems on the Arithmetic Reasoning section. Time management is important, and test-taking strategies are key. When I taught at a Catholic middle school, I helped eighth-graders prepare for these tests. 45 Subjects: including precalculus, chemistry, Spanish, French ...I also have a Bachelor's degree in Computer Science & Psychology. I spent 12 successful years in software engineering and left the corporate world to teach at the high school level to better balance needs of my family. Although I taught high school Math, my professional training was in software... 25 Subjects: including precalculus, geometry, statistics, algebra 1 I have been teaching high school and middle school math courses for about 20 years. I currently teach at a local high school and teach on Saturdays at a school for Asian students in Boston. I am currently teaching Honors Algebra 2, Senior Math Analysis, and MCAS prep courses, as well as 7-8 grade math, and SAT Prep courses. 12 Subjects: including precalculus, geometry, algebra 2, algebra 1 ...I received an 800 on the math section of the SAT and a 750 on the SAT II for Math 2. I received an 800 in SAT Math. I was a member of the Math League at my high school. 20 Subjects: including precalculus, calculus, French, physics ...I'm that geeky math-loving girl, that was also a cheerleader, so I pride myself in being smart and fun!! I was an Actuarial Mathematics major at Worcester Polytechnic Institute (WPI), and worked in the actuarial field for about 3.5 years after college. Since then I have been a nanny and a tutor ... 17 Subjects: including precalculus, calculus, geometry, statistics
Dockweiler, CA Find a Dockweiler, CA Calculus Tutor ...She specifically asked me to tutor under her because of my communication skills and ability to connect with those around me. Recently, I helped my older sister who is finishing her senior year in college and is taking upper division courses for her major with a paper she was assigned. She was f... 22 Subjects: including calculus, English, reading, algebra 2 ...I now teach at a high school in San Pedro. I teach AP physics, conceptual physics, algebra 2, and honours trigonometry/precalculus. I hope to leave the same kind of impact on my students, as my high school physics and maths teachers left on me.Calculus is the study of rates-of-change. 11 Subjects: including calculus, physics, statistics, SAT math ...I am extremely patient and understanding, with an adaptable teaching style based on the student's needs. I specialize in high school math subjects like Pre-Algebra, Algebra, Algebra 2/ Trigonometry, Precalculus and Calculus. I can also tutor college math subjects like Linear Algebra, Abstract Algebra, Differential Equations, and more. 9 Subjects: including calculus, geometry, algebra 1, algebra 2 ...I have taken many teaching classes and have found easy ways for this particular subject to be learned. I have been tutoring various levels of math for 4+ years. I have been swimming for 12+ 14 Subjects: including calculus, physics, algebra 1, algebra 2 ...Then I will let you take on the problem itself and make sure you understand why each step is necessary. Depending on your work load I can assign homework that will challenge your ability to solve hard problems so you can keep progressing. I hold my standards very high. 9 Subjects: including calculus, algebra 1, differential equations, physical science Related Dockweiler, CA Tutors Dockweiler, CA Accounting Tutors Dockweiler, CA ACT Tutors Dockweiler, CA Algebra Tutors Dockweiler, CA Algebra 2 Tutors Dockweiler, CA Calculus Tutors Dockweiler, CA Geometry Tutors Dockweiler, CA Math Tutors Dockweiler, CA Prealgebra Tutors Dockweiler, CA Precalculus Tutors Dockweiler, CA SAT Tutors Dockweiler, CA SAT Math Tutors Dockweiler, CA Science Tutors Dockweiler, CA Statistics Tutors Dockweiler, CA Trigonometry Tutors Nearby Cities With calculus Tutor Cimarron, CA calculus Tutors Dowtown Carrier Annex, CA calculus Tutors Farmer Market, CA calculus Tutors Foy, CA calculus Tutors Green, CA calculus Tutors Lafayette Square, LA calculus Tutors Miracle Mile, CA calculus Tutors Oakwood, CA calculus Tutors Pico Heights, CA calculus Tutors Rimpau, CA calculus Tutors Sanford, CA calculus Tutors Vermont, CA calculus Tutors Westvern, CA calculus Tutors Wilcox, CA calculus Tutors Wilshire Park, LA calculus Tutors
Re: st: MATA all combinations / pairs of a row Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: MATA all combinations / pairs of a row From Nick Cox <njcoxstata@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: MATA all combinations / pairs of a row Date Thu, 6 Oct 2011 01:16:26 +0100 One idea only from me. Look at -select()- and think of rowsums being 2. On Wed, Oct 5, 2011 at 10:08 PM, Sebastian Eppner <eppner@uni-potsdam.de> wrote: > Hi, > I have N elements. In this example N will be 3. But in my application > it will be usually around 7, going up to 20 in some cases… > Each element can be either 1 or 0. My basic matrix has all 2^N, here > 2^3 permutations of the elements: > BASIC = ( 1,1,1) \ (1,1,0) \ (1,0,1) \ (0,1,1) \ (0,0,1) \ (0,1,0) \ > (1,0,0) \(0,0,0) > The elements have 2 “properties” , PROP1 and PROP2 each, stored in 2 > rows of another matrix, for example: > PROPERTIES = (1.4,4.6,8.1) \ (0.2, 0.4, 0.4) > Every row in BASIC resembles a unique combination of the elements. > What I want to do, is to calculate some indicators for every > combination, using the Properties of the elements. > An easy example would be to calculate the total sum of PROP1 for each > combination. I just need to multiply the “diagonalized” first row of > the PROPERTIES Matrix with the BASIC Matrix and then calculate the > rowsum of this resulting matrix. > What I want to do now is more complex and I am really stuck. The > indicator I want to calculate now is the following: > For each row/combination in BASIC I need to find all PAIRS of Ones > that are contained in the combination. For any pair, I need to > multiply: > PROP1 of Element1 * PROP1 of Element2 * abs(PROP2 of Element1-PROP2 > of ELEMENT2) > It is the sum of all PAIRS that I need to know for every row…. > Of course, not every row has PAIRS in it… all rows with only 1 One > have no pairs, so the result for these rows should be 0 (or missing, I > don’t care). > Also, many rows (those with only 2 Ones) have only a single Pair… > In my Example with 3 elements, only the first row sums up more than > one (3) Pair(s)…. > I am looking for a way to solve this problem by not using loops… and > as much matrix algorithms as possible. Since I have a lot of BASIC > Matrices with more elements than 3… I guess the whole thing would be > very very time consuming if I started smth with loops… Maybe there is > no way to get rid of any loop… any idea would be very welcome (even > some, with a little looping). * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
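For what it is worth, the pair indicator itself can be written without loops; the sketch below is in NumPy rather than Mata, purely to illustrate the arithmetic with the 3-element example from the question (the matrix names mirror the post, everything else is illustrative):

import numpy as np

# All 2^3 on/off combinations of the three elements (rows of BASIC).
BASIC = np.array([[1,1,1],[1,1,0],[1,0,1],[0,1,1],
                  [0,0,1],[0,1,0],[1,0,0],[0,0,0]])
prop1 = np.array([1.4, 4.6, 8.1])
prop2 = np.array([0.2, 0.4, 0.4])

# M[i, j] = PROP1_i * PROP1_j * |PROP2_i - PROP2_j| for every ordered pair.
M = np.outer(prop1, prop1) * np.abs(prop2[:, None] - prop2[None, :])

# For a 0/1 row r, the sum over unordered pairs of selected elements is
# r M r' / 2 (the diagonal of M is zero, so no correction term is needed).
indicator = 0.5 * np.einsum('ri,ij,rj->r', BASIC, M, BASIC)
print(indicator)   # rows with fewer than two ones give 0

Because the diagonal of M is zero, r M r'/2 counts each unordered pair exactly once, so rows containing a single one (or none) come out as zero, as required.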
Directions: Your Solutions Should Be Written Out ... | Chegg.com Directions: Your solutions should be written out fully in paragraph form using complete sentences and correct punctuation Let n, p be elements of Z with n,p > 0 and p is prime. (a) Prove or disprove: if a, b, c are elements of Z subscript n, with a not equal to 0 and ab=ac, the b=c (b) Prove or disprove: if a, b, c are elements of Z subscript p, with a not equal to 0 and ab=ac, the b=c For the disproof I just need an example with numbers that makes it not work
Welcome to ICS2011 Innovations in Computer Science - ICS 2011, Tsinghua University, Beijing, China, January 7-9, 2011. Proceedings, 444-459, 978-7-302-24517-9 Tsinghua University Press Complex Semidefinite Programming Revisited and the Assembly of Circular Genomes Authors : Konstantin Makarychev, Alantha Newman Download PDF We consider the problem of arranging elements on a circle so as to approximately preserve specified pairwise distances. This problem is closely related to optimization problems found in genome assembly. The current methods for genome sequencing involve cutting the genome into many segments, sequencing each (short) segment, and then reassembling the segments to determine the original sequence. A useful paradigm has been using "mate pair" information, which, for a circular genome (e.g. bacterial genomes), generates information about the directed distance between non-adjacent pairs of segments in the final sequence. Specifically, given a set of equations of the form xv − yu ≡ duv (mod q), we study the objective of maximizing a linear payoff function that depends on how close the value xv − yu (mod q) is to duv. We apply the rounding procedure used by Goemans and Williamson for "complex" semidefinite programs. Our main tool is a simple geometric lemma that allows us to easily compute the expected distance on a circle between two elements whose positions have been computed using this rounding procedure. Copyright 2010-2011, Institute for Computer Science, Tsinghua University, All Rights Reserved.
Zentralblatt MATH Publications of (and about) Paul Erdös
Zbl.No: 432.05038
Autor: Burr, Stefan A.; Erdös, Paul; Faudree, Ralph J.; Schelp, R.H.
Title: A class of Ramsey-finite graphs. (In English)
Source: Proc. 9th southeast. Conf. on Combinatorics, graph theory, and computing, Boca Raton 1978, 171-180 (1978).
Review: [For the entire collection see Zbl 396.00003.] The notation F → (G,H) is used to imply that if the edges of F are colored with two colors, say red and blue, then either there exists a red copy of G or a blue copy of H. The class of all graphs F for which F → (G,H) is denoted R'(G,H). The class of minimal graphs in R'(G,H) is denoted R(G,H). The authors show that if G is an arbitrary graph on n vertices and m is a positive integer, then whenever F ∈ R(mK_2, G), we always have |E(F)| \leq \sum_{i=1}^{b} n^i, where b = (m-1)\left(\binom{2m-1}{2}+1\right)+1. As a corollary, they conclude that the class R(mK_2, G) is finite. It should be noted that there are large classes of graphs for which R(G,H) is infinite but few nontrivial examples are known where R(G,H) is finite.
Reviewer: W.T. Trotter
Classif.: * 05C55 Generalized Ramsey theory
Keywords: minimal graphs
Citations: Zbl.396.00003
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
Los Altos Science Tutor Find a Los Altos Science Tutor ...As an teacher, I am friendly, patient and encouraging to those learning about a subject. I take care to ensure understanding through quizzes, and by conversing with the student about the subject. I also am talented at breaking down difficult material and explaining it in a way easy to understand, tailored to the level the student is at. 24 Subjects: including organic chemistry, ACT Science, anatomy, philosophy ...Later, his mom asked me to tutor his younger brother in AP Calculus and AP Physics. My first one-on-one class student (described above) asked me to help him with his precalculus/calculus course after I tutored him SAT Physics subject test. The mom of my very first SAT Physics Subject test stude... 8 Subjects: including physics, calculus, geometry, algebra 2 ...I teach prerequisite courses for programs such as LVN, MRI Tech, Ultrasound Tech and Dental Hygiene. I have been an adjunct faculty at San Jose City College for 3 years, where I've taught subjects such as Anatomy, Physiology, and General Biology. I teach both lecture and laboratory sections. 3 Subjects: including biology, anatomy, physiology ...A lot of people underestimate just how much algebra goes into this class and i think taking an algebraic approach to pre calc is the key to doing well in the class. Part of my job at the school is to work in the campus Writing Center. A great deal of my time is spent helping people read through their material and collect the necessary information for the writing process. 17 Subjects: including biology, reading, English, writing ...I also believe that most subjects can be learned in ways that capture the learner's attention and put it to work for them. I received my teaching certificate in 1992 after earning my Master's in Education from the University of Puget Sound (Tacoma, Washington.) In the past, I have tutored adults... 44 Subjects: including philosophy, psychology, reading, GRE
Finding the square number of feet
March 6th 2010, 12:03 PM #1 Senior Member (Aug 2009)
Not sure if I have the right setup for this problem. If someone could check my work (attached: IMG.pdf) I would appreciate it. Thanks in advance.
Here's the question you clicked on: x^2 - 36 / x - 6 (posted about one year ago)
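Assuming the expression is meant as the single fraction (x^2 - 36)/(x - 6), it simplifies by factoring the difference of squares:

\frac{x^2 - 36}{x - 6} = \frac{(x - 6)(x + 6)}{x - 6} = x + 6, \qquad x \neq 6.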
[Python-ideas] Why is nan != nan? Greg Ewing greg.ewing at canterbury.ac.nz Fri Mar 26 02:25:06 CET 2010 spir ☣ wrote: > (Else there should be a distinction between equality assignment and identity assignemt? > b = a # ==> a is b and a == b > b := a # ==> a is b and possibly a == b Eiffel's position on this seems to be that there should be no distinction -- a copy of a value should always compare equal to the original value, regardless of type. Eiffel even seems to extend this to conversions, so that if you convert an int to a float, the resulting float should compare equal to the original int, even if some precision was lost in the conversion. (Incidentally, that's one principle we would be choosing *not* to follow if we decide to compare floats and Decimals based on their exact values.) More information about the Python-ideas mailing list
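Both behaviours Greg is contrasting are easy to check at the interpreter; the lines below show standard CPython 3 semantics and are only an illustration of the post's point, not part of the original thread:

nan = float('nan')
print(nan == nan)            # False: NaN is unequal even to itself

# A lossy int -> float conversion whose result does not compare equal to the
# original value, which is exactly the Eiffel principle being violated.
big = 2**53 + 1
print(float(big) == big)     # False: the conversion rounded down to 2**53
print(float(big) == 2**53)   # True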
[SciPy-User] Updating SciPy from 0.9.0 to 0.10.1 triggers undesired behavior in NumPy 1.6.2 Ralf Gommers ralf.gommers@googlemail.... Tue Jul 31 11:43:36 CDT 2012 On Mon, Jul 30, 2012 at 6:33 PM, Wolfgang Draxinger < Wolfgang.Draxinger@physik.uni-muenchen.de> wrote: > Hi, > first my apologies for crossposting this to two maillists, but as both > projects are affected I think this is in order. > Like the subject says I encountered some undesired behavior in the > interaction of SciPy with NumPy. > Using the "old" SciPy version 0.9.0 everything works fine and smooth. > But upgrading to SciPy-0.10.1 triggers some ValueError in > numpy.histogram2d of NumPy-1.6.2 when executing one of my programs. > I developed a numerical evaluation system for our detectors here. One of > the key operations is determining the distribution of some 2-dimensional > variable space based on the values found in the image delivered by the > detector, where each pixel has associated values for the target > variables. This goes something like the following > ABdist, binsA, binsB = numpy.histogram2d( > B_yz.ravel(), > A_yz.ravel(), > [B_bins, A_bins], > weights=image.ravel() ) > The bins parameter can be either [int, int] or [array, array], that > makes no difference in the outcome. > The mappings A_yz and B_yz are created using > scipy.interpolate.griddata. We have a list of pairs of pairs which are > determined by measurement. Basically in the calibration step we vary > variables A,B and store at which Y,Z we get the corresponding signal. > So essentially this is a (A,B) -> (Y,Z) mapping. In the region of > interest is has a bijective subset that's also smooth. However the > original data also contains areas where (Y,Z) has no corresponding > (A,B) or where multiple (A,B) map to the same (Y,Z); like said, those > lie outside the RoI. For our measurements we need to reverse this > process, i.e. we want to do (Y,Z) -> (A,B). > So I use griddata to evaluate a discrete reversal for this > mapping, of the same dimensions that the to be evaluated image has: > gry, grz = numpy.mgrid[self.y_lim[0]:self.y_lim[1]:self.y_res*1j, > self.z_lim[0]:self.z_lim[1]:self.z_res*1j] > # for whatever reason I have to do the following > # assigning to evalgrid directly breaks the program. > evalgrid = (gry, grz) > points = (Y.ravel(), Z.ravel()) > def gd(a): > return scinp.griddata( > points, > a.ravel(), > evalgrid, > method='cubic' ) > A_yz = gd(A) > B_yz = gd(B) > where A,B,Y,Z have the same dimensions and are the ordered lists/arrays > of the scalar values of the two sets mapped between. As you can see, > this approach does also involve the elements of the sets, which are not > mapped bijectively. As lying outside the convex boundary or not being > properly interpolatable they should receive the fill value. > As long as I stay with SciPy-0.9.0 everything works fine. However after > upgrading to SciPy-0.10.1 the histogram2d step fails with a ValueError. > The version of NumPy is 1.6.2 for both cases. 
> /usr/lib64/python2.7/site-packages/numpy/ma/core.py:772: RuntimeWarning: > invalid value encountered in absolute > return umath.absolute(a) * self.tolerance >= umath.absolute(b) > Traceback (most recent call last): > File "./ephi.py", line 71, in <module> > ABdist, binsA, binsB = numpy.histogram2d(B_yz.ravel(), A_yz.ravel(), > [B_bins, A_bins], weights=image.ravel()) > File "/usr/lib64/python2.7/site-packages/numpy/lib/twodim_base.py", line > 615, in histogram2d > hist, edges = histogramdd([x,y], bins, range, normed, weights) > File "/usr/lib64/python2.7/site-packages/numpy/lib/function_base.py", > line 357, in histogramdd > decimal = int(-log10(mindiff)) + 6 > ValueError: cannot convert float NaN to integer > Any ideas on this? Looks to me like the issue only has to do with a change in griddata, and nothing with histogram2d. The return values from griddata must be I suggest that you compare the return values from griddata for 0.9.0 and 0.10.1 with your input data, then try to reduce that comparison to a small self-contained example that shows the difference. This will allow us to debug the issue further. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.scipy.org/pipermail/scipy-user/attachments/20120731/e971ba81/attachment-0001.html More information about the SciPy-User mailing list
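One possible workaround while debugging, assuming the NaNs really are griddata's fill values, is simply to drop the non-finite points before binning; the arrays below are synthetic stand-ins for the ones in the original post:

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for A_yz, B_yz and image from the post: coordinate maps with a few
# NaN fill values (as griddata produces outside the convex hull) plus weights.
A_yz = rng.normal(size=(50, 50))
B_yz = rng.normal(size=(50, 50))
A_yz[0, :5] = np.nan
B_yz[1, :3] = np.nan
image = rng.random((50, 50))

a, b, w = A_yz.ravel(), B_yz.ravel(), image.ravel()
keep = np.isfinite(a) & np.isfinite(b)        # drop the fill-value pixels

ABdist, binsA, binsB = np.histogram2d(b[keep], a[keep], bins=[10, 12],
                                      weights=w[keep])
print(ABdist.shape)                           # (10, 12)

This sidesteps the ValueError, but it does not answer the underlying question of why the 0.10.1 griddata output differs from 0.9.0, which is what the version comparison suggested above is meant to isolate.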
Has the UK's A-Level Physics Been Diluted Over The Years? I don't know since I have no first-hand knowledge of it. But this letter-writer to The Telegraph seems to think so. I have taught physics at A-level since 1988, having taken the exam myself in 1983. In the past 25 years, the material on the syllabus has been diluted, with most of the mathematical requirements removed. We are left with a subject more akin to “physics studies” than “physics”. The style of questions has changed so that there are continuous prompts. The questions frequently occupy more of the paper than the spaces for the answers. The marking has become more lenient and forgiving of terminological errors. If you are in the UK and have some knowledge of this, I'd like to hear your opinion. 3 comments: Neil said... I have experience of A-Level physics in the Uk from a couple of years ago, and I'm now an undergraduate Physics student. Whilst at Sixth Form, taking my A-Levels, I knew I wanted to do Physics at university and was encouraged by my teachers. One way to get my interest was giving me some old physics A-Level text books from the mid-80s. And all I can say is that there is no doubt to the dilution of physics. In fact, a lot of the subject which used to be taught at A-Level is now 1st/2nd year undergraduate level. Not only this, the main problem with A-Level physics is the decision to make it accessable to students not taking A-Level mathematics. This wipes out any hope of using Calculus in A-Level physics, for that is no longer taught in mathematics until A-Level. It also wipes out the use of logarithms, natural logarithms, exponents, and even anything beyond basic trigonometry. So students are presented with material which resorts to memorising simple harmonic motion (SHM) displacement, velocity and acceleration, as this has to be taught without the use of calculus. Such examples are rife - not being taught Newton's Second Law is anything to do with Calculus, being taught to find the area under a graph by counting squares. It's preposterous. I took Maths, Physics, Chemistry, Psychology, Further Maths at A-Level, and to be blunt physics was by far the easiest of any of them (even Psychology - which is seen as an easy subject). Physics certainly has been reduced to learning some wishy-washy concepts, and Physics Studies would be a much apter name. In the hope of broadening participation, the physics itself has been sacrificed. Through-out my time at University lecturers, tutors and professors have been amazed firstly at how little we have been taught of anything advanced (alot of which is assumed as known due to not having looked at the syllabus in a few years) and amazed at how well we cope with this lack of knowledge, and catch up. But undoubtedly this makes life harder as a student. Physics is not easy. On the whole, it is not popular plus the grade boundaries are so low because its hard to get anything above half. The teachers are not that good so who is to say its easier? Speak for yourself people. David said... Unfortunately I cannot comment directly on the question, as I only have experience of the most recent physics A level. However, Neil is correct. University admissions tutors have explained that physics undergraduate courses are all 4 year courses as the mathematical understanding is too low for students entering. One lecturer in engineering, who taught freshers maths for engineering, despaired as he had students who knew how to perform techniques, but no idea when, where, why or how they worked. 
Admittedly, this is the fault of mathematics teaching. However, the most advanced mathematical technique in the physics A level was (last year) perhaps simultaneous equations involving kinetic energy and momentum in perfectly elastic collisions. The biggest problem is, in my opinion, even more simple. The concept of 'significant figures' i.e. precision, is a brief side note. Unfortunately, the AEA (the only hard physics exam) is being scrapped, and 'social concerns' receiving increased prominence in the specifications. This would be fine for chemistry, if 'environmentally friendly' principles were taught, although that does not appear to be the case. Physics, however has no such potential resource. It is disgraceful.
Karl Walter Gruenberg Born: 3 June 1928 in Vienna, Austria Died: 10 October 2007 in London, England Click the picture above to see four larger pictures Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Karl Gruenberg was born into a Jewish family in Vienna. While he was still a young child his parents separated and the young boy was brought up in a country which became increasingly hostile to those of Jewish origin. In 1933 Hitler came to power in Germany and Dollfuss took control of Austria in what was essentially a Fascist coup. Austria attempted to remain independent of Germany but on 12 March 1938, the day before a plebiscite was to be held on the independence issue, German troops invaded. Many Austrians of Jewish origin, seeing what was happening, tried to leave the country. The Kindertransport allowed Karl to be sent to England in March of 1939 but life was not easy for the ten year old German speaking boy. He was extremely unhappy for a few months but life became somewhat easier when his mother was able to join him before World War II began in September 1939. At first Karl attended Shaftesbury Grammar School in Dorset. With Britain at war with Germany, life was not easy for a schoolboy who spoke German and little English. Gradually life got better and in 1943 Karl and his mother moved to London where he entered Kilburn Grammar School. Soon Gruenberg flourished, achieving excellent school grades, and his broad international based views meant he became very happy with his new life. He won a scholarship to study mathematics at Magdalene College, Cambridge, and after the award of a BA in 1950 he continued to undertake research at Cambridge under Philip Hall's supervision. Roseblade writes [2]:- Before Hall's lectures young mathematicians were mostly silent; only one stood out - Karl Gruenberg - always talking animatedly. That animation was characteristic; sometimes one felt he would never stop, particularly when he was in argumentative mood. But it was always friendly and he never had quarrels. By this time Gruenberg had become a British citizen, all the procedures being completed by 1948. He was awarded a doctorate in 1954 for his thesis A Contribution to the Theory of Commutators in Groups and Associative Rings. Before the award of his doctorate Gruenberg had published a number of papers such as Some theorems on commutative matrices (1951), A note on a theorem of Burnside (1952), Two theorems on Engel groups (1953), and Commutators in associative rings (1953). The first and last of these papers were written jointly with M P Drazin. Also before he was awarded his PhD, Gruenberg had been appointed as an Assistant Lecturer in Mathematics at Queen Mary College, part of London University [1]:- ... this was the first of Kurt Hirsch's many inspired appointments. He was awarded a Commonwealth Fund Fellowship which enabled him to spend 1955-56 at Harvard then 1956-57 at the Institute for Advanced Studies at Princeton [2]:- In between he caught the travel bug that never left him. Back in England after his two years in the United States, Gruenberg returned to Queen Mary College where he was appointed as a Lecturer in Mathematics. He remained at Queen Mary College for the rest of his career being promoted to Reader in 1961, then Professor in 1967. He was Head of the Pure Mathematics Department at Queen Mary College from 1973 until 1978. Roseblade writes [2]:- ... 
throughout his 70s, he was an active mathematical visitor to top-ranking universities all over the world. For many years he helped organise algebra meetings at the mathematical research institute in Oberwolfach, where he loved Black Forest walks. After dinner, he would take on all comers at table tennis - and win. Gruenberg's first research topic led him to a study of Engel groups. However the direction of his research moved towards cohomology theory, particularly its applications to group theory. Typical of this is his famous Some cohomological topics in group theory which appeared in the Queen Mary College Mathematics Notes series in 1967. I B S Passi writes in a review:- The subject of these notes - which are based on the lectures the author gave at Queen Mary College, London, in 1965 - 6 and at Cornell University in 1966 - 7 - is "group theory with a cohomological flavour". These notes are of great interest to workers in group theory who have some background in homological algebra. The chapters are: (1) Fixed point free action; (2) The cohomology and homology groups; (3) Presentations and resolutions; (4) Free groups; (5) Classical extension theory; (6) More cohomological machinery; (7) Finite p-groups. Results of many mathematicians such as Burnside, Thompson, Serre, Mac Lane, Magnus, Fox, Iwasawa, Golod, Safarevic, Roquette, and Gaschütz are discussed, but large parts of the work was based on results by Gruenberg himself. In 1970 these notes were republished as Cohomological topics in group theory by Springer-Verlag with four additional chapters: (8) Cohomological dimension; (9) Extension categories: general theory; (10) More module theory; (11) Extension categories: finite groups. In 1976 he published Relation modules of finite groups which appeared in the Conference Board of the Mathematical Sciences Regional Conference Series in Mathematics of the American Mathematical Society. He writes in the Preface:- This booklet reproduces a course of ten lectures that I gave at an N S F Regional Conference at the University of Wisconsin-Parkside 22 July-26 July 1974. The aim of the lectures was twofold. On the one hand, I wanted to show group theorists how the presentation theory of finite groups can nowadays be successfully approached with the help of integral representation theory. On the other hand, I hoped to persuade ring theorists that here was an area of group theory well suited to applications of integral representation theory. As a result, the course had to be constructed so that only a modicum of either group theory or module theory would be presupposed of the audience. The aim of this printed version remains the same. To achieve it, I have felt it advisable to fill in and expand the Parkside lectures at a number of places. For this I have drawn on lectures that I gave at the Australian Summer Research Institute held at the University of Sydney in 1971 and at the Australian National University at Canberra in 1974. For a fuller version of the Preface see Relation modules. Martin Dunwoody writes:- The author gives an excellent account of the theory of the relation modules of finite groups. Practically all of the theory has previously appeared in papers of the author and his collaborators. In addition to these research level texts, Gruenberg also published an undergraduate level text (written jointly with A J Weir) Linear geometry. L M Kelly writes:- The authors observe that with a limited number of selective omissions the book could be used as a first course in linear algebra. 
The method of exposition is purely algebraic. The book is not graced (sullied) with a single diagram because "we feel that diagrams are of most help if drawn by the reader". Thus the appeal is more to the algebraist than to the geometer. ... The writing style is direct, clear and efficient. The introduction of topics is generally well motivated. There are over 250 exercises mostly non-routine, designed "to shed further light on the subject The authors of [1] write about Gruenberg's "forty almost continuous years at Queen Mary College":- During this period Queen Mary College has become one of the major centres for Mathematics in Britain and Karl Gruenberg has been at the heart of that development. This is due not only to his mathematical ability but also his personality and enthusiasm for mathematics. Gruenberg was always encouraging to his mathematical colleagues, and he showed particular kindness to those embarking on a mathematical career. I [EFR] remember the first British Mathematical Colloquium I attend at Swansea in 1967. I gave a talk in the 'Group Theory' splinter group session on work which I was doing for my doctorate. Kurt Hirsch and Karl Gruenberg sat in the front row and both made encouraging and helpful comments to me after my talk. As to Gruenberg's life outside mathematics Wehrfritz writes [3]:- Gruenberg was a very cultured person with many interests well outside of mathematics. Particular interests of his were the theatre, music, architecture and painting. Years ago, I remember him as a pretty nifty left-handed table tennis player, though he always wrote with his right hand. Article by: J J O'Connor and E F Robertson List of References (3 books/articles) A Poster of Karl Gruenberg Mathematicians born in the same country Additional Material in MacTutor Honours awarded to Karl Gruenberg (Click below for those honoured in this way) BMC morning speaker 1958, 1974, 1994 Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. Famous curves Time lines Birthplace maps Chronology Search Form Glossary index Quotations index Poster index Mathematicians of the day Anniversaries for the year JOC/EFR © July 2008 School of Mathematics and Statistics Copyright information University of St Andrews, Scotland The URL of this page is:
F
The Use of Selection Modeling to Evaluate AIDS Interventions with Observational Data
Robert Moffitt

I. INTRODUCTION

This paper considers the potential applicability to AIDS interventions of nonexperimental evaluation methods developed by econometricians in the evaluation of social programs. Econometric methods for program evaluation have been studied by economists over the past twenty years and are by now quite well-developed. To give the discussion a focus, two types of interventions are considered: AIDS counseling and testing (C&T) programs, and AIDS programs run by community-based organizations (CBOs). While both C&T programs and CBOs are quite diverse, especially the CBOs, many are designed to encourage the adoption of sexual prevention behaviors and to encourage risk reduction behaviors more generally. It is this outcome that will be the focus of the analysis here.

This paper was presented at the Conference on Nonexperimental Approaches to Evaluating AIDS Prevention Programs convened on January 12-13, 1990 by the Panel on the Evaluation of AIDS Interventions of the National Research Council. The views expressed in this paper are those of the author; they should not be attributed to the Panel or to the NRC. Comments on an earlier version of the paper from James Heckman, V. Joseph Hotz, Roderick Little, Charles Manski, and Lincoln Moses are appreciated. A version of this paper is to appear in Evaluation Review.

In the next section of the paper, a brief historical overview of the development of econometric methods for program evaluation is given. Following that, in Section III, a more formal statistical exposition of those methods is given. This section constitutes the major part of the paper. The conclusion of the discussion is that nonexperimental evaluation in general requires either adequate data on the behavioral histories of participants and non-participants in the interventions, or the availability of identifying variables ("Z's") that affect the availability of the treatment to different individuals but not their behavior directly. Whether either of these conditions can be met in the evaluation of AIDS interventions is then discussed in Section IV for C&T and CBO programs. A summary and conclusion is provided in Section V.

II. HISTORICAL DEVELOPMENT OF ECONOMETRIC METHODS FOR PROGRAM EVALUATION

Most of the econometric methods for program evaluation have been designed to evaluate government-sponsored manpower training programs, where the major issue has been whether such programs increase individual earnings and other indicators of labor market performance. Such programs began to appear in the early 1960s with the Manpower Development and Training Act (MDTA) of 1962, and grew more extensive in the late 1960s as part of the War on Poverty.
They became a fixture in the 1970s and 1980s, though changing in name and form from, for example, the Comprehensive Employment and Training Act (CETA) program In the 1970s to the Job Training and Partnership Act (JTPA) program in the 1980s. However, economists have also conducted extensive studies of welfare programs of other types, of health and education programs, and many others. One of He earliest studies (Ashenfelter, 1978) presented an econo- me~ic mode] for the estimation of the effect of the MDTA program on earnings using observational data. Many studies were later conducted of the CETA program and have been surveyed by Barnow (1987~. No major evaluation studies of the J IPA program have been completed, although one is currently underway. A recent study of an experimental training program called Supported Work has been published by Heckman and Hotz (1989), and win be discussed further below. The econometric literature on program evaluation underwent a ma- jor alteration in its formal framework after the separate development of "selectivity bias" techniques in the mid-1970s. Originally, the selectivity bias issue in economics concerned a miss~ng-data problem that arises In the study of individual labor market behavior, namely, the inherent OCR for page 342 344 ~ EVALUATING AIDS PREVENTION PROGRAMS unobservability of the potential market earrungs of individuals who are not working. The development of techniques for properly estimating such potential earnings (Gronau, 1974; Lewis, 1974; Heckman, 1974) was quickly realized to have relevance to the estimation of the effects of public programs on economic behavior. As will be discussed extensively in the next section, a similar selectivity bias problem arises in obser- vational evaluation studies through the inherent unobservability of what would have happened to program participants had they not received the treatment, and of what would have happened to non-par~cipants had they undergone the treatment. The connecting link was first explicitly made by Bamow, Cain, and Goldberger (1980), which included a comparison of the new technique with earlier techniques. A textbook treatment of the applicability of selectivity bias methods to program evaluation be- came available shortly thereafter (Maddala, 1983), as well as a survey of the applicability of those methods to health interventions in particular (Maddala, 1985~. The recent work of Heckman and Robb (1985a, 1985b) represents the most complete and general statement of the selectivity bias problem In program evaluation using observational data, and provides He most Borough analysis of the conditions under which the methods win yield good estimates and of the estimation me~ods available to obtain such estimates. The analysis In the next section of this paper is heavily influenced by the work of Heckman and Robb, which is itself built upon the almost twenty years of work on econometric methods for program evaluation. III. THE STATISTICS OF PROGRAM EVALUATION WITH OBSERVATIONAL DATA Although the statistical methods exposited in this section are applicable to any program in principle and will be developed fairly abstractly, it may help for specificity to consider the evaluation of C&T programs. Such programs have many goals but, for present purposes, it will be assumed that the major goal is to encourage those who receive the services of the program to adopt risk reduction activities and sexual prevention behaviors to reduce the likelihood of HIV infection to themselves and to others. 
The aim of the evaluation is to determine whether such programs do indeed have such effects and, if so, to provide an estimate of their magnitude.

To begin the formal analysis, let Y be the outcome variable (e.g., level of prevention behavior) and make the following definitions:

Y*_it = level of the outcome variable for individual i at time t, assuming he has not received the "treatment" (i.e., the services of a C&T program)

Y**_it = level of the outcome variable for individual i at time t, assuming he has received the treatment at some prior date

The difference between these two quantities is the effect of the treatment, denoted α:

Y**_it = Y*_it + α    (1)

or

α = Y**_it − Y*_it    (2)

The aim of the evaluation is to obtain an estimate of the value of α, the treatment effect, from whatever data are available.1 The easiest way to think about what we seek in an estimate of α is to consider individuals who have gone through a C&T program and therefore have received the treatment, and for whom we later measure their value of Y**_it. Ideally, we wish to know the level of Y*_it for such individuals: we would like to know what their level of prevention behaviors, for example, would have been had they not gone through the program. If Y*_it could be known, the difference between it and Y**_it would be a satisfactory estimate of α. The difficulty that arises is that we do not observe Y*_it directly but only the values of Y*_it for non-participants.

Define a dummy variable for whether an individual has or has not received the treatment:

d_i = 1 if individual i has received the treatment
d_i = 0 if individual i has not received the treatment

Then a satisfactory estimate of α could be obtained by estimating the difference between Y**_it and Y*_it for those who went through the program:

α̂ = E(Y**_it | d_i = 1) − E(Y*_it | d_i = 1)    (3)

where E is the expectation operator. The estimate α̂ in (3) is, in fact, the estimate that would be obtained if we had administered a randomized trial for the evaluation. For example, as individuals come in through the door of a C&T program, they could be randomly assigned to treatment status or control status, where the latter would involve receiving none of the services of the program. At some later date we could measure the levels of Y for the two groups and calculate (3) to obtain an estimate of the effect of the program.

1 For simplicity, the treatment effect, α, is assumed to be constant over time and across individuals, and to be non-random. Random treatment effects across individuals have been incorporated by Bjorklund and Moffitt (1987) and are discussed by Heckman and Robb (1985a, 1985b).
2 In standard econometric practice, Y*_it is set equal to Xβ + ε, where X is a vector of observed variables, β is its coefficient vector, and ε is an error term.
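To make the randomized-trial benchmark in (3) concrete, here is a small simulation sketch. It is not from the paper; it is written in Python, and all variable names and parameter values are illustrative assumptions. It generates a hypothetical applicant pool with a constant treatment effect α, randomizes assignment at the door, and recovers α by the simple difference of post-program means.

```python
import numpy as np

# Illustrative sketch only: variable names and parameter values are
# assumptions, not data from the paper. The treatment effect alpha is a
# constant added to Y*, as in equation (1) and footnote 1.
rng = np.random.default_rng(0)
n, alpha = 10_000, 0.15

# Y*_it: the prevention-behavior level each applicant would show WITHOUT
# the program's services.
y_star = rng.normal(loc=0.5, scale=0.2, size=n)

# Everyone here has come in the door; assignment to services is random.
treated = rng.random(n) < 0.5
y_obs = np.where(treated, y_star + alpha, y_star)  # Y** = Y* + alpha

# Equation (3): difference of post-program means between the two arms.
alpha_hat = y_obs[treated].mean() - y_obs[~treated].mean()
print(f"true alpha = {alpha:.3f}, randomized-trial estimate = {alpha_hat:.3f}")
```

Because assignment is independent of Y*_it, both arms have the same expected untreated level, which is exactly why the calculation in (3) works; the paragraphs that follow show why the same calculation can fail with self-selected observational data.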
The Problem

The first key point of the statistical literature on evaluation is that observational, nonexperimental data do not allow us to calculate (3) and therefore do not allow us to compute the estimate that could be obtained with a randomized trial. This is simply because we generally do not observe in such data any individuals who would have taken the treatment but do not; we only generally observe individuals who did not take the treatment at all.3 What we can estimate with nonexperimental data is an effect denoted here as α̃:

α̃ = E(Y**_it | d_i = 1) − E(Y*_it | d_i = 0)    (4)

which is just the difference between the mean Y for participants, those who did take the treatment (d_i = 1), and the mean Y for non-participants, those who did not undergo the treatment (d_i = 0).

When will the estimate we are able to calculate, α̃, equal the estimate we would have obtained with a randomized trial, α̂? Comparison of (3) and (4) shows that the two will be equal if and only if the following condition is true:

E(Y*_it | d_i = 1) = E(Y*_it | d_i = 0)    (5)

In words, the two estimates of α are equal only if the value of Y*_it for those who did not take the treatment equals the value of Y*_it that those who did take the treatment would have had, had they not gone through the program.

The heart of the nonexperimental evaluation problem is reflected in equation (5), and an understanding of that equation is necessary to understand the pervasiveness and unavoidability of what is termed the "selection bias" problem when observational data are employed. The equation will fail to hold under many plausible circumstances. For example, if those who go through a C&T program are concerned with their health and have already begun adopting prevention behaviors even before entering the program, they will be quite different from those who do not go through the program even prior to receiving any program services. Hence equation (5) will fail to hold because those who go through the program have different levels of Y*_it, that is, different levels of prevention behavior even in the absence of receiving any program services. Our estimate α̃ will be too high relative to α̂, for the greater level of prevention behaviors observed for the treatment group subsequent to receiving services was present even prior to the treatment, and is therefore not a result of the treatment itself. Those who are observed to have actually gone through the program are therefore a "self-selected" group out of the pre-treatment population, and the estimate of α is contaminated by "selectivity bias" because of such self-selection.

3 Some evaluation designs make use of individuals on waiting lists as controls. Unfortunately, these individuals may not be randomly selected from the applicant pool; if they are not, their Y values will not be an accurate proxy for those of participants.

The unavoidability of the potential for selectivity bias arises because the validity of equation (5) cannot be tested, even in principle, for the left-hand side of that equation is inherently unobservable. It is impossible in principle to know what the level of Y*_it for those who went through the program would have been had they not gone through it, for that level of Y*_it is a "counterfactual" that can never be observed. We may know, as discussed further below, the pre-treatment level of Y for those who later undergo treatment, but this is not the same as the Y*_it we seek for the left-hand side of (5); we need to know the level of Y*_it for program participants that they would have had at exactly the same time as they went through the treatment, not at some previous time.4

Solutions

There are three general classes of potential solutions to the selection bias problem (Heckman and Robb, 1985a, 1985b).
None guarantees the elimination of the problem altogether, but rather each seeks to determine possible circumstances under which the problem could be eliminated. The question is then whether Lose circumstances hold. At the outset, it is important to note that two of the three solution methods have important implications for evaluation design because they require Be collection of certain types of data. Whether those data can be collected for AIDS programs then becomes the most important question, which is discussed In detail In Section IV. Solution ]: Identifying Variables ("Z's"J The selection bias problem can be solved if a vanable Zi is available, or one can be found, Cat (1) affects the probability that an individual receives Me treatment but which (2) has no direct relationship to Yip (e.g., no direct relationship to individual prevention behavior). What is an example of such a Zi? A Zi could be constructed if, for example, a C&T program were funded by CDC In one city and not In another for political 4Ie may be noted that Manski (1990) has pointed out that if Yz' is bounded (e.g., between O and 1), a worst-case/best-case analysis can be conducted in which the unobserved counterfactual is taken to equal each of the bounds in turn. This gives a range in which the true effect must lie instead of a point estimate. OCR for page 342 348 ~ EVALUATING AIDS PREVENTION PROGRAMS or bureaucratic reasons unrelated to the needs of the populations in the two cities, and therefore unrelated to the likelihood that the individuals in He two cities practice prevention behaviors. If a random sample of the relevant subpopulations were conducted in the two cities and data on Y were collected the data would include both participants and non-participants in Be city where the C&T program was funded—a comparison of the mean values of Y In the two could form the basis for a valid estimates of a. The variable Zi In this case should be Fought of as a dummy vanable equal to 1 In the C&T city and O in the over. The vanable satisfies the two conditions given above- it obviously affects whether Individuals In the two cities receive the treatment, since if Zi = 0 no C&T treatment is available, and it is unrelated to the level of YE in the two cities because the funding decision was made for reasons unrelated to sexual prevention behavior.6 This estimation method is known in econometrics as an "instrumental-var~able" method and Zi is termed an "instrument." It is an instrument In He sense that it is used to proxy the treatment variable itself. What is an example of an illegitimate Zi? The same dummy variable defined In the previous example would be illegitimate if the CDC funding decision were based not on political or bureaucratic decisions but on the relative level of need in the two cities. For example, if He C&T program were placed in the city with the higher rate of HIV infection, then Zi would not be independent of Yip the presence of the C&T program would be associated with lower levels of prevention behavior not because of a negative causal effect of He program but because of the reason for its placement. Further examples of legitimate and illegitimate "Z's" will be dis- cussed in Section IV. Solution 2: Parametric Distributional Assumptions on Yip A second solution to the selection bias problem arises if a paramedic distributional assumption on YE can be safely made or determined win reasonable certainty (Heckm~ and Robb, 1985a, 1985b). 
For example, if Y*_it follows a normal, logistic, or some other distribution with a finite set of parameters, identification of a program effect free of selection bias is possible. The reasons are relatively technical and difficult to explain in simple terms. However, this method will not be especially useful for the AIDS interventions because very little is known about the distribution of sexual prevention behaviors, for example, in the total population or even the high-risk population. Consequently, this method will not be considered further.

5 For example, if Y1 is the mean value of the outcome variable in the city with the program and Y0 is the mean value in the city without one, and if p is the fraction of the relevant subpopulation in the first city that participated in the program, the impact estimate would be calculated as (Y1 − Y0)/p.
6 Essentially, this is a case of what is often termed a "natural" experiment. It is similar to an experiment inasmuch as the probability of having the treatment available is random with respect to the outcome variable under study.

Solution 3: Availability of Cohort Data

A third solution method requires the availability of "cohort," "longitudinal," or "panel" data, that is, data on the same individuals at several points in time before and after some of them have undergone the treatment. In the simplest case, data on Y are available not only after the treatment but also before, giving a data set with one pre-treatment observation and one post-treatment observation for each individual, both participants and non-participants. In the more general case, three or more points in time may be available in the data. The use of such cohort data is sufficiently important to warrant an extended discussion.

To illustrate this method, first consider the situation that would arise if data at two points in time were available, one before the treatment and one after it. Let "t" denote the post-treatment point and "t−1" denote the pre-treatment point. Then, analogously with the cross-section case considered previously,

Y*_it − Y*_i,t−1 = change in Y_it from t−1 to t in the absence of having undergone the treatment

Y**_it − Y*_i,t−1 = change in Y_it from t−1 to t if having undergone the treatment

Then the effect of the treatment is α, and

Y**_it − Y*_i,t−1 = (Y*_it − Y*_i,t−1) + α    (6)

Since Y*_i,t−1 cancels out on both sides of (6), (6) is the same as (1) and therefore the true effect, α, is the same.

As before, a preferred estimate of the effect of the program could be obtained by a randomized trial in which those wishing to undergo the treatment (d_i = 1) are randomly assigned to participation or non-participation status. With data on both pre-treatment and post-treatment status, the estimate of the program effect could be calculated as:

α̂ = E(Y**_it − Y*_i,t−1 | d_i = 1) − E(Y*_it − Y*_i,t−1 | d_i = 1)    (7)

However, with observational data the second term on the right-hand side of (7) is not measurable since, once again, we cannot measure Y*_it for those who undergo the treatment. We can instead only use the data on Y*_it available from non-participants to estimate the program effect as follows:

α̃ = E(Y**_it − Y*_i,t−1 | d_i = 1) − E(Y*_it − Y*_i,t−1 | d_i = 0)    (8)

The estimate α̃ is often called a "differences" estimate because it is computed by comparing the first-differenced values of Y for participants and non-participants.
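As a numerical illustration of why the differences estimator can succeed where the cross-section comparison fails, the following sketch (illustrative Python, with all names and numbers assumed rather than taken from the paper) builds a population in which participation is self-selected on the baseline level of prevention behavior, so condition (5) fails, while the untreated growth rate is common to both groups, so condition (9) introduced below holds.

```python
import numpy as np

# Sketch comparing the cross-section estimator (4) with the "differences"
# estimator (8). All names and numbers are illustrative assumptions.
rng = np.random.default_rng(1)
n, alpha = 50_000, 0.15

# Self-selection in LEVELS: people with a higher baseline prevention level
# are more likely to enroll, so condition (5) fails ...
baseline = rng.normal(0.5, 0.2, n)                        # Y*_{i,t-1}
p_enroll = 1.0 / (1.0 + np.exp(-8.0 * (baseline - 0.5)))
d = rng.random(n) < p_enroll                               # participation

# ... but the untreated growth rate is common to both groups, so
# condition (9) holds.
growth = 0.10
y_pre = baseline
y_post = baseline + growth + alpha * d + rng.normal(0.0, 0.05, n)

cross_section = y_post[d].mean() - y_post[~d].mean()                     # (4)
differences = (y_post - y_pre)[d].mean() - (y_post - y_pre)[~d].mean()   # (8)
print(f"true alpha = {alpha:.3f}")
print(f"cross-section estimate (4) = {cross_section:.3f}")
print(f"differences estimate (8)   = {differences:.3f}")
```

In runs of this kind the cross-section estimate (4) overstates α while the differences estimate (8) stays close to it, mirroring panel (a) of Figure 1 discussed below.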
The estimate we are able to obtain in (8) will equal that we could have obtained in the randomized trial, (7), if and only if

E(Y*_it − Y*_i,t−1 | d_i = 1) = E(Y*_it − Y*_i,t−1 | d_i = 0)    (9)

Equation (9) is the key equation for the two data-point case and is the analogue to equation (5) in the single post-treatment data case. The equation shows that a data set with a pre-treatment and post-treatment observation will yield a good estimate of α if the change in Y*_it from pre to post would have been the same for participants, had they not undergone the treatment, as it actually was for non-participants. Sometimes the change in Y*_it is referred to as the "growth rate" of Y*_it, in which case we may say that our nonexperimental estimate requires that the growth rate of participants and non-participants be the same in the absence of the treatment.

Perhaps the most important point is that this condition may hold even though the condition in (5) does not. Equation (5), the condition that must hold for the nonexperimental estimate in a single post-treatment cross-section to be correct, requires that the levels of Y*_it be the same for participants and non-participants in the absence of the treatment. Equation (9), on the other hand, only requires that the growth rates of Y*_it be the same for participants and non-participants in the absence of the treatment, even though the levels may differ. The latter is a much weaker condition and will more plausibly hold.

The nature of the condition is illustrated in panels (a) and (b) of Figure 1. In panel (a), the pretreatment levels of non-participants and participants, A and A' respectively, are quite different: participants have a higher level of Y, as would be the case, for example, if those who later undergo C&T have higher prevention behaviors in the first place. From t−1 to t, the level of Y for non-participants grows from A to B, as might occur if everyone in the population under consideration (e.g., homosexual men) were increasing their degree of risk reduction behaviors even without participating in a C&T program. The figure shows, for illustration, a growth rate of Y for participants from A' to C, which is a larger rate of growth than for non-participants.

[FIGURE F-1 Examples in which conditions (9) and (12) hold and do not hold: panel (a), condition (9) holds; panel (b), condition (9) does not hold; panels (c) and (d), the corresponding cases in which condition (12) holds and does not hold.]

The estimate of the treatment effect, α̃, is also shown in the figure and is based on the assumption that, in the absence of undergoing C&T, the Y of participants would have grown from A' to B', in other words, by the same amount as the Y of non-participants grew. Of course, this assumption cannot be verified because point B' is not observed; it is only a "counterfactual." But clearly the estimate in the figure would be a much better estimate than that obtained from a single post-treatment cross-section, which would take the vertical distance between B and C as the treatment estimate. This would be invalid because equation (5) does not hold. Panel (b) in Figure 1 shows a case where condition (9) breaks down.
On that panel, a case is shown in which the Y of participants would have grown faster than that for non-participants even in the absence of the treatment (A' to B' is greater than A to B). This might arise, for example, if those individuals who choose to undergo C&T are adopting risk reduction behaviors more quickly than non-participants. In this case, our estimate α̃ is too high, since it measures the vertical distance between B" and C instead of between B' and C. Neither B' nor B" is observed, so we cannot know which case holds.

The primary conclusion to be drawn from this discussion is that we may be able to do better in our estimate of program effect with more data. Adding a single pre-treatment data point permits us to compute an estimate of the treatment effect, the differences estimator in (8), that may be correct in circumstances in which the estimator using a single post-treatment cross-section is not. The importance of having additional data on the histories of Y, or the sexual behavior histories of C&T participants and non-participants, for example, stands in contrast to the situation faced when conducting a randomized trial where, strictly speaking, only a single post-treatment cross section is required. Thus we conclude that more data may be required for valid inference in nonexperimental evaluations than in experimental evaluations.

This point extends to the availability of additional pre-treatment observations.7 Suppose, for example, that an additional pre-treatment observation is available at time t−2. The estimate calculable in a randomized trial is

α̂ = E[(Y**_it − Y*_i,t−1) − (Y*_i,t−1 − Y*_i,t−2) | d_i = 1] − E[(Y*_it − Y*_i,t−1) − (Y*_i,t−1 − Y*_i,t−2) | d_i = 1]    (10)

while the estimate permitted in an observational study is

α̃ = E[(Y**_it − Y*_i,t−1) − (Y*_i,t−1 − Y*_i,t−2) | d_i = 1] − E[(Y*_it − Y*_i,t−1) − (Y*_i,t−1 − Y*_i,t−2) | d_i = 0]    (11)

7 Gathering data from additional post-treatment observations is easier but does not serve the appropriate control function. Prior to the treatment, it is known with certainty that the program could have no true effect; after the treatment, it cannot be known with certainty what the pattern of the effect is, assuming it has an effect. Consequently, participant/non-participant differences in Y*_it after the treatment can never be treated with absolute certainty as reflecting selection bias rather than a true effect.

[...] the nonexperimental estimator. In the general case, a slight modification in the model allows us to write the estimate of the treatment effect as the following:8

α̃ = E(Y**_it | d_i = 1, Y*_i,t−1, Y*_i,t−2, ..., Y*_i,t−k) − E(Y*_it | d_i = 0, Y*_i,t−1, Y*_i,t−2, ..., Y*_i,t−k)    (13)

assuming that data are available for k pre-treatment periods. This estimator will equal that obtainable in a randomized trial if and only if the following condition holds:

E(Y*_it | d_i = 1, Y*_i,t−1, ..., Y*_i,t−k) = E(Y*_it | d_i = 0, Y*_i,t−1, ..., Y*_i,t−k)    (14)

This condition can be interpreted as requiring that the values of d_i and Y*_it must be independent of one another conditional upon the history of Y*_it up to t−1. Put differently, it must be the case that if we observe two individuals at time t−1 who have exactly the same history of Y*_it up to that time (e.g., the exact same history of sexual prevention behaviors), and who therefore look exactly alike to the investigator, they must have the same value of Y*_it in the next time period regardless of whether they do or do not undergo the treatment.
If, on the other hand, the probability of entering a C&T program is related to the value of Y*_it they would have had if the treatment were not available, the condition in equation (14) will not hold and the nonexperimental estimate will be inaccurate.

The Relationship between Data Availability and Testing of Assumptions

The discussion thus far has demonstrated that the availability of certain types of data (information on legitimate "Z" variables, or on individual histories) is related to the conditions that must hold, and the assumptions that must be made, in order to obtain an estimate of program effect similar to that obtainable in a randomized trial. A natural question is whether any of the assumptions can be tested, and whether it can be determined if the conditions do or do not hold. The answer to this question once again is related to data availability.

The technical answer to the question is that "overidentifying" assumptions can be tested but that "just identifying" assumptions cannot be (Heckman and Hotz, 1989). For present purposes, a less technical answer is that assumptions can be tested if the data available are a bit more than are actually needed to estimate the model in question. This is illustrated in Figure 2, which shows five different models that can be estimated on different data sets. The model at the top of the figure can be estimated on Data Set 1, while the two models below can be estimated on a richer data set, Data Set 2, and the two models below that can be estimated on a yet richer data set, Data Set 3.

8 This autoregressive model was estimated in an early economic study by Ashenfelter (1978). A simpler model but one more focused on the evaluation question was also analyzed by Goldberger (1972) in a study of methods of evaluating the effect of compensatory education programs on test scores when the treatment group is selected, in part, on the basis of a pretest score.

[FIGURE F-2 Estimable models with different data sets. Assumptions: A1, Z_i independent of Y*_it conditional on d_i; A2, no selection bias in levels (condition (9) holds); A3, no selection bias in differences (condition (12) holds). Data Set 1 (single post-program observation, no Z_i): Model I. Data Set 2 (single post-program observation, Z_i): Model II (A1 holds, A2 need not hold) and Model III (A2 and A3 hold, A1 need not hold). Data Set 3 (pre-program and post-program observations, Z_i): Model IV (A1 holds, A3 need not hold) and Model V (A3 holds, A1 and A2 need not hold).]

At the top of the figure, it is presumed that the evaluator has a data set (Data Set 1) consisting of a single post-treatment data point with Y information, but no other variables at all; in particular, no Z_i variable is in the data set. The best the analyst can do in this circumstance is to compare the Y means of participants and non-participants to calculate α̃ as in equation (4) above. This estimate will equal that obtainable from a randomized trial under the three assumptions shown in the box for Model I in the figure: that the missing Z_i is independent of Y*_it conditional on d_i, and that there is no selection bias in either levels or first differences. The first assumption is necessary to avoid "omitted-variable" bias, the bias generated by leaving out of the model an important variable that is correlated with both the probability of receiving the treatment and Y*_it. Suppose, for example, that Z_i is a dummy for city location, as before.
If city location is an important determinant of sexual behavior, and if the probability of treatment also varies across cities, then not having a variable for city location in the data set will lead to bias because the estimate of program impact (the dif- ference in mean Y between participants and non-participants) reflects, in part, ~ntercity differences in sexual behavior that are not the result of the treatment but were there to begin with. The second and third assumptions are necessary in order for the value of Yip for non-participants to be the proper counterfactual, that is, for it to equal He value that participants would have had, had they not undergone the treatment.9 Models ~ and m in the Figure can be estimated if the data set contains information on a potential Zi, like city location, but still only a single post-treatment observation on Ye (Data Set 21. Each of these models requires only two assumptions instead of three, as In Mode! I, but each model drops a different assumption. Mode] ~ drops the assumption that there is no selection bias in levels that is, it drops the assumption that (5) holds. This assumption can be dropped because a Zi is now available and the instrumental-vanable technique described above as Solution ~ is now available. In this method, Me values of Yin for participants and non-participants In a given city are not compared to one another to obtain a treatment estimate that estimate would be faulty because participants are a self-selected group. Instead, mean values of Yin across cities are compared to one another, where the cities differ in the availability of the treatment and therefore have different treatment proportions (e.g., a proportion of O if the city has no program at all, as in the example given previously). For the treaunent-effect estimate from this model to be accurate still requires the assumption Mat Be Zi is a legitimate instrument that Me differential availability of Me program across cities is not related to the basic levels of prevention behavior In each city (i.e., that Zi and Yip are independent). 9~ this case, Me third assumption is technically redundant because there will be no selection bias in differences if there is none in levels. This will not be true in other sets of three assumptions. Note too that, of course, more than three assumptions must be made, but these three are focused on for illustration because they are the three relevant to the richest data set considered, Data Set 3. With yet richer data sets, additional assumptions could be OCR for page 342 APPENDIX F 1 357 Not only does Model ~ require one less assumption than does Mode! I, it also permits the testing of that assumption and therefore He testing of the validity of Mode! I. The test of the Copped assumption that there is no selection bias in levels is based upon a comparison of impact estimates obtained from the two models. If the two are the same or close to one another, then it must be the case that there is, in fact, no selection bias in levels because the impact estimate in Mode] ~ is based upon participant/non-pa~ticipant comparisons whereas that in Mode! II is not. If the two are different, then there must be selection bias if the participant/nonparticipant differences within cities do not generate the same impact estimate as that generated by the differences in Yi' across different cities, the former must be biased since the latter is accurate (under the assumption Mat the Zi available is legitimate). 
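A compact sketch of the across-city instrumental-variable idea in Model II, using the (Y1 − Y0)/p formula of footnote 5, is given below. The city structure, selection rule, and all parameter values are hypothetical assumptions for illustration, not features of any actual program.

```python
import numpy as np

# Sketch of the across-city instrumental-variable idea in Model II, using
# the (Y1 - Y0)/p formula of footnote 5. City structure, selection rule,
# and all numbers are hypothetical assumptions.
rng = np.random.default_rng(2)
n_city, alpha = 20_000, 0.15

# A legitimate Z: the program is placed in city 1 for reasons unrelated to
# behavior, so untreated Y* has the same distribution in both cities.
y_star_0 = rng.normal(0.5, 0.2, n_city)   # city 0: no program available
y_star_1 = rng.normal(0.5, 0.2, n_city)   # city 1: program available

# Within city 1, enrollment is still self-selected on the baseline level.
enroll = rng.random(n_city) < 1.0 / (1.0 + np.exp(-8.0 * (y_star_1 - 0.5)))
y_city1 = y_star_1 + alpha * enroll
y_city0 = y_star_0

p = enroll.mean()                                   # treated share in city 1
iv_estimate = (y_city1.mean() - y_city0.mean()) / p             # footnote 5
within_city = y_city1[enroll].mean() - y_city1[~enroll].mean()  # self-selected
print(f"true alpha = {alpha:.3f}")
print(f"across-city IV estimate = {iv_estimate:.3f}")
print(f"within-city naive comparison = {within_city:.3f}")
```

The within-city participant/non-participant comparison in the last line is distorted by the same self-selection as before, while the across-city contrast scaled by the participation rate p is not, provided program placement really is unrelated to the cities' underlying behavior.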
Mode} m takes the opposite tack and drops the assumption that Zi is legitimate but maintains the assumption that there is no selection bias In levels. The mode] estimates the treatment effect by making participant- non-participant comparisons only within cities, that is, conditional on Zi. If there are cities where die program is not present at all, data on Yi' from those cities are not utilized at ah, unlike the method in Mode} H. The Model m impact estimate win be accurate if there is no selection bias into participation but it will also be accurate even if intercity variation is not a legitimate Zi (e.g., if program placement were based upon need). In this case, a comparison of the impact estimate with that obtained from Mode! where participants and non-participants across cities were pooled into one data set and city location was not controlled for because the vanable was not available provides a test for whether intercity variation is a legitimate Zi. If it is not (e.g., if program placement across cities is based on need~then Models ~ and m will produce quite different treatment estimates, for Model ~ does not control for city location but Mode} m does (Mode] m eliminates cross-city variation entirely by examining only participar~t/non-participant differences within cities). On He other hand, if city location is a legitimate Zi (e.g., if program placement is independent of need) Den the two estimates should be close to one another. The implication of this discussion is that Data Set 2 makes it possible to reject Model I by finding its assumptions to be invalid. This testing of Mode} ~ is possible because Data Set 2 provides more data than is actuaBy necessary to estimate the model. Unfortunately, this data set does not allow the evaluator to test the assumptions of Models ~ and m necessary to assure their validity. Each makes a different assumption Model assumes that Zi is legitimate, while Model m assumes no selection bias to be present- and the estimates from the two need not be the same. If OCR for page 342 358 ~ EVALUATING AIDS PREVENTION PROGRAMS Hey are different, the evaluator must gather additional Information. Such additional information may come from detailed institutional knowledge—for example, of whether Zi is really legitimate (e.g., detailed knowledge of how programs are placed across cities). But another source of additional information is additional data, for example, information on a pre-program measure of Yi`. For example, if Data Set 2 is expanded by adding a pre-program measure of Y (Data Set 3) the assumptions of Models ~ and m can be tested by estimating Models IV and V shown In the Figure. Each of these models drops yet another assumption, although a different one in each case. Model IV drops the assumption that there is no selection bias In differences but continues to make the assumption that Zi is a legitimate instrument. The impact estimate is obtained by the instrumental-var~able technique, as in Mocle] it, but in this case by comparing the means of LYE - You) across cities, thereby eliminating selection bias in levels if there is any. Mode] V drops the assumption that there is no selection bias in levels by applying the difference estimate in (~) but still assumes that there is no selection bias in differences. Once again, the richer data set permits the testing of the assumptions that went into Models ~ and m and therefore permits their rejection as invalid. The arrows in the Figure between Models show which models can be tested against one another. 
A comparison of the estimates of Mode] IV to Dose of Mode} ~ provides a test of the Gird assumption (that there is no selection bias in differences); a comparison of the estimates of Mode] V and Mode] ~ provides a test of Me first assumption (that Zi is a legitimate instruments, a comparison of the estimates of Mode] V and Mode} m provides a test of whether the second assumption holds (that there is no selection bias In levels). If each comparison indicates estimates that are similar to one another, the relevant assumption in Me more restricted mode] (Mode] II or Mode] my should be taken to be valid; when estimates differ, however, the assumption involved should be taken as invalid and Me more restricted mode] should be rejected. Thus Models II and m may be discarded. As before, Models IV and V now require certain assumptions in order for their impact estimates to be valid. The estimates required for each are different, but neither can be tested unless more information or more data were available. An additional pre-program data point or an additional Zi variable would ennch the data set and permit the assumptions of Me two models to be tested. New models made possible by increasing the richness of the data set permit the evaluator to discard more and more assumptions arid therefore obtain impact estimates that are more and OCR for page 342 APPENDIX F 359 more reliable. This strategy can be pursued until models are found that are not rejected by richer data sets.~° IV. APPLICATION TO AIDS INTERVENTIONS Two of the interventions being considered are C&T and CBO programs. In 1989, the CDC funded from 1,600 to 2,000 C&T programs across the country. The programs offer HIV testing and some pre-test and post-test counseling to individual clients, and sometimes partner notification and referral as well. The programs are often offered in local health depart- ments or other local health facilities. The HIV testing and counseling are confidential and often also anonymous. There is considerable diversity across programs In the exact types of services offered, for local operators have considerable discretion In designing the type of program offered. The CBO programs under consideration here are Pose which conduct local community health education and risk reduction projects. The types of programs offered are more diverse than those offered In the C&T programs, ranging from educational seminars for AIDS educators to the establishment of focus groups, conducting counseling, educating high-risk groups about risk reduction strategies, and the sponsoring of street fairs and performing arts activities In support of education and risk reduction. The organizations conducting the activities are often small and have close ties to the community, and usually target their activities on specific high- risk groups or other subsegments of the community. At present there is little systematic knowledge of the types of activities sponsored by CBOs on a nationwide basis. Although C&T and CBO programs are quite distinct in their m~s- sions, they pose similar evaluation problems since both are generally aimed at altering sexual behavior in a target population. To evaluate whether the various programs have any impact at all, and to estimate the magnitude of the impact of different types of programs, systematic and careful evaluation strategies are required. 
The Panel on the Evaluation of AIDS Interventions recommends randomized trials wherever possible to evaluate these programs.l2 Un- fortunately, randomization win be difficult to apply In many cases. First 10Never~eless, as I have stressed elsewhere (Moffitt, 1989), at least one untested assumption must, by definition, always be made in any nonexperimental evaluation. It is only in a randomized trial that such an assumption is no longer necessary for valid impact estimates to be obtained. ~1 Of course, this is not the only goal of these programs and there are many other important ones as well. The techniques discussed in Section m will often be applicable to the evaluation of program impact on over goals, albeit with appropriate modification. 12The panel qualifies this recommendation in several respects. First, it recommends evaluation of only new CBO projects in order not to disrupt the operations of on-going ones. Second, for ethical OCR for page 342 360 ~ EVALUATING AIDS PREVENTION PROGRAMS and foremost are the ethical issues involved in denying treatment at all, or denying a particular type of treatment, to individuals in the target population. The ethics in this case are not always a clear-cut issue. It is often argued, for example, that the ethical issues are less serious if individuals are not assigned to a zero-treatment cell but only to different types of treatments, each of which represents a gain over the ~ndivid- ual's alternatives outside the experiment. However, even here there are ethical issues involved in any alteration of the natural order of priority In treatment assignment that would occur in the absence of random~za- tion, especially if those operating the program believe that individuals are already being assigned to the "best" treatment for each individual. Second, there are likely to be serious political difficulties as well, for AIDS treatment has already become a highly politicized issue in local communities, and popular resistance to randomization will no doubt be even more difficult to overcome Can it already is for other programs. Third, more than in most randomized trials, those In the AIDS context require a high degree of cooperation from the indigenous staff operating the programs, both to elicit accurate responses from the subjects, to re- duce attntion, and in light of confidentiality requirements that often make it difficult for outside evaluators to be integrally involved In the operation and data collection of the experiment. Such cooperation may be difficult to achieve if randomization is talcing place. ~ any case, it is clear that observational, nonexperimental evaluation techniques must be given serious consideration in the evaluation of AIDS interventions. The techniques outlined in Section m are of potential applicability to such interventions. It is no doubt obvious that in both C&T and CBO programs selectivity bias is likely to be a problem that those who choose to voluntarily make use of the services are likely to be quite different from those who do not, even if they had not received any program services. The techniques outlined in Section m for addressing the selectivity bias problem point In very specific directions for a solution to the problem, namely, (~) the search for appropriate "Z's," and (2) the collection of sexual behavior histories. ~ addition, although it has not been heavily emphasized thus far, those techniques implicitly require the collection of data on non-participants as well as participants. 
If data on only participants are available, and therefore only a before-and-after study can be conducted, it will be very difficult to identify the effects of the treatment on behavior given Me rapid increases in AIDS knowledge In the reasons, it recommends against randomization for C&T evaluations if a zero-treannent cell is involved. prefemng Hat all cells involve some type of treatment. OCR for page 342 APPENDIX F ~ 361 general population and the presumed steady change in sexual prevention behaviors that are occulting independently of these programs. The Search for Z's First, consider the issue of whether appropriate Z's can be found for AIDS interventions. It is likely to be difficult to locate such Z's, but not necessarily impossible. It is much easier, in fact, to identify variables Cat are inappropriate as Z's than variables that are appropriate. For example, it is extremely unlikely that arty sociodemographic or health characteristic of individuals themselves would be appropriate. Health status, education level, prior sexual history, and other such characteristics no doubt affect the probability that an individual enrolls in a C&T or CBO program but also unquestionably are independently related to prevention behavior as well. Indeed, to use the language of economics, it is probably not possible to locate appropriate Z vanables on the "demand" side of the market—that is, among those individuals who are availing themselves of the programs—and it would be more fruitful to look on the "supply" side, where availability of programs is determined in the first place. On the availability side, the C&T and CBO programs are indeed differentially placed across neighborhoods within cities, between cities and suburbs, across metropolitan areas, a~nd across states and regions. Unfortunately for the evaluation effort, however, differential availability in most cases is certain to be closely related to need. Those cities most likely to have an extensive set of programs are those with large subsegments of the high-risk population a~nd those where REV incidence has already been determined to be high. Within cities, it is no doubt also the case that programs are more likely to be located in neighborhoods close to high-nsk populations than in neighborhoods far from them. With this process of program placement, differential availability win not constitute an appropriate Zi. If appropriate Z's are to be identified, it wild require a more detailed investigation than is possible here but there are several directions in which such an investigation could be pursued. First, a detailed examination of the funding rules of CDC and other federal agencies would be warranted. Grants are made to applying C&T and CBO sponsors, and no doubt the need of the population to be served is a criterion in the funding decision. But He availability of a Zi does not require that need not be used at all in the decision, only that it not be the sole cntenon. To the extent that other criteria are used to make funding decisions, criteria unrelated to HIV incidence in the area, Z's may be identified. In addition, it is rarely the case that federal funding decisions are as rational and clear-cut OCR for page 342 362 ~ EVALUATING AIDS PREVENTION PROGRAMS as published funding formulas and formal points criteria suggest. It is almost always the case that some agency discretion, political factors, or bureaucratic forces come into play in some fraction of the decisions. To the extent Hat they are, appropriate Z's will be available. 
Second, a detailed study of several large cities may result In the identification of other Z's. For example, it has been estimated that 60 percent of the male homosexual population in San Francisco has not been tested for REV infection and has, therefore, almost certainly also not enrolled in a C&T or CBO program.~3 Why this percent is so high could be investigated. Perhaps the 60 percent who have not been tested are those with low probabilities of HIV In the first place, or those who are already practicing prevention behaviors -in this case, no appropriate Zi would be available. On the other hand, some of the non-par~cipants may be located in areas where no C&T or CBO program is present for example, if they do not live in particular neighborhoods that have been targeted. If so, differential access to a program could serve as the basis for a Z. Collection of Histories The collection of data In general, and histories in particular, is likely to be difficult for the evaluation of AIDS interventions. The confidentiality of the testing and counseling process as well as the inherently sensitive nature of the questions that must be asked to obtain information on the necessary behaviors makes the prospect of obtaining reliable data highly uncertain at our present state of knowledge. Obtaining informed consent from those receiving the treatment as well as others may be problematical, and may result in self-selected samples that threaten the integrity of the design and consequently the validity of any impact estimates obtained. These considerations make difficult the prospect of obtaining even a single wave of post-program data, much less multiple penods of pre- program data.~4 Randomized trials have the advantage of requiring less data collection than observational studies, as noted In Section m, and hence are relatively favored in this respect. Nevertheless, cohort studies In this area have been undertaken and and have often been successful In retaining individuals In the sample, more cohort collection efforts are underway. For example, Kasiow and colleagues (1987) report the results of a survey of sexual behavior of . _ Washington Post, January 9, 1990. 14Histories can be collected from retrospective questions as well as reinterviews. For example, one or two pre-program interviews could be conducted, with the earliest one also containing a retrospective battery. OCR for page 342 APPENDIX F ~ 363 5000 asymptomatic homosexual men In which a baseline survey and lab tests were followed by reinterviews and tests at s~x-month intervals. As of the latest (IOth) wave, about 5 years into the study, from 76 percent to 97 percent of the individuals (across areas and risk groups) are still in the sample, a very high percentage. The success of the cohort is partly a result of solid confidentiality measures as well as the heavy involvement of local gay community leaders and trained local staff from the beginning of the study. Other cohort collection efforts include the CDC cross-city study of O'ReiBy, involving both homosexual men as well as IV drug users; the study of seven CBOs headed by Vincent Mor at Brown University; He San Francisco city clinic cohort and Hepatitis B cohort; and the upcoming Westat cohort sponsored by NCHSR. How successful these efforts win be remains to be seen, but there is no question that serious cohort studies are being undertaken In increasing number. 
If they are successful, and if the histories described In Section m can be obtained, program evaluation designs will be greatly enhanced and impact estimates will be obtainable with much greater reliability. V. SUMMARY AND CONCLUSIONS The evaluation of AIDS interventions poses difficult conceptual and prac- tical issues. Since randomized trials are unlikely to be feasible in many circumstances, evaluation methods for observational, nonexperimental data must be applied. Statistical methods developed by econorn~sts for the evaluation of the impact of social and economic programs over the past twenty years are applicable to this problem and have several ~rnpor- tant lessons for AIDS evaluations. The most important are that accurate estimates of program impact require (~) a systematic search for iden- ~fy~g "Z" vanables, vanables that affect the availability of program services to different populations but which are not direct detenn~nants of REV incidence or the adoption of prevention behaviors; or (2) the collection of sufficiently lengthy sexual histories from participants and non-participaIlts in the programs that can be used to reduce the selec- tion bias attendant upon participant/non-participant compansons. Both of these implications are quite concrete and should provide funding agen- cies and program evaluators win specific directions to search for and In which to pursue evaluation designs that win yield reliable estimates of program impact. REFERENCES Ashenfelter, O. (1978) Estimating the effect of Mining programs on earnings. Review of Economics arid Statistics 60:47-57. OCR for page 342 364 ~ EVALUATING AIDS PREVENTION PROGRAMS Barnow, B. (1987) We impact of CETA programs on earnings: A review of the literature. Journal of Human Resources 22:157-193. Barnow, B. Cain, G. and Goldberger, A. (1980) Issues in the analysis of selectivity bias. In E. Stromsdorfer and G. Parkas, eds., Evaluation Studies Review Annual, Volume 5. Beverly Hills, Calif.: Sage. Bjorklund, A. and Moffitt, R. (1987) Estimation of wage gains and welfare gains in self-selection models. Review of Economics and Statistics 69:42~9. Goldberger, A. (1972) Selection bias in evaluating treatment effects: Some formal illustrations. Discussion paper 123-72. Madison, Wisconsin: Institute for Research on Poverty. Gronau, R. (1974) Wage compansons- a selectivity bias. Journal of Political Economy 82:1119-1143. Heckman, J. J. (1974) Shadow prices, market wages, and labor supply. Econometrica 42:679-694. Heckman, J. J. and Hotz, V. J. (1989) Choosing among alternative nonexperimental methods for estimating the impact of social programs: The case of manpower training. Journal of the American Statistical Association 84:862-874. Heckman, J. J. and Robb, R. (1985a) Alternative methods for evaluating the impact of interventions: An overview. Journal of Econometrics 30:239-267. Heckman, J. J. and Robb, R. (1985b) Alternative methods for evaluating the impact of interventions. In J. Heckman and B. Singer, eds., Longitudinal Analysis of Labor Market Data. Cambndge: Cambridge University Press, 1985b. Kaslow, R. W. Ostrow, D. G. Detels, R., Phair, J. P. Polk, B. F. and Rinaldo, C. R. (1987) The Multicenter AIDS cohort study: Rationale, organization, and selected characteristics of the participants. American Journal of Epidemiology 126:310-318. Lewis, H. G. (1974) Comments on Selectivity Biases in Wage Comparisons. Journal of Political Economy 82: 1145-1155. Maddala, G. S. 
(1983) Limited-Dependent and Qualitative Variables in Econometrics. Cambridge: Cambridge University Press.
Maddala, G. S. (1985) A survey of the literature on selectivity bias as it pertains to health care markets. In R. M. Scheffler, ed., Advances in Health Economics and Health Services Research, Vol. 6. Greenwich, Conn.: JAI Press.
Manski, C. (1990) Nonparametric bounds for treatment effects. American Economic Review 80:319-323.
Moffitt, R. (1989) Comment on Heckman and Hotz. Journal of the American Statistical Association 84:877-878.
{"url":"http://books.nap.edu/openbook.php?record_id=1535&page=342","timestamp":"2014-04-18T10:47:27Z","content_type":null,"content_length":"89176","record_id":"<urn:uuid:58464b54-5a8e-48b0-ad07-600ae92b249d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Josephine, TX Trigonometry Tutor Find a Josephine, TX Trigonometry Tutor ...I have taught undergraduate physics that deals with MECHANICS(vectors, projectile, newtons laws, forces, rotational motion, pressure and fluid motion),THERMODYNAMICS(thermal expansion, calorimetry,phase change, conduction), ELECTROMAGNETISM(electric and magnetic forces, Fields, electric circuits... 25 Subjects: including trigonometry, chemistry, physics, calculus ...I taught courses at Richland College and Collin County Community College. My specialties are Physics I and Physics II, both algebra and calculus based. I also have experience with laboratory experiments and writing lab reports. 8 Subjects: including trigonometry, calculus, physics, geometry ...As with all of my math classes I had an A in it, but, more importantly, as a result of my experience in teaching it I've developed not only a keen grasp of the subject but a style of presenting it that my students get. I have taken over two years of programming in Pascal and a year of programmin... 41 Subjects: including trigonometry, chemistry, French, calculus ...I have helped two students achieve Semifinalist status (224+ scores). I teach an ACT prep class at a private school, helping students grasp underlying concepts so they can work a variety of problems. Students improve an average of 6+ points. I show students multiple methods so they learn new approaches and pick the one that makes most sense. 11 Subjects: including trigonometry, algebra 1, algebra 2, precalculus ...I am flexible, encouraging and patient with reputations for providing support for students who are struggling with mathematical concepts and quickly diagnose and develop strategies to fill the gaps with appropriate materials, also I am a very creative and talented instructor skilled at developing... 20 Subjects: including trigonometry, calculus, statistics, physics
{"url":"http://www.purplemath.com/Josephine_TX_Trigonometry_tutors.php","timestamp":"2014-04-19T09:50:04Z","content_type":null,"content_length":"24234","record_id":"<urn:uuid:5929578a-af4d-436f-969e-a80e90d16729>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: FW: Semantics and the problem of reference

Cristian Cocos cristi at ieee.org
Mon Oct 23 13:23:30 EDT 2000

Here are a few answers to Ketland's questions given from the point of view of a philosopher of mind, i.e. of someone who regards logic/mathematics from a conceptualist perspective (remember?: nominalism, realism AND conceptualism (I struggled to find an explanation why logicians and philosophers of mathematics tend to "forget" about the third solution to the medieval Problem of Universals; two answers come to mind: (1) the conceptualist solution has been associated with one form of it, i.e. intuitionism and (2) (stupid) fears of psychologism)). Anyway, here are some ideas:

> 1. What is set theory about?

Set theory is not about anything, at least not about anything in the sense in which Zoology is about animals. In order for some syntax to be a THEORY (of something), the existence of that something has to *precede* the theory; that is, the cognitive agent has to become aware of (identify) that something by means *other than* the syntax itself, which, obviously, doesn't happen in the case of set "theory" (except for, perhaps, extreme Godel-type Platonistic views). The attribute "theory" is, unfortunately, completely misleading in this case and should be taken cum grano salis, most likely regarded as a metaphor, kind of like "Sally took a walk in the park" is not to be regarded as if there were such things as walks which Sally grabbed with both hands and put one of them in her pocket (N.B. the whole scene took place in the park).

The sort of answer I am inclined to endorse can be traced back at least as far as Hume, Kant, Boole and Carnap: set theory is a syntax which describes the functioning of the mind, the workings of the brain at the mental level. N.B., the mind is not a *model* of the set-theoretic syntax, at least not in any non-trivial way: set-theoretic syntax simply IS how the mind works. Mathematics is the syntax of language (Carnap), specifically of the language OF THOUGHT. Or, in Kantian terms, mathematics is the science of the "Forms of Sensibility" etc. Still, if we persist in finding an analogue for the presumed subject which mathematics is about, a good candidate could be the "data structures" or "data patterns" on the cerebral cortex, and this would turn mathematics/logic into some sort of physiology of the (generic) human brain (not at the neural level, of course, but at the mental level). Mathematics would then be, if you want, the science of the activation/deactivation patterns of the memory cells or other cognitive structures constitutive of the mental processor. (I make no distinction between mathematics and logic for reasons explained in one of my previous postings.)

> 2. What exactly is the (intended) semantics for set theory?

See above (1).

> 3. What is the language of mathematics about?

See above (1).

> 4. What does it mean to say that a sentence of mathematics is true?

It means to say that it accurately reflects the processes unfolding on the cerebral cortex (...of the generic epistemic subject...).

> 5. Does mathematical truth mean "having a mathematical proof"? (Kanovei)

"Obviously NOT." See above (4).

> 6. How do we *know* that a mathematical statement/axiom is true?

Advances in neurophysiology will hopefully make that possible in the future. We will hopefully manage to map the brain in a way that should make the identification of the component subsystems possible, just like the modules of a computer (processor, memory, peripherals etc.).

> 7. The problem of reference (Quine, Putnam, the Skolem paradox, etc.):
> The predicate "set" refers to sets (just as "chicken" refers to chickens).

If you mean sets in some Platonic heaven, then no, it doesn't. See above.

Cristian Cocos, Dept. of Philosophy
UWO & St. Andrews
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-October/004470.html","timestamp":"2014-04-17T18:26:01Z","content_type":null,"content_length":"6325","record_id":"<urn:uuid:27ee1077-edaa-4ac3-bf15-7620d0982378>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Highland Park, MI Algebra 2 Tutor Find a Highland Park, MI Algebra 2 Tutor ...Obviously, Math is my specialty and ACT/SAT Test prep is my other passion. When I was in school Math did not come easily, and now it has become my gift. My ability to understand what students are struggling with and guide them into the right direction is my greatest strength. 9 Subjects: including algebra 2, geometry, algebra 1, SAT math ...I believe in the tutoring experience, that sometimes, all it takes is to have a tutor work with the student and fit the missing pieces of understanding back into place. I have had much success over the years and I look forward to being the jigsaw piece that helps complete your student's academic portrait! Leadership: There is a growing need to teach students leadership skills. 22 Subjects: including algebra 2, Spanish, reading, biology ...I have taught for five years algebra, plane geometry, solid geometry, calculus, physics, and many other science subjects to junior and high school students before. My tutoring approach begins with identifying the strengths/weaknesses of students and their learning capabilities. Based on this in... 33 Subjects: including algebra 2, chemistry, physics, calculus ...I love the subject, and I understand how to teach it to others in a clear precise way. I enjoy not only teaching precalculus, but I also like to do the problems myself. I have never received a grade less than a A- in this subject. 11 Subjects: including algebra 2, chemistry, algebra 1, organic chemistry ...I strive to be the one who can present you a new perspective and ignite your passion for learning. I've always been a good student, and my teaching experience started since elementary school when I helped classmates with their homework questions. I learned early on that it's best to teach the concepts and methodology rather than giving the answers. 26 Subjects: including algebra 2, reading, geometry, Chinese Related Highland Park, MI Tutors Highland Park, MI Accounting Tutors Highland Park, MI ACT Tutors Highland Park, MI Algebra Tutors Highland Park, MI Algebra 2 Tutors Highland Park, MI Calculus Tutors Highland Park, MI Geometry Tutors Highland Park, MI Math Tutors Highland Park, MI Prealgebra Tutors Highland Park, MI Precalculus Tutors Highland Park, MI SAT Tutors Highland Park, MI SAT Math Tutors Highland Park, MI Science Tutors Highland Park, MI Statistics Tutors Highland Park, MI Trigonometry Tutors Nearby Cities With algebra 2 Tutor Berkley, MI algebra 2 Tutors Center Line algebra 2 Tutors Detroit, MI algebra 2 Tutors East Detroit, MI algebra 2 Tutors Eastpointe algebra 2 Tutors Ferndale, MI algebra 2 Tutors Garden City, MI algebra 2 Tutors Hamtramck algebra 2 Tutors Hazel Park algebra 2 Tutors Inkster, MI algebra 2 Tutors Madison Heights, MI algebra 2 Tutors Oak Park, MI algebra 2 Tutors Royal Oak Twp, MI algebra 2 Tutors Royal Oak, MI algebra 2 Tutors Southfield Township, MI algebra 2 Tutors
{"url":"http://www.purplemath.com/highland_park_mi_algebra_2_tutors.php","timestamp":"2014-04-20T11:36:07Z","content_type":null,"content_length":"24494","record_id":"<urn:uuid:891e1dcb-7acf-42b1-bd20-2328524f2f7e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 Athar Ravish Khan Department of Electronics & Telecommunication Jawaharlal Darda Institute of Engineering and Technology, Yavatmal, Maharashtra, India The author evaluated the performance of synchronous DS-CDMA systems over multipath fading channel and AWGN Channel. The synchronous DS-CDMA system is well known for eliminating the effects of multiple access interference (MAI) which limits the capacity and degrades the BER performance of the system. This paper investigated the bit error rate (BER) performance of a synchronous DS-CDMA system over AWGN and Rayleigh channel, which is affected by the different number of users, as well as different types spreading codes. The promising simulation results explore the comparative study of different DS-CDMA system parameter and showed the possibility of applying this system to the wideband channel. Different MATLAB functions and MATLAB program segments are explained for the simulation of DS-CDMA system. KEYWORDS: CDMA system, QPSK, BER, Rayleigh Channel, AWGN channel, MATLAB program segment, Gold Sequence, M- sequence. I. INTRODUCTION Direct-sequence code-division multiple access (DS-CDMA) is currently the subject of much research as it is a promising multiple access capability for third and fourth generations mobile communication systems. Code-division multiple access (CDMA) is a technique whereby many users simultaneously access a communication channel. The users of the system are identified at the base station by their unique spreading code. The signal that is transmitted by any user consists of the user’s data that modulates its spreading code, which in turn modulates a carrier. An example of such a modulation scheme is quadrature phase shift keying (QPSK). In this paper, we introduce the Rayleigh channel and AWGN Channel, and investigated the bit error rate (BER) performance of a synchronous DS-CDMA system over these channels. In the DS-CDMA system, the narrowband message signal is multiplied by a large bandwidth signal, which is called the spreading of a signal. The spreading signal is generated by convolving a M-sequence & GOLD sequence code with a chip waveform whose duration is much smaller than the symbol duration. All users in the system use the same carrier frequency and may transmit simultaneously. The receiver performs a correlation operation to detect the message addressed to a given user and the signals from other users appear as noise due to de- correlation. The synchronous DS-CDMA system is presented for eliminating the effects of multiple access interference (MAI) which limits the capacity and degrades the BER performance of the system. MAI refers to the interference between different direct sequences users. With increasing the number of users, the MAI grows to be significant and the DS-CDMA system will be interference limited. The spreading M & GOLD sequences in a DS-CDMA system need to have good cross-correlation characteristics as well as good autocorrelation characteristics [P. Alexander et.al],[ E. Dinan et.al]. The goal is to reduce the fading effect by supplying the receiver with several replicas of the same 269 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 information signal transmitted over independently fading paths. The remainder of the paper is organized as follows. In the next section we present channel modelling. 
Section 3 deals with the modulation and demodulation scheme used in the system. Section 4 presents the proposed transmitter and receiver models for the simulation. The MATLAB functions and program segments, and the flow of the program, are explained in Sections 5 and 6 respectively; the paper ends with the simulation results and the conclusion.

II. CHANNEL MODEL

2.1. Rayleigh fading channel model: Rayleigh fading is a statistical model for the effect of a propagation environment on a radio signal, such as that used by wireless devices. Rayleigh fading models assume that the magnitude of a signal that has passed through such a transmission medium will vary randomly, or fade, according to a Rayleigh distribution (the radial component of the sum of two uncorrelated Gaussian random variables) [C. Trabelsi et al.]. Rayleigh fading is viewed as a reasonable model for tropospheric and ionospheric signal propagation as well as for the effect of heavily built-up urban environments on radio signals. Rayleigh fading is most applicable when there is no dominant propagation along a line of sight between the transmitter and receiver. Rayleigh fading is a reasonable model when there are many objects in the environment that scatter the radio signal before it arrives at the receiver; if there is sufficiently much scatter, the channel impulse response will be well modelled as a Gaussian process irrespective of the distribution of the individual components. If there is no dominant component to the scatter, then such a process will have zero mean and phase evenly distributed between 0 and 2π radians. The envelope of the channel response will therefore be Rayleigh distributed [Theodore S. Rappaport].

2.2. AWGN channel model: In the Additive White Gaussian Noise channel model, as the name indicates, Gaussian noise is added directly to the signal; scattering and fading of the information are not considered in this model [Theodore S. Rappaport].

III. MODULATOR AND DEMODULATOR

A QPSK signal is generated by two BPSK signals. To distinguish the two signals, we use two orthogonal carrier signals. One is given by cos2πfct, and the other is given by sin2πfct. A channel in which cos2πfct is used as a carrier signal is generally called an in-phase channel, or Ich, and a channel in which sin2πfct is used as a carrier signal is generally called a quadrature-phase channel, or Qch. Therefore, dI(t) and dQ(t) are the data in Ich and Qch, respectively. Modulation schemes that use Ich and Qch are called quadrature modulation schemes. The mathematical analysis shows that the QPSK signal can be written as [X. Wang et al.]

s_i(t) = √(2E_s/T_s) cos(2πf_c t + (2i - 1)π/4), i = 1, 2, 3, 4    (1)

This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed. This results in a two-dimensional signal space with unit basis functions; the even and odd samples of the data are carried on the in-phase and quadrature basis functions, given by Equations (2) and (3):

φ_1(t) = √(2/T_s) cos(2πf_c t)    (2)
φ_2(t) = √(2/T_s) sin(2πf_c t)    (3)

The first basis function is used as the in-phase component of the signal and the second as the quadrature component of the signal. An illustration of the major components of the transmitter and receiver structure is shown below.

Figure.1 QPSK Modulator

The binary data stream is split into the in-phase and quadrature-phase components. These are then separately modulated onto the two orthogonal basis functions. In this implementation, two sinusoids are used. Afterwards, the two signals are superimposed, and the resulting signal is the QPSK signal.
Note the use of polar non-return-to-zero encoding. These encoders can be placed before for binary data source, but have been placed after to illustrate the conceptual difference between digital and analog signals involved with digital modulation. In the receiver structure for QPSK matched filters can be replaced with correlators. Each detection device uses a reference threshold value to determine whether a 1 or 0 is detected as shown in the Figure (2) Figure.2 QPSK Demodulator IV. PROPOSED SYSTEM MODEL 4.1 Proposed Transmitter Model: The randomly generated data in system can be transmitted with the help of proposed transmitter model which is shown in Figure(3) given below Figure.3 DS-CDMA transmitter 271 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 At first, the data generator generates the data randomly, that generated data is further given to the mapping circuit. Mapping circuit which is consisting of QPSK modulator converts this serially random data into two parallel data streams even and odd samples i.e. Ich (in-phase) and Qch (quadrature phase) [X.Wang.et.al]. This Ich and Qch are then convolved with codes and spreaded individually by using M-sequence or Gold sequence codes .The spreaded data is given to the over sampler circuit which converts unipolar data into bipolar one, then this oversampled data is convolved using with help of filter coefficients of T filter. Then these two individual data streams are summed up and passed through Band pass filter (BPF) which is then transmitted to channel. 4.2 Proposed Receiver Model: The randomly generated data in system which is transmitted through the channel can be received with the proposed receiver model which is shown in Figure (4) given below. Figure.4 DS-CDMA receiver At the receiver ,the received signal passes through band pass filter (BPF).where s spurious signals eliminated. Then signal divided into two streams and convolved using filter co co-efficient, by which Inter Symbol Interference (ISI) in the signal is eliminated. This signal is dispreaded using codes, also synchronized. This two dispreaded streams are then faded to Demapping circuit which is consisting of QPSK demodulator. Demodulator circuit converts the two parallel data streams into single serial data stream. Thus the received data is recovered at the end. V. MATLAB SIMULATIONS 5.1 DS-CDMA System: This section shows the procedure to obtain BER of a synchronous DS-CDMA. In the synchronous DS-CDMA, users employ their own sequences to spread the information data. At each user's terminal, the information data are modulated by the first modulation scheme. Then, the first bits of the modulated data are spread by a code sequence, such as an M-sequence or a Gold sequence. The spread data of all the users are transmitted to the base station at the same time. The base station detects the information data of each user by correlating the received signal with a code sequence allocated to each user. In the simulation, QPSK is used as the modulation scheme. The parameters used for the simulation are defined as follows [Hiroshi Harada et.al]: sr = 2560000.0; ; % symbol rate ml = 2; % number of modulation levels br = sr * ml; % bit rate nd = 200; % number of symbol ebn0 = [0:20]; % Eb/No irfn = 21; % number of filter taps IPOINT = 8; % number of oversample alfs = 0.5; % roll off factor 272 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. 
©IJAET ISSN: 2231-1963 The coefficient of the filter is defined as given in the above program segment ,evaluates the performance of QPSK and the MATLAB function hrollfcoef is use to evaluate the filter coefficient based on the above parameter. % T Filter Function [xh2]=hrollfcoef(irfn,IPOINT,sr,alfs,0 ); % R Filter Function The parameter for the spread sequences, namely M-sequence and Gold sequences are used. By denoting variables as seq 1, or 2 a code sequence is selected. Next, the number of registers is set to generate an M-sequence. In synchronous DS-CDMA, the number of code sequences that can be allocated to different users is equal to the number of code lengths. Therefore, the length of the code sequence must be larger than the number of users. To generate a code sequence, we must specify the number of registers, the position of the feedback tap, and the initial values of the registers. To generate a Gold sequence and an orthogonal Gold sequence, two M-sequences are needed. Therefore, the following parameters are used. By using these parameters, a spread code is generated, and the generated code is stored as variable code. user = 3 % number of users seq = 1; % 1:M-sequence 2:Gold stage = 3; % number of stages ptap1 = [1 3]; % position of taps for 1st ptap2 = [2 3]; % position of taps for 2nd regi1 = [1 1 1]; % initial value of register for 1st regi2 = [1 1 1]; % initial value of register for 2nd Here, code is a matrix with a sequence of the number of users multiplied by the length of the code sequence. An M-sequence is generated by MATLAB function mseq.m, and a Gold sequence is generated by MATLAB function goldseq.m. An orthogonal Gold sequence can be generated by adding a 0 bit of data to the top or bottom of a Gold sequence. Because the generated code sequence consists of 0 and 1, the program converts it into a sequence consisting - 1 and 1. switch seq case 1 % M-sequence code = mseq(stage,ptap1,regi1,user); case 2 % Gold sequence m1 = mseq(stage,ptap1,regi1); m2 = mseq(stage,ptap2,regi2); code = goldseq(m1,m2,user); code = code * 2 - 1; clen = length(code); When rfade is 0, the simulation evaluates the BER performance in an AGWN channel. When rfade is 1, the simulation evaluates the BER performance in a Rayleigh fading environment [C.Trabelsi et.al]. rfade = 1; % Rayleigh fading 0:nothing 1:consider itau = [0,8]; % delay time dlvl1 = [0.0,40.0]; % attenuation level n0 = [6,7]; % number of waves to generate fading th1 = [0.0,0.0]; % initial Phase of delayed wave itnd1 = [3001,4004]; % set fading counter now1 = 2; % number of directwave + delayed wave tstp = 1 / sr / IPOINT / clen; % time resolution fd = 160; % doppler frequency [Hz] 273 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 flat = 1; % flat Rayleigh environment itndel = nd * IPOINT * clen * 30; % number of fading counter to skip Then, the number of simulation loops is set. The variables that count the number of transmitted data bits and the number of errors are initialized. nloop = 10; % simulation number of times noe = 0; nod = 0; The transmitted data in the in-phase channel and quadrature phase modulated by QPSK are multiplied by the code sequence used to spread the transmitted data. The spread data are then oversampled and filtered by a roll-off filter and transmitted to a communication channel. 
Here, MATLAB functions compoversamp2.m, compconv2 .m and qpskmod.m used for oversampling filtering, and modulation, filter parameter xh form T –filter is provided in compconv2 function. data = rand(user,nd*ml) > 0.5; [ich, qch] = qpskmod(data,user,nd,ml); % QPSK modulation [ich1,qch1] = spread(ich,qch,code); % spreading [ich2,qch2] = compoversamp2(ich1,qch1,IPOINT); % over sampling [ich3,qch3] = compconv2(ich2,qch2,xh); % filter Above program segment demonstrate the transmitter section of the DS-CDMA system. During this process ich1,qch1 get transformed into ich3 and qch3. The signals transmitted from the users are synthesized by considering the if-else statement depending upon the number of user ich4 and qch4 is if user == 1 % transmission based of Users ich4 = ich3; qch4 = qch3; ich4 = sum(ich3); qch4 = sum(qch3); The synthesized signal is contaminated in a Rayleigh fading channel as shown in below program segment . In reality, the transmitted signals of all users are contaminated by distinctive Rayleigh fading. However, in this simulation, the synthesized signal is contaminated by Rayleigh fading. Function sefade.m used to consider the Rayleigh fading if rfade == 0 ich5 = ich4; qch5 = qch4; else % fading channel [ich5,qch5] = sefade(ich4,qch4,itau,dlvl1,th1,n0,itnd1,now1,.... itnd1 = itnd1 + itndel; At the receiver, AWGN is added to the received data, as shown in the simulation for the QPSK transmission in Program Segment (5). Then, the contaminated signal is filtered by using a the root roll-off filter. Below program segment calculate the attenuation and add AWGN to the signal ich6 and qch6 and transform the signal to ich8 and qch8 using the filter coefficient xh2. spow = sum(rot90(ich3.^2 + qch3.^2)) / nd; % attenuation Calculation attn = sqrt(0.5 * spow * sr / br * 10^(-ebn0(i)/10)); 274 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 [ich6,qch6] = comb2(ich5,qch5,attn); % Add White Gaussian Noise (AWGN) [ich7,qch7] = compconv2(ich6,qch6,xh2); % filter sampl = irfn * IPOINT + 1; ich8 = ich7(:,sampl:IPOINT:IPOINT*nd*clen+sampl-1); qch8 = qch7(:,sampl:IPOINT:IPOINT*nd*clen+sampl-1); The resampled data are now the synthesized data of all the users. By correlating the synthesized data with the spread code used at the transmitter, the transmitted data of all the users are detected. The correlation is performed by Program, [ich9 qch9] = despread(ich8,qch8,code); % dispreading The correlated data are demodulated by QPSK. [ Fumiyuki ADACHI] Then, the total number of errors for all the users is calculated. Finally, the BER is calculated. demodata = qpskdemod(ich9,qch9,user,nd,ml); % QPSK demodulation noe2 = sum(sum(abs(data-demodata))); nod2 = user * nd * ml; noe = noe + noe2; nod = nod + nod2; VI. SIMULATION FLOWCHART In order to simulate the system following step are • Initialized the common variable • Initialized the filter coefficient • Select the switch for m-sequence and gold sequence • Generate the spreading codes • Initialize the fading by using variable rfade • Define the variables for signal to noise ratio and the number of simulation requires as the data is random BER must have the average value of number of simulation. • Simulate the system by using the proposed transmitter and receiver for different type of channel and codes • Theoretical value of BER for Rayleigh channel and AWGN channel can be calculated by ℎ ( ) = ( / ) -----(3) ℎ ( ℎ) = 1− -----(4) 275 Vol. 2, Issue 1, pp. 
269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 276 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 277 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 VII. SIMULATION RESULTS OBTAINED Figure.6 Performance of DS CDMA System in AWGN Environment With M Sequence Figure.7 Performance of DS CDMA System in AWGN Environment With GOLD Sequence Figure.8 Performance of DS CDMA System in Rayleigh Environment With Gold Sequence 278 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 Figure.9 Performance of DS CDMA System in Rayleigh Environment With M Sequence Figure.10 Performance of DS CDMA System in Rayleigh Environment With M & Gold Sequence Figure.11 Performance of DS CDMA System in AWGN Environment With M & GOLD Sequence 279 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 Figure.12 Performance of DS CDMA System in AWGN & Rayleigh Environment With M Sequence Figure.13 Performance of DS CDMA System In AWGN & Rayleigh Environment With Gold sequence VIII. RESULTS AND CONCLUSION In AWGN environment, when gold sequence or m sequence is used, for the different users the practical BER value for the minimum number of user is nearly approaches to the theoretical value of BER. In RAYLEIGH environment, when gold or m sequence is used, at the initial SNR value the practical and theoretical value of BER are same, as the SNR increases the practical BER value increases as compared to the theoretical value of BER. When the m sequence and gold sequence is considered in RAYLEIGH environment, at initial state the practical BER value and theoretical BER is same. But as the SNR increases, the practical BER value increases rapidly as compared to the theoretical BER value. When the m sequence and gold sequence is considered in AWGN environment, with single user, initially the practical BER value is same as the theoretical value, and with increasing SNR the practical value increases as compared to the theoretical value of BER. When either sequence is used in the system for AWGN and Rayleigh environment, initially the BER theoretical and practical value are nearly same. But, as the SNR value increases in case of the AWGN, the practical BER value increases rapidly as compared to the theoretical value, and in case of Rayleigh the practical value approaches to the theoretical value. 280 Vol. 2, Issue 1, pp. 269-281 International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963 The authors would like to thank firstly, our GOD, and all friends who gave us any help related to this work. Finally, the most thank is to our families and to our country INDIA which born us. [1] Dr. Mike Fitton, Mike Fitton, “Principles of Digital Modulation Telecommunications” Research Lab Toshiba Research Europe Limited. [2] P. Alexander, A. Grant and M. C. Reed, “Iterative Detection Of Code-Division Multiple Access With Error Control Coding” European Trans. [3] Hiroshi Harada and Ramjee Prasad, ”Simulation and Software Radio” for Mobile [4] X.Wang and H.V.Poor, “Wireless Communication Systems: Advanced Techniques for Signal [5] J. Proakis, Digital Communications, McGraw-Hill, McGraw-Hill. 
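As a quick numerical cross-check of equations (3) and (4) (plain arithmetic, not taken from the paper): at E_b/N_0 = 8 dB, i.e. E_b/N_0 ≈ 6.31, equation (3) gives P_b(AWGN) = (1/2) erfc(√6.31) ≈ 2 × 10^-4, while equation (4) gives P_b(Rayleigh) = (1/2)[1 - √(6.31/7.31)] ≈ 3.5 × 10^-2. The two orders of magnitude between these values illustrate the large gap expected between the AWGN and Rayleigh performance curves.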
[6] Sklar B., “A Structured Overview Of Digital Communications - A Tutorial Review - Part I ”, IEEE Communications Magazine, August 2003. [7] Sklar B., “A Structured Overview Of Digital Communications - A Tutorial Review - Part II ”, IEEE Communications Magazine, October 2003. [8] E. Dinan and B. Jabbari, “Spreading Codes for Direct Sequence CDMA and Wideband CDMA Cellular Networks”, IEEE Communications Magazine. [9] Shimon Moshavi, Bellcore, “Multi-user Detection for DS-CDMA Communications” , IEEE Communications Magazine. [10] Hamed D. Al-Sharari, “Performance of Wideband Mobile Channel on Synchronous DS-CDMA”, College of Engineering, Aljouf University Sakaka, Aljouf, P.O. Box 2014,Kingdom Of Saudi [11] Theodore S. Rappaport, “Wireless Communications Principles And Practice”. [12] Wang Xiaoying “Study Spread Spectrum In Matlab” School of Electrical & Electronic Engineering Nanyang Technological University Nanyang Drive, Singapore 639798. [13] Zoran Zvonar and David Brady, “On Multiuser Detection In Synchronous CDMA Flat Rayleigh Fading Channels” Department of Electrical and Computer Engineering Northeastern University Boston, MA 02115. [14] C.Trabelsi and A. Yongacoglu “Bit-error-rate performance for asynchronous DS-CDMA over multipath fading channels” IEE Proc.-Commun., Vol. 142, No. 5, October 1995 [15] Fumiyuki ADACHI “Bit Errror Rate Analysis of DS-CDMA with joint frequency –Domain Equalization and Antenna Diversity Combinning”IEICE TRANS.COMMUN.,VOL.E87-B ,NO.10 OCTOBER 2004 Athar Ravish Khan was born in Maharashtra, INDIA, in 1979.He received the B.E. degree in electronics and telecommunication engineering, M.E. degree in digital electronics from SGBA University Amravati Maharashtra India, in 2000 and 2011 respectively. In 2000, he joined B.N College of Engineering Pusad and worked as lecturer. In 2006 he joined as lecturer in J.D Institute of Engineering and Technology Yavatmal, Maharashtra INDIA and in March 2011 he became an honorary Assistant Professor there. He is pursuing Ph.D. degree under the supervision of Prof. Dr. Sanjay M. Gulhane. His current research interests include digital signal processing, neural networks and wireless communications, with specific emphasis on UWB in underground Mines-Tunnels. 281 Vol. 2, Issue 1, pp. 269-281
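The simulation described in Sections V and VI depends on the MATLAB library functions of Harada and Prasad (qpskmod.m, sefade.m, mseq.m, and so on), which are not listed in the paper. For readers without that toolbox, the following is a rough, self-contained Python/NumPy sketch of the same basic procedure for the chip-synchronous AWGN case only: generate data, spread each user's QPSK symbols with a code, sum the users, add noise, despread and count errors. Every function name, parameter value and code choice below is invented for this sketch (for example, the user codes are simply cyclic shifts of one length-7 m-sequence); it is not the authors' implementation and it omits oversampling, filtering and fading.

import numpy as np

def mseq(stage, taps, init):
    # Length-(2^stage - 1) m-sequence from a simple Fibonacci LFSR, mapped to {-1,+1}.
    # taps are 1-indexed register positions (here (1, 3), as in the paper's ptap1).
    reg, out = list(init), []
    for _ in range(2 ** stage - 1):
        out.append(reg[-1])
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]
    return 2.0 * np.array(out) - 1.0

def ber_ds_cdma_awgn(ebn0_db, n_users=3, n_sym=20000, seed=0):
    # Monte-Carlo BER of chip-synchronous DS-CDMA with QPSK mapping over AWGN.
    rng = np.random.default_rng(seed)
    base = mseq(3, (1, 3), (1, 1, 1))                       # one length-7 m-sequence
    codes = np.array([np.roll(base, k) for k in range(n_users)])
    n_chips = codes.shape[1]

    bits = rng.integers(0, 2, size=(n_users, 2 * n_sym))
    i_sym = 2.0 * bits[:, 0::2] - 1.0                       # in-phase symbols
    q_sym = 2.0 * bits[:, 1::2] - 1.0                       # quadrature symbols

    # Spread each user's symbols by its code and sum the users (synchronous uplink).
    tx = ((i_sym + 1j * q_sym)[:, :, None] * codes[:, None, :]).sum(axis=0)

    # Per-user bit energy: unit-amplitude chips, n_chips chips, 2 bits per symbol.
    eb = n_chips
    n0 = eb / 10 ** (ebn0_db / 10.0)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(tx.shape)
                               + 1j * rng.standard_normal(tx.shape))
    rx = tx + noise

    # Despread (correlate with each user's code) and make hard decisions.
    errors = 0
    for u in range(n_users):
        stat = rx @ codes[u] / n_chips
        errors += np.sum((stat.real > 0) != (i_sym[u] > 0))
        errors += np.sum((stat.imag > 0) != (q_sym[u] > 0))
    return errors / (n_users * 2 * n_sym)

for ebn0 in (0, 4, 8):
    print(ebn0, "dB:", ber_ds_cdma_awgn(ebn0))

With these choices the multiple-access interference between the shifted codes is small but non-zero, so the measured BER should sit slightly above the single-user QPSK theory of equation (3), which is the qualitative behaviour the paper reports for a small number of users.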
{"url":"http://www.docstoc.com/docs/109997895/PERFORMANCE-EVALUATION-OF-DS-CDMA-SYSTEM-USING-MATLAB","timestamp":"2014-04-16T15:16:54Z","content_type":null,"content_length":"82868","record_id":"<urn:uuid:35cac8ac-32b3-4795-ab7c-8a935642c009>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
MicroEconomics Oligopoly Slide 1 • Students: □ Ana Oliveira □ Fernando Vendas □ Miguel Carvalho □ Paulo Lopes □ Vanessa Figueiredo Slide 2 Competition Model Sequential Game Quantity Leadership Price Leadership Simultaneous Game Simultaneous Price setting Simultaneous Quantity Price setting Collude (corporate game) P. Lopes and F. Vendas V. Figueiredo M. Carvalho Presentation Structure Slide 3 Market Structure Pure competition Small competitors Pure Monopoly One Large Firm Initial Framework Slide 4 Monopolistic Competition Form : OLIGOPOLY “ Strategic interaction that arise in an industry with small number of firms.” – Varian, H. (1999 , 5th) Many Different Behavior Patterns of Behavior Class Framework Slide 5 Restrict to the case of 2 firms Simple to understand Strategic interaction Homogeneous product Study Framework Slide 6 In this case, one firm makes a choice before the other firm, according Stackelberg model, thus, our study will start from this model. Suppose, firm 1 (leader) and it chooses to produce a quantity (y1) and firm 2 (follower) responds by choosing a quantity (y2). Each firms knows that equilibrium price in the market depends on the total output. So we use the inverse demand function p(Y) to indicate that equilibrium, as function of industry output. Y = y1 + y2 The leader has to consider the follower´s profit-maximization problem, then we should think : What output should the leader choose to max its profits ? Sequential GameQuantity Leadership Slide 7 Assume that the follower wants to maximize its profits max p(y1+y2)y2 – c2(y2) The follower's profit depends on the output choice of the leader, but the leader´s output is predetermined and the follower simply views it as a constant.The follower wants to choose an output level such that marginal revenue (MR) equals marginal cost : When the follower increases its output, it increase its revenue by selling more output at the market price, but it also pushes the price down by ∆p, and this lowers its profits on all the units that were previously sold at the higher price. Sequential GameThe follower's Problem MR2 = p(y1+y2) + y2= MC2 Slide 8 The profit max choice of the follower will depend on the choice made by leader – the relationship is given by : y2 = f2 (y1) - Reaction function (Profit output of the follower as a function of the leader´s choice.) How follower will react to the leaders choice of output p(y1+y2) = a – b (y1+y2)(consider cost (C) equal to 0)So the profit function to firm 2 (follower) is : ∏2 (y1+y2) = ay2 – b y1y2 – by22 So, we use this form to draw the isoprofit lines (Fig.1) Sequential GameThe follower's Problem Slide 9 Sequential GameThe follower's Problem Fig.1 - The isoprofit lines graffic This reaction curve gives the profit-maximizing output for the follower Slide 10 There are lines depicting those combination of y1 and y2 that yield a constant level of profit to firm 2. Isoprofit lines are comprised of all points which satisfy equations ay2 – b y1y2 – by22 = ∏2 Firm 2 will increase profits as we move to Isoprofit lines that are further to the left. Firm 2 will make max possible profits when it's a monopolist, thus, when firm 1 chooses to produce zero units of output, as illustrated in fig 1. This point will satisfy the usual sort of tangency condition (RF). To understand it , we use : MR2(y1,y2)= a – by1 – 2by2(MR=MC ; MC=0) So, we have reaction curve of firm 2 y2 = Sequential Game Slide 11 It's action influence the output choice of the follower. This relationship is given by f2= (y1) [y2= f2(y1) ]. 
As we made , in case of the follower, the profit max problem for the leader is max p(y1+y2)y1 – c1(y1) Note that the leader recognizes that when it chooses output y1, the total output produced will be y1+ f2(y1) , its own output plus the output of the follower, so he has the influence in output of the follower. Let's see what happen : f2 (y1) = y2= It is the reaction function as illustrated in the previous slide Sequential GameLeadership problem Slide 12 Since we assume MC=0 the leader´s profit are : ∏1 (y1+y2)= p(y1+y2)y1= ay1 – by12 –by1y2 But the ouput of the follower , y2 , will depend on the leader´s choice via reaction function y2= f2 (y1). Simplifying all the calculus and set the MC as zero and MR as (a /2) – by1 , we simple find In order to find the follower output we substitute y*1 into the the reaction function: Sequential GameLeadership problem Slide 13 This two equations give a total industry output + = The Stackelberg solution can also be illustrated graphically using the Isoprofit curves (Fig.2). Here we have illustrated the reaction curves for both firms and the isoprofit curves for firm 1. To understand the graffic, firm 2 is behaving as a follower, which means that it will choose an output along its reaction curve , f2(y1). Thus, firm 1 wants to choose an output combination on the reaction curve that gives it the highest possible profits. But, it means, picking that point on the reaction curve that touches the lowest isoprofit line (as illustrated). It follows by the usual logic of maximization that the reaction curve must be tangent to the isoprofit curve at this point. Sequential GameLeadership problem Slide 14 Sequential GameLeadership problem Fig.2- Isoprofit curves (Stackelberg equilibrium) Slide 15 What is the follower problem? In equilibrium the follower must always set the same price as the leader. Suppose that the leader has a price: The follower takes this price and wants to maximize profits: The follower wants to choose an output level where the price equals to the marginal cost. Price Leadership Instead of setting quantity, the leader may instead set the price, in this case the leader must forecast the follower behaviour. • If one firm charged a lower price… • In this model the follower takes the price as being outside of is control since it was already set by the leader. • This determines the supply curve to the follower S(p); Slide 16 Price Leadership • What is the leader problem? • The amount of output that the leader will sell will be… • Supose that the leader has a a constant marginal cost of production: • Then the profits that achieves for any price “p” are given by: • In order to maximize the profits the leader wants to chose a price and a output combination... • It realizes if it sets a price “p” the follower will supply S(p) • ∏1(p)=(p-c)[D(p)– S(p)]= • =(p-c)R(p) • Where the marginal revenue equals the marginal cost. However, the marginal revenue should be the marginal revenue for the residual demand curve (the curve that actually measures how much output it will be able to sell at a each given price). Slide 17 Price LeadershipGraphical illustration The marginal revenue curve associated will have the same vertical intercept and be twice the step. 
Slide 18 Inverse Demand Curve: Follower cost function: Leader cost function: The follower wants to operate where price is equal to marginal cost: Setting price equal to marginal cost Price LeadershipAlgebraic example 1/2 Slide 19 Solving for the supply curve: The demand curve facing leader (residual demand curve) is: Solving for p as function of the leader’s output y1: This is the inverse demand function facing the leader. Setting marginal revenue equal to marginal cost: Solving for the leader’s profit maximization output: Price LeadershipAlgebraic example 2/2 • R(p) = D(p)-S(p)= • =a-bp-p=a-(b+1)p • MR1 = a/(b+1) – 2y1/(b+1) • MR1=a/(b+1)–2y1/(b+1)= • =c=MC1 Slide 20 Comparing Price Leadership and Quantity Leadership We’ve seen how to calculate the equilibrium price and output in case of quantity leadership and price leadership. Each model determines a different equilibrium price and output combination. • Price leadership • Price setting • Price and supply decision Quantity leadership Capacity choice Quantity Leader “We have to look at how the firms actually make their decisions in order to choose the most appropriate model” Slide 21 Simultaneous Quantity Setting Leader – follower model is necessarily asymmetric. Cournot Model Each firm has to forecast the other firm´s output choice. Given its forecasts, each firm then chooses a profit-maximizing output for itself. Each firm finds its beliefs about the other firm to be confirmed. Simultaneous Game Slide 22 Simultaneous Quantity Setting Firm 1decides to produce y1 units of output, and believes that firm will produced y2e Total output produced will be Y = y1 + y2e Output will yield a market price of p(Y) = p( y1 + y2e ) The profit-maximization problem of firm 1 is them max p(y1 + y2e ) y1 – c(y1) For any given belief about the output of firm 2 (y2e), there will be some optimal choice of output for firm 1 (y1). y1 = f1(y2e ) This reaction function gives one firm´s optimal choice as a function of its beliefs about the other firm´s choice. Slide 23 Simultaneous Quantity Setting Similarly, we can write:y2 = ƒ2(y1e )Which gives firm2´s optimal choice of output for a given expectation about firm 1´s output, y1e. • Each firm is choosing its output level assuming that the other firm´s output will be at y1eor y2e. • For arbitrary values of y1eand y2ethis won´t happen - in general firm 1´s optimal level of output, y1, will be different from what firm 2 expects the output to be, y1e. • Seek an output combination (y1*, y2*) • Optimal output level for firm1 (assuming firm 2 produces y2*) isy1* • Optimal output level for firm2 (assuming firm 1 produces y1*) isy2* y1* = ƒ1(y2* ) y2* = ƒ2(y1* ) So the output choices (y1*, y2*) satisfy Slide 24 Reaction curve for firm 1 Cournot Equilibrium Reaction curve for firm 2 Cournot Equilibrium • Each firm is maximizing its profits, given its beliefs about the other firm´s output choice. • The beliefs that optimally chooses to produce the amount of output that the other firm expects it to produce are confirmed in equilibrium. • In a Cournot equilibrium neither firm will find it profitable to change its output once it discovers the choice actually made by the other firm. Figure - Cournot Equilibrium Is the point at which the reaction curves cross. Slide 25 Adjustment to Equilibrium At time t the firm are producing outputs (y1t, y2t), not necessarily equilibrium outputs. 
If firm 1 expects that firm 2 is going to continue to keep its output at y2t, then next period firm 1 would want to choose the profit–maximizing output given that expectation, namely ƒ1(y2t). Grafico livro pag 480 fig27.4 Firm 2 can reason the same way, so firm 2 choice next period will be: Firm 1 choice in period t +1 will be: These two equations describe how each firm adjusts its output in the face of the other firm´s choice Slide 26 Adjustment to Equilibrium The Cournot equilibrium is a stable equilibrium when the adjustment process converges to the Cournot equilibrium. Some difficulties of of this adjustment process: Each firm is assuming that the other´s output will be fixed from one period to the next, but as it turns out, both firms keep changing their output. Only in equilibrium is one firm´s output expectation about the other firm´s output choice actually satisfied. Slide 27 Many firms in Cournot Equilibrium More than two firms involved in a Cournot equilibrium Each firm has an expectation about the output choices of the other firms in the industry and seek to describe the equilibrium output. Suppose that are n firms: Total industry output The marginal revenue equals marginal cost condition for firm is Using the definition of elasticity of aggregate demand curve and letting si=yi/Y be firm i´s share of total market output Like the expression for the monopolist, except (si) Slide 28 Many firms in Cournot Equilibrium Think of Є(Y)/sias being the elasticity of the demand curve facing the firm: < market share of the firm > elastic the demand curve it faces If its market share is 1 Demand curve facing the firm is the market demand curve Condition just reduces to that of the monopolist. If its market is a very small part of a large market market share is effectively 0 Demand curve facing the firm is effectively flat condition reduces to that of the pure competitor: price equals marginal cost. If there are a large number of firms, then each firm´s influence on the market price is negligible, and the Cournot equilibrium is effectively the same as pure competition. Slide 29 Simultaneous Price Setting Cournot Model described firms were choosing their quantities and letting the market determine the price. Firms setting their prices and letting the market determine the quantity sold Bertrand competition. What does a Bertrand equilibrium look like? Assuming that firms are selling identical products Bertrand equilibrium is the competitive equilibrium, where price equals marginal cots. Consider that both firms are selling output at some price > marginal cost. Cutting its price by an arbitrarily small amount firm 1 can steal all of the customers from firm 2. Firm 2 can reason the same way! Any price higher than marginal cost cannot be an equilibrium The only equilibrium is the competitive equilibrium Slide 30 CollusionKey Findings • Companies collude so as to jointly set the price or quantity of a certain good. This way it is possible to maximize total industry profits. • The output produced by multiple firms that are colluding will be equal to the one produced by one firm that has a monopoly. • When firms get together and attempt to set prices and outputs so as to maximize total industry profits, they are known as a Cartel. • A cartel will typically be unstable in the sense that each firm will be tempted to sell more than its agreed upon output if it believes that the other firms will stick to what was agreed. 
• EXAMPLES OF COLLUSION: □ De Beers □ Organization of the Petroleum Exporting Countries (OPEC) □ Port Wine Institute (IVP) Slide 31 CollusionProfit-maximization when colluding maxy1, y2 p(y1, y2)[y1+y2] – c1(y1) – c2(y2) • The optimality quantity is given by p(y1*, y2*) + (∆p/∆Y)[y1* +y2* ] = MC1 (y1* ) p(y1*, y2*) + (∆p/∆Y)[y1* +y2* ] = MC2 (y2* ) • From there we may conclude that in equilibrium MC1 (y1* ) = MC2 (y2* ) If one firm has a cost advantage, so that it’s marginal cost curve always lies bellow that of the other firm, then it will necessarily produce more output in the equilibrium in the cartel solution. Slide 32 CollusionIncentives not to respect the deal (1) • The profit-maximizing point is D but if firm 1 assumes that firm 2 will stick with the deal, it will have incentives to produce G because it will produce more and will therefore produce more • Worse, if firm 1 thinks that firm 2 isn’t going to stick with the deal, it will want to start to produce G as fast as possible so as to gain the maximum profits it can. Slide 33 CollusionIncentives not to respect the deal (2) ∆π1/ ∆y1 = p( y*1 + y*2) + (∆p/ ∆y) Y*1 – MC1(y*1) p( y*1 , y*2 ) + (∆p/∆y) y*1 + (∆p/∆y) y*2 – MC1 (y*1 ) = 0 ∆π1/ ∆y1 =p( y*1 , y*2 ) + (∆p/∆y) y*1 – MC1 (y*1 ) = - (∆p/∆y) y*2 ∆π1 / ∆y1 > 0 • So that are always incentives for firm 1 individually to cheat firm 2 if it thinks that firm 2 will stick to the agreement. Slide 34 CollusionGame Theory – Brief example • Each prisoner is in a different cell and may assume that the other one is not going to talk. • The dominant strategy in this example is to confess. • But if both stay silent they will only get 1 year each. Slide 35 CollusionExample of failed collusion • OPEC has tried and succeeded to maintain a cartel for the oil market. However they had some drawbacks, like in 1986 when Saudi Arabia dropped the price from $28 to $10 for barrel. Slide 36 CollusionHow to maintain a Cartel? (1) • Monitor others participants behavior □ “Beat any price” strategy • Threat participants to respect the deal • “If you stay at the production level that maximizes joint industry profits, fine. But if i discover that you are cheating by producing more than this amount, i will punish you by producing the Cournot level of output forever.” Slide 37 CollusionHow to maintain a Cartel? (2) • Punish disrespects to the deal □ tit-for-tat - “I’ll do this time what you did last time” Πm – monopoly profits Πd – one time profit Πc – Cournout profit • Present value of cartel behaviour - Πm + (Πm/r) • Present value of cheating - Πd + (Πc/r) • Πd > Πm > Πc r < (Πm - Πc) / (Πd - Πm) As long as the prospect of future punishment is high enough, it will pay the firms to stick to their quotas. • Regulation □ Government Regulation □ Examples ☆ Instituto do Vinho do Porto Slide 38 Resume 1 • Homogeneous or different products • Strategic interactions (the decisions of one firm influence the results of the others) • It is not possible to describe the oligopoly behavior in just one model • The oligopoly behavior depends on the characteristics of the market Slide 39 Resume 2 • Questions: • -What if they change the price? • What if they change amount produced? • What if they introduced a new product? 
Sequential, Simultaneous or Cooperative game Example: Television broadcasting in Portugal RTP, SIC, TVI Slide 40 Resume 3 Stackelberg Model – Quantity Leadership • A firm (leader) decides its own production before the others – dominant firm or natural leader • The others firms (followers) decide after they know the leader’s decision • When the leader chooses an output, it will take into account how the follower will respond Example: Computer firm, IBM Slide 41 Resume 4 Price Leadership • A firm (leader) sets the price and the others choose how much they will produce at that price • When the leader chooses a price, it will take into account how the follower will respond Example: McDonalds Slide 42 Resume 5 Cournot Model – Simultaneous Quantity Setting • It is supposed that both firms make their output choices simultaneously and the expectations about the other firm’s choices are confirmed • Each firm believes that a change in its output will not lead to followers to change their productions • Each firm has a small market share, that implies that price will be very close to the marginal price – nearly competitive Example: Banking business Slide 43 Resume 6 Bertrand Competition – Simultaneous Price Setting • Each firm chooses its price based on that it expects the price of the other firms will be Example: PumpGas Slide 44 Resume 7 • Group of firms that jointly collude to set prices and quantities that maximize the sum of their profits • Behave like a single monopolist Problem: temptation to cheat to make higher profits (may break the cartel) Firms need a way to detect and punish cheating Punish Strategies (clients, governments…) Example: Cartel Slide 45 Comparing Oligopoly models... • Evidences... • The Firm 1 profit in the Stackelberg Model. • From Stackelberg Model to Bertrand Model. • In the model Stackelberg the total output is bigger than in Cournot model; • In Shared Monopoly model: smallest output and highest price; • In Bertrand model: highest output and smallest price; Slide 46 The Demand Curve is: Marginal cost for Leader and Follower: ExerciseStackelberg model 1/2 • Questions: • What will be the equilibrium price for both? • What will be the equilibrium quantity for both? Slide 47 The Marginal Revenue Curve is: Marginal cost: The Firm 2 Reaction Function: Replacing in the Firms’1 demand function: The Marginal Revenue for firm 1 is: ExerciseStackelberg model 2/2 • MR2=P(Q1+Q2)+(∆P/∆Q2)*Q2 • MR2 = 10-Q1-2Q2 • R2(Q1) = Q2* = • = 4-(Q1/2) • P1=10 – Q1– 4 + (Q1/2) = • = 6 - (Q1/2) • MR1 = 6-Q1 • And • MR1 = MC = 2 • Anwsers: • What will be the equilibrium price for both: = 4 • What will be the equilibrium quantity for both? Q1 = 4; Q2=2 Slide 48 Intermediate Microeconomics- Varian, H. Price Theory and Apllications- Landsburg, S.
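The exercise on slides 46 and 47 states the answers without collecting the algebra in one place. Filling in the steps under the slides' own assumptions (inverse demand P = 10 - Q1 - Q2 and MC1 = MC2 = 2):

Follower: maximize (10 - Q1 - Q2)Q2 - 2Q2, so 10 - Q1 - 2Q2 = 2, giving the reaction function Q2*(Q1) = 4 - Q1/2.
Leader: residual demand P1 = 10 - Q1 - Q2*(Q1) = 6 - Q1/2, so MR1 = 6 - Q1; setting MR1 = MC1 = 2 gives Q1* = 4.
Hence Q2* = 4 - 4/2 = 2, total output Q* = 6, and the equilibrium price P* = 10 - 6 = 4, matching the answers on slide 47.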
{"url":"http://www.slideserve.com/elga/microeconomics-oligopoly","timestamp":"2014-04-19T09:35:35Z","content_type":null,"content_length":"94413","record_id":"<urn:uuid:b3e3d39e-6972-4cf4-8ee2-520256accd54>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
-cgi -package Maybe -cgi -package The Maybe type encapsulates an optional value. A value of type Maybe a either contains a value of type a (represented as Just a), or it is empty (represented as Nothing). Using Maybe is a good way to deal with errors or exceptional cases without resorting to drastic measures such as error. The Maybe type is also a monad. It is a simple kind of error monad, error monad can be built using the Data.Either.Either type. The Maybe type, and associated operations. The MaybeT monad transformer adds the ability to fail to a monad. A sequence of actions succeeds, producing a value, only if all the actions in the sequence are successful. If one fails, the rest of the sequence is skipped and the composite action fails. For a variant allowing a range of error values, see Control.Monad.Trans.Error. The parameterizable maybe monad, obtained by composing an arbitrary monad with the Maybe monad. Computations are actions that may produce a value or fail. The return function yields a successful computation, while >>= sequences two subcomputations, failing on the first error. The maybe function takes a default value, a function, and a Maybe value. If the Maybe value is Nothing, the function returns the default value. Otherwise, it applies the function to the value inside the Just and returns the result. Allocate storage and marshal a storable value wrapped into a Maybe * the nullPtr is used to represent Nothing Convert a peek combinator into a one returning Nothing if applied to a nullPtr Converts a withXXX combinator into one marshalling a value wrapped into a Maybe, using nullPtr to represent Nothing. The fromMaybe function takes a default value and and Maybe value. If the Maybe is Nothing, it returns the default values; otherwise, it returns the value contained in the Maybe. The mapMaybe function is a version of map which can throw out elements. In particular, the functional argument returns something of type Maybe b. If this is Nothing, no element is added on to the result list. If it just Just b, then b is included in the result list. O(n). Map values and collect the Just results. > let f x = if x == "a" then Just "new a" else Nothing > mapMaybe f (fromList [(5,"a"), (3,"b")]) == singleton 5 "new a" O(n). Map values and collect the Just results. > let f x = if x == "a" then Just "new a" else Nothing > mapMaybe f (fromList [(5,"a"), (3,"b")]) == singleton 5 "new a" O(n). Map keys/values and collect the Just results. > let f k _ = if k < 5 then Just ("key (:) " ++ (show k)) else Nothing > mapMaybeWithKey f (fromList [(5,"a"), (3,"b")]) == singleton 3 "key (:) 3" O(n). Map keys/values and collect the Just results. > let f k _ = if k < 5 then Just ("key (:) " ++ (show k)) else Nothing > mapMaybeWithKey f (fromList [(5,"a"), (3,"b")]) == singleton 3 "key (:) 3" optionMaybe p tries to apply parser p. If p fails without consuming input, it return Nothing, otherwise it returns Just the value returned by p. Show more results
{"url":"http://www.haskell.org/hoogle/?hoogle=Maybe+-cgi+-package","timestamp":"2014-04-19T07:41:21Z","content_type":null,"content_length":"24778","record_id":"<urn:uuid:712bb68c-1afd-4f86-b9c0-698c96fe871f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
column space for this vector space

Problem is find a basis and its dimension. Let W={(a,b,c): a-3b+c=0, b-2c=0, 2b-c=0}. I know that W = Nul A and it can be set up as

[1 -3 1][a]=[0]
[0 1 -2][b]=[0]
[0 2 -1][c]=[0]

When row reduced it turns out to be

[1 0 0][a]=[0]
[0 1 0][b]=[0]
[0 0 1][c]=[0]

So Nul A = {0} and there exists no basis for it, so the dimension is 0. But what about a basis for the column space? Wouldn't its dimension be 3 and a basis be {(1 0 0),(-3 1 2),(0 2 -1)}? I can see it might be a basis since the only vector in W is (0 0 0).

Re: column space for this vector space

A basis for a vector space is a set of linearly independent vectors which span the vector space. Now, the vector space W only contains $0_w$, so it does not have a basis and $Dim(W) = 0$.
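A quick way to verify both claims in the thread (that Nul A = {0}, hence dim W = 0, and that the columns of A are linearly independent, hence Col A is all of R^3) is a short symbolic computation, for example with SymPy:

from sympy import Matrix

A = Matrix([[1, -3, 1],
            [0,  1, -2],
            [0,  2, -1]])

print(A.rref()[0])      # identity matrix: the homogeneous system has only the trivial solution
print(A.nullspace())    # []  -> dim Nul A = 0, so W = {0}
print(A.columnspace())  # three independent columns -> dim Col A = 3
print(A.rank())         # 3

Here columnspace() returns the pivot columns of A themselves, which is the direct way to read off a basis for the column space.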
{"url":"http://mathhelpforum.com/advanced-algebra/210058-column-space-vector-space.html","timestamp":"2014-04-19T02:21:30Z","content_type":null,"content_length":"32544","record_id":"<urn:uuid:9f0a97df-999b-46ef-bcf2-27a6f2daa090>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
how slow can the dimension of a product set grow?

Let us define the following "dimension" of a Borel subset $B \subset \mathbb{R}^k$: $\dim(B) = \min\{n \in \mathbb{N}: \exists K \subset \mathbb{R}^n, ~{\rm s.t.} ~ B \sim K\}$, where $\sim$ denotes "homeomorphic to". Obviously, $0 \leq \dim(B) \leq k$. I have three questions: Given a $B \subset \mathbb{R}$,

1) As $k \to \infty$, how slow can $\dim(B^k)$ grow? Can we choose some $B$ such that $\dim(B^k) = o(k)$ or even $O(1)$?

2) Will it make a difference if we drop the Borel measurability of $B$ or add the condition that $B$ has positive Lebesgue measure?

3) Does this dimension-like notion have a name? The dimension concepts I usually see are Lebesgue's covering dimension, inductive dimension, Hausdorff dimension, Minkowski dimension, etc. I do not think the quantity defined above coincides with any of these, but of course bounds exist.

Tags: dimension-theory, gn.general-topology, ca.analysis-and-odes

Comment: So the circle S^1 has dimension 2 in your sense? If so, I don't know if this has a name but I certainly wouldn't recommend "dimension". – Alon Amit Mar 19 '10 at 0:33
Comment: It probably makes more sense to define this dimension locally to avoid the $S^1$ issue? – François G. Dorais♦ Mar 19 '10 at 1:18

3 Answers

Of course the point has the desired property, but I guess this is not the space you are looking for. As François said, $C=\{0;1\}^\omega$ and so we get $C^2\cong C$.
Comment: The fact that $C^2 \cong C$ is easy to see if you think of $C$ as $\{0,1\}^\omega$. – François G. Dorais♦ Mar 19 '10 at 1:13

As for 1.) $"dim"(\mathbb{Z}^k)=1$ for all $k\in\mathbb{N}$, because all $\mathbb{Z}^k$ are discrete countable and therefore homeomorphic to each other.

The Cantor set satisfies $\dim(C^k) = 1$ for all $k$. You can easily find homeomorphic copies of the Cantor set with positive measure (e.g. at the $n$-th step remove every middle $3^{-n}$-th instead of every middle third).
Comment: I see. So extending this argument, we can find a $B \subset \mathbb{R}^k$ with arbitrarily large Lebesgue measure and homeomorphic to the standard Cantor set. Is this also true for any other Borel measure? – mr.gondolier Mar 19 '10 at 2:40
Comment: If you have an inner regular measure, then every set of positive measure contains a compact set of positive measure. If the compact set is not already totally disconnected, then you can try to punch tiny open holes out until it is totally disconnected while removing only epsilon measure... – François G. Dorais♦ Mar 19 '10 at 3:06
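The positive-measure variant mentioned in the last answer can be made quantitative under one natural reading of the construction: if at the $n$-th step each remaining interval loses a middle piece occupying a fraction $3^{-n}$ of its length, the resulting compact set $K$ has Lebesgue measure $\lambda(K)=\prod_{n\ge 1}(1-3^{-n})>0$, since $\sum_{n\ge 1}3^{-n}=\tfrac12<\infty$, while $K$ is still compact, perfect and totally disconnected, hence homeomorphic to the standard Cantor set.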
{"url":"http://mathoverflow.net/questions/18686/how-slow-can-the-dimension-of-a-product-set-grow?answertab=oldest","timestamp":"2014-04-19T15:29:44Z","content_type":null,"content_length":"63265","record_id":"<urn:uuid:769ac810-cd1b-4f36-9167-a42b079cd22a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Information on LaTeX LaTeX is a typesetting program for mathematical documents. It is extremely powerful, and not too hard to learn to use. The easiest way is to modify someone else's file, so I will provide some below. There are also many resources online to help. MiKTeX and WinEdt are available in the Ford computer labs, so you do not need a personal copy. However, if you want one, you will need both MiKTeX and a LaTeX editor. If you have a Mac, you can get a complete LaTeX setup (for free) from U of O. Just follow the directions. For a PC, get MiKTeX (Select the Phoenix download site.) Get a personal copy of WinEdt for $30, or download a free LaTeX editor. I recommend WinEdt; it is very powerful and easy to use, and it's the one I know best. Introduction to LaTeX. (PDF file) Here are some LaTeX files to get you started. MATH 456 Home
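Since the sample files themselves are only linked from this page, here is the kind of minimal file the page is describing; it is a generic illustration, not one of the course's actual handouts:

\documentclass[11pt]{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{theorem}{Theorem}

\begin{document}

\title{Sample Homework}
\author{A. Student}
\maketitle

\begin{theorem}
For every integer $n \ge 1$, $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$.
\end{theorem}

\begin{proof}
Induct on $n$: the case $n = 1$ is clear, and
\[
  \sum_{k=1}^{n+1} k = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}.
\]
\end{proof}

\end{document}

Compiling a small file like this with MiKTeX (or the Mac setup mentioned above) from WinEdt or any other LaTeX editor produces a PDF, and modifying such a file is usually the quickest way to pick up the basic commands.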
{"url":"http://www.willamette.edu/~cstarr/math456/456latex.htm","timestamp":"2014-04-18T23:40:54Z","content_type":null,"content_length":"3137","record_id":"<urn:uuid:2ad1d996-48d5-40ff-9d5b-8f5dd5bcb182>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
The Eternal Universe recently discussed how it may be that mathematical models that describe our universe may or may not be what is actually going on but: it's okay since these models have predictive power. For example, take the famous path integral. Remember the double slit experiment shown in the picture below. If you ask: what is the probability that an elementary particle released at point s, it ends up at point o? It turns out, being non-technical here, to get the right answer you have to assume the particle to go through holes together. Now, run the same experiment where now you have two screens with multiple slits.( See picture below) Again, to get the right answer, you need to assume the particle, in some sense, travels from s to o in every path possible. No, add in infinite number of screens and drill into them an infinite number of holes. What do you get? Free space! Yet surprisingly, you get the right quantum physics if you assume the particle still has to travel through all the infinite holes of the infinite screens: Ie, you assume the particle moves from point s to o taking every path possible in free space. Now, though I didn't state all the details 100% technically correct, the basic intuitive idea of what is going on is still correct. The math we use is suggests the particles take, in some sense, every possible path. Now, is this really what is going on, or is this just a model? I'm guessing it is just a model. However, it's okay since it has significant predictive power. Nevertheless, the question is always going to pester me: why do such bizarre models give such amazing answers ?!?! Nature is very interesting indeed. (These images were taken from Quantum Field Theory In A Nutshell By A. Zee. This book has an amazing section on path integrals.) 12 comments: 1. My research group has a curriculum published by Wiley called Tutorials in Introductory Physics, and we make heavy use of models. In the tutorial entitled "A Model for Single-Slit Diffraction," for example, we use ideas developed previously for multiple-slit interference to build a model in which a single slit is imagined to be a huge number of tiny slits (essentially with the distance between slit centers equal to the tiny slit widths). It's absurd, but it gives the right answers. How wonderful! 2. Yeah, stuff like this is really cool. I had a professor like to always say when he discussed things like this: "Nature is weird". And so it is, and yet so interesting. 3. And thankfully so. If nature weren't so weird, we'd all be out of business. 4. Your right Bill. I remember we had the Nobel Prize winning physicist Leon Lederman visit our quantum mechanics class (Weren't you there Bill?) at BYU who said at one point "It would be great if the LHC discovers something crazy. Well all have jobs for sure." (paraphrasing) And so it is. :) 5. Thought the slit concept was more than just a model. Don't the interference and entanglement experiments prove the model correctly reflects nature? 6. " Don't the interference and entanglement experiments prove the model correctly reflects nature?" Yes, that's my point. The model correctly 100% predicts nature. But the same model mathematically suggests when a particle moves from point A to be it travels in all possible paths. Does this really happen? But, since it's measurable predictions 100% fit nature, the model is none-the-less good. 7. Anyways, I am being *way* too philosophical here. 
I'm only trying to illustrate how it is hard to know if the model is right because this is how nature really is or if it is right just because it is mathematically equivalent to what is actually going on. Either way its predictions are always observed to be correct. 8. Well I like to think that particles do take every pathway. It gives the Universe a higher Star Trek correlation factor. =:) btw, a good pop sci book on the topic is: 'Entanglement' by Amir D. Aczel I recommend all of his books. 9. Stan, thanks for the book link. I'm interested in learning more about entanglement. And you know, the universe may really be this way. 10. In “Reasoning and the Logic of Things”, the Cambridge Conferences, lectures of 1898 by Charles Sanders Peirce ; it is possible to read : “…It seems that we are reduced to this alternative : either we have to do a very large generalization about the character of the ways of nature ; it can at least say to us that it is better to try such a theory of molecules and ether rather than such an other ; or…” “…About these explanations proposed by the physicists for the irreversible phenomenons by means of the doctrine of random applied to some billions of molecules, I accept them fully, as being one of the most beautiful successes of science.” I think that there is a connection. 11. Cartesian, interesting quote. 12. I did translate so I hope it is correct.
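For reference, the "every possible path" statement in the post is the usual Feynman sum-over-paths rule (the textbook formula, as in the Zee chapter cited above, not something derived in the post): the amplitude to go from s to o is A(s → o) = Σ over paths x(t) of exp(i S[x(t)] / ħ), equivalently the path integral ∫ Dx(t) exp(i S[x(t)] / ħ), and the observed probability is |A(s → o)|^2. Every path contributes a phase; nearby paths reinforce or cancel, which is what produces the interference pattern in the double-slit setup described at the top of the post.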
{"url":"http://theeternaluniverse.blogspot.com/2010/03/example-of-crazy-model-path-integral.html","timestamp":"2014-04-20T15:50:07Z","content_type":null,"content_length":"103950","record_id":"<urn:uuid:77d92f89-e102-4a77-83fd-a10ae3c44cbb>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Lomita Math Tutor Find a Lomita Math Tutor ...From my experience, I have found many creative ways of explaining common problems. I love getting to the point when the student finally understands the concept and tells me that they want to finish the problem on their own. I look forward to helping you with your academic needs. 14 Subjects: including SAT math, algebra 1, algebra 2, calculus ...Prior to becoming a teacher I was an Electrical Engineer and a graduate from Carnegie Mellon University in Pittsburgh, PA. I worked for a 8+ years as a teacher in all subjects of Math in High Schools. I have also tutored students during this time frame in Math (pre-Algebra, Algebra 1 and 2, Geo... 11 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...As a result, I understand youth and their parents very well, and have solidified effective teaching methods. Students quickly open up to me and enjoy my company, not only as a teacher and mentor, but also a friend. I have taught English/ESL, math, science, history, Spanish, Korean, SAT, and ACT, and also guided high school students through the college application process. 18 Subjects: including algebra 1, elementary (k-6th), vocabulary, grammar ...Yoga classes can be given in your home or at a local park or beach.I have tutored at the elementary level for over 8 years. I have my BA in Liberal Studies/ multiple subjects from Long beach state university and teaching credential and specialize in reading and writing for young children. I tu... 23 Subjects: including algebra 1, reading, biology, prealgebra ...I'm available both online and in-person. I have zillions of references, ranging from students and parents to high school teachers, college counselors, and principals! You're pretty sure you want to contact me, right? 26 Subjects: including algebra 1, algebra 2, ACT Math, grammar
{"url":"http://www.purplemath.com/lomita_ca_math_tutors.php","timestamp":"2014-04-18T19:16:33Z","content_type":null,"content_length":"23789","record_id":"<urn:uuid:70d22ae8-e9e5-433d-b632-9f85419a52af>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Press Release 07-029

A Mathematical Solution for Another Dimension
New tool could drive breakthroughs in several disciplines

March 19, 2007

Ever since 1887, when Norwegian mathematician Sophus Lie discovered the mathematical group called E8, researchers have been trying to understand the extraordinarily complex object described by a numerical matrix of more than 400,000 rows and columns. Now, an international team of experts using powerful computers and programming techniques has mapped E8--a feat numerically akin to the mapping of the human genome--allowing for breakthroughs in a wide range of problems in geometry, number theory and the physics of string theory.

"Although mapping the human genome was of fundamental importance in biology, it doesn't instantly give you a miracle drug or a cure for cancer," said mathematician Jeffrey Adams, project leader and mathematics professor at the University of Maryland. "This research is similar: it is critical basic research, but its implications may not become known for many years."

Team member David Vogan, a professor of mathematics at the Massachusetts Institute of Technology (MIT), presented the findings today at MIT.

The effort to map E8 is part of a larger project to map out all of the Lie groups--mathematical descriptions of symmetry for continuous objects like cones, spheres and their higher-dimensional counterparts. Many of the groups are well understood; E8 is the most complex. The project is funded by the National Science Foundation (NSF) through the American Institute of Mathematics.

It is fairly easy to understand the symmetry of a square, for example. The group has only two components, the mirror images across the diagonals and the mirror images that result when the square is cut in half midway through any of its sides. The symmetries form a group with only those 2 degrees of freedom, or dimensions, as members. A continuous symmetrical object like a sphere is 2-dimensional on its surface, for it takes only two coordinates (latitude and longitude on the Earth) to define a location. But in space, it can be rotated about three axes (an x-axis, y-axis and z-axis), so the symmetry group has three dimensions. In that context, E8 strains the imagination. The symmetries represent a 57-dimensional solid (it would take 57 coordinates to define a location), and the group of symmetries has a whopping 248 dimensions.

Because of its size and complexity, the E8 calculation ultimately took about 77 hours on the supercomputer Sage and created a file 60 gigabytes in size. For comparison, the human genome is less than a gigabyte in size. In fact, if written out on paper in a small font, the E8 answer would cover an area the size of Manhattan. While even consumer hard drives can store that much data, the computer had to have continuous access to tens of gigabytes of data in its random access memory (the RAM in a personal computer), something far beyond that of home computers and unavailable in any computer until recently.

The computation was sophisticated and demanded experts with a range of experiences who could develop both new mathematical techniques and new programming methods. Yet despite numerous computer crashes, both for hardware and software problems, at 9 a.m. on Jan. 8, 2007, the calculation of E8 was complete.
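To make the press release's notion of a "group of symmetries" concrete, here is a small illustrative sketch (my own aside, not part of the Atlas project's software, and vastly simpler than E8). Counting all rotations as well as the two kinds of mirror reflections described above, the square has eight symmetries in total; the snippet lists them as 2×2 matrices and checks that composing any two of them gives another one, which is what makes the collection a group. The continuous Lie groups discussed in the release differ in that their "dimension" counts independent continuous parameters (3 for rotations of a sphere, 248 for E8) rather than a number of elements.

```python
# Illustrative example: the eight symmetries of a square (the dihedral
# group D4), represented as 2x2 integer matrices acting on the plane.
import numpy as np
from itertools import product

def rotation(k):
    """Rotation by k * 90 degrees as a 2x2 matrix."""
    c, s = [(1, 0), (0, 1), (-1, 0), (0, -1)][k % 4]
    return np.array([[c, -s], [s, c]])

reflection = np.array([[1, 0], [0, -1]])  # mirror image across the x-axis

# Four rotations, plus the four reflections obtained by composing a
# rotation with the basic mirror reflection.
elements = [rotation(k) for k in range(4)] + \
           [rotation(k) @ reflection for k in range(4)]

def in_group(m):
    return any(np.array_equal(m, g) for g in elements)

# Closure: the composition of any two symmetries is again a symmetry.
closed = all(in_group(a @ b) for a, b in product(elements, repeat=2))
print(len(elements), "symmetries; closed under composition:", closed)
# -> 8 symmetries; closed under composition: True
```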
The Atlas team consists of 18 researchers from the United States and Europe. The core group consists of Jeffrey Adams (University of Maryland), Dan Barbasch (Cornell), John Stembridge (University of Michigan), Peter Trapa (University of Utah), Marc van Leeuwen (Poitiers), David Vogan (MIT), and (until his death in 2006) Fokko du Cloux (Lyon).

For details on E8, visit http://aimath.org/E8/.

The Atlas of Lie Groups Project

The E8 calculation is part of an ambitious project sponsored by AIM and the National Science Foundation (NSF), known as the Atlas of Lie Groups and Representations. The goal of the Atlas project is to determine the unitary representations of all the Lie groups (E8 is the largest of the exceptional Lie groups). This is one of the most important unsolved problems of mathematics. The E8 calculation is a major step, and suggests that the Atlas team is well on the way to solving this problem. The Atlas project is funded by the NSF through the American Institute of Mathematics.

The American Institute of Mathematics

The American Institute of Mathematics, a nonprofit organization, was founded in 1994 by Silicon Valley businessmen John Fry and Steve Sorenson, longtime supporters of mathematical research. AIM is one of seven mathematics institutes supported by the NSF. AIM's goals are to expand the frontiers of mathematical knowledge through focused research projects, by sponsoring conferences, and by helping to develop the leaders of tomorrow. In addition, AIM is interested in helping preserve the history of mathematics through the acquisition and preservation of rare mathematical books and documents and in making these materials available to scholars of mathematical history. AIM currently resides in temporary facilities in Palo Alto, California, the former Fry's Electronics headquarters. A new facility is being constructed in Morgan Hill, California. For more information, visit www.aimath.org.

This research was supported by the following three NSF awards:
DMS 0554278 FRG: Collaborative Research: Atlas of Lie Groups and Representations
DMS 0532393 Atlas of Lie Groups
DMS 0532088 Atlas of Lie Groups and Representations

Media Contacts
Joshua A. Chamot, NSF, (703) 292-7730, jchamot@nsf.gov
Brian Conrey, American Institute of Mathematics, (650) 845-2071, conrey@aimath.org

Program Contacts
Joe W. Jenkins, NSF, (703) 292-4870, jjenkins@nsf.gov

Principal Investigators
Jeffrey Adams, University of Maryland, (301) 405-5493, jda@math.umd.edu

Related Websites
David Vogan's March 19 presentation: http://www-math.mit.edu/~dav/E8TALK.pdf

The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2014, its budget is $7.2 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.
{"url":"http://nsf.gov/news/news_summ.jsp?cntn_id=108482&org=DMS","timestamp":"2014-04-20T04:40:07Z","content_type":null,"content_length":"66973","record_id":"<urn:uuid:108e0d8e-8fe2-477d-8061-49b918d2e692>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Vineland Precalculus Tutor ...I received an 'A' in each linear algebra course that I took. I also passed the preliminary exam in linear algebra for the Ph.D. program at the University of Pittsburgh in 2009. I took the Praxis II Mathematics: Content Knowledge exam in May, 2013. 19 Subjects: including precalculus, calculus, geometry, GRE ...I have taken classes in math up to calculus at the college level, and love math and helping other people understand it. My chemistry knowledge is up until 2nd level inorganic and I've taken biochemistry. My favorite part of chemistry is redox, so if you need help understanding it and balancing equations, I'm your girl! 30 Subjects: including precalculus, chemistry, English, reading My name is John and I have taught science in NJ public high schools for the past 3 years. I have my certification in Physics and have achieved passing scores on the Praxis Exam in Chemistry and General Science as well. Courses I have taught include General Physical Science and Physics. 16 Subjects: including precalculus, chemistry, calculus, physics ...I currently conduct SAT math preparation classes (group sessions) as well as SAT math prep on a one-to-one basis. I am adept in teaching NJCCCS including: 1. Number and numerical operations; 2. Geometry and measurement; 22 Subjects: including precalculus, statistics, GED, ASVAB ...During undergrad, I took courses in mild/moderate disabilities; in graduate school, I took a psychopathology course focused on the autism spectrum and how ADD/ADHD is very closely related to autism. I have worked with the Gloucester County Special Services School District as a substitute teacher... 39 Subjects: including precalculus, reading, English, physics
{"url":"http://www.purplemath.com/Vineland_Precalculus_tutors.php","timestamp":"2014-04-20T19:29:53Z","content_type":null,"content_length":"24175","record_id":"<urn:uuid:0e767629-b4c1-4070-a3e7-986aebfcd2d4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Pingree Grove, IL Math Tutor Find a Pingree Grove, IL Math Tutor I am currently teaching at Mchenry County College as a part-time employee. For the past 6 years I am teaching all levels of mathematics courses. I am also doing one-on-one tutoring for high school students for the past three years. 12 Subjects: including algebra 2, statistics, differential equations, computer science ...I am committed to continuing to learn new things throughout life. I have just enrolled myself in a new program as a student. I have taught algebra 1 to three of my daughters while they were preparing for high school, and they all performed at greater than or equal to 94% on the New York State Regents exam. 17 Subjects: including algebra 1, algebra 2, biology, chemistry ...I was the top student in all my chemistry classes, so I have a clear understanding of all the concepts to do with chemistry. I will be able to help you or your child to understand these concepts, using real life examples, and will also be able to coach you to the techniques which will enable you to answer them every time. I tutor because I love working with children. 20 Subjects: including logic, geometry, trigonometry, precalculus ...While I could share stories of the experiences I have had, I am well aware that every student is different. Each individual needs a unique approach that allows him or her to demonstrate the potential inside. As I tutor, I know that my job is not only to teach, but also to motivate and encourage students. 17 Subjects: including prealgebra, ACT Math, English, reading ...I have a total of 15 years of experience in education. I have been certified in the state of Illinois since 2004 holding a type 9 secondary education degree (grades 6 to 12). My certificate is registered in Cook County as well as in the Winnebago/Boone County area of Illinois. I earned my undergraduate degree in Secondary Education/Social Science from St. 11 Subjects: including calculus, precalculus, algebra 1, algebra 2 Related Pingree Grove, IL Tutors Pingree Grove, IL Accounting Tutors Pingree Grove, IL ACT Tutors Pingree Grove, IL Algebra Tutors Pingree Grove, IL Algebra 2 Tutors Pingree Grove, IL Calculus Tutors Pingree Grove, IL Geometry Tutors Pingree Grove, IL Math Tutors Pingree Grove, IL Prealgebra Tutors Pingree Grove, IL Precalculus Tutors Pingree Grove, IL SAT Tutors Pingree Grove, IL SAT Math Tutors Pingree Grove, IL Science Tutors Pingree Grove, IL Statistics Tutors Pingree Grove, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Pingree_Grove_IL_Math_tutors.php","timestamp":"2014-04-18T06:05:52Z","content_type":null,"content_length":"24165","record_id":"<urn:uuid:4db3dbed-1d40-47ac-b50f-eca786db1800>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Aggregating VAR across portfolio and firm

Hi David,
If you're given 1-year VaRs of a firm for market risk, operational risk, and credit risk (they are uncorrelated to each other and all calculated at the same significance level), can you simply sum them to get the overall 1-year firm VaR? For a portfolio VaR (referring to Q#11 on page 285 of FRM curriculum Valuation and Risk Management), we use (Portfolio VaR)^2 = (Stock VaR)^2 + (Bond VaR)^2, assuming no correlation between bond and stock. Should we apply the same formula to calculate the firm VaR described above? With regard to the formula for portfolio VaR, if there is correlation between bond and stock, say 0.3, how can we incorporate this into the formula?

Hi sleepybird, (I don't understand your reference to page 285? you do not appear to reference the FRM handbook ...)
1. No! Summing the VaRs implicitly assumes they are PERFECTLY correlated. If you are explicitly told the risks are uncorrelated (ZERO correlation) then you can use FirmVaR^2 = MarketVaR^2 + CreditVaR^2 + OpRiskVaR^2, but this is less than summing. But the question must tell you "zero correlation" or "uncorrelated". For example, Basel implicitly assumes perfect correlation by adding the three charges.
2. The general form is SumVaR^2 = VaR1^2 + VaR2^2 + 2*VaR1*VaR2*correlation, such that:
if ZERO correlation: SumVaR^2 = VaR1^2 + VaR2^2, or
if PERFECT correlation: SumVaR^2 = VaR1^2 + VaR2^2 + 2*VaR1*VaR2 = (VaR1 + VaR2)^2 --> SumVaR = VaR1 + VaR2
(... but this applies only in the limiting case of mean-variance where VaRs are unrealistically normal)
If correlation = 0.3, then a good question should remind you that this holds only in the unrealistic case where the VaRs are normal, but then: SumVaR^2 = VaR1^2 + VaR2^2 + 2*VaR1*VaR2*0.3

David, thanks. I was referring to Q#11 of the sample exam questions at the end of the Valuation and Risk Models book.

Hi David,
This might be a stupid question, but why do we ignore the weights of the stocks and bonds when aggregating portfolio VaR in the formula above? Thanks.

Hi sleepybird,
We don't need to, but position ($) = Portfolio Value ($P) * Weight (%), so the weights are "embedded." From http://www.bionicturtle.com/forum/threads/portfolio-var.4846/
VaR(P$) = W(P$)*deviate*SQRT[w(a%)^2*sigma(a)^2 + w(b%)^2*sigma(b)^2 + 2*w(a%)*w(b%)*COV(a,b)],
VaR(P)^2 = W(P$)^2*deviate^2*[w(a%)^2*sigma(a)^2 + w(b%)^2*sigma(b)^2 + 2*w(a%)*w(b%)*COV(a,b)],
VaR(P)^2 = [W(P$)^2*deviate^2*w(a%)^2*sigma(a)^2] + [W(P$)^2*deviate^2*w(b%)^2*sigma(b)^2] + W(P$)^2*deviate^2*2*w(a%)*w(b%)*COV(a,b),
as W(P$)^2*deviate^2*w(a%)^2*sigma(a)^2 = [W(P$)*deviate*w(a%)*sigma(a)]^2, and W(P$)*w(a%) = w(a$):
VaR(P)^2 = VaR(a$)^2 + VaR(b$)^2 + W(P$)^2*deviate^2*2*w(a%)*w(b%)*COV(a,b),
VaR(P)^2 = VaR(a$)^2 + VaR(b$)^2 + [W(P$)*w(a%)*deviate] * [W(P$)*w(b%)*deviate] * 2*COV(a,b); as COV = sigma(a)*sigma(b)*correlation(a,b):
VaR(P)^2 = VaR(a$)^2 + VaR(b$)^2 + [W(P$)*w(a%)*deviate*sigma(a)] * [W(P$)*w(b%)*deviate*sigma(b)] * 2*correlation(a,b),
VaR(P)^2 = VaR(a$)^2 + VaR(b$)^2 + 2*VaR(a$)*VaR(b$)*correlation(a,b),
VaR(P$) = SQRT[VaR(a$)^2 + VaR(b$)^2 + 2*VaR(a$)*VaR(b$)*correlation(a,b)]
Thanks, David

Hi David,
Thanks. That makes sense, but I think only if we are aggregating the VaR in $ terms? There's a question in the 2010 GARP practice exam where we were given the volatilities of 5% and 12% for Bond A ($25M) and Bond B ($75M), respectively, with correlation of 0.25. We were then asked to calculate the gain from diversification for a VaR estimated at the 95% level for the next 10 days.
The answer key calculates the following:
Undiversified VAR: 1.645*5%*SQRT(10/250) + 1.645*12%*SQRT(10/250) = 3.723%
Here we're not multiplying by the portfolio value. Shouldn't we apply the weights here?
The answer key goes on to calculate below:
Diversified VAR: SQRT[(0.25)^2*(5%)^2 + (0.75)^2*(12%)^2 + 2(0.25)(0.75)(5%)(12%)] = 0.09308
Difference is 0.283 or $283,000.

Hi sleepybird,
(Why aren't yield volatilities multiplied by duration? Bond VaR wants yield volatility * modified duration, otherwise this is just a VaR of the yield volatilities, not the bond values? In any case ... we can assume these are price volatilities [sic])
As above, gets to the same place, I think?
Individual bond A VaR = 5%*sqrt(10/250) * $25 * 1.645 = $0.41125,
Individual bond B VaR = 12%*sqrt(10/250) * $75 * 1.645 = $2.96, then:
SQRT[$0.41125^2 + $2.96^2 + 2*$0.41125*$2.96*0.25] = 3.089; i.e., 3.372 - 3.089 = ~$283K

Hi David,
Sorry, this is still not very clear to me.
Diversified SumVaR^2 = VaR1^2 + VaR2^2 + 2*VaR1*VaR2*0.25 = 0.41125^2 + 2.96^2 + 2*0.41125*2.96*0.25 --> we get DIVERSIFIED VAR of $3.089 as you calculated above.
Then shouldn't undiversified SumVaR^2 = VaR1^2 + VaR2^2, i.e., dropping out the last term? In this case undiversified VAR^2 = 0.41125^2 + 2.96^2 --> we get UNDIVERSIFIED VAR of $2.989. Then the difference is $0.1.
Why did you subtract $3.089 from $3.372 rather than $2.989? I.e., why is the UNDIVERSIFIED VAR calculated as 1.645*5%*SQRT(10/250) + 1.645*12%*SQRT(10/250) = 3.3723%*$100 = 3.372 rather than SQRT(0.41125^2 + 2.96^2) = 2.989?
And sorry, a few corrections to my original post above:
1. Yes, those are yield volatilities
2. 1.645*5%*SQRT(10/250) + 1.645*12%*SQRT(10/250) = 3.3723%, not 3.723%
3. SQRT[(0.25)^2*(5%)^2 + (0.75)^2*(12%)^2 + 2(0.25)(0.75)(5%)(12%)(0.25)] = 0.093908 <-- missing the correlation 0.25, and final answer 0.093908 rather than 0.09308.

Hi sleepybird,
I am using millions. I think we agree that, if correlation = 0.25, diversified VaR = $3.089 million. But undiversified VaR assumes correlation = 1.0. Your $2.989 implicitly assumes ZERO correlation and is therefore a diversified VaR under (an unstated) assumption of zero correlation:
□ General form: VaR(P)^2 = VaR(A)^2 + VaR(B)^2 + 2*VaR(A)*VaR(B)*correlation.
□ If correlation = zero, VaR(P)^2 = VaR(A)^2 + VaR(B)^2 + 2*VaR(A)*VaR(B)*0 = VaR(A)^2 + VaR(B)^2 --> VaR(P) = SQRT[VaR(A)^2 + VaR(B)^2]. In this example, under zero correlation, diversified VaR = 2.989.
□ If correlation = 1.0 (i.e., "undiversified VaR"): VaR(P)^2 = VaR(A)^2 + VaR(B)^2 + 2*VaR(A)*VaR(B)*1.0 = [VaR(A) + VaR(B)]^2 --> VaR(P) = VaR(A) + VaR(B). In this example, undiversified VaR = $3.372 regardless of the correlation, b/c undiversified VaR does not give credit for imperfect correlation.
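To tie the numbers in this thread together, here is a small sketch (my own illustration, not taken from the GARP answer key) that reproduces the figures above, under the thread's simplifying assumption that the 5% and 12% figures can be treated as price volatilities:

```python
from math import sqrt

z = 1.645                           # 95% one-tailed normal deviate
scale = sqrt(10 / 250)              # scale annual volatility to a 10-day horizon
position = {"A": 25.0, "B": 75.0}   # $ millions
vol = {"A": 0.05, "B": 0.12}        # treated as price volatilities (see caveat above)
rho = 0.25

# Individual dollar VaRs
var = {k: z * vol[k] * scale * position[k] for k in position}

undiversified = var["A"] + var["B"]                 # implicitly assumes rho = 1
diversified = sqrt(var["A"]**2 + var["B"]**2
                   + 2 * var["A"] * var["B"] * rho)

print({k: round(v, 3) for k, v in var.items()})  # {'A': 0.411, 'B': 2.961}
print(round(undiversified, 3))                   # 3.372
print(round(diversified, 3))                     # 3.09
print(round(undiversified - diversified, 3))     # 0.283  (~$283K diversification gain)
```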
{"url":"https://www.bionicturtle.com/forum/threads/aggregating-var-across-portfolio-and-firm.5827/","timestamp":"2014-04-19T19:34:36Z","content_type":null,"content_length":"51401","record_id":"<urn:uuid:e4663cab-c3ff-4b07-949c-b5129b11257f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Entropy can drive the formation of complex quasicrystals Monte Carlo simulations of simple hard tetrahedron reveal that, under pressure … by Matt Ford - Dec 14, 2009 8:38 pm UTC From a thermodynamics perspective, all systems, even the universe itself, are all driven by two aspects of their state—entropy and energy. Any closed system will simultaneously tend towards a minimum energy or maximum entropy state—open systems behave differently, but the idea is similar. Under a variety of conditions, these two states represent opposites: a system that has obtained the maximum possible entropy is often a high energy state; conversely, the low energy state may be a nicely ordered system with little disorder. It can be tough to study what entropy alone can do. In real world systems, energetic contributions—such as changes in chemical bonds—often dominate, preventing us from observing the beauty of what disorder alone can accomplish. To overcome the shackles of reality, researchers have employed molecular thermodynamic simulations to study hard systems—those that have no attractive forces, but do exhibit repulsion due to the shapes of individual particles. This allows researchers to study a system where the only available energy state is zero, and the thermodynamic properties are driven by entropic considerations alone. Under certain conditions—high density or pressure—simple systems such as this tend not to arrive at a state of disorder as they attempt to maximize their entropy. Simulations of hard spheres (billiard balls) will show that, at low pressures, a disordered, random state is favored by thermodynamics. But, at high enough pressures, an ordered state emerges, resulting in a close packing arrangement of the spheres—the same shape that one gets when stacking cannon balls. Recent simulations, the results of which are presented in a recent edition of Nature, examined how hard tetrahedrons (four sided pyramids) behave when compressed. The result is not a simple shape (like the face-centered cubic packing that spheres exhibit). Instead, a complex quasicrystal emerges from the disorder—a result that shocked all involved. Where regular crystals are defined as solids made of regular repeating units (called unit cells) that exhibit long-range ordering. Quasicrystals exhibit long range order, but have no regular packing arrangement; there is no simple unit cell that ends up being repeated. In the new research, compressing tetrahedrons produced a structure that consisted of 12-fold quasicrystals, parallel stacks of rings around pentagonal dipyramids. The simulations found that the packing fraction for the structure (the ratio of space taken up by solids to the total space available) to be 0.8503, well above the previous packing record for tetrahedrons, 0.782, which was obtained earlier this summer. For those not overly enamored with the power of entropy to drive order, or how tightly you can pack your d4 die into your bag of holding, the implications of this work go deeper than you might expect. The tetrahedron is a shape that's commonly found in nature, and seeing how it can be ordered can lead to advances in understanding of complex systems. More generally, this is the further evidence that entropy alone can drive extremely complex phenomena (researchers have made entire academic careers out of understanding "simple" hard systems). It may help further our understanding of the emergent behaviors seen in natural systems. Nature, 2009. 
DOI: 10.1038/nature08641 Listing image by University of Michigan/Glotzer Lab 15 Reader Comments 1. Ars of AresArs Legatus Legioniset Subscriptor I don't understand, though, how you can increase pressure without introducing energy--even in simulation. Doesn't adding pressure cause heat to increase? Isn't heat energy? Please forgive the question of very basic phsyics concepts.. 2. xwred1Ars Praefectus Is the description of entropy at the beginning wrong? I thought low entropy meant more energy -- at least in the sense of energy we care about (like nice orderly oil molecules vs random heat coming off an engine) I could be mistaken, though. However, these two sentences sound contradictory to me: Any closed system will simultaneously tend towards a minimum energy or maximum entropy state—open systems behave differently, but the idea is similar. Under a variety of conditions, these two states represent opposites: a system that has obtained the maximum possible entropy is often a high energy state; conversely, the low energy state may be a nicely ordered system with little 3. PenforhireArs Praefectus I have often wondered about the meanings of entropy and order because of a simple thought experiment. Maybe someone more erudite here could enlighten me? Take a string (or one dimensional line)of arbitrary length composed of binary numbers. Assume half the digits are 1's and the other half are 0's. According to most meanings of order, what is the entropy comparison between two different such strings as follows -- One string has all the 1's grouped together on the left and all the 0's grouped on the right. The other string has a perfectly alternating series of "101010..." Isn't the second string the lowest energy configuration, in the conventional meaning? Yet it is just as "ordered" (meaning predictable) as the first string. So is there no connection between entropy and order? 4. GlaucusArs Tribunus Militum Originally posted by Penforhire: I have often wondered about the meanings of entropy and order because of a simple thought experiment. Maybe someone more erudite here could enlighten me? Take a string (or one dimensional line)of arbitrary length composed of binary numbers. Assume half the digits are 1's and the other half are 0's. According to most meanings of order, what is the entropy comparison between two different such strings as follows -- One string has all the 1's grouped together on the left and all the 0's grouped on the right. The other string has a perfectly alternating series of "101010..." Isn't the second string the lowest energy configuration, in the conventional meaning? Yet it is just as "ordered" (meaning predictable) as the first string. So is there no connection between entropy and order? Physical information theory is out of my area, but you might find this interesting. 5. chocoSmack-Fu Master, in training Originally posted by Penforhire: I have often wondered about the meanings of entropy and order because of a simple thought experiment. Maybe someone more erudite here could enlighten me? Take a string (or one dimensional line)of arbitrary length composed of binary numbers. Assume half the digits are 1's and the other half are 0's. According to most meanings of order, what is the entropy comparison between two different such strings as follows -- One string has all the 1's grouped together on the left and all the 0's grouped on the right. The other string has a perfectly alternating series of "101010..." 
Isn't the second string the lowest energy configuration, in the conventional meaning? Yet it is just as "ordered" (meaning predictable) as the first string. So is there no connection between entropy and order? I think the place where the thought experiment breaks down is that it omits time. Entropy is related to probability, the number of ways that a configuration/state can happen. There isn't a singular configuration with the lowest energy, there's many. In fact, the configurations with lower energy outnumber the configurations with higher energy. Once a low energy configuration state is reached, the system continues to evolve over time, and there are more possibilities that keep it in the low energy state than possibilities in a higher energy state. To be a closer metaphor the string with perfectly alternating 1's and 0's would need to "jump" around to other orderings (well, assuming it hasn't been collapsed by a "measurement"), and the possibility of assuming one with higher energy is less than keeping the energy low. 6. daemoniosArs Scholae Palatinae Originally posted by Penforhire: I have often wondered about the meanings of entropy and order because of a simple thought experiment. Maybe someone more erudite here could enlighten me? Take a string (or one dimensional line)of arbitrary length composed of binary numbers. Assume half the digits are 1's and the other half are 0's. According to most meanings of order, what is the entropy comparison between two different such strings as follows -- One string has all the 1's grouped together on the left and all the 0's grouped on the right. The other string has a perfectly alternating series of "101010..." Isn't the second string the lowest energy configuration, in the conventional meaning? Yet it is just as "ordered" (meaning predictable) as the first string. So is there no connection between entropy and order? Can you speak of entropy in information theory in the same sense as in physical systems? I mean, information is abstract isn't it? The rows of 0's and 1's are representations. For instance, if you take a given physical phenomenon to represent a 0 and another to represent a 1, maybe you can measure the entropy in the system, but you can't say anything about the entropy of the information because you arbitrarily chose the values. But what do I know, I'm way over my head here 7. GKHArs Centurion Choco already pretty much nailed it, but here's my take: Penforhire is right, there is a sense in which both "111…000" and "101010…" strings are the same. Where they differ is in the number of strings that are similar to them. There are far more strings that are “close” to “101010...” (in the sense that any substring will have comparable numbers of 1s and 0s) than there are “close” to “111…000” (in the sense that any substring will be predominantly 1s or 0s.) This means that in any random distribution of reasonable length, the chance of getting either string exactly is going to be essentially 0 – but the chance of getting a string “close” to “101010…” will a virtual certainty. As to the “energy” part, take a string. Now “shake” it by giving every bit the chance to swap positions with another one. If it’s a “101010…” type string, you’re virtually guaranteed to get another “identical” “101010…” type string. If it’s a “111…000” type string and you shake it for long enough you’re also virtually guaranteed to get a “101010…” type string. 
If systems tend towards the state of lowest energy, then the “111…000” type states are clearly the higher energy states. This can also be seen with the concept of “work”; if you tie a virtual string around each “1” in the “111…000” state, you’re going to be “pulled” to the right. If you tie a virtual string to either half of the “101010…” state, you’re not going to go anywhere.* In terms of “order”, all we mean is that it’s the opposite of random. As any random state will be essentially guaranteed to be a “101010…” type state, we say that it isn’t ordered. You’re almost never going to get a “111…000” type state randomly, so that state is very highly ordered. *The reason you’re not going to go anywhere is that at any later time, the half you tied your string to isn’t going to change its overall composition and any 1 or 0 is indistinguishable from any other 1 or 0. If you distinguish between a bit with a string attached (AS) and a bit without a string attached (NS) then you’re in the state “AS AS AS … NS NS NS” which is equivalent to “111…000” and you will be “pulled”. 8. nkinnanSmack-Fu Master, in training Originally posted by Penforhire: I have often wondered about the meanings of entropy and order because of a simple thought experiment. Maybe someone more erudite here could enlighten me? Take a string (or one dimensional line)of arbitrary length composed of binary numbers. Assume half the digits are 1's and the other half are 0's. According to most meanings of order, what is the entropy comparison between two different such strings as follows -- One string has all the 1's grouped together on the left and all the 0's grouped on the right. The other string has a perfectly alternating series of "101010..." Isn't the second string the lowest energy configuration, in the conventional meaning? Yet it is just as "ordered" (meaning predictable) as the first string. So is there no connection between entropy and order? When it comes to things like data and compression algorithms, the entropy is related to how "predictable" the sequence is. Both your examples are extremely predictable and therefore low entropy. Purely random data (generated by counting atomic decay events for example, not by a computer algorithm which by definition produces only pseudo-random output) has the highest entropy. Both your examples could be represented very concisely ("100 1's then 100 0's" or "200 digits in an alternating 1010 fashion") whereas the true random data could only be represented by that data itself - there's no "shortcut" to describe it using a smaller number of bits. Note: I'm not an expert, and this is from a comp-sci perspective, not a physical one. 9. nkinnanSmack-Fu Master, in training Originally posted by GKH: If it’s a “111…000” type string and you shake it for long enough you’re also virtually guaranteed to get a “101010…” type string. If systems tend towards the state of lowest energy, then the “111…000” type states are clearly the higher energy states. This makes sense to me, but it doesn't jive with my intuition and understanding of compression algorithms. Can you describe why a "111...000" type string is higher entropy than a "101010..." type string given their equal predictability? From a compression standpoint, the *huge* number of randomized "1101000110101110101000..." (truly randomized, after "shaking" as you put it, and not exactly "101010...") makes them much higher entropy since they are less ordered and less predictable. It takes more bits to represent them in the most concise form. 
edit: I just checked the link that Glaucus posted: http://en.wikipedia.org/wiki/Kolmogorov_complexity which summarizes this nicely. This comment was edited by nkinnan on December 15, 2009 01:54 10. zeothermModeratoret Subscriptor Originally posted by Ars of Ares: I don't understand, though, how you can increase pressure without introducing energy--even in simulation. Doesn't adding pressure cause heat to increase? Isn't heat energy? Sort of... There is a pressure term, but not heat in the sense that you are thinking (I think). It would be more proper to say that the internal energy of the system is either zero (all allowable states) or infinity (two tetrahedron overlapping). In that sense, there is no heat capacity of the system and no real heat in the macroscopic sense. The pressure that is used can also be used as a temperature scale. The variable that one actually controls in such a simulation (isobaric-isothermal ensemble monte carlo, if your interested in perusing further) is Pσ^3/kT—where σ is a representative length scale (the diameter of a ball if these were spheres) P is the pressure, k is Boltzmann's constant, and T is a temperature. This term, dubbed P*, is really an energy ratio. It is a pressure * volume term divided by kT (the energy scale) so in a sense there is energy over all (the PV term in the Enthalpy) but the internal energy is zero. As for the discussion of Shannon (informational) entropy, I can't offer up much. From a statistical thermodynamics point of view though entropy is defined as Boltzmann's constant multiplied by the logarithm of the microcanonical ensemble partition function. The latter (the partition function) can be described as the number of available states at a given energy level. The view isn't 100% clear and simple. In some cases, a disordered system can have lower entropy because the particles can't move because they are stuck, but a more ordered equivalent gives particles room to wiggle around some crystal-ish structure leading to more available states—microcanonical states. This post is more rambling then I intended, let me know if it clears anything up, or if it just brings more questions to the surface. I really like this stuff and am happy to talk about it all 11. kcisobderfArs Legatus Legioniset Subscriptor "The simulations found that the packing fraction for the structure (the ratio of space taken up by solids to the total space available) to be 0.8503, well above the previous packing record for tetrahedrons, 0.782, which was obtained earlier this summer." This is a bigger deal to me than the quasicrystals. I assume they use some sort of simulated annealing algorithm to "pack" the tetrahedrons. How did they manage to improve the packing so much? Did they employ some vertex swapping scheme or other that is ultimately non physical? So that no real system can ever reach that packing fraction. Has the packing fraction ever been measured for minerals exhibiting a 5 axis pseudocrystal form? 12. PenforhireArs Praefectus Thanks for the responses to my thought experiment guys. Some very helpful ideas. 13. DyDxArs Praefectus Originally posted by xwred1: Is the description of entropy at the beginning wrong? I thought low entropy meant more energy -- at least in the sense of energy we care about (like nice orderly oil molecules vs random heat coming off an engine) I could be mistaken, though. 
However, these two sentences sound contradictory to me: Any closed system will simultaneously tend towards a minimum energy or maximum entropy state—open systems behave differently, but the idea is similar. Under a variety of conditions, these two states represent opposites: a system that has obtained the maximum possible entropy is often a high energy state; conversely, the low energy state may be a nicely ordered system with little disorder. I think you're confused because the explanation in the article is a little lacking. Energy and entropy are intertwined. This research uses a system of 0 net energy, but each particle still has kinetic energy from motion -- it's just that they all sum to 0. Whether or not a micro-scale phenomenon is entropically-driven or enthalpically-driven (in this sort of discussion you would not say 'energetically-driven') is a matter of which thermodynamic phenomena contributes the 'most' i.e. which is most responsible for forces being exerted on particles in the system. I personally love entropy and highly suggest anyone who is interested to take a class on statistical thermodynamics, it makes entropy make a lot more sense. Since I'm waiting for something to finish in lab, I'll tell you all about another cool force I learned about in a seminar this semester that is driven by entropy: depletion forces. These are forces which researchers are utilizing to induce self-assembly of colloids. They are caused random motions of solvent particles pushing on a colloid -- if the colloid is randomly pushed into a corner or crevice by the solvent particles, eventually the solvent particles on the side of the colloid facing the crevice/corner will be fewer than on the side of the bulk such that the particles in the bulk exert a greater force on the colloid and keep it in place. It's really neat. 14. DyDxArs Praefectus Originally posted by zeotherm: Originally posted by Ars of Ares: I don't understand, though, how you can increase pressure without introducing energy--even in simulation. Doesn't adding pressure cause heat to increase? Isn't heat energy? Sort of... There is a pressure term, but not heat in the sense that you are thinking (I think). It would be more proper to say that the internal energy of the system is either zero (all allowable states) or infinity (two tetrahedron overlapping). In that sense, there is no heat capacity of the system and no real heat in the macroscopic sense. To further clarify zeotherm's point... the article is [as I said in my previous post] slightly inaccurate in the way it is worded. The particles in this system are exerting pressure on each other whenever they touch -- pressure is Force per Area -- they just aren't interacting energetically through any other means (such as covalent bonds, electrostatics or van der Waals). However, the net change in energy of this particular system is always zero for any amount of time evolution of the system. This is because the energy exerted on particle B by particle A bumping into it is equal and opposite to the kinetic energy lost by particle A during this collision. So in summary, the particles are certainly exerting pressure on each other, but the macroscopic pressure of the system is unchanged. 15. Ars of AresArs Legatus Legioniset Subscriptor My head's spinning. Thanks though, to both of you. Everyday I find reasons to revisit teh physics studies. Probably should have paid more attention in college... You must login or create an account to comment. 
Matt Ford / Matt is a contributing writer at Ars Technica, focusing on physics, astronomy, chemistry, mathematics, and engineering. When he's not writing, he works on realtime models of large-scale engineering systems.
{"url":"http://arstechnica.com/science/2009/12/simple-pyramids-create-complex-quasicrystals/?comments=1","timestamp":"2014-04-16T07:56:07Z","content_type":null,"content_length":"87361","record_id":"<urn:uuid:b1493c96-7b2d-4d91-be3f-ceac57cdaabc>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
From Wikipedia, the free encyclopedia

The golden section is a line segment divided according to the golden ratio: The total length a + b is to the longer segment a as a is to the shorter segment b.

In mathematics and the arts, two quantities are in the golden ratio if the ratio of the sum of the quantities to the larger quantity is equal to the ratio of the larger quantity to the smaller one. The golden ratio is an irrational mathematical constant, approximately 1.6180339887.[1] Other names frequently used for the golden ratio are the golden section (Latin: sectio aurea) and golden mean.[2][3][4] Other terms encountered include extreme and mean ratio,[5] medial section, divine proportion, divine section (Latin: sectio divina), golden proportion, golden cut,[6] golden number, and mean of Phidias.[7][8][9] The golden ratio is often denoted by the Greek letter phi, usually lower case (φ). The figure on the right illustrates the geometric relationship that defines this constant. Expressed algebraically:

(a + b) / a = a / b = φ.

This equation has as its unique positive solution the algebraic irrational number

φ = (1 + √5) / 2 ≈ 1.6180339887…

At least since the Renaissance, many artists and architects have proportioned their works to approximate the golden ratio—especially in the form of the golden rectangle, in which the ratio of the longer side to the shorter is the golden ratio—believing this proportion to be aesthetically pleasing. Mathematicians have studied the golden ratio because of its unique and interesting properties.

Construction of a golden rectangle:
1. Construct a unit square (red).
2. Draw a line from the midpoint of one side to an opposite corner.
3. Use that line as the radius to draw an arc that defines the long dimension of the rectangle.

Calculation

Two quantities a and b are said to be in the golden ratio φ if:

(a + b) / a = a / b = φ.

This equation unambiguously defines φ. The right equation shows that a = bφ, which can be substituted in the left part, giving

(bφ + b) / (bφ) = (bφ) / b.

Dividing out b yields

(φ + 1) / φ = φ.

Multiplying both sides by φ and rearranging terms leads to:

φ² − φ − 1 = 0.

The only positive solution to this quadratic equation is

φ = (1 + √5) / 2 ≈ 1.6180339887…

Mark Barr proposed using the first letter in the name of Greek sculptor Phidias, phi, to symbolize the golden ratio. Usually, the lowercase form (φ) is used. Sometimes, the uppercase form (Φ) is used for the reciprocal of the golden ratio, 1/φ.

The golden ratio has fascinated Western intellectuals of diverse interests for at least 2,400 years:

"Some of the greatest mathematical minds of all ages, from ancient Greece, through the medieval Italian mathematician Leonardo of Pisa and the Renaissance astronomer Johannes Kepler, to present-day scientific figures such as Oxford physicist Roger Penrose, have spent endless hours over this simple ratio and its properties. But the fascination with the Golden Ratio is not confined just to mathematicians. Biologists, artists, musicians, historians, architects, psychologists, and even mystics have pondered and debated the basis of its ubiquity and appeal. In fact, it is probably fair to say that the Golden Ratio has inspired thinkers of all disciplines like no other number in the history of mathematics."
— Mario Livio, The Golden Ratio: The Story of Phi, The World's Most Astonishing Number

Ancient Greek mathematicians first studied what we now call the golden ratio because of its frequent appearance in geometry.
The division of a line into "extreme and mean ratio" (the golden section) is important in the geometry of regular pentagrams and pentagons. The Greeks usually attributed discovery of this concept to Pythagoras or his followers. The regular pentagram, which has a regular pentagon inscribed within it, was the Pythagoreans' symbol.

Euclid's Elements (Greek: Στοιχεῖα) provides the first known written definition of what is now called the golden ratio: "A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the less."[5] Euclid explains a construction for cutting (sectioning) a line "in extreme and mean ratio", i.e. the golden ratio.[11] Throughout the Elements, several propositions (theorems in modern terminology) and their proofs employ the golden ratio.[12] Some of these propositions show that the golden ratio is an irrational number.

The name "extreme and mean ratio" was the principal term used from the 3rd century BC[5] until about the 18th century. The modern history of the golden ratio starts with Luca Pacioli's Divina Proportione of 1509, which captured the imagination of artists, architects, scientists, and mystics with the properties, mathematical and otherwise, of the golden ratio.

The first known approximation of the (inverse) golden ratio by a decimal fraction, stated as "about 0.6180340," was written in 1597 by Prof. Michael Maestlin of the University of Tübingen in a letter to his former student Johannes Kepler.[13]

Since the twentieth century, the golden ratio has been represented by the Greek letter Φ or φ (phi, after Phidias, a sculptor who is said to have employed it) or less commonly by τ (tau, the first letter of the ancient Greek root τομή—meaning cut).

Timeline

Timeline according to Priya Hemenway.[14]

• Phidias (490–430 BC) made the Parthenon statues that seem to embody the golden ratio.
• Plato (427–347 BC), in his Timaeus, describes five possible regular solids (the Platonic solids: the tetrahedron, cube, octahedron, dodecahedron and icosahedron), some of which are related to the golden ratio.[15]
• Euclid (c. 325–c. 265 BC), in his Elements, gave the first recorded definition of the golden ratio, which he called, as translated into English, "extreme and mean ratio" (Greek: ἄκρος καὶ μέσος λόγος).
• Fibonacci (1170–1250) mentioned the numerical series now named after him in his Liber Abaci; the ratio of sequential elements of the Fibonacci sequence approaches the golden ratio asymptotically.
• Luca Pacioli (1445–1517) defines the golden ratio as the "divine proportion" in his Divina Proportione.
• Johannes Kepler (1571–1630) proves that the golden ratio is the limit of the ratio of consecutive Fibonacci numbers,[16] and describes the golden ratio as a "precious jewel": "Geometry has two great treasures: one is the Theorem of Pythagoras, and the other the division of a line into extreme and mean ratio; the first we may compare to a measure of gold, the second we may name a precious jewel." These two treasures are combined in the Kepler triangle.
• Charles Bonnet (1720–1793) points out that in the spiral phyllotaxis of plants going clockwise and counter-clockwise were frequently two successive Fibonacci series.
• Martin Ohm (1792–1872) is believed to be the first to use the term goldener Schnitt (golden section) to describe this ratio, in 1835.[17]
• Mark Barr (20th century) suggests the Greek letter phi (φ), the initial letter of Greek sculptor Phidias's name, as a symbol for the golden ratio.[18]
• Roger Penrose (b. 1931) discovered a symmetrical pattern that uses the golden ratio in the field of aperiodic tilings, which led to new discoveries about quasicrystals.

Aesthetics

Beginning in the Renaissance, a body of literature on the aesthetics of the golden ratio was developed. As a result, architects, artists, book designers, and others have been encouraged to use the golden ratio in the dimensional relationships of their works. The first and most influential of these was De Divina Proportione by Luca Pacioli, a three-volume work published in 1509. Pacioli, a Franciscan friar, was known mostly as a mathematician, but he was also trained and keenly interested in art. De Divina Proportione explored the mathematics of the golden ratio. Though it is often said that Pacioli advocated the golden ratio's application to yield pleasing, harmonious proportions, Livio points out that that interpretation has been traced to an error in 1799, and that Pacioli actually advocated the Vitruvian system of rational proportions.[2] Pacioli also saw Catholic religious significance in the ratio, which led to his work's title. Containing illustrations of regular solids by Leonardo Da Vinci, Pacioli's longtime friend and collaborator, De Divina Proportione was a major influence on generations of artists and architects alike.

Architecture

Some studies of the Acropolis, including the Parthenon, conclude that many of its proportions approximate the golden ratio. The Parthenon's facade as well as elements of its facade and elsewhere are said to be circumscribed by golden rectangles.[19] To the extent that classical buildings or their elements are proportioned according to the golden ratio, this might indicate that their architects were aware of the golden ratio and consciously employed it in their designs. Alternatively, it is possible that the architects used their own sense of good proportion, and that this led to some proportions that closely approximate the golden ratio. On the other hand, such retrospective analyses can always be questioned on the ground that the investigator chooses the points from which measurements are made or where to superimpose golden rectangles, and that these choices affect the proportions observed.

Some scholars deny that the Greeks had any aesthetic association with the golden ratio. For example, Midhat J. Gazalé says, "It was not until Euclid, however, that the golden ratio's mathematical properties were studied. In the Elements (308 BC) the Greek mathematician merely regarded that number as an interesting irrational number, in connection with the middle and extreme ratios. Its occurrence in regular pentagons and decagons was duly observed, as well as in the dodecahedron (a regular polyhedron whose twelve faces are regular pentagons). It is indeed exemplary that the great Euclid, contrary to generations of mystics who followed, would soberly treat that number for what it is, without attaching to it other than its factual properties."[20] And Keith Devlin says, "Certainly, the oft repeated assertion that the Parthenon in Athens is based on the golden ratio is not supported by actual measurements. In fact, the entire story about the Greeks and golden ratio seems to be without foundation.
The one thing we know for sure is that Euclid, in his famous textbook Elements, written around 300 BC, showed how to calculate its value."[21] Near-contemporary sources like Vitruvius exclusively discuss proportions that can be expressed in whole numbers, i.e. commensurate as opposed to irrational proportions.

A geometrical analysis of the Great Mosque of Kairouan reveals a consistent application of the golden ratio throughout the design, according to Boussora and Mazouz.[22] It is found in the overall proportion of the plan and in the dimensioning of the prayer space, the court, and the minaret. Boussora and Mazouz also examined earlier archaeological theories about the mosque, and demonstrate the geometric constructions based on the golden ratio by applying these constructions to the plan of the mosque to test their hypothesis.

The Swiss architect Le Corbusier, famous for his contributions to the modern international style, centered his design philosophy on systems of harmony and proportion. Le Corbusier's faith in the mathematical order of the universe was closely bound to the golden ratio and the Fibonacci series, which he described as "rhythms apparent to the eye and clear in their relations with one another. And these rhythms are at the very root of human activities. They resound in man by an organic inevitability, the same fine inevitability which causes the tracing out of the Golden Section by children, old men, savages and the learned."[23]

Le Corbusier explicitly used the golden ratio in his Modulor system for the scale of architectural proportion. He saw this system as a continuation of the long tradition of Vitruvius, Leonardo da Vinci's "Vitruvian Man", the work of Leon Battista Alberti, and others who used the proportions of the human body to improve the appearance and function of architecture. In addition to the golden ratio, Le Corbusier based the system on human measurements, Fibonacci numbers, and the double unit. He took Leonardo's suggestion of the golden ratio in human proportions to an extreme: he sectioned his model human body's height at the navel with the two sections in golden ratio, then subdivided those sections in golden ratio at the knees and throat; he used these golden ratio proportions in the Modulor system. Le Corbusier's 1927 Villa Stein in Garches exemplified the Modulor system's application. The villa's rectangular ground plan, elevation, and inner structure closely approximate golden rectangles.[24]

Another Swiss architect, Mario Botta, bases many of his designs on geometric figures. Several private houses he designed in Switzerland are composed of squares and circles, cubes and cylinders. In a house he designed in Origlio, the golden ratio is the proportion between the central section and the side sections of the house.[25]

In a recent book, author Jason Elliot speculated that the golden ratio was used by the designers of the Naqsh-e Jahan Square and the adjacent Lotfollah mosque.[26]

Painting

Leonardo da Vinci's illustrations of polyhedra in De Divina Proportione (On the Divine Proportion) and his views that some bodily proportions exhibit the golden ratio have led some scholars to speculate that he incorporated the golden ratio in his paintings.[27] But the suggestion that his Mona Lisa, for example, employs golden ratio proportions, is not supported by anything in Leonardo's own writings.[28] Salvador Dalí explicitly used the golden ratio in his masterpiece, The Sacrament of the Last Supper. The dimensions of the canvas are a golden rectangle.
A huge dodecahedron, with edges in golden ratio to one another, is suspended above and behind Jesus and dominates the composition.[2][29] Mondrian used the golden section extensively in his geometrical paintings.[30]

A statistical study on 565 works of art of different great painters, performed in 1999, found that these artists had not used the golden ratio in the size of their canvases. The study concluded that the average ratio of the two sides of the paintings studied is 1.34, with averages for individual artists ranging from 1.04 (Goya) to 1.46 (Bellini).[31] On the other hand, Pablo Tosto listed over 350 works by well-known artists, including more than 100 which have canvasses with golden rectangle and root-5 proportions, and others with proportions like root-2, 3, 4, and 6.[32]

Book design

Depiction of the proportions in a medieval manuscript. According to Jan Tschichold: "Page proportion 2:3. Margin proportions 1:1:2:3. Text area proportioned in the Golden Section."

According to Jan Tschichold,[34] "There was a time when deviations from the truly beautiful page proportions 2:3, 1:√3, and the Golden Section were rare. Many books produced between 1550 and 1770 show these proportions exactly, to within half a millimetre."

Perceptual studies

Studies by psychologists, starting with Fechner, have been devised to test the idea that the golden ratio plays a role in human perception of beauty. While Fechner found a preference for rectangle ratios centered on the golden ratio, later attempts to carefully test such a hypothesis have been, at best, inconclusive.[2][35]

Music

James Tenney reconceived his piece For Ann (rising), which consists of up to twelve computer-generated upwardly glissandoing tones (see Shepard tone), as having each tone start so it is the golden ratio (in between an equal tempered minor and major sixth) below the previous tone, so that the combination tones produced by all consecutive tones are a lower or higher pitch already, or soon to be, produced.

Ernő Lendvai analyzes Béla Bartók's works as being based on two opposing systems, that of the golden ratio and the acoustic scale,[36] though other music scholars reject that analysis.[2] In Bartók's Music for Strings, Percussion and Celesta the xylophone progression occurs at the intervals 1:2:3:5:8:5:3:2:1.[37] French composer Erik Satie used the golden ratio in several of his pieces, including Sonneries de la Rose+Croix. The golden ratio is also apparent in the organization of the sections in the music of Debussy's Reflets dans l'eau (Reflections in Water), from Images (1st series, 1905), in which "the sequence of keys is marked out by the intervals 34, 21, 13 and 8, and the main climax sits at the phi position."[37]

The musicologist Roy Howat has observed that the formal boundaries of La Mer correspond exactly to the golden section.[38] Trezise finds the intrinsic evidence "remarkable," but cautions that no written or reported evidence suggests that Debussy consciously sought such proportions.[39] Also, many works of Chopin, mainly Etudes (studies) and Nocturnes, are formally based on the golden ratio. This results in the biggest climax of both musical expression and technical difficulty after about 2/3 of the piece.[citation needed]

Pearl Drums positions the air vents on its Masters Premium models based on the golden ratio.
The company claims that this arrangement improves bass response and has applied for a patent on this. In the opinion of author Leon Harkleroad, "Some of the most misguided attempts to link music and mathematics have involved Fibonacci numbers and the related golden ratio."[41]

Nature

Adolf Zeising, whose main interests were mathematics and philosophy, found the golden ratio expressed in the arrangement of branches along the stems of plants and of veins in leaves. He extended his research to the skeletons of animals and the branchings of their veins and nerves, to the proportions of chemical compounds and the geometry of crystals, even to the use of proportion in artistic endeavors. In these phenomena he saw the golden ratio operating as a universal law.[42] In connection with his scheme for golden-ratio-based human body proportions, Zeising wrote in 1854 of a universal law "in which is contained the ground-principle of all formative striving for beauty and completeness in the realms of both nature and art, and which permeates, as a paramount spiritual ideal, all structures, forms and proportions, whether cosmic or individual, organic or inorganic, acoustic or optical; which finds its fullest realization, however, in the human form."[43]

Mathematics

Golden ratio conjugate

The negative root of the quadratic equation for φ (the "conjugate root") is 1 − φ ≈ −0.618. The absolute value of this quantity (≈ 0.618) corresponds to the length ratio taken in reverse order (shorter segment length over longer segment length, b/a), and is sometimes referred to as the golden ratio conjugate.[10] It is denoted here by the capital Phi (Φ):

\[ \Phi = \frac{1}{\varphi} = \varphi^{-1} \approx 0.618. \]

Alternatively, Φ can be expressed as

\[ \Phi = \varphi - 1 \approx 0.618. \]

This illustrates the unique property of the golden ratio among positive numbers, that

\[ \frac{1}{\varphi} = \varphi - 1, \]

or its inverse:

\[ \frac{1}{\Phi} = \Phi + 1. \]

Short proofs of irrationality

Contradiction from an expression in lowest terms

Recall that: the whole is the longer part plus the shorter part; the whole is to the longer part as the longer part is to the shorter part. If we call the whole n and the longer part m, then the second statement above becomes n is to m as m is to n − m, or, algebraically

\[ \frac{n}{m} = \frac{m}{n-m}. \qquad (*) \]

To say that φ is rational means that φ is a fraction n/m where n and m are integers. We may take n/m to be in lowest terms and n and m to be positive. But if n/m is in lowest terms, then the identity labeled (*) above says m/(n − m) is in still lower terms. That is a contradiction that follows from the assumption that φ is rational.

Derivation from irrationality of √5

Another short proof—perhaps more commonly known—of the irrationality of the golden ratio makes use of the closure of rational numbers under addition and multiplication. If φ = (1 + √5)/2 were rational, then 2φ − 1 = √5 would also be rational; but the square root of a natural number that is not a perfect square is irrational, which is a contradiction.

Alternate forms

The formula φ = 1 + 1/φ can be expanded recursively to obtain a continued fraction for the golden ratio:[44]

\[ \varphi = [1; 1, 1, 1, \dots] = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \ddots}}} \]

and its reciprocal:

\[ \varphi^{-1} = [0; 1, 1, 1, \dots] = \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \ddots}}} \]

The convergents of these continued fractions (1, 2, 3/2, 5/3, 8/5, 13/8, … , or 1, 1/2, 2/3, 3/5, 5/8, 8/13, …) are ratios of successive Fibonacci numbers. The equation φ² = 1 + φ likewise produces the continued square root form:

\[ \varphi = \sqrt{1 + \sqrt{1 + \sqrt{1 + \sqrt{1 + \cdots}}}} \]

An infinite series can be derived to express phi.[45] Also:

\[ \varphi = 1 + 2\sin(\pi/10) = 1 + 2\sin 18^\circ = 2\cos(\pi/5) = 2\cos 36^\circ. \]

These correspond to the fact that the length of the diagonal of a regular pentagon is φ times the length of its side, and similar relations in a pentagram.

Geometry

The number φ turns up frequently in geometry, particularly in figures with pentagonal symmetry. The length of a regular pentagon's diagonal is φ times its side.
The vertices of a regular icosahedron are those of three mutually orthogonal golden rectangles. There is no known general algorithm to arrange a given number of nodes evenly on a sphere, for any of several definitions of even distribution (see, for example, Thomson problem). However, a useful approximation results from dividing the sphere into parallel bands of equal area and placing one node in each band at longitudes spaced by a golden section of the circle, i.e. 360°/φ ≅ 222.5°. This method was used to arrange the 1500 mirrors of the student-participatory satellite Starshine-3.[46] [edit] Golden triangle, pentagon and pentagram [edit] Golden triangle The golden triangle can be characterised as an isosceles triangle ABC with the property that bisecting the angle C produces a new triangle CXB which is a similar triangle to the original. If angle BCX = α, then XCA = α because of the bisection, and CAB = α because of the similar triangles; ABC = 2α from the original isosceles symmetry, and BXC = 2α by similarity. The angles in a triangle add up to 180°, so 5α = 180, giving α = 36°. So the angles of the golden triangle are thus 36°-72°-72°. The angles of the remaining obtuse isosceles triangle AXC (sometimes called the golden gnomon) are 36°-36°-108°. Suppose XB has length 1, and we call BC length φ. Because of the isosceles triangles BC=XC and XC=XA, so these are also length φ. Length AC = AB, therefore equals φ+1. But triangle ABC is similar to triangle CXB, so AC/BC = BC/BX, and so AC also equals φ2. Thus φ2 = φ+1, confirming that φ is indeed the golden ratio. [edit] Pentagram For more details on this topic, see A pentagram colored to distinguish its line segments of different lengths. The four lengths are in golden ratio to one another. The golden ratio plays an important role in regular pentagons and pentagrams. Each intersection of edges sections other edges in the golden ratio. Also, the ratio of the length of the shorter segment to the segment bounded by the 2 intersecting edges (a side of the pentagon in the pentagram's center) is φ, as the four-color illustration shows. The pentagram includes ten isosceles triangles: five acute and five obtuse isosceles triangles. In all of them, the ratio of the longer side to the shorter side is φ. The acute triangles are golden triangles. The obtuse isosceles triangles are golden gnomon. [edit] Ptolemy's theorem The golden ratio can also be confirmed by applying Ptolemy's theorem to the quadrilateral formed by removing one vertex from a regular pentagon. If the quadrilateral's long edge and diagonals are b, and short edges are a, then Ptolemy's theorem gives b2 = a2 + ab which yields [edit] Scalenity of triangles Consider a triangle with sides of lengths a, b, and c in decreasing order. Define the "scalenity" of the triangle to be the smaller of the two ratios a/b and b/c. The scalenity is always less than φ and can be made as close as desired to φ.[47] [edit] Relationship to Fibonacci sequence Approximate and true golden spirals . The green spiral is made from quarter-circles tangent to the interior of each square, while the red spiral is a Golden Spiral, a special type of logarithmic spiral . Overlapping portions appear yellow. The length of the side of a larger square to the next smaller square is in the golden ratio. Fibonacci spiral that approximates the golden spiral, using Fibonacci sequence square sizes up to 34. The mathematics of the golden ratio and of the Fibonacci sequence are intimately interconnected. 
The Fibonacci sequence is:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, …

The closed-form expression (known as Binet's formula, even though it was already known by Abraham de Moivre) for the Fibonacci sequence involves the golden ratio:

\[ F(n) = \frac{\varphi^{n} - (1-\varphi)^{n}}{\sqrt{5}} = \frac{\varphi^{n} - (-\varphi)^{-n}}{\sqrt{5}}. \]

The golden ratio is the limit of the ratios of successive terms of the Fibonacci sequence (or any Fibonacci-like sequence), as originally shown by Kepler:[16]

\[ \lim_{n\to\infty} \frac{F(n+1)}{F(n)} = \varphi. \]

Therefore, if a Fibonacci number is divided by its immediate predecessor in the sequence, the quotient approximates φ; e.g., 987/610 ≈ 1.6180327868852. These approximations are alternately lower and higher than φ, and converge on φ as the Fibonacci numbers increase, and:

\[ \sum_{n=1}^{\infty} \left| F(n)\,\varphi - F(n+1) \right| = \varphi. \]

More generally:

\[ \lim_{n\to\infty} \frac{F(n+a)}{F(n)} = \varphi^{a}, \]

where the ratio of consecutive terms of the Fibonacci sequence, above, is the case a = 1. Furthermore, the successive powers of φ obey the Fibonacci recurrence:

\[ \varphi^{n+1} = \varphi^{n} + \varphi^{n-1}. \]

This identity allows any polynomial in φ to be reduced to a linear expression. For example:

\[ 3\varphi^{3} - 5\varphi^{2} + 4 = 3(\varphi^{2} + \varphi) - 5\varphi^{2} + 4 = 3[(\varphi + 1) + \varphi] - 5(\varphi + 1) + 4 = \varphi + 2 \approx 3.618. \]

However, this is no special property of φ, because polynomials in any solution x to a quadratic equation can be reduced in an analogous manner, by applying:

\[ x^{2} = ax + b \]

for given coefficients a, b such that x satisfies the equation. Even more generally, any rational function (with rational coefficients) of the root of an irreducible nth-degree polynomial over the rationals can be reduced to a polynomial of degree n − 1. Phrased in terms of field theory, if α is a root of an irreducible nth-degree polynomial, then \(\mathbb{Q}(\alpha)\) has degree n over \(\mathbb{Q}\), with basis \(\{1, \alpha, \dots, \alpha^{n-1}\}\).

Other properties

The golden ratio has the simplest expression (and slowest convergence) as a continued fraction expansion of any irrational number (see Alternate forms above). It is, for that reason, one of the worst cases of Lagrange's approximation theorem. This may be the reason angles close to the golden ratio often show up in phyllotaxis (the growth of plants).

The defining quadratic polynomial and the conjugate relationship lead to decimal values that have their fractional part in common with φ:

\[ \varphi^{2} = \varphi + 1 = 2.618\ldots, \qquad \frac{1}{\varphi} = \varphi - 1 = 0.618\ldots \]

The sequence of powers of φ contains these values 0.618…, 1.0, 1.618…, 2.618…; more generally, any power of φ is equal to the sum of the two immediately preceding powers:

\[ \varphi^{n} = \varphi^{n-1} + \varphi^{n-2} = \varphi \cdot F(n) + F(n-1). \]

As a result, one can easily decompose any power of φ into a multiple of φ and a constant. The multiple and the constant are always adjacent Fibonacci numbers. This leads to another property of the positive powers of φ: if ⌊n/2 − 1⌋ = m, then

\[ \varphi^{n} = \varphi^{n-1} + \varphi^{n-3} + \cdots + \varphi^{n-1-2m} + \varphi^{n-2-2m}. \]

When the golden ratio is used as the base of a numeral system (see Golden ratio base, sometimes dubbed phinary or φ-nary), every integer has a terminating representation, despite φ being irrational, but every fraction has a non-terminating representation.

The golden ratio is a fundamental unit of the algebraic number field \(\mathbb{Q}(\sqrt{5})\) and is a Pisot–Vijayaraghavan number.[48] The golden ratio also appears in hyperbolic geometry, as the maximum distance from a point on one side of an ideal triangle to the closer of the other two sides: this distance, the side length of the equilateral triangle formed by the points of tangency of a circle inscribed within the ideal triangle, is 4 ln φ.[49]

Decimal expansion

The golden ratio's decimal expansion can be calculated directly from the expression

\[ \varphi = \frac{1 + \sqrt{5}}{2}, \]

with √5 ≈ 2.2360679774997896964. The square root of 5 can be calculated with the Babylonian method, starting with an initial estimate such as x_φ = 2 and iterating

\[ x_{n+1} = \frac{x_{n} + 5/x_{n}}{2} \]

for n = 1, 2, 3, …, until the difference between x_n and x_{n−1} becomes zero, to the desired number of digits. The Babylonian algorithm for √5 is equivalent to Newton's method for solving the equation x² − 5 = 0.
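As a rough illustration of the Babylonian iteration just described, here is a minimal Python sketch (not from the article; the starting guess and the fixed number of steps are arbitrary choices):

    # Babylonian (Newton) iteration for sqrt(5), then phi = (1 + sqrt(5)) / 2.
    x = 2.0                      # initial estimate for sqrt(5)
    for _ in range(6):           # each step roughly doubles the number of correct digits
        x = (x + 5.0 / x) / 2.0
    phi = (1.0 + x) / 2.0
    print(phi)                   # prints 1.618033988749895

Six iterations are already more than double precision needs; arbitrary-precision arithmetic would be required to go much further.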
In its more general form, Newton's method can be applied directly to any algebraic equation, including the equation x2 − x − 1 = 0 that defines the golden ratio. This gives an iteration that converges to the golden ratio itself, for an appropriate initial estimate xφ such as xφ = 1. A slightly faster method is to rewrite the equation as x − 1 − 1/x = 0, in which case the Newton iteration becomes These iterations all converge quadratically; that is, each step roughly doubles the number of correct digits. The golden ratio is therefore relatively easy to compute with arbitrary precision. The time needed to compute n digits of the golden ratio is proportional to the time needed to divide two n-digit numbers. This is considerably faster than known algorithms for the transcendental numbers π and e. An easily programmed alternative using only integer arithmetic is to calculate two large consecutive Fibonacci numbers and divide them. The ratio of Fibonacci numbers F25001 and F25000, each over 5000 digits, yields over 10,000 significant digits of the golden ratio. Millions of digits of φ are available (sequence A001622 in OEIS). See the web page of Alexis Irlande for the 17,000,000,000 first digits[50]. [edit] Pyramids A regular square pyramid is determined by its medial right triangle, whose edges are the pyramid's apothem (a), semi-base (b), and height (h); the face inclination angle is also marked. Mathematical proportions b:h:a of Both Egyptian pyramids and those mathematical regular square pyramids that resemble them can be analyzed with respect to the golden ratio and other ratios. [edit] Mathematical pyramids and triangles A pyramid in which the apothem (slant height along the bisector of a face) is equal to φ times the semi-base (half the base width) is sometimes called a golden pyramid. The isosceles triangle that is the face of such a pyramid can be constructed from the two halves of a diagonally split golden rectangle (of size semi-base by apothem), joining the medium-length edges to make the apothem. The height of this pyramid is times the semi-base (that is, the slope of the face is ); the square of the height is equal to the area of a face, φ times the square of the semi-base. The medial right triangle of this "golden" pyramid (see diagram), with sides is interesting in its own right, demonstrating via the Pythagorean theorem the relationship or . This "Kepler triangle" [51] is the only right triangle proportion with edge lengths in geometric progression,[52] just as the 3–4–5 triangle is the only right triangle proportion with edge lengths in arithmetic progression . The angle with tangent corresponds to the angle that the side of the pyramid makes with respect to the ground, 51.827… degrees (51° 49' 38").[53] A nearly similar pyramid shape, but with rational proportions, is described in the Rhind Mathematical Papyrus (the source of a large part of modern knowledge of ancient Egyptian mathematics), based on the 3:4:5 triangle;[54] the face slope corresponding to the angle with tangent 4/3 is 53.13 degrees (53 degrees and 8 minutes).[55] The slant height or apothem is 5/3 or 1.666… times the semi-base. The Rhind papyrus has another pyramid problem as well, again with rational slope (expressed as run over rise). 
Egyptian mathematics did not include the notion of irrational numbers,[56] and the rational inverse slope (run/rise, multiplied by a factor of 7 to convert to their conventional units of palms per cubit) was used in the building of pyramids.[54] Another mathematical pyramid with proportions almost identical to the "golden" one is the one with perimeter equal to 2π times the height, or h:b = 4:π. This triangle has a face angle of 51.854° (51°51'), very close to the 51.827° of the Kepler triangle. This pyramid relationship corresponds to the coincidental relationship . Egyptian pyramids very close in proportion to these mathematical pyramids are known.[55] Egyptian pyramids In the mid nineteenth century, Röber studied various Egyptian pyramids including Khafre, Menkaure and some of the Giza, Sakkara and Abusir groups, and was interpreted as saying that half the base of the side of the pyramid is the middle mean of the side, forming what other authors identified as the Kepler triangle; many other mathematical theories of the shape of the pyramids have also been One Egyptian pyramid is remarkably close to a "golden pyramid" – the Great Pyramid of Giza (also known as the Pyramid of Cheops or Khufu). Its slope of 51° 52' is extremely close to the "golden" pyramid inclination of 51° 50' and the π-based pyramid inclination of 51° 51'; other pyramids at Giza (Chephren, 52° 20', and Mycerinus, 50° 47')[54] are also quite close. Whether the relationship to the golden ratio in these pyramids is by design or by accident remains controversial. Several other Egyptian pyramids are very close to the rational 3:4:5 shape.[55] Adding fuel to controversy over the architectural authorship of the Great Pyramid, Eric Temple Bell, mathematician and historian, claimed in 1950 that Egyptian mathematics would not have supported the ability to calculate the slant height of the pyramids, or the ratio to the height, except in the case of the 3:4:5 pyramid, since the 3:4:5 triangle was the only right triangle known to the Egyptians and they did not know the Pythagorean theorem nor any way to reason about irrationals such as π or φ.[57] Michael Rice[58] asserts that principal authorities on the history of Egyptian architecture have argued that the Egyptians were well acquainted with the golden ratio and that it is part of mathematics of the Pyramids, citing Giedon (1957).[59] Historians of science have always debated whether the Egyptians had any such knowledge or not, contending rather that its appearance in an Egyptian building is the result of chance.[60] In 1859, the pyramidologist John Taylor claimed that, in the Great Pyramid of Giza, the golden ratio is represented by the ratio of the length of the face (the slope height), inclined at an angle θ to the ground, to half the length of the side of the square base, equivalent to the secant of the angle θ.[61] The above two lengths were about 186.4 and 115.2 meters respectively. The ratio of these lengths is the golden ratio, accurate to more digits than either of the original measurements. Similarly, Howard Vyse, according to Matila Ghyka,[62] reported the great pyramid height 148.2 m, and half-base 116.4 m, yielding 1.6189 for the ratio of slant height to half-base, again more accurate than the data variability. [edit] Disputed sightings Examples of disputed observations of the golden ratio include the following: • Historian John Man states that the pages of the Gutenberg Bible were "based on the golden section shape". 
However, according to Man's own measurements, the ratio of height to width was 1.45.[63] • In 1991, Jean-Claude Perez proposed a connection between DNA base sequences and gene sequences and the golden ratio.[64][65] Another such connection, between the Fibonacci numbers and golden ratio and Chargaff's second rule concerning the proportions of nucleobases in the human genome, was proposed in 2007.[66] • Australian sculptor Andrew Rogers's 50-ton stone and gold sculpture entitled Ratio, installed outdoors in Jerusalem.[67] Despite the sculpture's sometimes being referred to as "Golden Ratio,"[68] it is not proportioned according to the golden ratio, and the sculptor does not call it that: the height of each stack of stones, beginning from either end and moving toward the center, is the beginning of the Fibonacci sequence: 1, 1, 2, 3, 5, 8. His sculpture Ascend in Sri Lanka, also in his Rhythms of Life series, is similarly constructed, with heights 1, 1, 2, 3, 5, 8, 13, but no descending side.[67] • Some specific proportions in the bodies of many animals (including humans[69][70]) and parts of the shells of mollusks[4] and cephalopods are often claimed to be in the golden ratio. There is actually a large variation in the real measures of these elements in specific individuals, and the proportion in question is often significantly different from the golden ratio.[69] The ratio of successive phalangeal bones of the digits and the metacarpal bone has been said to approximate the golden ratio.[70] The nautilus shell, the construction of which proceeds in a logarithmic spiral , is often cited, usually with the idea that any logarithmic spiral is related to the golden ratio, but sometimes with the claim that each new chamber is proportioned by the golden ratio relative to the previous one;[71] however, measurements of nautilus shells do not support this claim.[72] • The proportions of different plant components (numbers of leaves to branches, diameters of geometrical figures inside flowers) are often claimed to show the golden ratio proportion in several species.[73] In practice, there are significant variations between individuals, seasonal variations, and age variations in these species. While the golden ratio may be found in some proportions in some individuals at particular times in their life cycles, there is no consistent ratio in their proportions.[citation needed] • In investing, some practitioners of technical analysis use the golden ratio to indicate support of a price level, or resistance to price increases, of a stock or commodity; after significant price changes up or down, new support and resistance levels are supposedly found at or near prices related to the starting price via the golden ratio.[74] The use of the golden ratio in investing is also related to more complicated patterns described by Fibonacci numbers; see, e.g. Elliott wave principle. See Fibonacci retracement. However, other market analysts have published analyses suggesting that these percentages and patterns are not supported by the data.[75] • In 2003 Weiss and Weiss came on a background of psychometric data and theoretical considerations to the conclusion that the golden ratio underlies the clock cycle of brain waves.[76] In 2008 this was empirically confirmed by a group of neurobiologists.[77] • In 2010 the journal Science reported that the golden ratio is present at the atomic scale in the magnetic resonance of spins in cobalt niobate atoms.[2]
omplexity of Results 1 - 10 of 344 , 2004 "... We describe a system that supports arbitrarily complex SQL queries with ”uncertain” predicates. The query semantics is based on a probabilistic model and the results are ranked, much like in Information Retrieval. Our main focus is efficient query evaluation, a problem that has not received attentio ..." Cited by 347 (38 self) Add to MetaCart We describe a system that supports arbitrarily complex SQL queries with ”uncertain” predicates. The query semantics is based on a probabilistic model and the results are ranked, much like in Information Retrieval. Our main focus is efficient query evaluation, a problem that has not received attention in the past. We describe an optimization algorithm that can compute efficiently most queries. We show, however, that the data complexity of some queries is #P-complete, which implies that these queries do not admit any efficient evaluation methods. For these queries we describe both an approximation algorithm and a Monte-Carlo simulation algorithm. , 1996 "... In the area of statistical physics, Monte Carlo algorithms based on Markov chain simulation have been in use for many years. The validity of these algorithms depends crucially on the rate of convergence to equilibrium of the Markov chain being simulated. Unfortunately, the classical theory of stocha ..." Cited by 234 (13 self) Add to MetaCart In the area of statistical physics, Monte Carlo algorithms based on Markov chain simulation have been in use for many years. The validity of these algorithms depends crucially on the rate of convergence to equilibrium of the Markov chain being simulated. Unfortunately, the classical theory of stochastic processes hardly touches on the sort of non-asymptotic analysis required in this application. As a consequence, it had previously not been possible to make useful, mathematically rigorous statements about the quality of the estimates obtained. Within the last ten years, analytical tools have been devised with the aim of correcting this deficiency. As well as permitting the analysis of Monte Carlo algorithms for classical problems in statistical physics, the introduction of these tools has spurred the development of new approximation algorithms for a wider class of problems in combinatorial enumeration and optimization. The “Markov chain Monte Carlo ” method has been applied to a variety of such problems, and often provides the only known efficient (i.e., polynomial time) solution technique. , 1994 "... We give the first approximation algorithm for the generalized network Steiner problem, a problem in network design. An instance consists of a network with link-costs and, for each pair fi; jg of nodes, an edge-connectivity requirement r ij . The goal is to find a minimum-cost network using the a ..." Cited by 219 (32 self) Add to MetaCart We give the first approximation algorithm for the generalized network Steiner problem, a problem in network design. An instance consists of a network with link-costs and, for each pair fi; jg of nodes, an edge-connectivity requirement r ij . The goal is to find a minimum-cost network using the available links and satisfying the requirements. Our algorithm outputs a solution whose cost is within 2dlog 2 (r + 1)e of optimal, where r is the highest requirement value. In the course of proving the performance guarantee, we prove a combinatorial min-max approximate equality relating minimum-cost networks to maximum packings of certain kinds of cuts. 
As a consequence of the proof of this theorem, we obtain an approximation algorithm for optimally packing these cuts; we show that this algorithm has application to estimating the reliability of a probabilistic network. , 1996 "... Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider va ..." Cited by 219 (13 self) Add to MetaCart Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning such as computing degree of belief and Bayesian belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation, that use approximation to avoid computational difficulties, and reduce them to model-counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the size of clauses and number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses are distinguished by the e... - Journal of Algorithms , 1985 "... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co ..." Cited by 188 (0 self) Add to MetaCart This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’ ’ W. H. Freeman & Co., New York, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.) or open problems they would like publicized, should , 2009 "... Not to be reproduced or distributed without the authors ’ permissioniiTo our wives — Silvia and RavitivAbout this book Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990 alone could fill a book: these incl ..." Cited by 151 (2 self) Add to MetaCart Not to be reproduced or distributed without the authors ’ permissioniiTo our wives — Silvia and RavitivAbout this book Computational complexity theory has developed rapidly in the past three decades. 
The list of surprising and fundamental results proved since 1990 alone could fill a book: these include new probabilistic definitions of classical complexity classes (IP = PSPACE and the PCP Theorems) and their implications for the field of approximation algorithms; Shor’s algorithm to factor integers using a quantum computer; an understanding of why current approaches to the famous P versus NP will not be successful; a theory of derandomization and pseudorandomness based upon computational hardness; and beautiful constructions of pseudorandom objects such as extractors and expanders. This book aims to describe such recent achievements of complexity theory in the context of more classical results. It is intended to both serve as a textbook and as a reference for self-study. This means it must simultaneously cater to many audiences, and it is carefully designed with that goal. We assume essentially no computational background and very minimal mathematical background, which we review in Appendix A. We have also provided a web site for this book at - in ICDE , 2007 "... Modern enterprise applications are forced to deal with unreliable, inconsistent and imprecise information. Probabilistic databases can model such data naturally, but SQL query evaluation on probabilistic databases is difficult: previous approaches have either restricted the SQL queries, or computed ..." Cited by 137 (26 self) Add to MetaCart Modern enterprise applications are forced to deal with unreliable, inconsistent and imprecise information. Probabilistic databases can model such data naturally, but SQL query evaluation on probabilistic databases is difficult: previous approaches have either restricted the SQL queries, or computed approximate probabilities, or did not scale, and it was shown recently that precise query evaluation is theoretically hard. In this paper we describe a novel approach, which computes and ranks efficiently the top-k answers to a SQL query on a probabilistic database. The restriction to top-k answers is natural, since imprecisions in the data often lead to a large number of answers of low quality, and users are interested only in the answers with the highest probabilities. The idea in our algorithm is to run in parallel several Monte-Carlo simulations, one for each candidate answer, and approximate each probability only to the extent needed to compute correctly the top-k answers. The algorithms is in a certain sense provably optimal and scales to large databases: we have measured running times of 5 to 50 seconds for complex SQL queries over a large database (10M tuples of which 6M probabilistic). Additional contributions of the paper include several optimization techniques, and a simple data model for probabilistic data that achieves completeness by using SQL views. 1 , 1991 "... The problem of abduction can be characterized as finding the best explanation of a set of data. In this paper we focus on one type of abduction in which the best explanation is the most plausible combination of hypotheses that explains all the data. We then present several computational complexity r ..." Cited by 108 (3 self) Add to MetaCart The problem of abduction can be characterized as finding the best explanation of a set of data. In this paper we focus on one type of abduction in which the best explanation is the most plausible combination of hypotheses that explains all the data. We then present several computational complexity results demonstrating that this type of abduction is intractable (NP-hard) in general. 
In particular, choosing between incompatible hypotheses, reasoning about cancellation effects among hypotheses, and satisfying the maximum plausibility requirement are major factors leading to intractability. We also identify a tractable, but restricted, class of abduction problems. Thanks to B. Chandrasekaran, Ashok Goel, Jack Smith, and Jon Sticklen for their comments on the numerous versions of this paper. The referees have also made a substantial contribution. Any remaining errors are our responsibility, of course. This research has been supported in part by the National Library of Medicine, grant LM-... - in ICDCS , 2008 "... Topology control is an effective method to improve the energy efficiency of wireless sensor networks (WSNs). Traditional approaches are based on the assumption that a pair of nodes is either “connected ” or “disconnected”. These approaches are called connectivity-based topology control. In real envi ..." Cited by 91 (15 self) Add to MetaCart Topology control is an effective method to improve the energy efficiency of wireless sensor networks (WSNs). Traditional approaches are based on the assumption that a pair of nodes is either “connected ” or “disconnected”. These approaches are called connectivity-based topology control. In real environments however, there are many intermittently connected wireless links called lossy links. Taking a succeeded lossy link as an advantage, we are able to construct more energy-efficient topologies. Towards this end, we propose a novel opportunity-based topology control. We show that opportunity-based topology control is a problem of NPhard. To address this problem in a practical way, we design a fully distributed algorithm called CONREAP based on reliability theory. We prove that CONREAP has a guaranteed performance. The worst running time is O(|E|) where E is the link set of the original topology, and the space requirement for individual nodes is O(d) where d is the node degree. To evaluate the performance of CONREAP, we design and implement a prototype system consisting of 50 Berkeley Mica2 motes. We also conducted comprehensive simulations. Experimental results show that compared with the connectivity-based topology control algorithms, CONREAP can improve the energy efficiency of a network up to 6 times. 1 - In Proceedings of 20th International Joint Conference on Artificial Intelligence , 2007 "... We introduce ProbLog, a probabilistic extension of Prolog. A ProbLog program defines a distribution over logic programs by specifying for each clause the probability that it belongs to a randomly sampled program, and these probabilities are mutually independent. The semantics of ProbLog is then defi ..." Cited by 87 (14 self) Add to MetaCart We introduce ProbLog, a probabilistic extension of Prolog. A ProbLog program defines a distribution over logic programs by specifying for each clause the probability that it belongs to a randomly sampled program, and these probabilities are mutually independent. The semantics of ProbLog is then defined by the success probability of a query, which corresponds to the probability that the query succeeds in a randomly sampled program. The key contribution of this paper is the introduction of an effective solver for computing success probabilities. It essentially combines SLD-resolution with methods for computing the probability of Boolean formulae. Our implementation further employs an approximation algorithm that combines iterative deepening with binary decision diagrams. 
We report on experiments in the context of discovering links in real biological networks, a demonstration of the practical usefulness of the approach. 1
Central Meridian: A straight line.
Meridians: Complex curves spaced equally along the Equator and each parallel, and concave toward the central meridian.
Parallels: The Equator is a straight line. All other parallels are nonconcentric circular arcs spaced at true distances along the central meridian.
Poles: Normally circular arcs, enclosing the same angle as the displayed parallels.
Symmetry: About the Equator or the central meridian.

For this projection, each parallel has a curvature identical to its curvature on a cone tangent at that latitude. Since each parallel has its own cone, this is a "polyconic" projection. Scale is true along the central meridian and along each parallel. This projection is free of distortion only along the central meridian; distortion can be severe at extreme longitudes. This projection is neither conformal nor equal-area. By definition, this projection has no standard parallels, since every parallel is a standard parallel. This projection was apparently originated about 1820 by Ferdinand Rudolph Hassler. It is also known as the American Polyconic and the Ordinary Polyconic projection. Longitude data greater than 75° east or west of the central meridian is trimmed.

    landareas = shaperead('landareas.shp', 'UseGeoCoords', true);
    axesm('polycon', 'Frame', 'on', 'Grid', 'on');
    geoshow(landareas, 'FaceColor', [1 1 .5], 'EdgeColor', [.6 .6 .6]);
Numerical Integration

From Sutherland_wiki

Numeric integration is used in two general situations:

• we have discrete data and want to integrate it.
• we have an analytic function that we cannot integrate analytically and want to approximate it numerically.

There are many ways to perform numerical integration. We will consider a few here. These are all based on fitting a polynomial function to data and then using what we know about polynomials to obtain the integral.

Midpoint Rule

The midpoint rule approximates the integral of a function over some interval [a,b] by a constant. The best choice for the constant value in general will be the function value at the midpoint, (a+b)/2. In other words,

\[\int_{a}^{b} f(x) \mathrm{d}x \approx (b-a) f\left(\frac{b+a}{2}\right) \]

This is shown pictorially in the figure to the right.

Assume that the following flowrate measurements were taken for a river over a 5-day period.

  Day:             1      2      3      4      5
  Flowrate (CFS):  10000  12000  15000  13500  14400

Here the units of flowrate are in cubic feet per second. To get the units consistent, let's convert this into cubic feet per day by multiplying by the number of seconds per day. Then our data becomes

  Day:                  1          2          3          4          5
  Flowrate (ft^3/day):  8.64x10^8  1.04x10^9  1.30x10^9  1.17x10^9  1.24x10^9

We want to estimate the total volume of water that flowed through the river over this 5-day period. To do this, we could use the flowrate on day 3 (which is the midpoint of the time period) to find

\[\mathrm{Total \; volume} \approx (5-1) \cdot f(3) = 4 \cdot 1.30\times 10^9 \approx 5.18\times 10^{9} \; \mathrm{ft}^3 \]

The figure to the right shows this pictorially. However, we can see that the midpoint rule significantly overestimates the total volume (i.e. the integral) since day 3 happened to be the day where the flowrate was highest.

NOTE: the midpoint rule is not always useful for application to discrete data. This is because often the data is not really available at a "midpoint." Therefore, the trapezoid rule is most often used when discrete data needs to be integrated.

Trapezoid Rule

The trapezoid rule uses a linear approximation of the function over the interval [a,b], as shown in the figure to the right. The integral of this is the area of the trapezoid,

\[\int_{a}^{b} f(x) \mathrm{d}x \approx \frac{b-a}{2} \left[ f(a)+f(b) \right] \]

If we apply the trapezoidal rule to the example previously, we use the endpoints of the interval to find

\[\mathrm{Total \; volume} \approx \frac{5-1}{2} \left[ f(1) + f(5) \right] = 2 \cdot \left[ 8.64\times 10^{8} + 1.24\times 10^{9} \right] = 4.22\times 10^{9} \; \mathrm{ft}^3 \]

The trapezoid rule is depicted in the figure to the right.

Simpson's 1/3 Rule

This section is a stub and needs to be expanded. If you can provide information or finish this section you're welcome to do so and then remove this message afterwards.

Summary of Common Quadrature Formulas

Midpoint Rule: \(\int_{a}^{b} f(x) \mathrm{d} x \approx (b-a) f\left(\tfrac{b+a}{2}\right) \)
  • Requires function values at interval midpoints \(f\left(\tfrac{b+a}{2}\right)\).
  • Requires equally spaced data.

Trapezoid Rule: \(\int_{a}^{b} f(x) \mathrm{d} x \approx \tfrac{b-a}{2} \left[ f(b)+f(a) \right] \)
  • Can be applied to arbitrarily spaced data.
  • Convenient for tabulated data.

Simpson's 1/3 Rule: \(\int_a^b f(x) \mathrm{d} x \approx \tfrac{\Delta x}{3} \left[ f(a) + 4 f\left(\tfrac{a+b}{2}\right) + f(b) \right] \)
  • Requires three equally spaced points on the interval \([a,b]\).
  • On the interval \([a,b]\), we have \(\Delta x = \tfrac{b-a}{2}\) and \(x_i = a + i \Delta x\).

Composite Rules: Quadrature

This section is a stub and needs to be expanded. If you can provide information or finish this section you're welcome to do so and then remove this message afterwards.

Matlab Tools for Quadrature
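As a language-neutral sketch of the composite versions of the rules summarized above (not MATLAB-specific, and not part of the original page; the function names and the sin(x) sanity check are illustrative choices), each rule below subdivides [a,b] into n equal pieces and sums the contributions:

    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoid rule with n equal subintervals."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            total += f(a + i * h)
        return h * total

    def simpson(f, a, b, n):
        """Composite Simpson's 1/3 rule; n must be even."""
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            total += (4 if i % 2 else 2) * f(a + i * h)
        return h * total / 3.0

    # Sanity check against a known integral: the integral of sin(x) from 0 to pi is 2.
    print(trapezoid(math.sin, 0.0, math.pi, 100))   # ~1.99984
    print(simpson(math.sin, 0.0, math.pi, 100))     # ~2.0000000

Note that the worked river-flow example above used a single trapezoid over the whole five-day span; the composite version would instead sum four day-wide trapezoids over the tabulated values.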
Correlation matrix, efficient method "susan" wrote in message <iqbs9o$mf6$1@newscl01ah.mathworks.com>... > hello, > I have a matrix of x* y* z *176 from a fMRI imaging study. > I need to get a square matrix with correlation values. i.e. each 'voxel(x*y*z)' has a timeseries (176). I need to correlate this with every other voxel. > There has to be a more efficient way than for loops. So please help. > Thanks, > S - - - - - - - - - If your matrix is four dimensional of x by y by z by 176 size and if x, y, and z are large, not only is it a very large matrix, but the square correlation matrix you desire would have to be a whopping big (x*y*z)^2 in size! I hope your computer has room for such a monstrous entity. In any case, my recommendation would be to treat each of voxels' individual time series separately in such a way that its sum is zero and the sum of its squares is one. Then all the correlation values can be calculated as the sum of the various paired products, which is a much simpler computation than doing a correlation calculation for each such pair from the original values. If v is one of these voxel time series, subtract mean(v) from each element, getting v1. Then divide each v1 element by the square root of the sum of its squares, getting v2. Then the sum of the v2 will be zero and the sum of its squares will be one. The time spent doing this in comparison to your main task will be minuscule. If V is a matrix of (x*y*z) by 176 size containing these adjusted v2 quantities, then you can get your desired correlation matrix by simple matrix multiplication: R = V*V.'; However, this will be necessarily be extremely time-consuming and memory-filling, even with the above simplifications. A short cut would be to replace this matrix multiplication by for-loops that only handle each possible pair once rather than twice, which would cut the time down by half though using slower for-loop operations. You would still have the problem of storage space. Roger Stafford
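For anyone following along outside MATLAB, the normalize-then-multiply approach Roger describes looks roughly like this in Python/NumPy (the array names and toy sizes are illustrative, not from the thread; the memory caveat still applies, since R has (x*y*z)^2 entries):

    import numpy as np

    # V: one row per voxel time series, e.g. shape (x*y*z, 176); toy sizes here.
    V = np.random.rand(1000, 176)

    V = V - V.mean(axis=1, keepdims=True)              # make each row sum to zero
    V = V / np.linalg.norm(V, axis=1, keepdims=True)   # make each row's sum of squares one

    R = V @ V.T        # correlation of every voxel's time series with every other voxel's

    # If R is too large to hold at once, compute it block by block:
    # for each block of rows B, form B @ V.T and store or summarize that slab.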
Mead, CO Math Tutor

Find a Mead, CO Math Tutor

...I have taught all levels of elementary school math from kindergarten through 6th grade and beyond! I love working with students to discover where challenges lie and finding fun and engaging ways to help them learn! I have taught all types of elementary school and middle school science for 19 years in the public school classroom and as a tutor.
14 Subjects: including algebra 1, algebra 2, vocabulary, grammar

...I love math, and I enjoy helping students understand it as well. When working with a student, I usually try to show the student how I understand the material first. Then upon understanding the student's learning style, I will incorporate a variety of methods to help the student.
17 Subjects: including calculus, statistics, trigonometry, discrete math

...I believe that historical and cultural context is a very important part of teaching, especially with science and technology. Understanding the people and ideas that drove many scientific and engineering discoveries sheds light on the scientific process. This understanding makes science and engineering much more relatable and engaging subjects to many students.
15 Subjects: including algebra 1, algebra 2, biology, calculus

...I am a senior enrolled at the University of Colorado at Boulder studying International Affairs and Political Science with a minor in economics. I have earned a degree in Biotechnology at the Delaware Technical Community College. I have extensive course work in chemistry, biology, politics, economics, and physics with experience in many other fields.
39 Subjects: including algebra 2, public speaking, elementary (k-6th), elementary math

...In addition, I have demonstrated technical writing skill in my volunteer work. Finally, I have taught grammar to both of my children in home school since 2008 using 3-4 different methods, including Saxon Grammar. I have home schooled both children (ages 8 and 11) and they excel in elementary math; both test above grade level.
7 Subjects: including algebra 1, chemistry, grammar, prealgebra
CS 314: Vocabulary A*: a heuristic search algorithm that attempts to find a desired goal using a heuristic function to estimate the distance from a given node to the goal. abstract data type: a description of operations on a data type that could have multiple possible implementations. acyclic: describes a graph with no cycles (circular paths). adjacency list: a representation of a graph in which each node has a list of nodes that are adjacent to it, i.e. connected to it by an arc. adjacency matrix: a representation of a graph in which a boolean matrix contains a 1 at position (i,j) iff there is an arc from node i to node j. ancestors: in a tree, the union of a node's parent and the parent's ancestors. arc: a link between two nodes in a graph. array: A contiguous block of memory containing elements of the same type, accessed by numeric index. ASCII: (pronounced "ask-key") an abbreviation of American Standard Code for Information Interchange, a character code that maps between 8-bit binary integers and characters such as letters, numbers, and punctuation. There are 256 possible ASCII codes, of which 95 are printable. association list: a list of pairs, where each pair has a key and a value associated with the key. AVL tree: a self-balancing sorted binary tree, in which the heights of subtrees differ by at most 1. B-tree: a tree with a high branching factor, to minimize the number of disk accesses required to access a desired record. backtrack: in a tree search, to move back from the node currently being examined to its parent. balanced tree: a tree in which the heights of subtrees are approximately equal. bandwidth: information transfer rate of a network connection, in bits/second. base case: a simple case that can be solved easily, without recursion. Big O: an abstracted function that describes the amount of computer time or memory space required by an algorithm, as a function of problem size. For problems larger than a certain size, the actual time or space required will be less than the Big O multiplied by some constant. bijective: describes a relation that is both injective and surjective (one-to-one and onto). binary heap: a data structure that implements a complete binary tree within an array, such that every parent node has a value that is less than the value of either of its children. binary tree: a tree in which each node has at most two children. binary search: search of a binary tree or other structure, in which the size of the set to be searched is cut in half at each step. binding: an association of a name with a value. binding list: a list structure that represents a set of bindings. bit: short for binary digit, the smallest unit of computer memory. A bit can have the value 0 or 1. Boolean matrix: a matrix whose elements are 0 or 1. boxed number: a number that is defined as an object, so that it has a runtime type and methods that can be used, e.g. Integer in Java. branching factor: in a search tree, the number of children of a given node. Often, the branching factors of individual nodes will vary, so an average value may be used. bucket: a collection, such as a linked list, of values that hash to the same value. byte: an 8-bit piece of data, which can represent a character in a code such as ASCII. bytecodes: the term used for the language of compiled Java. A given machine may have an interpreter for bytecodes (the JVM), or it may translate the bytecodes to native machine code so that it runs cache: to save a value locally to save re-computing or transferring it in the future. 
Cartesian product: a set of pairs (x, y) of elements from two sets X and Y. child: in a tree, a node pointed to by a parent node. circularly linked list: a linked list in which the last element points back to the first element. circular queue: a queue implemented within an array, where the first element of the array logically follows the last element. class: in object-oriented programming, a description of a set of similar objects. clustering: a situation in which many elements hash to the same hash value. collision: when two values to be stored in a hash table have the same hash value. comparison: the act of comparing two values to determine which is greater according to some ordering. cons: 1. in Lisp, the function that constructs a pair of pointers, or basic element of list structure. 2. a cons data structure. 3. to make a cons data structure. constructive: describes a function that makes a new data structure but does not modify its arguments. CPU: Central Processing Unit, the "brain" of a computer, which performs operations on data. critical path: in a PERT chart or scheduling graph, a path from the initial state to the goal such that any increase in time required along the critical path will increase the time to complete the whole project. cycle: a circular path in a graph. DAG: directed acyclic graph. dense graph: a graph such that a large fraction of possible connections among nodes are present, i.e. the number of edges is of the order of the number of vertices squared. cf. sparse graph. depth: the number of links between the root of a tree and the leaves. depth-first search: a search in which children of a node are considered (recursively) before siblings are considered. dereference: to convert from a pointer (address) to the data that is pointed to. descendants: all nodes below a given node in a tree. design pattern: a pattern that describes a set of similar programs. destructive: describes a function that modifies its arguments. DFS: depth-first search. Dijkstra's algorithm: an optimal greedy algorithm to find the minimum distance and shortest path to all nodes in a weighted graph from a given start node. directed: describes an arc that can only be traversed in one direction, or a graph with such arcs. directed acyclic graph: a directed graph with no cycles. Every tree is a DAG, but a DAG may be more general. discrete event simulation: a simulation in terms of events, in which the highest-priority (least time) event is removed from an event queue and executed, which may have the effect of scheduling future events. divide and conquer: a problem-solving strategy in which a problem is broken down into sub-problems, until simple subproblems are reached. domain: the set of values that are the source values of a mapping. doubly linked list: a linked list in which each element has both forward and backward pointers. edge: a link or arc between nodes in a graph. exclusive or: a binary Boolean function whose output is 1 if its inputs are different. Abbreviated XOR. external sort: a sort using external storage such as disk in addition to main memory. fair: describes a process in which every arriving customer will eventually be served. FIFO: first-in, first-out: describes the ordering of a queue. A queue is fair. filter: a process that removes unwanted elements from a collection. first-child/next-sibling: a way of implementing trees that uses two pointers per node but can represent an arbitrary number of children of a node. 
fold: to process a set of items using a specified function; another term for reduce. garbage: 1. data that is incorrect, meaningless, or random; 2. storage that is no longer pointed to by any variable and therefore can no longer be accessed. garbage collection: the process of collecting garbage for recycling. gedanken: describes a thought experiment or view of an entity. geometric series: a series in which each successive term is multiplied by a constant less than 1, e.g. 1 + 1/2 + 1/4 + 1/8 + ... goal: an item (or description of items) being sought in a search. grammar: a formal description of a language in terms of vocabulary and rules for writing phrases and sentences. gradient ascent: a method of finding the value x where f(x) is maximum by taking steps proportional to the gradient or slope of the function. Also called steepest ascent, or gradient descent or steepest descent if the minimum of the function is sought. graph: a set of nodes and arcs connecting the nodes. greedy algorithm: an algorithm that always tries the solution path that appears to be the best. "Eat dessert first." hash function: a function that is deterministic but randomizing, i.e. whose output is a relatively small integer that appears to be a random function of the key value. hashing with buckets: a hash table in which an item's hash value gives the index of a pointer to a bucket, an auxiliary structure containing the items with the same hash value. Using a linked list for a bucket is called separate chaining. heuristic: a function that estimates the distance from a given node to the goal in A* search. More generally, a method that generally gives good advice about which direction to go or how to approach a problem. heuristic search: A* search. immutable: describes a data structure that cannot be changed once it has been created, such as Integer or String in Java. in-place: describes a sort that does not require any additional memory. injective: describes a mapping in which each element of the domain maps to a single element of the range. Also, one-to-one. inorder: an order of processing a tree in which the parent node is processed in between its children. interior node: a node of a tree that has children. internal sort: a sort using only the main memory of the computer. interpreter: a program that reads instructions, determines what they say, and executes them. The CPU is an interpreter for machine language; the JVM is an interpreter for compiled Java bytecodes. intersection: given two sets, the intersection is the set of elements that are members of both sets. intractable: a problem that is so hard (typically exponential) that it cannot be solved unless the problem is small. iterator: an object containing data and methods to iterate through a collection of data, allowing processing of one data item at a time. JVM: Java Virtual Machine, an interpreter for compiled Java bytecodes. latency: the delay between asking for data from an I/O device and the beginning of data transfer. leaf: a tree node containing a contents value but with no children. LIFO: last-in, first out: describes the order of a stack. linear: O(n), a problem whose solution requires a linear amount of time or space if the problem is of size n. link: a pointer to the next element in a linked list. linked list: a sequence of records, where each record contains a link to the next one. load factor: in a hash table, the fraction of the table's capacity that is filled. 
map: in MapReduce, a program that processes an element of the input and may emit one or more (key, value) pairs. In Java, a Map is a data structure that implements a mapping, such as HashMap or mapcan: in Lisp, a program that applies a mapping function to each element of a list of inputs, producing a list that concatenates corresponding results. The mapping function produces a list of results for each input, which allows it to produce multiple results or an empty result. mapcar: in Lisp, a program that applies a mapping function to each element of a list of inputs, producing a list of corresponding results. mapping: association of elements of a Range set with elements of a Domain set. We write M : R → D , for example PhoneDirectory : Name → Number . master: a program that controls a set of other programs or devices. max queue: a priority queue in which the maximum element is removed first. memory hierarchy: the use of several kinds of memory hardware in a computer system, where the fastest memory (e.g. cache) is smallest, slower memory (e.g. RAM) is larger, and the slowest memory (e.g. disk) is largest. memory locality: the processing of data in such a way that data that are located near each other by memory address are accessed nearby in time. merge: to combine two ordered linear structures into one. min queue: a priority queue in which the minimum element is removed first. minimum spanning tree: a tree formed from the nodes of a graph and a subset of its edges, such that all nodes are connected and the total cost of the edges is minimal. node: an element of a linked list, tree, or graph, often represented by a data structure. null dereference: a runtime error that occurs when an operation such as a method call is attempted on a null pointer. object: a data structure that can be identified at runtime as being a member of a class. on-line: describes a sorting algorithm that can process items one at a time. one-to-one: describes a mapping in which each element of the domain maps to a single element of the range. Also, injective. onto: describes a mapping in which each element of the range is the target of some element of the domain. Also, surjective. ontology: a description of the kinds of objects that exist in a computer program, e.g. a Java class hierarchy. operator: in a search tree, a program that changes a state into a child state, e.g. a move in a game. parent: in a tree, a node that points to a given node. parsing: analysis of a sentence of a language to determine the elements of the sentence and their relationship and meaning. path: a sequence of steps along arcs in a graph. pattern: a representation of a class of objects, containing some constant elements in relation to variable elements. pattern variable: a part of a pattern that can match variable parts of an input. pivot: in Quicksort, a "center" value used in partitioning the set to be sorted. pointer: a variable containing the address of other data. postorder: an order of processing a tree in which the parent node is processed after its children. predicate: a function that returns True or False. In Lisp, {\tt nil} represents False, and anything else represents True. preorder: an order of processing a tree in which the parent node is processed before its children. priority queue: a queue in which the highest-priority elements are removed first; within a priority value, the earliest arrival is removed first. quadratic: O(n^2), a problem whose solution requires a quadratic amount of time or space if the problem is of size n. 
queue: a data structure representing a sequence of items, which are removed in the same order as they were inserted. random access: describes a data structure or device in which all accesses have the same cost, O(1). randomized algorithm: an algorithm in which the data to be processed or the deice to process it is randomly selected. range: a set of values that are the targets of a mapping. recursion: a case where a program calls itself. recursive case: a condition of the input data where the data will be handled by call(s) to the same program. Red-Black tree: a self-balancing binary tree in which nodes are "colored" red or black. The longest path from the root to a leaf is no more than twice the length of the shortest path. reduce: to apply a given function to the elements of a given list. Also, fold. reference: a pointer to data. reference type: a type in which variables of that type are pointers to objects. In the code Integer i = 3, the variable i holds a pointer to the Integer object that contains the value. In int j = 3, the variable j contains the value. In Java, only reference types have methods. rehash: to apply a different hashing function to a key when a collision occurs. root: the top node of a tree, from which all other nodes can be reached. row-major order: a way of storing a multiply-dimensioned array in memory, such that elements of a row are in adjacent memory addresses. runtime stack: a stack containing a stack frame of variable values for each active invocation of a procedure. scalability: the ability of an algorithm or hardware system to grow to handle a larger number of inputs. scope: the area of program text over which a variable can be referenced. search: to look through a data structure until a goal object is found. sentinel: an extra record at the start or end of a data structure such as a linked list, to simplify the processing. separate chaining: hashing with buckets, using a linked list to store the contents of a bucket. set difference: given two sets, the set difference is the set of elements of the first set that are not members of the second set. shadow: to hide similar items with the same name. shortest path: the shortest path between a start node and a goal node in a weighted graph. side-effect: any effect of a procedure other than returning a value, e.g. printing or modifying a data structure. simple path: a path between two nodes in a graph that does not revisit any intermediate node. slack: in a PERT chart or scheduling graph, the amount of time by which the time of an activity could be increased without affecting the overall completion time. slave: a program or device that operates under control of a master. sort: to modify the order of a set of elements so that a desired ordering holds between them, e.g. alphabetic order. sparse array: an array in which most of the elements are zero or missing. sparse graph: a graph in which any node is connected to relatively few other nodes. cf. dense graph. spatial locality: being close together in space, i.e. memory address. Splay tree: a self-balancing binary tree that places recently accessed elements near the top of the tree for fast access. stable: describes a sort algorithm in which the relative position of elements with equal keys is unchanged after sorting. stack frame: a section of the runtime stack holding the values of all variables for one invocation of a procedure. stack space: the amount of space on the runtime stack required for execution of a program. 
state: a description of the state of a process, such as a board game. structure sharing: a case where two data structures share some elements. successor: the next element in a linked list. surjective: describes a mapping in which each element of the range is the target of some element of the domain. Also, onto. symbol table: a data structure that links names to information about the objects denoted by the names. tail recursive: a function whose value either does not involve a recursive call, or is exactly the value of a recursive call. taxonomy: a classification of objects into a tree structure that groups related objects. temporal locality: being close together in time, i.e. memory accesses that occur within a short time of each other. topological sort: a linear ordering of nodes of an acyclic graph, such that a node follows all of its graph predecessors in the ordering. tree rotation: changing the links in a binary tree to change the relative heights of the child subtrees, while leaving the sort order of the tree unchanged. undirected: describes a graph in which the arcs may be followed in either direction. Unicode: a character code that maps between binary numbers and the characters used in most modern languages, more than 110,000 characters. The lowest values of the UTF-8 encoding of Unicode are the same as ASCII, allowing characters to be 8 bits when staying within the ASCII character set. For other languages, more bits are used. Java uses Unicode. union: given two sets, the union is the set of elements that are members of either set. unparsing: converting an abstract syntax tree into a sentence in a language, such as a programming language. vertex: a node in a graph. virtual machine: an abstract computer that is simulated by an interpreter program running on an actual computer. weight: a number that denotes the cost of following an arc in a graph. well-founded ordering: an ordering that can be guaranteed to terminate, e.g. starting at a positive integer and counting down to 0. word: a group of bits that are treated as a unit and processed in parallel by a computer CPU. Common word sizes are 32 bits and 64 bits. XML: eXtensible Markup Language, a way of writing data in a tree-structured form by enclosing items in pairs of opening and closing tags, e.g. <zipcode> 78712 </zipcode> XOR: exclusive or. CS 314
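The hashing entries above (hash function, collision, separate chaining, load factor) fit together in a few lines of code. The following Python sketch is added purely for illustration and is not part of the CS 314 materials; the class name and default capacity are arbitrary choices.

class ChainedHashTable:
    """Tiny hash table using separate chaining: each slot holds a bucket (a list)."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.size = 0
        self.buckets = [[] for _ in range(capacity)]   # one bucket per slot

    def _index(self, key):
        # hash function: deterministic but randomizing; modulo maps it to a slot
        return hash(key) % self.capacity

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: replace its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # collision or empty slot: chain onto the bucket
        self.size += 1

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

    def load_factor(self):
        # fraction of the table's capacity that is filled
        return self.size / self.capacity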
{"url":"http://www.cs.utexas.edu/users/novak/cs314vocab.html","timestamp":"2014-04-16T12:20:18Z","content_type":null,"content_length":"22619","record_id":"<urn:uuid:3b15f91e-e6e3-461d-9426-688818a04028>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Matter, Time, Energy.... Theory of Exclusive Relativity In the past two months, political and economic news has been redundant. The same players in politics with very little cognitive ideas and an economic constant picture instead of an evolving video. When things are this consistent, I get bored. Thus, I am writing a scientific article based on extrapolation of recent academic findings. This article will be called a "NEW THEORY OF EXCLUSIVE RELATIVITY". If you are now starting to laugh, I don't blame you, but bear with me a little longer, for I do not jest and will attempt to validate this theory based on Quantum physics and Einstein's quest for a general theory of relativity. I will elucidate in layman's language, for I believe esoteric prose should be resigned to those that are so deeply involved in their thoughts, that they frequently limit their vision and effectiveness. The principle of relativity also known as the Special Theory of Relativity, which treats space and time on an equal footing such that the velocity of light is constant in this four dimensional space-time. It implies that space and time can transform among each other in different inertial systems. Einstein proposed the General Theory of Relativity. In general relativity, it is postulated that the curvature of space-time determines gravity. The mass-energy generates the curvature of space-time and particles moving along the geodesic in this four dimensional curved space. The geodesic is the shortest distance between two points. It is a straight line only in Euclidean space (flat space); it would be different in the curved space (Riemann space) { E=Mc² }. On the other side of physics is Quantum physics. This involves quarks as an elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons, the components of atomic nuclei. I'll further simplify to only spin identified neutrinos. Quarks are the only elementary particles in particle physics to experience all four fundamental forces (electromagnetism, gravitation, strong interaction, and weak interaction), as well as the only particles whose electric charges are not integer multiples of the elementary charge. For every quark flavor there is a corresponding type of antiparticle, known as anti-quark, that differs from the quark only in that some of its properties have equal magnitude but opposite sign. A tachyon (mentioned above) is a hypothetical particle that travels faster than the speed of light and has an imaginary rest mass. General relativity and quantum theory describe two extreme of physics - the former is on the very large scale up to the whole universe, while the latter is on the smallest possible scale down to the size of elementary particles. However, there are many instances in which both general relativity and quantum theory are equally important and a common framework is essential. One obvious such instance is in the very early universe, immediately after the Big Bang, when the size is very small and the curvature of space-time were nearly infinite. Much of the difficulty in merging these theories comes from the radically different assumptions that these theories make on how the universe works. Quantum field theory depends on particle fields embedded in a fixed flat space-time, while general relativity models gravity as space-time curvature that changes as mass moves. 
Recent experiments (9/22/11) where scientists at CERN, the world's largest physics lab near Geneva, stunned the world of science by announcing they had observed tiny particles known as neutrinos travelling slightly faster than light. According to Einstein, that is impossible! If E=Mc², then c=√f(E/M) and M=C²/E. To merge these theories and grant that the speed of light is a constant, time is important. At any moment c is constant, but when introduced to the fourth dimension of time, the speed of light exceeds c. To explain this, only at the moment when M and E merge (Big Bang Theory) can c² dilute to unity and E=Mc. After this event, Einstein's theory is irresolute because Energy and Matter have separated from unity. Think of the time space concept. It is stated that as the speed of matter approaches the speed of light, time essentially stands still, but cannot go backwards. This is true, in that once M actually equaled c, then there is no light, as all matter and energy have been condensed into an infinitesimally small ball of pure ME. This situation occurs only in two event horizons. One is the big bang, the other is on the small scale of a black hole, or a leak between time and matter. We all understand the concept of multiple universes as proposed in string theory, but this theory weakens when the recent CERN experiments prove it to be inconsistent. There are not multiple universes. In the Exclusive Theory of Relativity, matter and energy cannot exist together unless c is K. View that as time allowing the two to exist nearly side by side, separated by an infinitesimal shift in time. As a matter of simplicity, think of energy as the potential merging of matter and anti-matter by nature or man as the time dimension is momentarily breached. As the quantitative shift becomes smaller, then the energy release becomes larger. Expressed mathematically, E=Mc²∫(time). This is a concept and needs proof, but it does merge the two into a unified whole. I propose that as scientists learn more about quarks and photons and see more examples of instantaneous shifts in spin as recorded in normal time, I believe this concept will become more apparent. Thanks for your time.
{"url":"http://bestcom.newsvine.com/_news/2011/10/06/8186614-matter-time-energy-theory-of-exclusive-relativity","timestamp":"2014-04-19T17:56:58Z","content_type":null,"content_length":"48562","record_id":"<urn:uuid:0e18d54c-c0cc-4b67-8e7b-9920a891bb3a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
sifted colimit
A sifted colimit is a colimit of a diagram $D \to C$ where $D$ is a sifted category (in analogy with a filtered colimit, involving diagrams of shape a filtered category). Such colimits commute with finite products in Set by definition. A motivating example is a reflexive coequalizer. In fact, sifted colimits can "almost" be characterized as combinations of filtered colimits and reflexive coequalizers.
• P. Gabriel and F. Ulmer, Lokal präsentierbare Kategorien, Springer LNM 221, Springer-Verlag 1971
• J. Adamek, J. Rosicky, E.M. Vitale, What are sifted colimits?, TAC 23 (2010) pp. 251–260.
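Concretely, the commutation with finite products can be unpacked as follows (a standard reformulation, stated here for illustration): for all functors $F, G : D \to \mathrm{Set}$ the canonical comparison map
$$\operatorname{colim}_{d \in D}\big(F(d) \times G(d)\big) \longrightarrow \Big(\operatorname{colim}_{d \in D} F(d)\Big) \times \Big(\operatorname{colim}_{d \in D} G(d)\Big)$$
is an isomorphism. A reflexive coequalizer is the coequalizer of a pair of parallel arrows $X_1 \rightrightarrows X_0$ that admit a common section, and diagrams of this shape are sifted.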
{"url":"http://www.ncatlab.org/nlab/show/sifted+colimit","timestamp":"2014-04-19T11:57:20Z","content_type":null,"content_length":"25609","record_id":"<urn:uuid:7da9ca9c-bde5-4f04-b1d4-0070d7274544>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Mount Hamilton Algebra 2 Tutor ...I have years of experience with not only the software and applications that run on computer systems but also the hardware. I have written a pc bios, scientific computing and control applications for a jet propulsion laboratory. I am familiar with the Office applications from Microsoft as well as their compilers. 19 Subjects: including algebra 2, chemistry, calculus, Spanish ...I do have a passion for Math because I have my Bachelors of Science in Mathematics and Masters of Science in Actuarial Science. I look forward to hearing from you on how I can help you with Math!I currently work at a health care and consulting firm and have used Microsoft Outlook on a daily basi... 9 Subjects: including algebra 2, geometry, algebra 1, trigonometry ...Let me help you learn the concepts behind Math subjects and tackle homework and test questions. I offer Math tutoring in Morgan Hill. I have a B.S. in Biochemistry from UC Davis and have worked in the Biotech field for over 10 years. 5 Subjects: including algebra 2, geometry, algebra 1, prealgebra ...I have taught Algebra 1, Algebra 2, Earth Science, Biology, Biology Honors, Chemistry, Physics and Zoology. I am a patient person and I try to understand each student's needs and what they will need to be most successful. I understand the importance of truly teaching the material so that the st... 16 Subjects: including algebra 2, chemistry, physics, geometry ...I am constantly evolving my theoretical understanding of the both the guitar and music theory alike and whoever studies with me will have access to what I perceive to be the cutting edge of musical and guitar oriented knowledge. I know more about music theory than any one person I have ever met ... 28 Subjects: including algebra 2, chemistry, algebra 1, physics
{"url":"http://www.purplemath.com/Mount_Hamilton_algebra_2_tutors.php","timestamp":"2014-04-18T13:42:16Z","content_type":null,"content_length":"24172","record_id":"<urn:uuid:d08a756b-c3e7-4666-8cd3-2346367b76e9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
Numb3rs 109: Sniper Zero
In this episode the FBI investigates a bizarre string of sniper attacks which seem to have little in common. To determine the location of the sniper in each shooting, Charlie uses ballistic trajectory modelling. Exponential growth and regression to the mean are also briefly mentioned, and the first of these we explore in depth below.
Ballistic Trajectory
A bullet, like any other object flying through the air, is subject to the forces of gravity, air resistance, and wind. One way to closely approximate the actual trajectory is to ignore the effects of drag and wind, instead looking only at gravity. Consider the figure to the right. A bullet leaves the barrel of a gun inclined at a 30 deg angle and flies a horizontal distance of d before reaching the starting elevation. The force of gravity acts on the bullet, creating a downward acceleration of g=9.8 m/sec^2 and so influencing the vertical component of the velocity vector (see diagram below) over time. Since we disregard drag, the horizontal component of the velocity does not change. The next activity involves figuring out the equations describing the speed and position of an object in freefall. These derivations make some use of calculus. Try to follow them and do the exercises, but if you can't, just use the equations mentioned in order to do activity 2. Technically, the term velocity means the vector pointing in the direction of motion with magnitude equal to the speed of the object. However, in everyday usage and even in many physics textbooks the term velocity is used to denote both the vector and its magnitude, the speed. It is usually easy to figure out which is being meant from the context: just ask yourself, is the sentence talking about a vector or a scalar?
Activity 1: Let us first consider only the vertical direction of motion. For the sake of brevity, I'll just write v(t) below instead of v[vertical](t). 1. Recall that the acceleration of an object is equal to the instantaneous change in velocity, i.e. a(t)=v'(t). Apply the fundamental theorem of calculus to this equality to deduce that v(t) = v(0) - gt. 2. We can do even better by applying the same trick to velocity. Namely, we know that v(t)=y'(t), where y(t) is the vertical position of the object at time t. Apply the fundamental theorem of calculus again to show that y(t) = y(0) + v(0)t - (1/2)gt^2. 3. Now write down an equation for the horizontal position x(t) of the bullet in terms of the initial horizontal velocity v[horizontal](0). (Hint: remember, we disregard drag and wind so the only acting force is gravity.)
Activity 2: Suppose the bullet is fired at an angle of 30 deg as in the first picture, with a speed of 900m/s from an initial point x(0)=0 and y(0)=0 (i.e. from the origin) at time t=0. Use the equations from activity 1 to answer the following questions. 1. What is the maximum height achieved by the bullet? At what time is this height achieved? (Hint: what is the vertical velocity of the bullet when it's at a peak height?) 2. What is the horizontal distance of the bullet from the origin at the time of peak height? What is the distance to the point at which the bullet is again at the height from which it was fired, that is, y=0? 3. Show that the trajectory of the bullet is a parabola.
Analysing the general situation, in which both wind and drag affect the path of a bullet, is in fact very complicated. You can get a taste of the difficulties involved by reading the wikipedia article on external ballistics.
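For readers who want to check their answers to Activity 2 numerically, here is a rough Python sketch under the same no-drag assumptions; it is illustrative only and not part of the original activity.

import math

g = 9.8                         # downward acceleration, m/s^2
speed = 900.0                   # muzzle speed, m/s
angle = math.radians(30)

v0y = speed * math.sin(angle)   # initial vertical velocity
v0x = speed * math.cos(angle)   # horizontal velocity (constant, since drag is ignored)

t_peak = v0y / g                                  # vertical velocity is zero at the peak
y_peak = v0y * t_peak - 0.5 * g * t_peak**2       # maximum height
x_peak = v0x * t_peak                             # horizontal distance at peak
t_land = 2 * t_peak                               # time to return to y = 0, by symmetry
x_land = v0x * t_land                             # horizontal range back to firing height

print(f"peak after {t_peak:.1f} s at height {y_peak:.0f} m")
print(f"horizontal distance at peak: {x_peak:.0f} m, range d: {x_land:.0f} m")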
Furthermore, mathematically recreating the path of a bullet after it has hit a target, thus only knowing its angle of entry, is much harder.
Exponential Growth
Here are a few recent uses of the term exponential growth in the news media: The company has had a spectacular two years, riding the exponential growth in oil prices that helped to increase profits by a fifth in 2006 to £28.5 million. (Business Big Shot: Alasdair Locke, The Times, Dec 20, 2007) After years of exponential growth, there has recently been a slow down in the Northern Ireland property market. (Well-known property firms merge, BBC News, Dec 7, 2007) Kessler himself came under university scrutiny for alleged financial irregularities. In January 2005, an anonymous source contended he "spent or formally committed all of the reserves of the dean's office and has also incurred substantial long-term debt in the form of lavish salary increases and exponential growth in new, highly compensated faculty and staff directly reporting to him." (UCSF dean is fired, cites whistle-blowing, Los Angeles Times, Dec 15, 2007)
While the above excerpts describe growth in entirely different areas, the one thing they have in common is the use of the term exponential growth. In mathematics, we say that quantity x grows exponentially with respect to time t if x satisfies the following differential equation: dx = kx dt, or equivalently dx/dt = kx, for some constant k.
Activity 3: 1. Suppose yesterday you heard that annual inflation was 3% in the last year. If x is the price of a representative basket of goods, and t is measured in years, what is the corresponding proportionality constant k in the exponential growth equation that models the price increase? (Hint: note that in this case dt=1 year.) 2. What if t is measured in days instead?
The reason why such growth is called exponential is that when the time variable t is continuous, we can solve the differential equation to get x(t) = De^(kt), where D is a constant. We can solve for D by plugging in t = 0, the starting time, to arrive at the general solution x(t) = x(0)e^(kt). [Figure: a plot comparing 2^t versus t^3]
Activity 4 1. Find a constant r so that 2^t=e^rt. 2. Show that 2^t becomes larger than any polynomial in t, for sufficiently large t. (Hint: suppose p(t)=t^n for some positive integer n. For which t is 2^t > p(t)?) 3. Can you think of a function f(t) which grows faster than an exponential function, in the sense of part 2 above?
In practice, when talking about compound interest two quantities are important. One is the annual interest rate, sometimes called the annual percentage rate (APR). The other is the number of compounding periods per year: how many times per year is the interest added to the principal amount. For instance, say you have $100 credit card debt with an APR of 20%. Usually credit cards compound monthly, so there are 12 compounding periods per year. Thus if you make no payments (and incur no additional penalties or expenses) for a whole year, your debt will not simply be 100+100*0.2=120, which it would if the interest was compounded only once per year. Instead, after the first month, you'll owe 100+100*(0.2/12)=101.67 dollars. After the second month, you'll owe 101.67+101.67*(0.2/12)=103.36 dollars, and so on. At the end of the year, with such monthly compounding, you'll owe $121.94. Might not seem like a huge difference from the once a year compounding sum of $120, but over longer periods of time, the difference becomes substantial.
Activity 5 1. You open a savings account which earns 2% interest with a deposit of $1000. Would you rather the interest compound daily or monthly?
Write down the formula for the amount of money in the account after a year in both cases. (Hint: write down the expression for the amount of money after one period of compounding, now after two periods (don't simplify!), then three... See the pattern?) 2. Suppose we decide to compound not once a month or a day, but once every split second. In fact, we can let the number of compounding periods go to infinity, thus letting the length of each period approach zero. Use the fact that (1 + x/n)^n approaches e^x as n goes to infinity to show that in this limit the account holds 1000e^(0.02t) dollars after t years. This is continuous compounding.
In popular usage, the expression "exponential growth" is often used as a synonym for "very fast growth". There's no good reason to describe faculty hiring practices, as the third quote in the beginning of this section does, in terms of exponential growth. While an exceptional number of faculty might have been added during Kessler's tenure as dean, there's no sense in which "faculty makes more faculty" proportionally to existing numbers. At other times, "exponential growth" can be more accurately described as sigmoidal (remember that strange function used in logistic regression?). While similar in the low range to the exponential function, sigmoidal growth reflects the fact that at some point growth must slow down due to lack of resources.
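A quick numeric check of the compounding discussion, using the same numbers as above (an illustrative Python sketch, not part of the original activities):

import math

# $100 credit-card balance, 20% APR, compounded monthly for one year
balance = 100 * (1 + 0.20 / 12) ** 12
print(f"monthly compounding: ${balance:.2f}")      # about $121.94, versus $120.00 with annual compounding

# Activity 5: $1000 deposit at 2%; more frequent compounding approaches the continuous limit
for n in (1, 12, 365, 10**6):
    print(n, round(1000 * (1 + 0.02 / n) ** n, 4))
print("continuous limit:", round(1000 * math.exp(0.02), 4))   # 1000 * e^0.02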
{"url":"http://www.math.cornell.edu/~numb3rs/kostyuk/num109.htm","timestamp":"2014-04-17T18:24:19Z","content_type":null,"content_length":"12722","record_id":"<urn:uuid:a13fbefb-0bdb-41d6-9416-bfb880404224>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Integration problem: $\int_{-\pi}^{\pi} | \log( | 1 + \exp(- I \nu ) | ) | \mathrm{d}\nu < \infty$
Hello, I'm trying to bound an integral. I have a function $A(\nu) = | 1 + \exp(-I \nu) |$ (with $I$ being the imaginary unit) and I want to show that the condition (Paley-Wiener criterion for causality) applies $$\int_{-\infty}^{\infty} \frac{|\log(A(\omega))|}{1+\omega^2} \mathrm{d}\omega < \infty$$ (log is the natural logarithm) I used a transformation from $\omega$ to $\nu$: $\omega = \tan(\nu/2)$ and I converted the integral by substitution to $$\int_{-\pi}^{\pi} | \log(A(\nu)) |\mathrm{d}\nu < \infty$$ But I don't know how to show that this condition applies for the given function $A(\nu)$. I tried to simplify the problem by using $A(\nu) = | 1 + \exp(-I \nu) | = \sqrt{(1+\exp(-I\nu))(1+\exp(I\nu))}$ and thus simplifying the integral to $$\frac{1}{2} \int_{-\pi}^{\pi} | \log(1+\exp(-I\nu)) + \log(1+\exp(I\nu)) |\mathrm{d}\nu$$ But still I have trouble finding a bound. I also tried $A(\nu) = | 1 + \exp(-I \nu) | = \sqrt{2} \sqrt{\cos(\nu) + 1}$. I thought maybe the problem can be solved by providing an upper and lower bound function that converges. But because $A(\nu)$ has values in the range $[0,2]$ the logarithm assumes very large values (and there are actually points of singularity for $A(\nu)=0$). Please help me solve this problem.
2 Answers
You want to show that $$\int_{-\pi}^\pi|\log|1+e^{-it}||dt$$ is finite. Now $$|1+e^{-it}|=|e^{it/2}+e^{-it/2}|=2\cos(t/2)$$ so your integral is $$\int_{-\pi}^\pi|\log|2\cos(t/2)||dt = 2\int_0^\pi|\log|2\cos(t/2)||dt.$$ Replacing $t$ by $\pi-2$ in the last integral gives $$2\int_0^\pi|\log|2\sin(t/2)||dt.$$ The integrand is nicely continuous away from $0$. Near $0$, $\sin(t/2)=tf(t)$ where $f(t)\to1/2$ as $t\to0$. Then the integrand is $|\log t+g(t)|$ where $g$ is continuous at $0$, and now finiteness follows since $$\int_0^1|\log t|dt$$ is finite (integration by parts).
Thank you very much for your answer. (I think you got a small typo. Replacing $t$ by $\pi-t$ and not $\pi-2$) But I have trouble following the last step of your argument. Why can I ignore the $g(t)$ for the proof of finiteness? – user10256 Oct 22 '10 at 16:02
Oh, I think I understand it now. $\log t$ is not continuous at $0$. But $g(t)$ is. So I don't have to worry about $g(t)$: it has no singularities and therefore contributes a finite amount. The question left is whether the singularity of $\log t$ causes the integral to be infinite. But it is easy to show that $\int_0^1 | \log t | \mathrm{d}t$ is finite, and therefore so is the initial integral. Thanks thanks thanks. – user10256 Oct 23 '10 at 21:52
The key is to understand the behavior of $A(\nu)$ near the singularity $\nu=0$. Using Taylor expansion we know that for $\nu$ small $A(\nu) = 1+e^{-I \nu} \approx -I\nu$. Therefore, $\log |A(\nu)| \approx \log|-I \nu| = \log |\nu|$. Note that $\int_{-\pi}^\pi \log|\nu| d\nu = 2 \int_{0}^\pi \log \nu d\nu < \infty$. To make this precise you need to control the error terms in the Taylor approximation.
There is no singularity at $\nu=0$, but at $\nu=\pm\pi$. For $\nu$ small, $A(\nu)\approx2$. – Julián Aguirre Oct 22 '10 at 14:58
Oops, as Julian points out I made a mistake above. To handle the singularities at $\pi$ and $-\pi$ you can still use a Taylor approximation. Near $\pi$ we have $A(\nu) \approx -I(\nu-\pi)$. Therefore the integral should be finite if $\int_0^\pi \log|\nu-\pi|\, d\nu < \infty$.
– Jon Peterson Oct 25 '10 at 14:15
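The final step of the accepted answer, the finiteness of $\int_0^1 |\log t|\,dt$, can be made explicit by the integration by parts it mentions (a worked step added here for illustration): on $(0,1]$ we have $|\log t| = -\log t$, and
$$\int_0^1 (-\log t)\,dt = \big[\,t - t\log t\,\big]_0^1 = 1,$$
since $t\log t \to 0$ as $t \to 0^+$. The continuous term $g(t)$ only contributes a bounded amount on the interval, so the whole integral is finite.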
{"url":"http://mathoverflow.net/questions/43180/integration-problem-int-pi-pi-log-1-exp-i-nu-math?sort=newest","timestamp":"2014-04-21T00:52:55Z","content_type":null,"content_length":"59440","record_id":"<urn:uuid:86292c4c-4689-4149-910a-da96b1a1dc88>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Astronomy 6: Introductory Cosmology 0.5 credits, meets second-half of the fall 2013 semester this is a new course being taught this fall by David Cohen Course description: The subject of cosmology has seen stunning advances in the precision of measurements and in theoretical understanding over the last two decades. The basic framework for understanding the properties of the Universe as a whole is general relativity, but significant understanding can be gained and quantitative detail can be put into context without doing GR calculations or derivations. We will discuss GR at the beginning, but we will not do quantitative calculations with the field equations themselves. We will use the Friedmann equation (which is a consequence of GR but can be derived classically) to understand – quantitatively – the history and fate of the Universe, in the context of the standard hot big bang model. We will focus on observational evidence for this model: the expansion of the Universe, the cosmic microwave background, and big-bang nucleosynthesis. And we will explore more recent observational measurements of the properties of dark matter and dark energy as well as the growth of structure in the Universe. This half-credit class is designed to give students who are excited about cosmology, and comfortable with physics and math, a short introduction to the subject. The level of the class is relatively high and aimed at students who could be astronomy or astrophysics majors. We will use a textbook that is also used in Astro 16, our sophomore classes for prospective majors. The reading has some simple differential and integral calculus and a few straight-forward differential equations. Though the class has no official pre-requisites, some exposure to single-variable differential and integral calculus is required. Students concurrently taking Math 25 or higher will have an adequate mathematical background, and high-school calculus will generally be sufficient. Similarly, there are no official physics pre-requisites, but some exposure to basic physics, especially mechanics, gravity, and the properties of light, are required, even if only in high school. No prior, specific knowledge of astronomy is presumed. The important properties of galaxies will be introduced as needed. Astro 6 is suitable for many first-year students concurrently taking Physics 5 and also for sophomores and others concurrently taking Astro 16. Those two groups are the target audience, but the class should be appropriate for other students, too.
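For reference, the Friedmann equation mentioned above can be written in its standard form (quoted here for illustration, not from the course materials):
$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},$$
where $a(t)$ is the scale factor of the Universe, $\rho$ the mass-energy density, $k$ the spatial curvature constant, and $\Lambda$ the cosmological constant.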
{"url":"http://astro.swarthmore.edu/Astro6_description.htm","timestamp":"2014-04-17T03:48:16Z","content_type":null,"content_length":"25323","record_id":"<urn:uuid:6f31ddde-ce38-450d-9d54-598fa4080eea>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Example applications of the bootstrap method Applying the basic bootstrap method is really straightforward. The only messy part is doing the 'bias-corrected and accellerated' correction (BCa)on the confidence interval. I've provided a function called 'bootstrap' that runs the bootstrap algorithm and then (by default) does the BCa correction. In many cases, this correction doesn't make much difference and in some of the examples below I don't even know how to apply it, so I've left it out. The examples below run through a series of fairly simple applications of the bootstrap method on statistics that we may or may not have a table for. clear all Example 1: Bootstrapping instead of a t-test (with unequal sample sizes) A t-test tests the hypothesis that two samples come from the same distribution based on the differences between the means of the samples. T-tests assume the usual stuff about normal distributions and are most commonly used when comparing equal sized samples. When comparing samples of different sizes, an estimate of pooled variance is used, and the degrees of freedom are the average of the two df's from each sample. This seems like a bit of a hack to me. To bootstrap on samples, we'll sample with replacement from both samples. Just as with the ratio of variances example below, allowing for different sample sizes means that we can't use the BCa method. We'll do the bootstrapping by hand again without the 'bootstrap' function. In this specific example we'll test the hypothesis that the means are different (a two-tailed test). nReps = 10000; n1 = 30; %sample size 1 n2 = 15; %sample size 2 alpha = .05; %alpha value generate fake data by drawing from normal distributions x1 = randn(n1,1); x2 = randn(n2,1); define the statistic as the difference between means myStatistic = @(x1,x2) mean(x1)-mean(x2); sampStat = myStatistic(x1,x2); bootstrapStat = zeros(nReps,1); for i=1:nReps sampX1 = x1(ceil(rand(n1,1)*n1)); sampX2 = x2(ceil(rand(n2,1)*n2)); bootstrapStat(i) = myStatistic(sampX1,sampX2); Calculate the confidence interval (I could make a function out of this...) CI = prctile(bootstrapStat,[100*alpha/2,100*(1-alpha/2)]); %Hypothesis test: Does the confidence interval cover zero? H = CI(1)>0 | CI(2)<0; Draw a histogram of the sampled statistic xx = min(bootstrapStat):.01:max(bootstrapStat); hold on ylim = get(gca,'YLim'); xlabel('Difference between means'); decision = {'Fail to reject H0','Reject H0'}; legend([h1,h2,h3],{'Sample mean',sprintf('%2.0f%% CI',100*alpha),'H0 mean'},'Location','NorthWest'); Example 2: Bootstrapping on an 'index' Often (especially in neuroscience) we make up our own 'index' that is a measure of the effect of a condition on our measure. For example, when measuring neuronal firing rates, comparing the difference in firing rates between two conditions is often expressed as an index that is the ratio of the difference over the sum of the two firing rates. This normalizes by the overall firing rate of the neuron and provides a number that is always between -1 and 1. In this example, we'll make up two samples of 25 firing rates corresponding to 25 neurons measured in two conditions. We'll then calculate the index for each neuron and bootstrap on mean of the indices to see if it is different from zero. 
n=25; %number of neurons nReps = 10000; %number of iterations for the bootstrap CIrange = 95; %confidence interval range x = ceil(15*randn(n,2).^2); %nx2 matrix of firing rates (Chi-squared distribution) define our 'index' here (difference over sum) myStatistic = @(x) mean((x(:,1)-x(:,2))./(x(:,1)+x(:,2))); run the 'boostrap' program to generate the confidence interval [CI,sampStat,bootstrapStat] = bootstrap(myStatistic,x,nReps,CIrange); Show the histogram of the boostrapped indices xx = min(bootstrapStat):.01:max(bootstrapStat); hold on ylim = get(gca,'YLim'); If our confidence interval does not include zero, then we'd conclude that the mean of our indices across the neurons is significantly different from zero. H = CI(1)>0 | CI(2)<0; title(sprintf('Bootstraping on an ''index'': %s',decision{H+1})); Exercise: Add a value to the first column to see if you can reject the null hypothesis. Example 3: Bootstrapping on a ratio of variances A ratio of variances of two samples an F-distribution. An F-test tests the null hypothesis that the two variances are the same (ratio = 1). We can perform a nonparametric version of the f-test using the bootstrap method. CIrange = 90; nReps = 10000; n1 = 20; n2 = 100; two draws from a unit normal distribution x1 = randn(n1,1); x2 = randn(n2,1); Our statistic is the ratio of the variances myStatistic = @(x1,x2) var(x1)/var(x2); This is our observed value (should be near 1) sampStat = myStatistic(x1,x2); We'll do this manually, rather than call the boostrap program because the program preserves the pair-wise relationship between the two values and can't handle two different sample sizes. This means we won't use the BCa method and instead will use the standard percentiles on our sampled distribution to get the confidence intervals. bootstrapStat = zeros(1,nReps); for i=1:nReps resampX1 = x1(ceil(rand(size(x1))*length(x1))); resampX2 = x2(ceil(rand(size(x2))*length(x2))); bootstrapStat(i) = myStatistic(resampX1,resampX2); Calculate the confidence interval using percentiles. CI = prctile(bootstrapStat,[50-CIrange/2,50+CIrange/2]); disp(sprintf('Ratio of variances: %5.2f',sampStat)); disp(sprintf('%d%% Confidence interval: [%5.2f,%5.2f]',CIrange,CI(1),CI(2))); Ratio of variances: 0.87 90% Confidence interval: [ 0.35, 1.54] draw a histogram of the sampled distribution and the confidence intervals. xx = min(bootstrapStat):.01:max(bootstrapStat); hold on ylim = get(gca,'YLim'); title('bootstrapping on a ratio of variances'); Example 4: Bootstrapping on residuals after regression: An fMRI example 'Event-related' fMRI involves a deconvolution between an fMRI time-series and an 'event sequence'. This is really a linear regression problem where the output is the predicted hemodynamic response. This output are regressors, or values that when convolved with the event matrix predict the fMRI data with minimal least-squares error. The difference between the prediction and the actual data is called the residual. We can obtain an estimate of the standard error for these regressors by bootsrapping on these residuals. That is, by repeatedly resampling the residuals with replacement and re-estimating the hemodynamic response. The standard deviation of these resampled estimates provides a measure of the standard error of our estimate. This first part generates a fake hemodyamic response from an event-releated study with three event types (plus blank). 
Experimental parameters dt = 2; %step size, or TR (seconds) maxt = 16; %ending time for the estimated hdr (seconds) th = 0:dt:(maxt-dt); %time vector for plotting n = length(th); %length of the hdr. Model parameters used for generating fake data k = 1.25; %seconds nCascades = 3; delay = 1; %seconds amps = [1,2,3,4]; %amplitudes of the hdr for the four event types (e.g. for increasing stimulus contrast). true hdr are gamma functions with increasing amplitudes h = zeros(n,4); for i=1:4 delay = 2; %seconds h(:,i) = amps(i)*gamma(nCascades,k,th-delay)'; %make it a column vector Event sequence is an m-sequence s = mseq(5,3); m = length(s); t = 0:dt:(m-1); %Generate a concatenated design matrix X = []; for j=1:4 Xj = zeros(m,n); temp = s==j; for i=1:n Xj(:,i) = temp; temp = [0;temp(1:end-1)]; X = [X,Xj]; %Predicted hemodynamic response (convolution of event matrix with hdr) r = X*h(:); %add IID noise noiseSD = .25; fMRI = r+noiseSD*randn(size(r)); Now for the interesting part. First we'll estimate the hdr from our data using linear regression (using the 'pinv' function). hest = pinv(X)*fMRI; Next we'll calculate the residual error between the model and the data pred =X*hest; resid = fMRI-pred; Then we'll bootstrap by resampling the residuals, adding these new residuals to the prediction, and re-estimating the hdr. Note here that we're not calling the 'bootstrap' program but instead are just doing it manually. This because (1) we're not using the BCa method and (2) our 'statistic' has n values for each resample instead of just 1. nReps = 1000; sampHest = zeros(length(hest),nReps); for i=1:nReps %resample residuals newResid = resid(ceil(rand(size(resid))*length(resid))); %generate new fMRI signal newFMRI = pred+newResid; %re-estimate the hdr sampHest(:,i) = pinv(X)*newFMRI; The standard deviation of these re-estimated hdrs across resamples of the residual provides an estimate of the SEM for each time-point of the estimated hdr. hestSEM = std(sampHest'); This part just reshapes the estimated hdr's into four columns - one for each event type (the original estimate comes out in one long vector). hest = reshape(hest,n,4); hestSEM = reshape(hestSEM,n,4); Plot the estimated hdr's and their standard errors as error bars. hold on colList = {'r','g','b','m'}; for i=1:4 xlabel('Time (s)'); ylabel('fMRI signal (%)'); Example 5: Bootstrap on a correlation coefficient to get a confidence interval. Bootstrapping on a correlation is useful because we know that the distribution of correlations is not normal since it's bounded between -1 and 1. Matlab provides an example data set of gpa and lsat scores for 15 students. We'll load it here and calculate the correlation. load lawdata gpa lsat sampStat = correlation([gpa,lsat]); Show the scatter plot of GPA vs LSAT and display the correlation in the title. title(sprintf('r = %5.2f, %2.1f%% CI: [%5.2f, %5.2f]',sampStat,CIrange,CI(1),CI(2))); Bootstrap the data by pulling out pairs with replacement. We'll use the 'BCa' method here. nReps = 10000; CIrange = 99; %alpha <.01 (two-tailed) [CI,sampStat,bootstrapStat] = bootstrap(@correlation,[gpa,lsat],nReps,CIrange); Show the distribution of bootrapped values and the confidence interval xx = min(bootstrapStat):.01:max(bootstrapStat); hold on ylim = get(gca,'YLim'); title('bootstrapping on a correlation coeficient'); Since the lower end of our confidence interval is above zero, we conclude that our correlation is significant at the p<.01 level (two-tailed). 
Example 6: Permutation test instead of bootstrapping A 'permutation test' is a second resampling method that addresses the question of whether a correlation is significant or not. While the bootstrap method estimates a confidence interval around your measured statistic, the permutation test estimates the probability of obtaining your data by chance. For the gpa lsat example, it involves shuffling the relationshiop between the two variables repeatedly and recalculating the correlation. It's like re-assigning one student's gpa with another student's lsat randomly to test the distribution of the null hypothesis that there is no relationship to the specific pairing of the two variables. Computatinally, it is similar to the bootstrap method. On each iteration we'll shuffle the order of values in one of the variables before computing the correlation. After many iterations, we'll compare the distribution of reshuffled correlations with our observed correlation. If it falls way out in the tail then we decide that we have a significant correlation. Note, a true permutation test uses every possible reshuffling of the data. For our 15 observations, there are 15 factorial, or around a trillion combinations. To be reasonable, we'll just subsample 20,000 samples from these trillion combinations. This subsampling is called Monte Carlo simulation. nReps = 100000; perm = zeros(nReps,1); for i=1:nReps %shuffle the lsat scores and recalculate the correlation perm(i) = correlation([gpa,shuffle(lsat)]); determine how many reshuffled correlations exceed our observed value. p = sum(perm>sampStat)/nReps; Show a histogram of the reshuffled correlations and our observed value. ylim = get(gca,'YLim'); hold on title(sprintf('Proportion of permutations exceending observed correlation: %5.5f',p)); The standard statistical test for correlations is to assume a t-distribution with n-2 degrees of freedom. This test will conclude that we have a significant correlation with a p-value of 0.000665. It is interesting to note the similarities and differences between the bootstrap and the permutation test here. The bootstrap uses sampling without replacement while the permutation test samples with replacement (reshuffles). The bootstrap preserves the pair-wise relationship between the two variables and therefore produces a distribution of values centered at our observed value. The permutation test does the opposite - shuffling the pairwise relationships and therefore produces a distribution centered at zero. The decision in the bootstrap method is made by determining how much of the tail of the distribution falls below zero. The decision in the perumation test is made by determining how much of the distribution falls above our observed value. I don't know which test is more appropriate, or whether they make similar decisions. From Wikepedia: Good (2000) explains the difference between permutation tests and bootstrap tests the following way: "Permutations test hypotheses concerning distributions; bootstraps tests hypotheses concerning parameters. As a result, the bootstrap entails less-stringent assumptions." So there you go. Good, P. (2002) Extensions of the concept of exchangeability and their applications, J. Modern Appl. Statist. Methods, 1:243-247.
{"url":"http://courses.washington.edu/matlab1/Bootstrap_examples.html","timestamp":"2014-04-16T07:15:11Z","content_type":null,"content_length":"40940","record_id":"<urn:uuid:782a488b-154d-4f76-b723-703153ada0c3>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - User Profile for: randre_@_eton.UVic.CA
UserID: 56911 Name: Rex Andrew Registered: 12/7/04 Total Posts: 6
{"url":"http://mathforum.org/kb/profile.jspa?userID=56911","timestamp":"2014-04-16T20:49:40Z","content_type":null,"content_length":"12598","record_id":"<urn:uuid:ec04312c-e922-458b-8f45-a3e2b8c3bdcc>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Rayleigh estimator and confidence intervals for small N October 22nd 2013, 08:15 AM #1 Oct 2013 United States I have a small number of samples from a Rayleigh process, and I am trying to estimate the Rayleigh parameter sigma. This popular exercise suggests that $\widehat{ \sigma} = \frac{\sum r^{2}_{i} }{2n}$ is an unbiased estimator for sigma. Except that it doesn't come close to the underlying sigma unless you take the square root of the result, and even then it is biased for small n! (I verified this via monte carlo simulation.) So for small Rayleigh populations is there a BLUE, or a correction term to the MLE $\widehat{ \sigma} = \sqrt{\frac{\sum r^{2}_{i} }{2n}}$? Furthermore, what can I say about confidence in the parameter estimate for a given population of n samples? Re: Rayleigh estimator and confidence intervals for small N Hey dbooksta. Did you try using the MLE estimator to get the estimator of sigma? Once you do that, you can compare it to the MOM estimator and see how the value of n impacts the biased-ness of the estimator which will allow you to add a correction factor that is a function of n. Re: Rayleigh estimator and confidence intervals for small N Please have some patience with me -- I managed to get a B.S. in Math without covering any formal statistics: The MLE I give is an estimator of the Rayleigh parameter sigma. I assume the source of the bias is analogous to the bias in standard deviation estimators resulting from the presence of an exponent in the sample sum. (In fact, the Rayleigh distribution has relationships to other common distributions, so I am hoping no new ground needs to be broken to answer my question!) Checking MOM: The distribution only has one parameter, so MOM only has us looking at the first moment, $m_1 = \sigma \sqrt{\frac{\pi}{2}}$, or $\sigma = m_1 \sqrt{\frac{2}{\pi}}$: a constant times the sample mean, which I guess tells us once again that the bias is a result of the concavity of the $\sqrt{n}$ in the estimator for sigma. Have I used MOM correctly? Again, given the similarities, I wouldn't be surprised to see $c_4$ show up here, but I can't make the connection. P.S. Any hints on why the two Latex expressions in the third paragraph aren't rendering? [Edit: Fixed them -- had used \TEX instead of /TEX to terminate!] Last edited by dbooksta; October 23rd 2013 at 06:32 PM. Re: Rayleigh estimator and confidence intervals for small N I don't know what is wrong with your latex but if anyone knows please reply so we can fix it up. Re: Rayleigh estimator and confidence intervals for small N And further confusion: the MLE I am frequently finding as I research this is for the Rayleigh parameter squared -- i.e., $\sigma^2 = \vartheta = \frac{\sum r^{2}_{i} }{2n}$. As I mentioned initially, that appears to be unbiased, and both the Rayleigh pdf and CDF only refer to $\sigma^2$. So how do we introduce bias when we use the square root of that unbiased parameter estimation in the formulas for moments? Re: Rayleigh estimator and confidence intervals for small N Thats OK since you can use the invariance principle for the MLE estimator. You will get bias and it may be complicated, but in terms of the point estimate the square root should be OK. Just out of curiosity, do you need to measure sigma as opposed to sigma^2? If so what is the reason and the context of your decision? 
Re: Rayleigh estimator and confidence intervals for small N I'll be using both, but I'm more likely to need confidence intervals on sigma since I care most about the expected mean of the sampled process. But then I'll also be calculating probabilities based on the sample parameter, and those use sigma^2. BTW, here's a thread on this very forum with the common MLE and proof that the sigma^2 estimator is unbiased. But it I'm coming at this convinced that it is biased for sigma and small N -- do I need to post Monte Carlo simulations to demonstrate that, or is "unbiased" used loosely in statistics to exclude small samples? Or could I have a confidence interval problem? I wish I could make sense of this.... Re: Rayleigh estimator and confidence intervals for small N Please have some patience with me -- I managed to get a B.S. in Math without covering any formal statistics: The MLE I give is an estimator of the Rayleigh parameter sigma. I assume the source of the bias is analogous to the bias in standard deviation estimators resulting from the presence of an exponent in the sample sum. (In fact, the Rayleigh distribution has relationships to other common distributions, so I am hoping no new ground needs to be broken to answer my question!) Checking MOM: The distribution only has one parameter, so MOM only has us looking at the first moment, [TEX]m_1 = \sigma \sqrt{\frac{\pi}{2}}[\TEX], or [TEX]\sigma = m_1 \sqrt{\frac{2}{\pi}}[\ TEX]: a constant times the sample mean, which I guess tells us once again that the bias is a result of the concavity of the $\sqrt{n}$ in the estimator for sigma. Have I used MOM correctly? Again, given the similarities, I wouldn't be surprised to see $c_4$ show up here, but I can't make the connection. P.S. Any hints on why the two Latex expressions in the third paragraph aren't rendering? Tex correctly rendered should be: $m_{1}=\sigma \sqrt{\frac{\pi}{2}}$, or $\sigma = m_1 \sqrt{\frac{2}{\pi}}$ As a FYI, I have no idea what you are talking about Re: Rayleigh estimator and confidence intervals for small N I don't know about the Rayleigh, but for the Normal distribution there are un-biased estimators and they are super complicated (using a tonne of Gamma functions). I might suggest you look into that and see if you can adapt the solution in terms of the Rayleigh distributions sigma. You could also use the Monte-Carlo distribution as well with regards to verifying any analytical solution you may get. Re: Rayleigh estimator and confidence intervals for small N Based on Monte-Carlo I can say that $c_4^2$ corrects sample parameters to less than 1% error for N > 10, but starting with N = 2 it has the following errors: 5.3%, 2.5%, 1.4%, 0.9%. Am I correct in assuming that the "correct" correction factor would have only rounding error for small N? Obviously I can use the Monte-Carlo correction factors in practice, but I'm curious to see the analytic estimator correction (even if it's as difficult to use as $c_4$). I don't think I have the capacity to derive that myself, but I'd put up a small "bounty" for the satisfaction of seeing one. Is there anyplace I might post such a challenge and reward for the consideration of "real" Re: Rayleigh estimator and confidence intervals for small N I would try talkstats forums or stackexchange forums: you get researchers on there so you might have luck with it there. 
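A small simulation along the lines discussed in this thread makes the small-n bias visible (illustrative Python sketch; the true sigma, sample sizes, and replication count are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0                      # true Rayleigh parameter
n_rep = 200000

for n in (2, 3, 4, 5, 10, 30):
    r = rng.rayleigh(scale=sigma, size=(n_rep, n))
    sigma2_hat = (r**2).sum(axis=1) / (2 * n)   # estimator of sigma^2 (unbiased)
    sigma_hat = np.sqrt(sigma2_hat)             # MLE of sigma, biased low for small n
    # ratios near 1 indicate an unbiased estimator; the sqrt column falls below 1 for small n
    print(n, sigma2_hat.mean() / sigma**2, sigma_hat.mean() / sigma)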
{"url":"http://mathhelpforum.com/advanced-statistics/223335-rayleigh-estimator-confidence-intervals-small-n.html","timestamp":"2014-04-19T03:16:43Z","content_type":null,"content_length":"65494","record_id":"<urn:uuid:3f9a8ad7-69c6-4cc0-956f-1b16ba53f3fa>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof milestone Ian Lynagh igloo at earth.li Tue May 5 06:13:08 EDT 2009
Hi all, With a lot of help from numerous people on and off this list, I've finally just pushed patches that complete the proof of Lemma (<<qs :+> r :> s :> []>> <~~>* <<us :+> r' :> s' :> vs>>) -> (NameOf r = NameOf r') -> (NameOf s = NameOf s') -> (r <~?~> s) -> (r' <~?~> s'). or in English: If (qs r s) commutes to (us r' s' vs), and (r s) commutes, then (r' s') commutes. This isn't quite the full lemma - the [] should be any sequence ts - but proving the remaining bit just involves repeating some of the existing proof, but for "commutes to the right" instead of "commutes to the left". This is quite a big milestone: It's the first proof of something that sounds useful, and not entirely trivial. There's still a long way to go, but we are at least making progress! I plan to take a step back now, and look at where we are. If any coq folk are interested in looking at the proof and telling me what I've done stupidly, that would be great. It's all in the camp paper darcs get http://code.haskell.org/camp/devel/paper/ Note that you need the trunk version of coq in order to build the proof; 8.2 isn't sufficient. You'll even need trunk coqdoc to make the PS/PDF, although the snapshot is up-to-date: Also, you need to do "make coq" before opening it in coqide etc, as the module imports need to be compiled. There are a few things that I know need to be fixed:
* Naming consistency, e.g. whether the first letter is capitalised or not
* Always use "ps" rather than "p" for a sequence of patches
* Prove decidability of <~?~>, and use that rather than Classical_Prop
Florent has been working on proving this lemma in a different way: I prove it by contradiction, whereas he is working on a constructive proof. On paper I find the proof by contradiction simpler, and more satisfying, but I want to see how the two compare in coq. And I think that's all I have to say for now! More information about the Camp mailing list
{"url":"http://projects.haskell.org/pipermail/camp/2009-May/000052.html","timestamp":"2014-04-20T23:59:11Z","content_type":null,"content_length":"4429","record_id":"<urn:uuid:4621b941-5dae-4b76-abc4-80c55295810e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Conditional Calculation of pi(10^24)

Email from Jens Franke [Thu 7/29/2010 2:47 PM]: (color added)

Using an analytic method assuming (for the current calculation) the Riemann Hypothesis, we found that the number of primes below 10^24 is 18435599767349200867866. The analytic method used is similar to the one described by Lagarias and Odlyzko, but uses the Weil explicit formula instead of complex curve integrals. The actual value of the analytic approximation to pi(10^24) found was

For the current calculation, all zeros of the zeta function below 10^11 were calculated with an absolute precision of 64 bits. We also verified the known values of pi(10^k) for k<24, also using the analytic method and assuming the Riemann hypothesis.

Other calculations of pi(x) using the same method are (with the deviation of the analytic approximation from the closest integer included in parentheses):
pi(2^76)=1462626667154509638735 (-6.60903e-09)
pi(2^77)=2886507381056867953916 (-1.72698e-08)

Computations were carried out using resources at the Institute for Numerical Simulation and the Hausdorff Center at Bonn University. Among others, the programs used the GNU scientific library, the fftw3-library and mpfr and mpc, although many time-critical floating point calculations were done using special purpose routines.

J. Buethe
J. Franke
A. Jost
T. Kleinjung
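Not part of the email above: as a quick sanity check on the scale of the reported value, the count can be compared against the logarithmic integral approximation with a few lines of mpmath (the precision setting is my own choice).

```python
from mpmath import mp, li, mpf

mp.dps = 40
pi_1e24 = mpf("18435599767349200867866")   # value reported in the email
approx = li(mpf(10) ** 24)                  # logarithmic integral approximation

print("li(10^24) =", approx)
print("pi(10^24) =", pi_1e24)
print("li - pi   =", approx - pi_1e24)
```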
{"url":"http://primes.utm.edu/notes/pi(10%5E24).html","timestamp":"2014-04-20T08:15:59Z","content_type":null,"content_length":"6228","record_id":"<urn:uuid:0c4c41c0-5246-416b-9130-2b51086cc7b9>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
search problem

If $R$ is a binary relation, then a Turing machine $T$ is said to calculate $R$ if:

• If $x$ is such that there is some $y$ such that $R(x,y)$, then $T$ accepts $x$ with output $z$ such that $R(x,z)$ (there may be multiple $y$, and $T$ need only find one of them).
• If $x$ is such that there is no $y$ such that $R(x,y)$, then $T$ rejects $x$.

Note that the graph of a partial function is a binary relation, and if $T$ calculates a partial function then there is at most one possible output.

A relation $R$ can be viewed as a search problem, and a Turing machine which calculates $R$ is also said to solve it. Every search problem has a corresponding decision problem, namely $L(R)=\{x \mid \exists y\, R(x,y)\}$.

This definition may be generalized to $n$-ary relations using any suitable encoding which allows multiple strings to be compressed into one string (for instance by listing them consecutively with a separator symbol).
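As a concrete illustration (my own example, not part of the original entry): let $R(x,y)$ hold when $y$ is a nontrivial divisor of the integer $x$. A machine solving this search problem must output some factor of $x$ whenever one exists, while the corresponding decision problem $L(R)$ is just the set of composite numbers; deciding membership in $L(R)$ may be much easier than actually producing a witness $y$.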
{"url":"http://planetmath.org/searchproblem","timestamp":"2014-04-18T23:15:54Z","content_type":null,"content_length":"46816","record_id":"<urn:uuid:b224c23e-ae63-4a56-8887-cddd5205e5bb>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Took GMAT for Q47. Retaking in hopes of 50+. Advice? Hi all! I recently took the GMAT and am concerned by my sub-80 percentile math score. I've been scoring consistently 47-48 in 4 takes of the GMAT Prep exams, so while I'd love to blame test day issues, my score is not exactly inconsistent with practice. For my retake, I'd love to score 50+ consistently so I'm not so dependent on verbal to pull up my overall score. My question is: I've exhausted OG questions (both the book and the quant specific book). I feel like I've gotten a good grasp of the stereotypically "hard problems" (e.g. combinatorics) and timing, but it tends to be those subtle basics that I have the hardest time with (e.g. tricky number properties questions). I've read through the forum and I'm considering either MGMAT (both exams and their prep books) or Jeff Sackmann's Total GMAT Hacks (I'm a big fan of the way he explains OG problems on his website). For those who have similarly exhausted OG problems and are looking to push their Q score to a similar level, what books and official exams do you recommend? Any advice would be greatly greatly appreciated. I know there's a lot of similar threads floating, but I haven't found one that quite hits the nail on the head.
{"url":"http://gmatclub.com/forum/took-gmat-for-q47-retaking-in-hopes-of-50-advice-128414.html","timestamp":"2014-04-19T15:42:24Z","content_type":null,"content_length":"141336","record_id":"<urn:uuid:2952278e-9465-41c4-87ea-921071d001f2>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Diary of a Graphics Programmer

I got a new notebook today with 64-bit VISTA pre-installed. It will replace a Desktop that had 64-bit VISTA on there. My friend Andy Firth provided me with the following tricks to make my life easier (it has a 64 GB solid state in there, so no hard-drive optimizations):

Switch Off User Account Control
This gets rid of the on-going "are you sure" questions. Go to Control Panel. Click on User Account and switch it off.

Disable Superfetch
Press Windows key + R. Start services.msc and scroll down until you find Superfetch. Double click on it and change the startup type to Disabled.

I spent some more time with the Light Pre-Pass renderer. Here are my assumptions:

N.H^n = (N.L * N.H^n * Att) / (N.L * Att)

This division happens in the forward rendering path. The light source has its own shininess value in there == the power n value. With the specular component extracted, I can apply the material shininess value like this. Then I can re-construct the Blinn-Phong lighting equation. The data stored in the Light Buffer is treated like one light source. As a reminder, the first three channels of the light buffer hold:

N.L * Att * DiffuseColor

Color = Ambient + (LightBuffer.rgb * MatDiffInt) + MatSpecInt * (N.H^n)^mn * N.L * Att

So how could I do this :-)

N.H^n = (N.L * N.H^n * Att) / (N.L * Att)

N.L * Att is not in any channel of the Light buffer. How can I get this? The trick here is to convert the first three channels of the Light Buffer to luminance. The value should be pretty close to N.L * Att. This also opens up a bunch of ideas for different materials. Every time you need the N.L * Att term you replace it with luminance. This should give you a wide range of materials. (The full reconstruction is written out as equations further below.) The results I get are very exciting. Here is a list of advantages over a Deferred Renderer:

- less cost per light (you calculate much less in the Light pass)
- easier MSAA
- more material variety
- less read memory bandwidth -> fetches only two instead of the four textures it takes in a Deferred Renderer
- runs on hardware without ps_3_0 and MRT -> runs on DX8.1 hardware

[quote]As far as I can tell from this discussion, no one has really proposed an alternative to shader permutations, merely they've been proposing ways of managing those permutations.[/quote]

If you define shader permutations as having lots of small differences but using the same code, then you have to live with the fact that whatever is sent to the hardware is a full-blown shader, even if you have exactly the same skinning code in every other shader. So the end result is always the same ... whatever you do on the level above that. What I describe is a practical approach to handle shaders with a high amount of material variety and a good workflow. Shaders are some of the most expensive assets in production value and time spent of the programming team. They need to be the most highly optimized piece of code we have, because it is much harder to squeeze out performance from a GPU than from a CPU. Shader generators or a material editor (or however you call it) are not an appropriate way to generate or handle shaders because they are hard to maintain, offer not enough material variety, and are not very efficient, because it is hard to hand-optimize code that is generated on the fly. This is why developers do not use them and do not want to use them. It is possible that they play a role in indie or non-profit development, because those teams are money- and time-constrained and do not have to compete in the AAA sector.
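Restated in equation form (my own summary of the Light Pre-Pass reconstruction above, using the post's symbols; the assumption that the fourth light-buffer channel accumulates N.L * N.H^n * Att is mine, since the post only spells out the first three channels):

```latex
\text{LightBuffer.rgb} = (N \cdot L)\,\text{Att}\,\text{DiffuseColor}, \qquad
\text{LightBuffer.a}   = (N \cdot L)\,(N \cdot H)^{n}\,\text{Att} \\
(N \cdot L)\,\text{Att} \approx \text{lum}(\text{LightBuffer.rgb}), \qquad
(N \cdot H)^{n} = \frac{\text{LightBuffer.a}}{\text{lum}(\text{LightBuffer.rgb})} \\
\text{Color} = \text{Ambient}
 + \text{LightBuffer.rgb}\cdot\text{MatDiffInt}
 + \text{MatSpecInt}\cdot\left((N \cdot H)^{n}\right)^{mn}\cdot \text{lum}(\text{LightBuffer.rgb})
```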
In general, the basic mistake made by people who think that ueber-shaders or material editors or shader generators would make sense is that they do not understand how to program a graphics card. They assume it would be similar to programming a CPU and therefore think they could generate code for those cards. It would make more sense to generate code on the fly for CPUs (... which also happens in the graphics card drivers) and at other places (real-time assemblers) than for GPUs, because GPUs do not have anything close to linear performance behaviours. The difference between a performance hotspot and a point where you made something wrong can be 1:1000 in time (following a presentation from Matthias Wloka). You hand-optimize shaders to hit those hotspots, and the way you do it is that you analyze the results provided by PIX and other tools to find out where the performance hotspot of the shader is.

Following Matthias Grundmann's invitation to join forces I set up a Google code repository for this: The idea is to have a math library that is optimized for the VFP unit of an ARM processor. This should be useful on the iPhone / iPod touch.

Now that I had so much fun with the iPhone I am thinking about new challenges in the mobile phone development area. The Touch HD looks like a cool target. It has a DX8-class ATI graphics card in there. Probably on par with the iPhone graphics card, and you can program it in C/C++, which is important for the performance. Depending on how easy it will be to get Oolong running on this I will extend Oolong to support this platform as well.
For a good, efficient and high quality workflow in a team, this is what you want.

Calculating screen space texture coordinates for the 2D projection of a volume is more complicated than for an already transformed full-screen quad. Here is a step-by-step approach on how to achieve this:

1. Transforming position into projection space is done in the vertex shader by multiplying the concatenated World-View-Projection matrix.

2. The Direct3D run-time will now divide those values by Z, which is stored in the W component. The resulting position is then considered in clipping space, where the x and y values are clipped to the [-1.0, 1.0] range.

x_clip = x_proj / w_proj
y_clip = y_proj / w_proj

3. Then the Direct3D run-time transforms position into viewport space, from the value range [-1.0, 1.0] to the range [0.0, ScreenWidth/ScreenHeight].

x_viewport = x_clipspace * ScreenWidth / 2 + ScreenWidth / 2
y_viewport = -y_clipspace * ScreenHeight / 2 + ScreenHeight / 2

This can be simplified to:

x_viewport = (x_clipspace + 1.0) * ScreenWidth / 2
y_viewport = (1.0 - y_clipspace) * ScreenHeight / 2

The result represents the position on the screen. The y component needs to be inverted because in world / view / projection space it increases in the opposite direction than in screen coordinates.

4. Because the result should be in texture space and not in screen space, the coordinates need to be transformed from clipping space to texture space. In other words, from the range [-1.0, 1.0] to the range [0.0, 1.0].

u = (x_clipspace + 1.0) * 1 / 2
v = (1.0 - y_clipspace) * 1 / 2

5. Due to the texturing algorithm used by Direct3D, we need to adjust texture coordinates by half a texel:

u = (x_clipspace + 1.0) * ½ + ½ * TargetWidth
v = (1.0 - y_clipspace) * ½ + ½ * TargetHeight

Plugging in the x and y clipspace coordinate results from step 2:

u = (x_proj / w_proj + 1.0) * ½ + ½ * TargetWidth
v = (1.0 - y_proj / w_proj) * ½ + ½ * TargetHeight

6. Because the final calculation of this equation should happen in the vertex shader, results will be sent down through the texture coordinate interpolator registers. Interpolating 1/w_proj is not the same as 1 / interpolated w_proj. Therefore the term 1/w_proj needs to be extracted and applied in the pixel shader.

u = 1/w_proj * ((x_proj + w_proj) * ½ + ½ * TargetWidth * w_proj)
v = 1/w_proj * ((w_proj - y_proj) * ½ + ½ * TargetHeight * w_proj)

The vertex shader source code looks like this:

float4 vPos = float4(0.5 * (float2(p.x + p.w, p.w - p.y) + p.w * inScreenDim.xy), pos.zw)

The equation without the half pixel offset would start at No. 4 like this:

u = (x_clipspace + 1.0) * 1 / 2
v = (1.0 - y_clipspace) * 1 / 2

Plugging in the x and y clipspace coordinate results from step 2:

u = (x_proj / w_proj + 1.0) * ½
v = (1.0 - y_proj / w_proj) * ½

Moving 1 / w_proj to the front leads to:

u = 1/w_proj * ((x_proj + w_proj) * ½)
v = 1/w_proj * ((w_proj - y_proj) * ½)

Because the pixel shader is doing the 1 / w_proj, this would lead to the following vertex shader code:

float4 vPos = float4(0.5 * (float2(p.x + p.w, p.w - p.y)), pos.zw)

All this is based on a response by mikaelc in the following thread: Lighting in a Deferred Renderer and a response by Frank Puig Placeres in the following thread: Reconstructing Position from Depth Data

Just found a good tutorial on how to set up a Gauss filter kernel here: OpenGL Bloom Tutorial

The interesting part is that he shows a way on how to generate the offset values and he also mentions a trick that I have used for a long time. He reduces the filter kernel size by utilizing the hardware linear filtering.
So he can go down from 5 to 3 taps. I usually use bilinear filtering to go down from 9 to 4 taps or 25 to 16 taps (with non-separable filter kernels) ... you got the idea. Eric Haines just reminded me of the fact that this is also described in ShaderX2 - Tips and Tricks on page 451. You can find the -now free- book at BTW: Eric Haines contacted all the authors of this book to get permission to make it "open source". I would like to thank him for this. Check out his blog at
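The tap-merging trick is easy to verify offline. The sketch below is my own illustration (binomial weights, a made-up 1D "texture", and a helper lerp_fetch standing in for the hardware's linear filter); it folds the two outer taps on each side of a 5-tap Gaussian into a single linearly interpolated fetch:

```python
import numpy as np

def lerp_fetch(img, x):
    """Emulate hardware linear filtering of a 1D 'texture' at fractional position x."""
    i = int(np.floor(x))
    f = x - i
    return (1 - f) * img[i] + f * img[i + 1]

weights5 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # 5-tap Gaussian
img = np.random.default_rng(2).random(64)
x = 20  # sample position, away from the borders

direct = sum(w * img[x + o] for w, o in zip(weights5, range(-2, 3)))

# Merge taps (-2,-1) and (+1,+2): combined weight, offset at the weighted centroid.
w_side = weights5[0] + weights5[1]
off = (2 * weights5[0] + 1 * weights5[1]) / w_side
three_tap = (weights5[2] * img[x]
             + w_side * lerp_fetch(img, x - off)
             + w_side * lerp_fetch(img, x + off))

assert np.isclose(direct, three_tap)
print("5 direct taps == 3 bilinear taps")
```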
{"url":"http://diaryofagraphicsprogrammer.blogspot.com/2008_09_01_archive.html","timestamp":"2014-04-20T03:09:57Z","content_type":null,"content_length":"115536","record_id":"<urn:uuid:257f94f7-1c97-489f-8f1d-269242b5f3d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from March 2011 on Xi'an's Og We had sent our discussion paper of Murray Aitkin’s Statistical Inference, with Andrew Gelman and Judith Rousseau, to the review section of JASA, but were again unsuccessful as the paper was sent back with the comments that “this paper is not a good fit for JASA Reviews. You may wish to consider broadening your discussion so that the paper reads less as an attack on Aitkin’s book“. While I understand that journals cannot publish all critical accounts of all statistics books, I feel a bit depressed by my overall lack of success in publishing extended book reviews. Electronic journals could easily include book discussions and I do not think this would negatively impact the readership as book reviews are generally appreciated by the community.
{"url":"http://xianblog.wordpress.com/2011/03/","timestamp":"2014-04-17T03:49:56Z","content_type":null,"content_length":"74616","record_id":"<urn:uuid:f7465413-5fe4-4cf3-8c5f-69af7b6ef2f5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Is this done correctly?

September 16th 2007, 02:44 PM

A tank contains 300 gal of salt-free water. A brine containing .5 lb of salt per gallon of water runs into the tank at the rate of 2 gal/min and the well-stirred mixture runs out at the rate of 2 gal/min. What is the concentration of the salt in the tank at the end of 10 min?

$A' = 2gal/min * .5lb/gal - 2gal/min * Alb/300gal$
$A' = 1 - (1/150)A$
$\frac{1}{1-1/150A} dA/dt = 1$
$\int t dt = \int \frac{150}{150-A} dA$
$t + c = 150 * ln |150-A|$
$\frac{t + c}{150} = ln |150-A|$
$+-e^{\frac{t + c}{150}} = 150-A$
$+-Ke^{\frac{t}{150}} = 150-A$
$+-150e^{\frac{t}{150}} = 150-A$
$A(t) = 150 +- 150 e^{t/150}$

Are my steps correct?

September 16th 2007, 03:03 PM

Your first integral has a problem. Why is there a 't' in that argument? You ignored that error and managed the right antiderivative, but that was sort of magic. Your right-hand antiderivative has the wrong sign. You can always check by simply finding the derivative. The chain rule will show the sign error. When you lose the absolute values you introduce that very odd symbol "+-". I realize this is an attempt to retain the meaning of the absolute value, but this is no good. Make up your mind. Is it positive or negative? Think about the behavior of the salt. Does it increase or decrease? Does it have a maximum? Often you can resolve absolute values by determining the sign of the argument. You didn't show how you determined that K = 150.

Note: GREAT WORK at showing your work! Very, very, very good. I didn't have to guess at anything. Did I mention this is GREAT?!
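For comparison with the attempt above (my own worked solution of the same mixing model, not part of the original exchange; note the negative exponent):

$A' = 1 - \frac{A}{150}, \quad A(0) = 0 \;\Rightarrow\; A(t) = 150\left(1 - e^{-t/150}\right)$

$A(10) = 150\left(1 - e^{-1/15}\right) \approx 9.7 \text{ lb}, \qquad \text{concentration} \approx \frac{9.7}{300} \approx 0.032 \text{ lb/gal}$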
{"url":"http://mathhelpforum.com/calculus/19044-done-correctly-print.html","timestamp":"2014-04-18T13:45:00Z","content_type":null,"content_length":"6816","record_id":"<urn:uuid:80db93a4-0d64-4ca9-bbc3-c198fa2f0f16>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
In Praise of the Illustrious Lady The Lady Vittoria Santacroce Borghese, Roman Gentlewoman In starting this Ballo, the man will take the lady by the ordinary hand, and together they will do the Riverenza minima, with 2 Continenze, 1 to the left, the other to the right; then they will do 8 Seguiti spezzati schisciati, 4 walking forward, and 4 letting go, turned to the left, to the end of which finding it at one head of the hall, and the other at the other, facing the one to the other they will do 2 Passi presti forward, and the Cadenza, starting each thing with the left foot, and then with the right. Change of the man. The man alone will do 8 Seguiti battuti, and he will do the Retreat with 4 Passi gravi schisciati back flanked: after that they will do together 4 Seguiti spezzati schisciati, 2 turned to the left, and 2 in perspective, with 2 Passi presti forward, and the Cadenza. The lady will do the same change, and together they will do the said actions. Second change. In the second change, the man will do 2 Seguiti battuti by feet 8 turns; then he will do the Retreat in this manner, that is, 3 Riprese, and 1 Trabuchetto with the left flank outward: he will do the same with the right; and he will do this 4 turns always flanking it: then they will do together the said Seguiti schisciati, with the Passi forward, and the Cadenza. The lady will do the same change, and together the said actions. Third change. In the third change, the man alone will do 2 beats of feet preste, and 1 Seguito battuto, starting with the left foot; then he will do the same starting with the right: and this he will do 4 turns, and together they will do the actions said of above. The lady will do the same change; and together they will do the Retreat; after the which they will do 12 Seguiti schisciati, 4 turned to the left, and 4 walking forward, changing place, with 4 turned similarly to the left, and with 2 Passi presti forward, and the Cadenza. Fourth change. In the fourth change, the man alone will do 2 Seguiti battuti always with the left foot, with 3 beats preste, starting it with the right foot, and another Seguito battuto with the left, flanking the body a little: he will do the same starting with the right foot to the right; and this he will do 4 turns; then he will do the Retreat in this manner, that is, 2 Passi gravi schisciati, and 3 presti, starting with the left; he will do the same starting it with the right: and if they will have enough field in the hall, he will do this 4 turns: then together they will do the same actions said of above, that is, 4 Seguiti schisciati turned to the left, and 2 Passi presti forward, and the Cadenza. The lady alone will do the same change, and together the things said of above. Fifth change. 
The man alone will be turned a little in perspective to the right, and he will do with the left foot 2 Schisciate, first with the heel forward, then with the point back, and turning facing to the lady, he will do 1 beat with the same foot; he will do the same by opposite: then he will do 2 Passi gravi schisciati back by straight line, with 1 Seguito battuto, starting it with the left foot, with 2 beats preste, 1 with the right, the other with the left, settling it there a pause; then he will do 1 Zoppetto with the said foot, and raising the right, in the lowering it he will be turned with the right flank inward towards the lady, and will do 4 beats preste, with 2 Seguiti battuti in perspective to the lady, starting said Passi, and Seguiti with the right foot: he will do the same another turn by opposite. After that he will do the Retreat, that is, 2 Passi schisciati, and 1 Seguito battuto, wantonizing it always, starting with the left foot, and then with the right; and he will do this 4 turns: then they will turn together the said Seguiti, and they will do 2 Passi presti forward, and the Cadenza. The lady alone will do the same change. Sixth change. The man alone will do 5 Schisciate preste forward, always with the left foot, starting with the heel; then he will cross, or to say better, will put the left foot over to the right, and will do 2 other Schisciate, first with the point, then with the heel, and raising said foot, he will do with the same foot another Schisciata with the point by straight line: finally he will do 1 beat level keeping said left foot all on ground. After that he will do facing 2 Seguiti battuti, 1 with the right, the other with the left, with 3 beats preste, starting it with the right, and another Seguito battuto with the left. He will start the same change another turn by opposite; then he will do the Retreat in this manner, that is, 1 Ripresa with the left flank outward, and 3 Trabuchetti flanked, starting with the left foot; and he will do the same another turn by opposite. Together they will do the said Seguiti schisciati turned to the left, with the 2 Passi presti forward, and the Cadenza. The lady alone will do the same change; then together they will do 4 Seguiti schisciati turned to the left, and another 8 scorsi, 4 taking the right hand, and 4 letting it go, and the lady will turn the 4 final to the left, and the man will do them facing; the which then taking the lady by the ordinary hand, they will finish the Ballo with doing together the Riverenza: to the end of which the man will pause the lady to his place. Lute tablature of the Canario. Return to Book 2 Index Previous Dance Next Dance
{"url":"http://jducoeur.org/IlBallarino/Book2/Canario.html","timestamp":"2014-04-20T18:22:40Z","content_type":null,"content_length":"6512","record_id":"<urn:uuid:a292a5be-8c7c-496a-a17d-66e63063fa97>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00261-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: APPARATUS AND METHOD FOR ADAPTIVE WHITENING IN A MULTIPLE ANTENNA SYSTEM Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP An apparatus and method for controlling a whitening function of a whitening Maximum Ratio Combining (MRC) in a receive end of a multiple antenna system are provided. The method includes identifying if there is interference from at least one neighbor cell, if there is interference, generating a weight of the whitening MRC using a pre-whitening inverse matrix, and, if there is no interference, generating a weight of the whitening MRC using a unit matrix, thus being capable of improving a reception performance of the receive end. A method for controlling a whitening function of a whitening Maximum Ratio Combining (MRC) in a receive end of a multiple antenna system, the method comprising the steps of: identifying if there is interference from at least one neighbor cell; if there is interference, generating a weight of the whitening MRC using a pre-whitening inverse matrix; and if there is no interference, generating a weight of the whitening MRC using a unit matrix. The method of claim 1, wherein identifying if there is interference comprises: measuring an interference power using a receive signal; comparing the interference power with a threshold interference power; and identifying if there is interference from the neighbor cell based on the comparison. The method of claim 2, wherein measuring the interference power comprises: measuring power of a combination of noise and interference using a pilot included in the receive signal; measuring a power of at least one unused tone; removing the power of the at least one unused tone from the power of the combination of noise and interference; and identifying the interference power. The method of claim 1, wherein identifying if there is interference comprises: calculating a Carrier to Interference plus Noise Ratio (CINR) using a receive signal; comparing the CINR with a threshold CINR; and identifying if there is interference from the neighbor cell based on the comparison. The method of claim 1, wherein the pre-whitening inverse matrix is generated through Cholesky Factorization of a covariance matrix of noise plus interference. An apparatus for controlling a whitening function of a whitening Maximum Ratio Combining (MRC) in a receive end of a multiple antenna system, the apparatus comprising: at least one antenna; an interference identifier for identifying if there is interference from at least one neighbor cell, using a signal received through the at least one antenna; a filter controller for, if there is interference, providing a pre-whitening inverse matrix to a pre-whitening filter and, if there is no interference, providing a unit matrix to the pre-whitening filter; and the pre-whitening filter for generating a weight of the whitening MRC using the pre-whitening inverse matrix or unit matrix provided from the filter controller. The apparatus of claim 6, wherein the interference identifier compares an interference power measured using the receive signal with a threshold interference power, and identifies if there is interference from the neighbor cell based on the comparison. The apparatus of claim 7, wherein the interference identifier removes power of at least one unused tone from a power of a combination of noise and interference measured using a pilot included in the receive signal, and identifies an interference power. 
The apparatus of claim 6, wherein the interference identifier compares a Carrier to Interference plus Noise Ratio (CINR) calculated using the receive signal with a threshold CINR, and identifies if there is interference from the neighbor cell based on the comparison.

The apparatus of claim 6, wherein, if there is interference, the filter controller provides a pre-whitening inverse matrix, which is generated through Cholesky Factorization of a covariance matrix of noise plus interference, to the pre-whitening filter.

PRIORITY

[0001] This application is a Divisional Application of U.S. patent application Ser. No. 12/704,281 filed in the U.S. Patent and Trademark Office on Feb. 11, 2010 and claims priority under 35 U.S.C. §119(a) to a Korean Patent Application filed in the Korean Intellectual Property Office on Feb. 10, 2009 and assigned Serial No. 10-2009-0010464, the contents of each of which are incorporated herein by reference.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

The present invention relates to a pre-whitening filter of a receive end in a multiple antenna system. More particularly, the present invention relates to an apparatus and method for selectively using a Minimum Mean Square Error (MMSE) scheme or a Maximum Ratio Combining (MRC) scheme depending on the existence or absence of inter-cell interference in a receive end of a multiple antenna system.

2. Description of the Related Art

With a rapid growth of the wireless mobile communication market, there is an increase in the demand for diversity of multimedia services in the wireless environment. Accordingly, as a large capacity of transmission data and a high speed of data transmission are implemented to provide multimedia services, extensive research of multiple antenna systems capable of efficiently using limited frequency resources is being conducted.
Another aspect of the present invention is to provide an apparatus and method for controlling a whitening function of a whitening MRC depending on the existence or absence of interference in a receive end of a multiple antenna system. A further aspect of the present invention is to provide an apparatus and method for controlling a whitening function of a whitening MRC depending on a Carrier to Interference plus Noise Ratio (CINR) in a receive end of a multiple antenna system. The above aspects are achieved by providing an apparatus and method for adaptive whitening in a multiple antenna system. According to one aspect of the present invention, a method for controlling a whitening function of a whitening Maximum Ratio Combining (MRC) in a receive end of a multiple antenna system is provided. The method includes identifying if there is an influence of interference from at least one neighbor cell, if there is the influence of interference, generating a weight of the whitening MRC using a pre-whitening inverse matrix, and, if there is no influence of interference, generating a weight of the whitening MRC using a unit matrix. According to another aspect of the present invention, a method for controlling a whitening function of a whitening MRC in a receive end of a multiple antenna system is provided. The method includes identifying if there is an influence of interference from at least one neighbor cell, setting an update variable for a covariance matrix of noise plus interference in consideration the influence of interference, calculating a covariance matrix of noise plus interference, updating the covariance matrix of noise plus interference using the update variable, calculating a pre-whitening inverse matrix using the updated covariance matrix of noise plus interference, and generating a weight of the whitening MRC using the pre-whitening inverse matrix. According to a further aspect of the present invention, an apparatus for controlling a whitening function of a whitening MRC in a receive end of a multiple antenna system is provided. The apparatus includes at least one antenna, an interference identifier, a filter controller, and a pre-whitening filter. The interference identifier identifies if there is an influence of interference from at least one neighbor cell, using a signal received through the at least one antenna. If there is an influence of interference, the filter controller provides a pre-whitening inverse matrix to the pre-whitening filter and, if there is no influence of interference, the filter controller provides a unit matrix to the pre-whitening filter. The pre-whitening filter generates a weight of the whitening MRC using the pre-whitening inverse matrix or unit matrix provided from the filter controller. According to a yet another aspect of the present invention, an apparatus for controlling a whitening function of a whitening MRC in a receive end of a multiple antenna system is provided. The apparatus includes at least one antenna, an interference identifier, a filter controller, and a pre-whitening filter. The interference identifier identifies if there is an influence of interference from at least one neighbor cell using a signal received through the at least one antenna. 
The filter controller updates a covariance matrix of noise plus interference using an update variable for a covariance matrix of noise plus interference that is set considering an influence of interference, and transmits a pre-whitening inverse matrix, which is calculated using the updated covariance matrix, to the pre-whitening filter. The pre-whitening filter generates a weight of the whitening MRC using the pre-whitening inverse matrix provided from the filter controller. BRIEF DESCRIPTION OF THE DRAWINGS [0017] The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings in which: FIG. 1 is a block diagram illustrating whitening Maximum Ratio Combining (MRC) according to the present invention; FIG. 2 is a block diagram illustrating a receive end in a multiple antenna system according to the present invention; FIG. 3 is a flow diagram illustrating controlling a whitening function depending on an interference amount in a receive end according to an embodiment of the present invention; FIG. 4 is a flow diagram illustrating controlling a whitening function depending on an interference amount in a receive end according to another embodiment of the present invention; FIG. 5 is a flow diagram illustrating controlling a whitening function depending on a Carrier to Interference plus Noise Ratio (CINR) in a receive end according to an embodiment of the present invention; and FIG. 6 is a flow diagram illustrating controlling a whitening function depending on a CINR in a receive end according to another embodiment of the present invention. DETAILED DESCRIPTION OF EMBODIMENTS OF THE PRESENT INVENTION [0024] The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted for clarity and conciseness. The terms and words used in the following description and claims are not limited to their dictionary meanings, but are merely used to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of embodiments of the present invention are provided for illustration purpose only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents. A technology for selectively using a Minimum Mean Square Error (MMSE) scheme or a Maximum Ratio Combining (MRC) scheme depending on the existence or absence of inter-cell interference in a receive end of a multiple antenna system is described below. In the following description, it is assumed that the receive end includes N antennas. Here, the `N ` represents an integer of `1` or more. Also, in the following description, it is assumed that the multiple antenna system uses an Orthogonal Frequency Division Multiplexing (OFDM) scheme. However, the present invention is applicable even when the multiple antenna system uses other communication schemes. 
When there is inter-cell interference, the receive end receives a signal that can be represented as in Equation (1). Here, Equation (1) represents a receive signal converted into a frequency domain signal through Fast Fourier Transform (FFT). +N (1) In Equation (1), `Y` represents a signal received at a receive end, `H` represents a channel between a transmit end and the receive end, `X` represents a signal transmitted at the transmit end, `H.sub.I` represents an interference channel between a different transmit end having the influence of interference and the receive end, T represents a neighbor cell interference signal, and `N` represents a thermal noise. When the receive end uses an MMSE scheme, the receive end generates an MMSE weight based on Equation (2). .s- ub.nn (2) In Equation (2), `W` represents an MMSE weight, `y` represents a receive signal, `H` represents a channel between a transmit end and a receive end, and `R ` represents a covariance matrix of noise plus interference. Here, `H ` can be proved by applying a matrix inversion theorem. The covariance matrix (R ) of Equation (2) can be given using Cholesky Factorization as in Equation (3). In Equation (3), `R ` represents a covariance matrix of noise plus interference, `n` represents the sum of noise and interference, and `L` represents a pre-whitening inverse matrix. By applying Equation (3) to Equation (2), the MMSE weight can be expressed in Equation (4). In Equation (4), `W` represents an MMSE weight, `H` represents a channel between a transmit end and a receive end, `R ` represents a covariance matrix of noise plus interference, and `L` represents a pre-whitening inverse matrix. When the receive end uses an MRC scheme, the receive end generates an MRC weight given in Equation (5). In Equation (5), W ` represents an MRC weight, and `H` represents a channel between a transmit end and a receive end. In a comparison between Equations (4) and (5), the MMSE weight and MRC weight have a difference of application/non-application of a pre-whitening inverse matrix (L). Accordingly, as illustrated in FIG. 1 the receive end can selectively use an MMSE scheme or an MRC scheme by controlling the pre-whitening inverse matrix (L) of a whitening MRC depending on the existence or absence of interference. Here, the whitening MRC represents a reception scheme designed to exhibit the reception performance of the MMSE scheme using an MRC. FIG. 1 is a block diagram illustrating whitening MRC according to the present invention. As illustrated in FIG. 1, the whitening MRC multiplies a weight by a pre whitening inverse matrix (L) to receive the same signal as that of an MMSE scheme. At this time, a receive end selectively provides the pre-whitening inverse matrix (L) depending on the existence or absence of interference and turns ON/OFF a whitening function of the whitening MRC. For example, when there is inter-cell interference, the receive end provides the pre-whitening inverse matrix (L). Accordingly, the receive end can generate an MMSE weight given in Equation (4) and receive the same signal as that of the MMSE scheme. As another example, when there is no inter-cell interference, the receive end provides a unit matrix (kI), not the pre-whitening inverse matrix (L). Accordingly, the receive end can generate an MRC weight given in Equation (5) and receive the same signal as that of an MRC scheme. 
The following description is that of a receive end for selectively providing a pre-whitening inverse matrix depending on the existence or absence of interference. FIG. 2 is a block diagram illustrating a receive end in a multiple antenna system according to the present invention. As illustrated in FIG. 2, the receive end includes a plurality of Radio Frequency (RF) receivers 201-1 to 201-N , a plurality of Analog to Digital Converters (ADCs) 203-1 to 203-N , a plurality of OFDM demodulators 205-1 to 205-N , a plurality of interference measurement units 207-1 to 207-N , a plurality of filter controllers 209-1 to 209-N , a pre-whitening filter 211, and a Multiple Input Multiple Output (MIMO) detector 213. The RF receivers 201-1 to 201-N convert signals received through antennas (N to N ) into baseband signals. The ADCs 203-1 to 203-N convert analog signals provided from the respective RF receivers 201-1 to 201-N into digital signals. The OFDM demodulators 205-1 to 205-N convert time domain signals provided from the respective ADCs 203-1 to 203-N into frequency domain signals through a Fast Fourier Transform (FFT) operation. The interference measurement units 207-1 to 207-N measure interference power in signals provided from the respective OFDM demodulators 205-1 to 205-N . For example, the interference measurement units 207-1 to 207-N measure power of a combination of noise and interference using pilot signals. The interference measurement units 207-1 to 207-N measure power of unused tones. If the interference measurement units 207-1 to 207-N recognize the sum of the power of the unused tones as thermal noise, the interference measurement units 207-1 to 207-N remove the thermal noise from the power of combination of noise and interference and measure interference power. As another example, when measuring covariance matrices (R ) of noise plus interference in a burst, the interference measurement units 207-1 to 207-N can also measure interference power in the burst. As a further example, in the case of an Adaptive Modulation and Coding (AMC) sub-channel structure, the interference measurement units 207-1 to 207-N can also measure interference power in a band. The filter controllers 209-1 to 209-N determine if there is interference depending on interference power provided from the respective interference measurement units 207-1 to 207-N , and select pre-whitening inverse matrices (L). For example, when there is interference, the filter controllers 209-1 to 209-N select and provide pre-whitening inverse matrices (L) to the pre-whitening filter 211. As another example, when there is no interference, the filter controllers 209-1 to 209-N select and provide unit matrices (kI) to the pre-whitening filter 211. The pre-whitening filter 211 filters out interference from receive signals provided from the OFDM demodulators 205-1 to 205-N . The pre-whitening filter 211 is included in a whitening MRC. Accordingly, when receiving pre-whitening inverse matrices (L) from the filter controllers 209-1 to 209-N , the pre-whitening filter 211 generates an MMSE weight given in Equation (4) and receives a signal in an MMSE scheme. On the other hand, when receiving unit matrices (kI) from the filter controllers 209-1 to 209-N , the pre-whitening filter 211 generates an MRC weight given in Equation (5) and receives a signal in an MRC scheme. The MIMO detector 213 determines a transmit signal using a receive signal from which interference is filtered out by the pre-whitening filter 211 and channel information. 
The following description is a method for controlling a whitening function depending on the existence or absence of interference that is identified using interference power. FIG. 3 is a flow diagram illustrating controlling a whitening function depending on an interference amount in a receive end according to an embodiment of the present invention. Referring to FIG. 3, in step 301, the receive end identifies if a signal is received from a transmit end. If the signal is received from the transmit end, in step 303, the receive end measures power of interference included in the receive signal. For example, the receive end measures power of a combination of noise and interference using a pilot signal included in the receive signal. Also, the receive end sums up power of unused tones and recognizes the summed power as a thermal noise. After that, the receive end removes thermal noise from the power of the combination of noise and interference, and measures interference power. After measuring the interference power, the receive end proceeds to step 305 and compares the interference power measured in step 303 to a threshold interference power so as to determine if there is If the interference power measured in step 303 is greater than the threshold interference power, the receive end recognizes that there is interference. Accordingly, the receive end proceeds to step 307 and selects a pre-whitening inverse matrix (L) as a whitening control variable. In this case, the receive end turns ON a whitening function of a whitening MRC and receives a signal in an MMSE On the other hand, if the interference power measured in step 303 is less than or equal to the threshold interference power, the receive end recognizes that there is no interference. Accordingly, the receive end proceeds to step 309 and selects a unit matrix (kI) as the whitening control variable. In this case, the receive end turns OFF the whitening function of the whitening MRC and receives a signal in an MRC scheme. After that, the receive end terminates the procedure according to the embodiment of the present invention. In the aforementioned embodiment, a receive end selectively provides a pre-whitening inverse matrix depending on the existence or absence of interference. In another embodiment, a receive end can also control an R update variable and turn ON/OFF a whitening function of a whitening MRC. For example, the receive end converts an `R ` into an `LL ` form through Cholesky Factorization as given in Equation (3). Before carrying out Cholesky Factorization of the `R `, the receive end updates the `R ` as given in Equation (6). +kI (6) In Equation (6), `R ` represents an updated R , `R ` represents a covariance matrix of noise plus interference, `k` represents an R update variable, and `I` represents a unit matrix. As in Equation (6), the receive end updates `R ` before generating a pre-whitening inverse matrix (L) through Cholesky Factorization of `R `. At this time, the receive end can turn ON/OFF the whitening function of the whitening MRC depending on `k`. For example, when there is inter-cell interference, the receive end sets `k` down and does not vary R `. As another example, when there is no inter-cell interference, the receive end sets `k` up. In this case, diagonal elements of `hd nn` are relatively greater than off-diagonal elements and thus `R ` approaches a unit matrix. When the `R ` is a unit matrix, even the pre-whitening inverse matrix (L) becomes a unit matrix and thus, the whitening function can turn OFF. 
Accordingly, the filter controllers 209-1 to 209-N of FIG. 2 can determine an update variable for a covariance matrix (R ) of noise plus interference in consideration of the influence of interference. The filter controllers 209-1 to 209-N calculate pre-whitening inverse matrices (L) using the `R ` updated using the update variable, and transmit the calculated pre-whitening inverse matrices (L) to the pre-whitening filter 211. The following description is made for a method for controlling an R update variable and turning ON/OFF a whitening function of a whitening MRC in a receive end. FIG. 4 is a flow diagram illustrating controlling a whitening function depending on an interference amount in a receive end according to another embodiment of the present invention. Referring to FIG. 4, in step 401, the receive end identifies if a signal is received from a transmit end. If the signal is received from the transmit end, in step 403, the receive end measures power of interference included in the receive signal. For example, the receive end measures power of a combination of noise and interference, using a pilot signal included in the receive signal. Also, the receive end sums up power of unused tones and recognizes the summed power as thermal noise. The receive end then removes the thermal noise from the power of combination of noise and interference, and measures interference power. After measuring the interference power, the receive end proceeds to step 405 and compares the interference power measured in step 403 with a threshold interference power in order to determine if there is interference. If the interference power measured in step 403 is greater than the threshold interference power, the receive end recognizes that there is interference. Accordingly, the receive end proceeds to step 407 and sets an R update variable (k) to a value less than a threshold value. Here, the threshold value includes a value for determining if a covariance matrix (R ) is varied through the R update variable (k). Then, the receive end proceeds to step 409 and calculates a covariance matrix (R ) of noise plus interference. On the other hand, if the interference power measured in step 403 is less than or equal to the threshold interference power, the receive end recognizes that there is no interference. Accordingly, the receive end proceeds to step 415 and sets the R update variable (k) to a value greater than the threshold value. Then, the receive end proceeds to step 409 and calculates a covariance matrix (R ) of noise plus interference. After calculating the covariance matrix (R ) of noise plus interference, the receive end proceeds to step 411 and updates the covariance matrix (R ) using the R update variable (k) set in step 407 or 415. For example, the receive end updates the `R ` as given in Equation (6). After updating R , the receive end proceeds to step 413 and calculates a pre-whitening inverse matrix (L) through Cholesky Factorization of the R updated in step 411. For example, when there is no interference and thus the R update variable (k) is set to a value greater than the threshold value, the R approaches a unit matrix because diagonal elements of the R are relatively greater than off-diagonal elements. In this case, even the pre-whitening inverse matrix (L) becomes a unit matrix and thus, the receive end turns OFF a whitening function of a whitening MRC and receives a signal in an MRC scheme. 
As another example, when there is interference and thus the R update variable (k) is set to a value less than the threshold value, R does not vary. Accordingly, the receive end can turn ON the whitening function of the whitening MRC and receive a signal in an MMSE scheme. After that, the receive end terminates the procedure according to the embodiment of the present invention. In the aforementioned embodiment, a receive end determines if there is interference using interference power. In another embodiment, a receive end can also determine if there is interference using a Carrier to Interference plus Noise Ratio (CINR). The following description is a method for determining a pre-whitening control variable depending on the existence or absence of interference that is identified using a CINR. FIG. 5 is a flow diagram illustrating controlling a whitening function depending on a CINR in a receive end according to an embodiment of the present invention. Referring to FIG. 5, in step 501, the receive end identifies if a signal is received from a transmit end. If the signal is received from the transmit end, in step 503, the receive end calculates a Carrier to Interference plus Noise Ratio (CINR) for the receive signal. After measuring the CINR, the receive end proceeds to step 505 and compares the CINR calculated in step 503 with a threshold CINR (CINR ) in order to determine if there is interference. If the CINR calculated in step 503 is less than or equal to the threshold CINR (CINR ), the receive end recognizes that there is interference. Accordingly, the receive end proceeds to step 507 and selects a pre-whitening inverse matrix (L) as a whitening control variable. In this case, the receive end turns ON a whitening function of a whitening MRC and receives a signal in an MMSE scheme. On the other hand, if the CINR calculated in step 503 is greater than the threshold CINR (CINR ), the receive end recognizes that there is no interference. Accordingly, the receive end proceeds to step 509 and selects a unit matrix (kI) as the whitening control variable. In this case, the receive end turns OFF the whitening function of the whitening MRC and receives a signal in an MRC scheme. After that, the receive end terminates the procedure according to the embodiment of the present invention. In the aforementioned embodiment, a receive end selectively provides a pre-whitening inverse matrix depending on the existence or absence of interference. In another embodiment, a receive end can also control an R update variable and turn ON/OFF a whitening function of a whitening MRC. FIG. 6 is a flow diagram illustrating controlling a whitening function depending on a CINR in a receive end of according to another embodiment of the present invention. Referring to FIG. 6, in step 601, the receive end identifies if a signal is received from a transmit end. If the signal is received from the transmit end, in step 603, the receive end calculates a CINR for the receive signal. After measuring the CINR, the receive end proceeds to step 605 and compares the CINR calculated in step 603 with a threshold CINR (CINR ) in order to determine if there is interference. If the CINR calculated in step 603 is less than or equal to the threshold CINR (CINR ), the receive end recognizes that there is interference. Accordingly, the receive end proceeds to step 607 and sets an R update variable (k) to a value less than a threshold value. 
Then, the receive end proceeds to step 609 and calculates a covariance matrix (R ) of noise plus interference. On the other hand, if the CINR calculated in step 603 is greater than the threshold CINR (CINR ), the receive end recognizes that there is no interference. Accordingly, the receive end proceeds to step 615 and sets the R update variable (k) to a value greater than the threshold value. Then, the receive end proceeds to step 609 and calculates a covariance matrix (R ) of noise plus interference. After calculating the covariance matrix (R ) of noise plus interference, the receive end proceeds to step 611 and updates the covariance matrix (R ) using the R update variable (k) that is set in step 607 or step 615. For example, the receive end updates the R as given in Equation (6). After updating the R , the receive end proceeds to step 613 and calculates a pre-whitening inverse matrix (L) through Cholesky Factorization of the R updated in step 611. For example, when there is no interference and thus the R update variable (k) is set to a value greater than the threshold value, the R approaches a unit matrix because diagonal elements of the R are relatively greater than off-diagonal elements. In this case, even the pre-whitening inverse matrix (L) becomes a unit matrix and thus, the receive end turns OFF a whitening function of a whitening MRC and receives a signal in an MRC scheme. As another example, when there is interference and thus the R update variable (k) is set to a value less than the threshold value, the R does not vary. Accordingly, the receive end can turn ON the whitening function of the whitening MRC and receive a signal in an MMSE scheme. After that, the receive end terminates the procedure according to the embodiment of the present invention. The present invention has an advantage of being capable of improving a reception performance of a receive end by selectively using an MMSE scheme or an MRC scheme depending on the existence or absence of inter-cell interference in the receive end of a multiple antenna system. While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Patent applications by SAMSUNG ELECTRONICS CO., LTD. Patent applications in class Interference or noise reduction Patent applications in all subclasses Interference or noise reduction User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20130022160","timestamp":"2014-04-19T03:54:28Z","content_type":null,"content_length":"64461","record_id":"<urn:uuid:4c6d7145-1006-4a12-8a87-6379f50c18ae>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
The Graph Theorists' Home Page Guide by Jörg Zuther What do you find on this page? Why do I maintain this page? You're a graph theorist and want to see your name here? What can YOU do to improve this page? The Pages Links on this page have last been checked: May 30th, 2002 Last update: May 26th, 2006 • 25.05.2006: Unfortunately, my time is tight. Please don't expect much change or growth on this list. Currently, I do not have the resources to put more data onto this page according to my earlier plans (see below). • 30.05.2002: Link rot has become a great annoyance. The usefulness of this page is strongly reduced because of this problem. See the page "Cool URIs don't change" of the W3C for a description and many tips to avoid this problem. • 07.03.2002: I must apologize to some people who had to wait very long to be put onto this page (some more than one year!!). My time was very tight and I had big problems in updating the page. I have to apologize even more since I didn't answer some mails and have even lost some. I'm ***VERY*** sorry about that! So, if you have tried to contact me but received no answer and still don't find your name here, please try again. I beg your pardon. This time, I promise to do it within one week. • 07.03.2002: I plan to use an XML approach to maintain my web pages. This will cost a lot of time but will pay off when it comes to putting new data onto the pages or changing the layout. Further, I'm thinking about some improvements of the data: I want to include email (and people who have only email but no homepage) and the Mathematics Subject Classification for the subjects listed in the "Research Interests". The latter should facilitate searching. What do you find on this page? As the title suggests, this is a collection of links to home pages of graph theorists. Every link is accompanied by some information about the residence and the research interests of the corresponding graph theorist taken from his/her home page. Furthermore, notes about or links to interesting or special information on the corresponding site are given. The research and maintenance of all this information costs a lot of time, causing a slow growth of this list. Why do I maintain this page? 07.03.2002: Since I started this page several years have passed and the web has grown superexponentially. Nowadays, almost every graph theorist has his own homepage - with the exception of some of the "dinosaurs" who still "refuse" to create a homepage ;-). Therefore, I think that the graph theoretical subdigraph of the web has a lot of vertices now. Further, the linkage has improved a lot. However, I'm still of the opinion that it would be good for a faster development of the field if much more content were accessible via the web. I have changed my opinion about the difficulty of creating a homepage. If you are not familiar with programming at all and even have problems using a text editor (which shouldn't be the case - how could you type LaTeX documents without some sort of text editor?), then it may take you more than a few hours to produce even a very simple HTML document. You may curse the rigidity of the syntax (programmers may find it sloppy, though). In addition, you'll find only a few pages on the web that use plain HTML, unlike in the early days of the web. The source code of many pages may not be very useful for you. If you still want to try, you can teach yourself HTML - for links see below. 
Old Text: It seems to me that the graph theoretical subdigraph of the web has an unnecessarily small number of vertices and is very sparse. Further, a lot of vertices contain only little information. It is a pity that the web is apparently not very popular among many graph theorists. Running a home page has a lot of advantages: • It facilitates access to your 'snail mail' or email address for other people. • You can tell other people what topics you're interested in. This catalyses finding contacts. • You can give information about your lectures, seminars etc. • You can distribute your preprints, lists of publications etc. • You can collect links to your favourite web sites to give other people the opportunity to visit these sites, too. Do you have one of the following objections against having your own home page? • Objection: I'm scared of distributing information about myself on the web. My reply: It's not necessary to put any private information onto your home page. You can restrict yourself to information that is accessible to the public anyway, such as the address of your office, your preprints, papers, main research interests, lectures,... • Objection: I'm clumsy with the computer. / I don't have enough time to create and maintain my own home page. My reply: It is very easy to create a homepage. If you just want to put your address and your research interests onto your page, it should be possible to do it within only one or two hours. Web pages are written in HTML (Hypertext Markup Language), which is largely self-explanatory. You can download and display the source of any page you're accessing to see how certain effects and features you like can be achieved with HTML. For instance, in some menu of your WWW browser you may find an entry like "View Source Code". Clicking this entry will open a new browser window showing the source code of this page. Furthermore, the web contains a lot of information about HTML (you can find several links by visiting Jörg Zuther's Quality Web Site Seeing). If you don't know anything about HTML you may need another two hours to get the basics. Ask your webmaster where to store your page and how to get it linked. I hope that this page will improve the communication among graph theorists and encourage many graph theorists to create their own home page and to start running pages containing sound information about special graph theoretical topics. Some of the following collections of home pages run on other sites just offer the raw link by the name of the graph theorist. On this page, you find some more information (as specified above) to give the surfer more orientation. 12.12.1999: I want to add that I'm no longer in graph theory. I quit university about 2 years ago and do not have much time left for scientific studies. Nevertheless, I'll maintain this page in the future because it has attracted some attention and seems to be useful for some people. You're a graph theorist and want to see your name here? Just mail to jzuther@gmx.de. Include the URL of your home page and it'll be accessible by your linked name soon. You can also suggest some comments you want to be displayed together with your link. What can YOU do to improve this page? • First of all, if you're a graph theorist or someone with a strong interest in graph theory (you need not be a mathematician!), and if you have a homepage but don't find a link to it on this page, please contact me as described above. 
• If you find any incorrect information on this page, please feel free to contact me and suggest corrections or just give a hint. For example, if there's any inaccurate information about yourself on this page, please tell me immediately. If you find an outdated link and know a new one, please send the new URL. This would be very helpful. • If you know a homepage of a graph theorist and find a "missing link" on this page, please send me the URL. I don't have the time to do all the necessary research on my own. The more complete this page is, the more useful it is - for YOU. I don't guarantee the correctness of the information on this page (although I'll check the links periodically) and, of course, take no responsibility for the contents of the pages accessible via this list. See also my general Disclaimer page. • Ghidewon Abay Asmerom (at the Dept. of Math, Virginia Commonwealth Univ., Richmond, USA) Research Interests: graph theory and combinatorics Remarkable Features: collection of links to African language sites • Arnold Adelberg (at the Dept. of Math and CS, Grinnell College, Iowa, USA) Research Interests: number theory, algebraic geometry, combinatorics • Alexander A. Ageev (at the Sobolev Inst. of Math, Novosibirsk, Russia) Research Interests: combinatorial optimization and graph theory, esp. design and analysis of algorithms for discrete optimization problems • Martin Aigner (at the Inst. for Math II, Freie Universität Berlin, Germany) Research Interests: combinatorics and graph theory • Farid Alizadeh (at RUTCOR, Rutgers, the State Univ. of New Jersey, Piscataway, USA) Research Interests: computational biology, combinatorial optimization, numerical methods in optimization, convex analysis • Noga Alon (at the School of Math Sciences, Tel Aviv Univ., Israel) Research Interests: applications of combinatorics and graph theory to theoretical computer science, combinatorial algorithms and circuit complexity, combinatorial geometry, combinatorial number theory, algebraic and probabilistic methods in combinatorics • Brian Alspach (at the Dept. of Math and Statistics, Simon Fraser Univ., Burnaby, British Columbia, Canada) Research Interests: graph theory and the mathematics of gambling, esp. poker Remarkable Features: Poker Digest, a collection of articles Alspach published in Poker Digest, and Poker Computations, a collection of computations concerning Poker. • Thomas Andreae (at the Dept. of Math, Univ. of Hamburg, Germany) Research Interests: graph theory and combinatorics • Richard Anstee (at the Math Dept., Univ. of British Columbia, Canada) Research Interests: combinatorics, extremal set theory, graph theory, matching theory • Dan Archdeacon (at the Dept. of Math and Statistics, Univ. of Vermont, Burlington, USA) Research Interests: combinatorics, computer science, and graph theory, esp. topological graph theory Remarkable Features: Problems in Topological Graph Theory, a big, well-maintained list • Esther M. Arkin (at the Dept. of Applied Math and Statistics, Univ. of New York at Stony Brook, USA) Research Interests: operations research, computational geometry, algorithms and data structures • Stefan Arnborg (at the Dept. of Numerical Analysis and CS (NADA), Royal Inst. 
of Technology (KTH), Stockholm, Sweden) Research Interests: graph theory, algorithms for Swedish language tools • Sanjeev Arora (at the CS Dept., Princeton Univ., New Jersey, USA) Research Interests: computational complexity, randomness in computation, probabilistically checkable proofs, computing approximate solutions to NP-hard problems • Edward F. Assmus, Jr. (at the Dept. of Math, Lehigh Univ., Bethlehem, Pennsylvania, USA) Research Interests: combinatorics (esp. codes, Steiner triple systems, designs, discrete geometry) • Mike Atkinson (at the School of Math and Computational Sciences, Univ. of St. Andrews, Scotland, UK) Research Interests: design and analysis of algorithms, algebra, combinatorics, and connections between these (esp. descent algebras, container data types, restricted permutations) Remarkable Features: GAP - Groups, Algorithms and Programming is a software "system for computational discrete algebra with particular emphasis on, but not restricted to computational group • David Avis (at the Dept. of CS, McGill Univ., Montreal, Quebec, Canada) Research Interests: combinatorics (esp. reverse search algorithms and perfect graphs), computational geometry Remarkable Features: lrs home page, on the software package lrslib"a self-contained ANSI C implementation of the reverse search algorithm for vertex enumeration/convex hull problems" • Luitpold Babel (at the Math Center, TU Munich, Germany) Research Interests: structural and algorithmic graph theory, combinatorial optimization, efficient algorithms, algebraic combinatorics • Camino Balbuena (Link outdated! Does anyone know what happened to him or his page?) (at the Dept. de Matemàtica Aplicada III, Univ. Politècnica de Catalunya, Barcelona, Spain) Research Interests: graph theory, esp. connectivity of graphs and digraphs, vulnerability and design of large interconnection networks, optimal network design, algebraic graph theory • Jørgen Bang-Jensen (at the Dept. of Math & CS, Odense Univ., Denmark) Research Interests: graph theory, esp. digraphs, tournaments and networks • Curtis Barefoot (at the Math Dept., New Mexico Tech, Socorro, USA) Research Interests: combinatorics and graph theory • David Barnette (has become Emeritus Prof. at the Dept. of Math, Univ. of California, Davis, USA; homepage deleted) Research Interests: graph theory and combinatorial geometry, esp. convex polytopes and triangulations of manifolds • Reuven Bar-Yehuda (at the CS Dept., Technion IIT, Haifa, Israel) Research Interests: combinatorial algorithms, esp. approximation algorithms, computational geometry and applications, theoretical aspects of communications, and practical aspects of VLSI Remarkable Features: homepage contains also a lot of information about the local-ratio technique and its applications • Vladimir Batagelj (at the Dept. of Math, Univ. of Ljubljana, Slovenia) Research Interests: inductive classes of graphs, semirings and graphs, graph theory algorithms, social networks analysis, visualization of graphs, large networks Remarkable Features: □ coauthor of the program Pajek (Slovenian word for Spider) for large networks analysis (for Windows) • Robert A. Beezer (at the Dept. of Math and CS, Univ. of Puget Sound, Tacoma, Washington, USA) Research Interests: combinatorics and graph theory, esp. regular graphs and algebraic graph theory • Claude Benzaken (at the Dept. of Discrete Math, Leibniz Lab, IMAG Inst., Grenoble, France) Research Interests: graphs and hypergraphs, esp. 
coloring; general combinatorial invariants; software development, esp. for handling hypergraphs and Boolean functions Remarkable Features: □ Cabri-Clutter, a software for the study of Sperner hypergraphs that are bicritical for the chromatic number; a more general project of the Graph Theory Team at the Leibniz Lab is □ Cabri-graph, a software "to handle graphs (edition, operations, computations)" • David Berman (at the Math Dept., Univ. of New Orleans, Louisiana, USA) Research Interests: graph theory • Bing Xu (Link outdated! Does anyone know what happened to him or his page?) (at the Dept. of Computing Science, Univ. of Alberta, Edmonton, Canada) Research Interests: graph theory (esp. graph classes) and programming query interfaces for databases Remarkable Features: Graph Class, a page about graph families • Paul E. Black (at the U.S. National Institute of Standards and Technology, Gaithersburg, Maryland, USA) Research Interests: algorithms and data structures in general, and formal methods and verification Remarkable Features: Dictionary of Algorithms and Data Structures, which contains many notions also used in graph theory (graph theory and algorithms are closely related subjects) • Avrim L. Blum (at the Dept. of CS, Carnegie Mellon Univ., Pittsburgh, Pennsylvania, USA) Research Interests: machine learning theory, approximation algorithms, on-line algorithms Remarkable Features: Graphplan, a software tool for constructing planning graphs • Hans L. Bodlaender (at the Dept. of CS, Utrecht Univ., The Netherlands) Research Interests: graph theory, esp. treewidth and parallel algorithms • Thomas Böhme (at the Dept. of Math, Faculty of Math and Natural Sciences, TU Ilmenau, Germany) Research Interests: graph theory and computer science, esp. topological graph theory, cycles in graphs, minors, petri nets, load balance • James "Jay" Boland (at the Dept. of Math, East Tennessee State Univ., Johnson City, USA) Research Interests: graph theory, esp. connectivity • Bela Bollobas (at the Dept. of Math Sciences, The Univ. of Memphis, Tennessee, USA) Research Interests: combinatorics, functional analysis • Anthony Bonato (at the Dept. of Math, Wilfrid Laurier Univ., Canada) Research Interests: graph theory and combinatorics, esp. graph homomorphisms and adjacency properties of graphs • C. Paul Bonnington (at the Division of Science and Technology, Univ. of Auckland, New Zealand) Research Interests: combinatorics, esp. graph theory (finite and infinite), combinatorial computing, combinatorial topology • Andreas Brandstädt (at the Dept. of CS, Univ. of Rostock, Germany) Research Interests: graph theory, esp. efficient graph algorithms, graph and hypergraph models, graph classes • Stefan Brandt (at the Dept. of Math and CS, Freie Universität Berlin, Germany) Research Interests: graph theory and its relations to algebra, geometry, and computer science • Hajo Broersma (at the Faculty of Applied Math, Univ. of Twente, Enschede, The Netherlands) Research Interests: discrete mathematics and graph theory, esp. cycles and paths, spanning trees, and claw-free graphs, also planar graphs, colouring, connectivity, vulnerability, and diameter • Andries E. Brouwer (at the Dept. of Math, TU Eindhoven, The Netherlands) Research Interests: combinatorics and graph theory, esp. linear codes, distance-regular graphs Remarkable Features: Server "for bounds on the minimum distance of q-ary linear codes, q=2,3,4" • W. G. Brown (at the Dept. 
of Math and Statistics, McGill Univ., Montreal, Quebec, Canada) Research Interests: combinatorics and graph theory • Richard A. Brualdi (at the Math Dept., Univ. of Wisconsin, Madison, USA) Research Interests: combinatorics, codes, matrices • Anita C. Burris (at the Dept. of Math and Statistics, Youngstown State Univ., Ohio, USA) Research Interests: graph theory and combinatorics; also algebra, number theory, mathematics education • Ibrahim Cahit (at the Eastern Mediterranean Univ., North Cyprus) Research Interests: combinatorics and graph theory, esp. graph labelings, decomposition of graphs, and trees • Leizhen Cai (at the Dept. of CS and Engineering, The Chinese Univ. of Hong Kong, Shatin, Hong Kong, P.R. China) Research Interests: graph algorithms (esp. uniformly polynomial-time algorithms for graph problems, efficient algorithms for parameterized graphs, recognition algorithms for special graphs, spanning subgraphs), graph theory (esp. intersection graphs, colouring games, perfect graphs, local structures, path and cycle coverings and decompositions), computational complexity (esp. NP-completeness, fixed-parameter tractability) • Peter J. Cameron (at the School of Math Sciences, Queen Mary and Westfield College, Univ. of London, UK) Research Interests: permutation groups acting on any structures, such as designs, graphs, codes, ordered sets, topological spaces,... • Yair Caro (Link to Graph Theory White Pages by Daniel Sanders until Yair has put up a new homepage/ at the Dept. of Math, Univ. of Haifa - Oranim, Tivon, Israel) Research Interests: combinatorics and graph theory, esp. independence and domination parameters, extremal problems, Ramsey theory and zero-sum problems, decomposition, packings and coverings of • Gary Chartrand (at the Dept. of Math and Statistics, Western Michigan Univ., Kalamazoo, USA) Research Interests: combinatorics and graph theory, esp. applied and algorithmic graph theory and digraphs • Yeow Meng Chee (lives in Singapore, has his own domain, and his activities include research, fundraising, and governmental committees) Research Interests: Turan-type problems, combinatorial designs, coding theory, and cryptography • Bill Chen's Field of Dreams (at the Center for Combinatorics, Nankai Univ., Tianjin, P.R. China) Research Interests: combinatorics Remarkable Features: The Combinatorics Net, a vast resource covering a wide range of subjects concerning combinatorics • Fan Chung Graham (at the Dept. of Math, Univ. of California San Diego, La Jolla, California, USA) Research Interests: graph theory and combinatorics, esp. eigenvalues and spectra of graphs • Vašek Chvátal (at the Dept. of CS, Rutgers, the State University of New Jersey, Piscataway, USA) Research Interests: analysis of algorithms (esp. cutting-plane proofs), combinatorics (esp. extremal problems and random discrete structures), graph theory (esp. hamiltonian cycles and perfect graphs), and operations research (esp. linear programming) Remarkable Features: • Karen L. Collins (at the Math Dept., Wesleyan Univ., Middletown, Connecticut, USA) Research Interests: graph theory, enumerative and algebraic combinatorics • Francesc Comellas (at the Dept. de Matemàtica Aplicada i Telemàtica, Univ. Politècnica de Catalunya, Barcelona, Spain) Research Interests: application of graph theory to the design of topologies and communication strategies for interconnection networks, esp. 
design of large networks, particularly the degree-diameter problem for vertex-symetric networks, communication problems (broadcasting and gossiping), use of combinatorial optimization algorithms for problems like the design of certain classes of networks or communication strategies, simulated annealing, tabu search, genetic algorithms, genetic programming, multi-agent algorithms (ants), immune system based algorithms • Derek G. Corneil (at the Dept. of CS, Univ. of Toronto, Canada) Research Interests: graph theory and combinatorics • Bruno Courcelle (at the Lab for Research in CS (LaBRI), Bordeaux, France) Research Interests: graph theory, logic and their interrelationships • Lenore Cowen (at the CS Dept., Tufts University, Medford, Massachusetts, USA) Research Interests: Graph coloring, applications of combinatorics and graph theory to theoretical computer science, probabilistic methods in graph theory and combinatorics, graph algorithms • Joseph (Joe) Culberson (at the Dept. of Computing Science, Univ. of Alberta, Edmonton, Canada) Research Interests: algorithms, esp. hardness of algorithms, randomized approaches to NP-hard problems, genetic algorithms, graph colorings; also computational geometry, binary search trees, real cost searching, partial orders Remarkable Features: Graph Coloring Page • Nathaniel Dean (at the Computational & Applied Math, Rice University, Houston, Texas, USA) Research Interests: algorithms, graph theory, geometry, and combinatorics and their application to data visualization and network design • Ermelinda De La Vina (at the Dept. of Math, Univ. of Houston, Texas, USA) Research Interests: knowledge discovery in databases and graph theory, esp. algorithms and extremal problems • Walter Deuber (at the Dept. of Math, Univ. of Bielefeld, Germany) Research Interests: graph theory, combinatorics and their application to other fields of mathematics • Matthew "Matt" J. DeVos (at the Math Dept., Princeton Univ., New Jersey, USA) Research Interests: graph theory and combinatorics, esp. flow/coloring problems, matroid theory, and combinatorial properties of finite vector spaces Remarkable Features: His page Open Problems In Discrete Math offers open problems (and prizes for solutions) in the areas flows of graphs, cycle covers, choosability for Ax=y, edge coloring, vertex coloring, directed graphs, topological graph theory, matroid theory, additive number theory and more. • Dominique de Werra (at the Dept. of Math, École Polytechnique Fédéral de Lausanne (EPFL), Switzerland) Research Interests: combinatorial optimization, scheduling, graph theory, operations research, timetabling • Reinhard Diestel (at the Dept. of Math, Univ. of Hamburg, Germany) Research Interests: discrete mathematics, graph theory • Guoli Ding (at the Dept. of Math, Louisiana State Univ., Baton Rouge, USA) Research Interests: graph theory, esp. matroids and substructures of embedded graphs • Michael J. Dinneen (at the Dept. of CS, Univ. of Auckland, New Zealand) Research Interests: graph theory, esp. obstructions sets of graph families, generation of graph theoretical conjectures by computers, and Cayley graphs as an underlying network structure Remarkable Features: The VACS Page, a program to find obstructions of graph families that are closed under the minor order • Hristo N. Djidjev (at the Dept. of CS, Univ. of Warwick, Coventry, UK) Research Interests: graph theory and combinatorics, esp. 
efficient combinatorial algorithms based on nice properties of the input data, graph partitioning, parallel computing, shortest paths in planar networks, graph drawing and computational geometry • Gayla S. Domke (at the Dept. of Math and CS, Georgia State Univ., Atlanta, USA) Research Interests: graph theory • Richard Duke (at the Georgia Inst. of Technology, Atlanta, USA) Research Interests: combinatorics, esp. graph theory and finite set systems • Mark Ellingham (at the Dept. of Math, Vanderbilt Univ., Nashville, Tennessee, USA) Research Interests: combinatorics • Robert Ellis (at the Dept. of Math, Texas A&M Univ., College Station, Texas, USA) Research Interests: combinatorics and graph theory, esp. reformulation of the proof of the Four Colour Theorem, cubical complexes, randomization in card shuffling, listening to graphs (!) Remarkable Features: projects menu (4-Colour-Theorem, cross-product reformulation of its proof, listening to graphs (!), torus hitting times and others) Click here for a frame version. • Thomas Emden-Weinert (at the Dept. of CS, Humboldt-Univ. Berlin, Germany) Research Interests: transport and logistics, human resource scheduling; combinatorial optimization, operations research, metaheuristics, simulation; decision support and management information systems; software engineering, software development process improvement; graph theory and algorithms; approximation algorithms • David Eppstein (at the Dept. of Information and CS, Univ. of California, Irvine, USA) Research Interests: graph algorithms and computational geometry Remarkable Features: • Erdõs Pál (1913-1996) (international version: Paul Erdös/ by the Eötvös University, Budapest, Hungary,); see also Research Interests: mathematics, esp. number theory, combinatorics, graph theory • Martin Erickson (at the Division of Math and CS, Truman State Univ., Kirksville, Missouri, USA) Research Interests: number theory, problem solving, combinatorics, and graph theory, esp. ramsey theory • Josep Fàbrega (at the Dept. de Matemàtica Aplicada i Telemàtica, Univ. Politènica de Catalunya, Barcelona, Spain) Research Interests: application of graph theory to the design of topologies and communication strategies for interconnection networks, esp. connectivity of graphs and digraphs, extremal problems in graph theory, vulnerability of interconnection networks, routing and information dissemination in interconnection networks, permutation and dynamic memory networks • Siemion Fajtlowicz (at the Dept. of Math, Univ. of Houston, Texas, USA) Research Interests: algebra, combinatorics, graph theory • Ralph Faudree (at the Dept. of Mathemathical Sciences, The Univ. of Memphis, Tennessee, USA) Research Interests: graph theory, esp. Hamiltonian theory and Ramsey theory of graphs • Michael Fellows (at the School of Electrical Engineering and CS, Univ. of Newcastle, Callaghan, NSW, Australia) Research Interests: combinatorics and graph theory, esp. optimal network design, well-quasiordering theory, and graph algorithms and complexity theory and their apllications to VLSI layout, computational biology and cryptography • Miguel Angel Fiol (at the Dept. de Matemàtica Aplicada i Telemàtica, Univ. Politènica de Catalunya, Barcelona, Spain) Research Interests: graph theory and combintorics, esp. graph coloring, congruences in Z^n, tessellations and equidecompositions, groups and graphs, graphs and interconnection networks, extremal problems in graphs, connectivity of graphs, algebraic graph theory • Steve Fisk (at the Dept. 
of Math, Bowdoin College, Brunswick, maine, USA) Research Interests: combinatorics, topological graph theory, projective geometry • Herbert Fleischner (at the Inst. of Information Processing, Austrian Academy of Sciences, Vienna, Austria) Research Interests: graph theory, esp. Eulerian and Hamiltonian graphs, dominating cycles, the Cycle Double Cover Conjecture, constructions involving the Petersen graph • Pierre Fraigniaud (at the Laboratoire de Recherche en Informatique, Univ. Paris-Sud, Orsay, France) Research Interests: group communications,esp. multicasting, broadcasting; routing; interconnection networks for parallel and distributed systems; communication algorithms for telecommunication • Carlos E. Frasser (Senior Scientist-Consultant, at the ATMIC Corporation: Applied Researches and Development, Toronto, Canada) Research Interests: combinatorics, graph theory, application of graph theory to the topological design of computer networks Remarkable Features: • Kathryn (Kathy) Fraughnaugh (at the Dept. of Math, Univ. of Colorado at Denver, USA) Research Interests: graph theory, applied abstract algebra, mathematical foundations of artificial intelligence and heuristic search • Hubert de Fraysseix (at the Centre d'Analyse et de Mathématique Sociales, Ecole des Hautes Etudes en Sciences Sociales, Paris, France) Research Interests: graph theory and combinatorics, esp. planarity and graph drawing Remarkable Features: P.I.G.A.L.E. (Public Implementation of a Graph Algorithm Library and Editor, together with P. Ossona de Mendez and P. Rosenstiehl). PIGALE is a graph editor with an interface to the LEDA library and with many algorithms implemented essentially concerning planar graphs. • Alan Frieze (at the Dept. of Math, Carnegie Mellon Univ., Pittsburgh, Pennsylvania, USA) Research Interests: probabilistic combinatorics and its applications to theoretical computer science and operations research • Dalibor Froncek (at the Dept. of Math and Statistics, University of Minnesota, Duluth, USA) Research Interests: graph theory and design theory, esp. graph and design factorizations and their applications, e.g. on tournament scheduling • Zoltán Füredi (at the Dept. of Math, Univ. of Illinois at Urbana-Champaign, Urbana, USA) Research Interests: theory of finite sets with applications to geometry, designs, computer science • Joseph A. Gallian (at the Dept. of Math and Statistics, Univ. of Minnesota, Duluth, USA) Research Interests: algebra and graph theory, esp. groups; teaching math Remarkable Features: The Duluth Undergraduate Research Program , a funded program for undergraduate students to encourage and teach them to do serious mathematical research in the areas of algebra and combinatorics resulting in published papers in established journals. If you don't believe it, read this. If you are really interested, click here. • Zhicheng Gao (at the Dept. of Math and Statistics, Carleton Univ., Ottawa, Ontario, Canada) Research Interests: combinatorics and graph theory, esp. 
graphs and maps on surfaces, asymptotic methods in combinatorial enumeration, solving functional equations in the ring of formal power series, algorithms dealing with hypergeometric terms and holonomic sequences • Naveen Garg (at the Max-Planck-Istitute for CS, Saarbrücken, Germany) Research Interests: approximation algorithms, combinatorial optimization, graph algorithms • Dieter Gernert (at the TU München, Germany) Research Interests: general systems theory, philosophy of science, cognitive science, cellular automata; further graph theory, esp. extremal graph theory, relations between graph invariants, colouring, graphs and matrices, eigenvalues, recursive graph classes • Joan Gimbert (at the Dept. of Math, Univ. of Lleida, Catalunya, Spain) Research Interests: algebraic graph theory, esp. applications of spectral theory to the study of dense digraphs; applications of number theory to cryptography • Mark Ginn (Link outdated! Does anyone know what happened to him or his page?) (at the Dept. of Math, Austin Peay State Univ., Clarksville, Tennessee, USA) Research Interests: combinatorics and graph theory, esp. computational complexity and probabilistic methods; also genetic algorithms and their application to NP-hard graph theory problems • Luis A. Goddyn (at the Dept. of Math and Statistics and Center for Experimental & Constructive Math, Simon Fraser Univ., Burnaby, British Columbia, Canada) Research Interests: combinatorics and graph theory, esp. circuit and flow structure of graphs and matroids, specialized Gray codes, Euclidean optimization problems • Chris D. Godsil (at the Dept. of Combinatorics and Optimization, Univ. of Waterloo, Ontario, Canada) Research Interests: interactions between algebra and combinatorics, esp. applications of association schemes to graphs, codes, and designs • Michael X. Goemans (at the MIT Dept. of Math, Cambridge, Massachusetts, USA) Research Interests: combinatorial optimization, esp. several areas of combinatorics, theoretical computer science, optimization, and lately (cooperative) game theory • Andrew V. Goldberg (at the Microsoft Research -- Silicon Valley, Mountain View, California, USA) Research Interests: computer science and discrete mathematics, esp. optimization algorithms and flow algorithms; internet commerce, design and analysis of algorithms, computational testing of algorithms, computer system performance, and archival publication Remarkable Features: Andrew Goldberg's Network Optimization Library, some software packages to run under UNIX • Mark K. Goldberg (at the Rensselaer Polytechnic Inst. (RPI), Troy, New York, USA) Research Interests: combinatorics, design of efficient algorithms for combinatorial optimization problems, software design for mathematical applications, computational learning theory, machine • Martin Charles Golumbic (at the Dept. of Math and CS, Bar-Ilan Univ., Ramat-Gan, Israel) Research Interests: combinatorics, algorithmic analysis, expert systems, artificial intelligence, and programming languages • Frank Göring (at the Dept. of Math, Faculty of Math and Natural Sciences, TU Ilmenau, Germany) Research Interests: graph theory, esp. polyhedral graphs and Menger's Theorem • Ronald J. Gould (at the Dept. of Math and CS, Emory Univ., Atlanta, Georgia) Research Interests: combinatorics, graph theory, computer science • Jerrold R. Griggs (at the Dept. of Math, Univ. of South Carolina, Columbia, USA) Research Interests: combinatorics and graph theory, esp. 
extremal set theory, extremal graph theory, graph coloring, and applications of discrete math to biology, number theory, analysis of algorithms, communications. Remarkable Features: Photos of Paul Erdos (1913--1996) • Jonathan L. Gross (at the Dept. of CS, Columbia University, New York, USA) Research Interests: structural analysis of mathematical objects and improving methods for representation of mathematical objects, esp. interconnection networks and their layouts. Methods are from the geometric side of algebraic topology and from the algebra of permutation groups. Several aspects of recent research (November 1999): algebraic specification of interconnection network relationships, algebraic specification of network layouts and their duals, probabilistic algorithms for graph isomorphism testing Remarkable Features: The book "Graph Theory and Its Applications" (together with Jay Yellen), "a comprehensive applications-driven textbook that provides material for several different courses in graph theory." This site also provides links to other graph theoretical and mathematical resources. • Jerry Grossman (at the Dept. of Math Sciences, Oakland Univ., Rochester, Michigan, USA) Research Interests: combinatorics, graph theory, and theoretical computer science, esp. dominating sets and algorithms; also elementary number theory, algebraic topology, probability and • Martin Grötschel (at the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB), Germany) Research Interests: combinatorics, graph theory and combinatorial optimization, esp. transportation problems with and without on-line requirements, telecommunication networks • Jens Gustedt (at the Laboratoire lorrain de recherche en informatique et ses applications (LORIA), Vandoeuvre-lès-Nancy Cedex, France) Research Interests: graph algorithms, theory and algorithms of ordered sets, image processing, digital filters • Z. Gregory Gutin (at the Dept. of Computer Science, Royal Holloway Univ. of London, Egham, UK) Research Interests: paths and cycles in directed and undirected graphs, combinatorial optimization, graph algorithms • Roland Häggkvist (at the Dept. of Math, Umeå Univ., Sweden) Research Interests: graph theory and combinatorics • Magnús M. Halldórsson (at the Science Inst., Univ. of Iceland, Reykjavik) Research Interests: design and analysis of algorithms, esp. approximation algorithms (heuristics for NP-complete problems with guaranteed performance), on-line algorithms • Jochen Harant (at the Dept. of Math, Faculty of Math and Natural Sciences, TU Ilmenau, Germany) Research Interests: graph theory, esp. cycles in graphs , planarity criterions, colourings of graphs, dominating and independent sets of graphs, probabilistic methods, scheduling • Frank Harary (at the CS Dept., New Mexico State Univ., Las Cruces, USA) Research Interests: all aspects of graph theory, esp. sum and difference graphs, new invariation dominants, forcing concepts, new games • Heiko Harborth (at the Dept. of Math and CS, TU Braunschweig, Germany) Research Interests: combinatorics and graph theory • Refael Hassin (at the Dept. of Statistics and Operations Research, Tel Aviv Univ., Israel) Research Interests: combinatorial optimization, approximation algorithms, network flows, economics of queues • Teresa Haynes (at the Dept. of Math, East Tennessee State Univ., Johnson City, USA) Research Interests: graph theory, networking • Ryan B. Hayward (at the Dept. of CS, Univ. of Alberta, Edmonton, Canada) Research Interests: algorithmic graph theory, esp. 
related to classes of perfect graphs • Lenwood S. Heath (at the Dept. of CS, Virginia Polytechnic Inst. and State Univ., Blacksburg, USA) Research Interests: algorithms, graph theory, graph embeddings, topology, computational geometry, information retrieval, theoretical computer science • Christopher Carl Heckman (at the Dept. of Math and Statistics, Arizona State Univ., Tempe, USA) Research Interests: graph theory, combinatorics, graph algorithms, number theory Remarkable Features: □ You can download the program GEMBED directly from the home page, "which tests for embeddability of graphs on the projective plane, the torus, and the spindle surface. Available as a gzipped tar file." □ Computing even cycles in graphs, an algorithm designed by N. Robertson, P.D. Seymour and R. Thomas and programmed by Heckman • Stephen T. Hedetniemi (at the Dept. of CS, Clemson Univ., South Carolina, USA) Research Interests: design and analysis of algorithms, esp. combinatorial optimization, methodologies for constructing linear algorithms, computational complexity and NP-completeness, parallel algorithms and complexity; graph theory, esp. domination in graphs, coverings and packings in graphs, colorings and partitions of graphs, operations on graphs, graph algorithms, chessboard problems and algorithms; data structures; computation theory, esp. models of computation and the limits of computation Remarkable Features: OPEN PROBLEMS IN COMBINATORIAL OPTIMIZATION, a big list of open problems that Hedetniemi is interested in. You may also want to risk a look on his page A Compendium of My Favorite Metaphysical Books. • Alain Hertz (at the Dept. of Math, École Polytechnique Fédéral de Lausanne (EPFL), Switzerland) Research Interests: combinatorial optimization, algorithmics in graph theory, evolutionary algorithms, scheduling and timetabling, distribution and transportation problems • A. J. W. Hilton (at the Dept. of Math, Univ. of Reading, UK) Research Interests: combinatorics and graph theory, esp. extremal set theory, edge-colouring graphs, total-colouring graphs, k-to-1 continuous maps, Latin squares, Steiner triple systems, • Michael Himsolt (own domain, Ulm, Germany) Research Interests: graph editors, graph drawing algorithms Remarkable Features: software package GRAPHLET, a "toolkit for implementing graph editors and graph drawing algorithms", GraphEd, an "interactive, extensible editor for graphs and graph grammars with lots of layout and other algorithms" • Chinh T. Hoâng (at the Dept. of Physics and Computing, Wilfrid Laurier Univ., Waterloo, Ontario, Canada) Research Interests: graph theory, graph algorithms • Arthur M. Hobbs (at the Dept. of Math, Texas A&M Univ., College Station, USA) Research Interests: graph theory and matroids, esp. thickness of graphs, Hamiltonian cycles, packings of graphs with trees • Dorit S. Hochbaum (at the Dept. of Industrial Engineering and Operations Research, Univ. of California, Berkeley, USA) Research Interests: optimization, esp. in manufacturing of VLSI circuits, in testing and designing circuits, in scheduling problems, in planning of mining operations, in robots motion problems, in locations of facilities, in distribution and logistics, and in baking cakes; further approximation algorithms, strongly polynomial algorithms, practical integer programming algorithms for discrete optimization problems, problems on graphs, and nonlinear problems • Stefan Hougardy (at the Dept. of CS, Humboldt-Univ., Berlin, Germany) Research Interests: graph theory, esp. 
perfect graphs, graph algorithms, approximation algorithms, knot theory • Wen-Lian Hsu (at the Inst. of Information Science, Academia Sinica, Taipei, Taiwan) Research Interests: efficient algorithms, esp. concerning graphs; computerization the Chinese language • Jing Huang (at the Dept. of Math and Stats, Univ. of Victoria, Victoria, Canada) Research Interests: graph theory, algorithms and complexity • Alice Hubenkó (at the Dept. of CS, Eötvös Loránd Univ., Budapest, Hungary) Research Interests: combinatorial algorithms, graph theory, combinatorial geometry • Glenn H. Hurlbert (at the Dept. of Math and Stats, Arizona State Univ, Tempe, USA) Research Interests: combinatorics, graph theory, extremal sets, probabilistic methods and graph pebbling • Garth Isaak (at the Dept. of Math, Lehigh Univ., Bethlehem, Pennsylvania, USA) Research Interests: graph theory and ordered sets • Mike Jacobson (at the Dept. of Math, Univ. of Louisville, Kentucky, USA) Research Interests: graph theory and combinatorics, esp. Hamiltonian graphs, cycles and paths, domination and related parameters, intersection graphs and generalizations, ramsey theory, graph labelings (in particular irregular labelings), permutations, designs • Klaus Jansen (at the Dept. for CS and Applied Math, Univ. Kiel, Kiel, Germany) Research Interests: combinatorial optimization, approximation algorithms scheduling theory, inapproximability, on-line algorithms, randomize algorithms • Tommy R. Jensen (at the Dept. of Math, TU Chemnitz, Germany) Research Interests: graph theory, esp. graph colourings • Mark Jerrum (at the Dept. of CS, Univ. of Edinburgh, UK) Research Interests: computational complexity, esp. probabilistic computation, complexity of combinatorial enumeration, information- and complexity-theoretic aspects of machine learning, combinatorial optimization • Pranava K. Jha (at the Dept. of CS, St. Cloud State Univ., Minnesota, USA) Research Interests: design and analysis of algorithms, graph products and related structures, applications of graph theory • Tao Jiang (at the Dept. of Math, Univ. of Illinois at Urbana-Champaign, USA) Research Interests: graph theory and combinatorics, esp. paths and cycles, connectivity and Ramsey type problems • Carroll K. Johnson (at the Chemical and Analytical Sciences Division, Oak Ridge National Lab, Oak Ridge, Tennessee, USA) Research Interests: geometric topology, euclidean 3-orbifold singular set graphs, Morse function critical-net graphs Remarkable Features: □ ORTEP, a program for thermal-motion and critical-net crystal structure drawings • David S. Johnson (at the AT&T Labs - Research, Florham Park, New Jersey, USA) Research Interests: combinatorial optimization, approximation algorithms, NP-completeness, network design, routing and scheduling, facilities location, The Traveling Salesman Problem, bin packing, graph coloring • Heinz Adolf Jung (at the Dept. of Math, TU Berlin, Germany) Research Interests: graph theory, esp. connectivity, longest paths and cycles, toughness, infinite graphs, and symmetry in graphs • Michael Jünger (at the Inst. of CS, Univ. of Köln, Germany) Research Interests: combinatorial optimization, esp. design, analysis, implementation, and evaluation of algorithms for hard combinatorial optimization problems Remarkable Features: software package ABACUS, a "framework for the implementation of branch-and-bound algorithms" (together with Gerhard Reinelt and Stefan Thienel). Lots of other related projects and software packages can be found here. • Viggo Kann (at the Dept. 
of Numerical Analysis and Computing Science (NADA), Royal Inst. of Technology, Stockholm, Sweden) Research Interests: approximation algorithms, algorithms for Swedish language tools Remarkable Features: A compendium of NP optimization problems • Mikio Kano (at the Dept. of Computer and Information Sciences, Ibaraki Univ., Japan) (Japanese/English) Research Interests: graph theory, discrete geometry • David A. Karger (at the Lab for CS, MIT, Cambridge, USA) Research Interests: graph algorithms, esp. cuts and flows algorithms, graph coloring algorithms, information retrieval, randomized algorithms • Gyula Y. Katona (at the Math Inst., Hungarian Academy of Sciences, Budapest, Hungary) Research Interests: graph theory, graph algorithms, extremal graphs and hypergraphs, esp. Hamiltonian cycles in graphs and hypergraphs, toughness • Michael Kaufmann (at the Wilhelm-Schickard-Inst. for CS, Univ. Tübingen, Germany) Research Interests: algorithms and graphs, esp. graph drawing algorithms, parallel computing, external memory algorithms, Steiner trees Remarkable Features: GraVis (Permission Denied!), a graph visualization system • Mark Kayll (at the Dept. of Math Sciences, The Univ. of Montana, Missoula, USA) Research Interests: discrete mathematics, operations research, theoretical computer science, esp. graph and hypergraph theory, probabilistic methods in combinatorics, random matroids • André E. Kézdy (at the Dept. of Math, Univ. of Louisville, Kentucky, USA) Research Interests: combinatorics and graph theory, esp. perfect graphs, packing/covering graph problems, eigenvalue methods, computational complexity, partially ordered sets, well-quasi orderings, C++ Graph Theory Package • Samir Khuller (at the CS Dept., Univ. of Maryland, College Park, USA) Research Interests: computational graph theory, approximation algorithms, combinatorial optimization, computational geometry • Philip Klein (at the CS Dept., Brown Univ., Providence, Richmond) Research Interests: shortest path algorithms, planar graphs • Jon Kleinberg (at the Dept. of CS, Cornell Univ., Ithaca, New York, USA) Research Interests: discrete algorithms and their applications, esp. optimization problems in network routing and network design, the development of heuristics with provable performance guarantuees, analysis of network traffic as a dynamic phenomenon; algorithms for clustering and indexing of high-dimensional data and application of these to hypermedia environments such as WWW; geometric and combinatorial techniques for algorithmic problems in molecular biology • Joseph B. Klerlein (at the Dept. of Math and CS, Western Carolina Univ., Cullowhee, North Carolina, USA) Research Interests: combinatorics and graph theory, esp. permutations, Hamiltonian cycles in Cayley color graphs and near Cayley color graphs • Debra Knisley (at the Dept. of Math, East Tennessee State Univ., Johnson City, USA) Research Interests: graph theory, esp. vertex degree conditions • Donald E. Knuth (at the CS Dept., Stanford Univ., Palo Alto, California, USA) Research Interests: computer science and discrete mathematics, esp. the art of programming • Bill Kocay (at the CS Dept., Univ. of Manitoba, Winnipeg, Canada) Research Interests: development of mathematical software, combinatorics and graph theory, esp. algorithms for graphs, graph reconstruction problem, graph isomorphism problem, projective geometry, hamiltonian cycles, planarity, combinatorial designs • Jan Kratochvil (at the Dept. 
of Applied Math, Charles Univ., Prague, Czech Republic) Research Interests: graph theory, combinatorics and computational complexity, esp. intersection graphs, domination theory, covers of graphs, induced minors, Hamiltonian cycles, graph colorings, • Dieter Kratsch (at the Math and CS Faculty, Friedrich-Schiller-Univ., Jena, Germany) Research Interests: combinatorics, graph theory and algorithms, esp. bandwidth, treewidth, reconstruction, cocomparability graphs, hamiltonicity • Matthias Kriesell (at the Dept. of Math, Univ. of Hannover, Germany) Research Interests: graph theory, esp. connectivity, cycles in graphs planarity, line graphs, factors, domination; hypergraphs, esp. connectivity, domination; matroid theory, esp. matroid • André Kündgen (at the Dept. of Math, California State Univ. San Marcos, San Marcos, California, USA) Research Interests: discrete mathematics, esp. graph theory (extremal and coloring problems, Ramsey theory) • Peter Lam (at the Dept. of Math, Hong Kong Baptist Univ., Hong Kong, P.R. China) Research Interests: time series and forecasting, neural network, graph theory and combinatorics, operations research and business application of mathematics • Michael A. Langston (at the Dept. of CS, Univ. of Tennessee, Knoxville, USA) Research Interests: analysis of algorithms, concrete complexity theory, deterministic scheduling theory, discrete mathematics, graph theory, operations research, parallel computing, VLSI design • Brenda J. Latka (at the Dept. of Math, Lafayette College, Easton, Pennsylvania, USA) Research Interests: tournaments, well-quasi-orderings, antichains of tournaments • Josef Lauri (at the Dept. of Math, Univ. of Malta, Malta) Research Interests: graph theory and combinatorics, esp. the reconstruction problem, pseudosimilarity and related questions about symmetries of graphs Remarkable Features: Some open problems("mostly from graph theory") • Linda M. Lawson (at the Dept. of Math, East Tennessee State Univ., Johnson City, USA) Research Interests: applied math, graph theory • Felix Lazebnik (at the Dept. of Math, Univ. of Delaware, Newark, USA) Research Interests: graph theory, combinatorics, algebra • Van Bang Le (at the Dept. of CS, Univ. of Rostock, Germany) Research Interests: graph theory, esp. perfect graphs • Jenõ Lehel (at the Dept. of Math, Univ. of Louisville, Ohio, USA) Research Interests: combinatorics and graph theory • Thomas Lengauer (at the Inst. for Algorithms and Scientific Computing (SCAI), German National Research Center for Information Technology (GMD), Sankt Augustin, Germany) Research Interests: efficient algorithms, combinatorial optimization, molecular modelling in chemistry and molecular biology, analysis of biological sequences, packaging problems in engineering (esp. circuit design, textile and leather cutting, three-dimensional packing) • Gregory M. Levin (at the Dept. of Math, Harvey Mudd College, Claremont, California, USA) Research Interests: fractional graph theory • Martin "Marty" J. Lewinter (at the Dept. of Math and CS, Purchase College, New York, USA) Research Interests: classical differential geometry, number theory, history of mathematics, and graph theory, esp. spanning trees, distance problems, and hypercubes • Cai Heng Li (at the Dept. of Math and Statistics, Univ. of Western Australia, Perth, Australia) Research Interests: algebraic combinatorics and group theory, esp. Cayley graphs • Charles Little (at the Dept. 
of Math, Massey Univ., Palmerston North, New Zealand) Research Interests: topological graph theory, 1-factors of graphs • Stephen C. Locke (at the Dept. of Math Sciences, Florida Atlantic Univ., Boca Raton, USA) Research Interests: graph theory and graph theory algorithms, esp. Dirac-type conditions and long cycles, independence ratio in triangle-free graphs Remarkable Features: maintains a Graph Theorypage containing basic definitions and an index • László Lovász (at the Dept. of CS, Yale Univ., New Haven, Connecticut, USA) Research Interests: combinatorial optimization, algorithms, complexity, random walks on graphs • Vadim V. Lozin (at RUTCOR, Rutgers, the State Univ. of New Jersey, Piscataway, USA) Research Interests: special graph classes, esp. subclasses of bipartite graphs, algorithms for hard graph problems on special classes; optimization of stable sets; graph representations and universal graphs • Rich Lundgren (at the Dept. of Math, Univ. of Colorado at Denver, USA) Research Interests: abstract algebra, applied graph theory, teaching math • Nik Lygeros (at the Inst. Girard Desargues, Univ. Lyon I, Villeurbanne, France) Research Interests: combinatorics, computer algebra, graph theory, number theory • Gary MacGillivray (at the Dept. of Math and Statistics, Univ. of Victoria, British Columbia, Canada) Research Interests: combinatorial problems with a view towards algorithms and complexity, esp. graph colouring, domination in graphs, and extremal problems • Frédéric Maffray (at the Dept. of Discrete Math, Leibniz Lab, IMAG Inst., Grenoble, France) Research Interests: graph colorings, perfect graphs, domination, cuts Remarkable Features: An overview on Graph Theory (together with Sylvain Gravier) • John Maharry (at the Dept. of Math, Ohio State Univ., Columbus, USA) Research Interests: characterization of graph classes, graph minors • Johann (Janos) A. Makowsky (at the Faculty of CS, Technion (Israel Institute of Technology), Haifa, Israel ) Research Interests: mathematical logic and its interaction with computer science, database theory, finite model theory and descriptive complexity Remarkable Features: • Joseph "Joe" Malkevitch (at the Dept. of Math/Computing, York College (CUNY), Jamaica, New York, USA) Research Interests: geometry, esp. polytopes, graph theory, tilings; mathematical modeling, esp. models dealing with fairness and equity issues and codes of all kinds; graph theoretical and combinatorial methods; mathematical education, esp. public perceptions of mathematics and mathematicians, "when to call a mathematician!", techniques versus themes for organizing the content of mathematics, training "ambassadors" for mathematics, providing a non-standard look at geometry for children Remarkable Features: □ Mathematical Tidbits, a collection of notes on some all-day or fun topics involving some mathematics. □ directly on the homepage there is a collection of bibliographies covering subjects like codes, geometry, graph theory and combinatorics, fairness and equity... • Lisa Markus (at the Dept. of Math, Furman Univ., Greenwille, South Carolina, USA) Research Interests: graph theory, esp. dominating sets • Ernesto Martins (at the Math Dept., Univ. of Coimbra, Portugal) Research Interests: optimal path problems, shortest path ranking problem, multiobjective optimal path problem, maximal flow problem, minimal cost flow problem Remarkable Features: Source Codes, a collection of FORTRAN 77 and C sources dealing with many network, path, and flow problems and algorithms • C. J. 
McDiarmid (at the Dept. of Statistics, Univ. of Oxford, UK) Research Interests: discrete mathematics, esp. probability and algorithms, mathematics of OR, graph colouring, radio channel assignment problems • Sean McGuinness (at the Dept. of Math, Umeå Univ., Sweden) Research Interests: graph theory • Brendan McKay (at the Dept. of CS, Australian National Univ., Canberra) Research Interests: graph theory and combinatorics Remarkable Features: □ nauty, a program which calculates automorphism groups of graphs and digraphs □ autoson, a "tool for scheduling independent processes across a network of UNIX workstations" □ plantri, a "program for generating planar triangulations" (on the homepage of Brendan) • Terry A. McKee (at the Dept. of Math and Statistics, Wright State Univ., Dayton, Ohio, USA) Research Interests: graph theory, esp. intersection graphs, chordal graphs, graph dualities, and graph meta-theory; mathematical logic Remarkable Features: table of contents and additions and corrections for the book "Topics in Intersection Graph Theory" (by Terry A. McKee and F. R. McMorris) • Kurt Mehlhorn (at the Algorithms and Complexity Group, Max-Planck-Inst. for CS, Saarbrücken, Germany) Research Interests: data structures, graph algorithms, computational geometry, parallel algorithms, computational complexity, software libraries Remarkable Features: The LEDA Platform of Combinatorial and Geometric Computing, a page introducing the book 'LEDA' by K. Mehlhorn and S. Näher; LEDA is "a library of the data types and algorithms of combinatorial computing" • Sarah Merz (at the Math Dept., Univ. of the Pacific, Stockton, California, USA) Research Interests: graph theory, esp. competition graphs • Peter Mihók (at the Math Inst., Slovak Academy of Sciences, Extension in Kosice, Bratislava, Slovak Republic) Research Interests: graph theory • Mirka Miller (at the Dept. of CS and Software Engineering, Univ. of Newcastle, Callaghan, Australia) Research Interests: optimal networks, constructions of large graphs and digraphs, data security, security of statistical databases, combinatorics and its applications Current Research Interests: extremal graphs and digraphs, graph labeling (magic, antimagic, sum, mod sum, integral sum), combinatorics • Bojan Mohar (at the Dept. of Math, Univ. of Ljubljana, Slovenia) Research Interests: graphs on surfaces, algebraic graph theory, graph coloring, graph minors Remarkable Features: Problems, a page with open problems in graph theory. • Rolf H. Möhring (at the Dept. of Math, TU Berlin, Germany) Research Interests: graph algorithms, combinatorial optimization, project scheduling, ordered sets • Pablo Moscato (at the Dept. de Engenharia de Sistemas, Faculdade de Engenharia Eletrica e de Computacao, Univ. Estadual de Campinas, Campinas, Brazil) Research Interests: optimization, combinatorial optimization, approximation algorithms, heuristic and metaheuristic approaches for large scale problems; proper evaluation of simulated annealing, tabu search, genetic algoritms and memetic algorithms , space-filling curves and the Traveling Salesman Problem. Remarkable Features: • Rajeev Motwani (at the Dept. of CS, Stanford Univ., California, USA) Research Interests: design and analysis of algorithms, esp. approximation, online computations, randomized algorithms, complexity theory; combinatorial optimization and scheduling theory with application to computer systems, esp. 
compilers (code optimization via combined register allocation and instruction scheduling for superscalar machines) and databases (parallel query optimization and multi-query optimization); information retrieval, web searching, data mining; computational biology, automated drug design; computational and combinatorial geometry with applications to • Dhruv Mubayi (at the School of Math, Georgia Inst. of Technology, Atlanta, Georgia, USA) Research Interests: graph theory, partially ordered sets, set systems, and hypergraphs, esp. extremal graph theory • Haiko Müller (at the School of Computing, Univ. of Leeds, UK) Research Interests: algorithms and complexity, esp. algorithms on graphs and partially ordered sets, approximative algorithms, graph theory and special classes of graphs • Xavier Muñoz (at the Dept. de Matemàtica Aplicada i Telemàtica, Univ. Politènica de Catalunya, Barcelona, Spain) Research Interests: communication problems in graphs • Wendy Myrvold (at the Dept. of CS, Univ. of Victoria, British Columbia, Canada) Research Interests: graph theory, graph algorithms, and network reliability, esp. unranking spanning trees, maximum clique problem, practical torus embeddings, finding new cages, graph • Stefan Näher (at the Dept. of Math and CS, Martin-Luther Univ. Halle-Wittenberg, Halle (Saale), Germany) Research Interests: computational geometry, graph algorithms, graph drawing Remarkable Features: The LEDA Platform of Combinatorial and Geometric Computing, a page introducing the book 'LEDA' by K. Mehlhorn and S. Näher; LEDA is "a library of the data types and algorithms of combinatorial computing" • Darren A. Narayan (at the Dept. of Math & Stats, Rochester Inst. of Technology, Rochester, New York, USA) Research Interests: combinatorics and graph theory, esp. minimum feedback arc sets and the reversing number of a digraph, representations of graphs modulo n, tilings, Fibonacci determinants • C. St. J. A. Nash-Williams (????-2001) (died on January 20th, 2001. His last job (as far as I know) was at the Dept. of Math, Univ. of Reading, UK. I wasn't able to find anything about his death in the web. Therefore, if you find something about his life and death in the web, please tell me.) Research Interests: graph theory • Takao Nishizeki (at the Dept. of System Information Sciences, Tohoku Univ., Sendai, Japan) Research Interests: algorithms for planar graphs, edge-coloring, network flows, VLSI routing and cryptology • Steve Noble (at the Dept of Math Sciences, Brunel Univ., Uxbridge, UK) Research Interests: combinatorial optimisation; graph theory, esp. complexity of counting problems, graph algorithms, the Tutte polynomial and the frequency assignment problem • Ortrud R. Oellermann (at the Dept. of Math and Statistics, Univ. of Winnipeg, Manitoba, Canada) Research Interests: combinatorics and graph theory with emphasis on effective communication and network reliability, esp. Steiner trees, average distance, connectivity and its generalizations • Bogdan Oporowski (at the Dept. of Math, Louisiana State Univ., Baton Rouge, USA) Research Interests: graphs and matroids • Patrice Ossona de Mendez (at the Centre d'Analyse et de Mathématique Sociales, Ecole des Hautes Etudes en Sciences Sociales, Paris, France) Research Interests: graph theory and combinatorics, esp. planarity and graph drawing Remarkable Features: P.I.G.A.L.E. (Public Implementation of a Graph Algorithm Library and Editor, together with H. de Fraysseix and P. Rosenstiehl). 
PIGALE is a graph editor with an interface to the LEDA library and with many algorithms implemented essentially concerning planar graphs. • Steven R. Pagano (at the Dept. of Math, Univ. of Kentucky, Lexington, USA) Research Interests: matroids and graph theory, esp. signed graphs Remarkable Features: Matroids and Signed Graphs, a smooth introduction into matroids and signed graphs • Ignacio M. Pelayo (at the Dept. de Matemàtica Aplicada III, Univ. Politècnica de Catalunya, Barcelona, Spain) Research Interests: graph theory, esp. connectivity (05C40), extremal problems (05C35), distance in graphs (05C12), digraphs and tournaments (05C35), paths and cycles (05C38); algebraic graph theory, esp. graphs and matrices (05C50) • Vitali Petchenkine (at the Dept. of Math, Saratov State Univ., Saratov, Russia) Research Interests: graph theory and social structures Remarkable Features: GRIN (GRaph INterface), free software on graph theory for Win 9X,NT which covers a wide range of problems and can easily be used, e.g. for demonstrations (you find it directly on the homepage) • Vojislav Petrovic (at the Institute of Math, Univ. of Novi Sad, Yugoslavia) Research Interests: combinatorics and graph theory, esp. oriented graphs, tournaments, and Hamiltonian cycles • Tomaz Pisanski (at the Dept. of Math, Univ. of Ljubljana, Slovenia) Research Interests: computability of combinatorial and graph theoretical objects Remarkable Features: takes part in the project "VEGA", a software package for "manipulating discrete mathematical structures" • Miguel Angel Pizaña (at the Universidad Autónoma Metropolitana, Iztapalapa, Mexico) Research Interests: iterated clique graphs, graph theory, computational complexity Remarkable Features: Clique Graph Theorists, a very "visual" page of graph theory people • Michael Plummer (at the Dept. of Math, Vanderbilt Univ., Nashville, Tennessee, USA) Research Interests: graph theory and combinatorics • Cheryl E. Praeger (at the Dept. of Math and Statistics, Univ. of Western Australia, Perth, Australia) Research Interests: group theory, esp. permutation groups and algorithms for computing with groups; algebraic graph theory; theory of combinatorial designs • Rob Pratt (at the Dept. of Operations Research, Univ. of North Carolina at Chapel Hill, USA) Research Interests: combinatorics, graph theory and knot theory; further bioinformatics and computational biology Remarkable Features: Graph Theory, a collection of links to graph theoretical resources and people on the web • Myriam Preissmann (at the Dept. of Discrete Math, Leibniz Lab, IMAG Inst., Grenoble, France) Research Interests: graph theory and algorithms, esp. perfect graphs, cubic graphs, edge-colourings of graphs, VLSI-design, combinatorial physics • Erich Prisner (at the Dept. of Math, Univ. Hamburg, Germany) Research Interests: graph transformations, intersection graphs, distance in graphs, interconnection networks, graph algorithms, computational geometry, and cellular automata Remarkable Features: • H. J. Prömel (German/ at the Dept. of CS, Humboldt-Univ., Berlin, Germany) Research Interests: graph theory and combinatorics, probabilistic methods in computer science, randomized and approximation algorithms, combinatorial optimization • Andrzej Proskurowski (at the CS Dept., Univ. of Oregon, Eugene, USA) Research Interests: combinatorics and graph theory, esp. tree-like graphs (e.g. 
complexity of combinatorial optimization problems restricted to graphs with bounded treewidth), fault-tolerant communication networks, generation of combinatorial structures in amortized constant time • Hari Pulapaka (at the Dept. of Math and CS, Stetson Univ., DeLand, Florida, USA) Research Interests: graph theory (esp. topological graph theory and random graph theory), number theory (esp. partitions of numbers and other problems in combinatorial number theory), and the history of mathematics • Douglas F. Rall (at the Dept. of Math, Furman Univ., Greenville, South Carolina, USA) Research Interests: graph theory, esp. Vizing's Conjecture, graph products, domination numbers, independence numbers, irredundance numbers, self-complementary graphs • Radhika Ramamurthi (at the Dept. of Math, Univ. of California at San Diego, San Diego, California, USA) Research Interests: combinatorics, graph theory and optimization, stochastic processes, esp. queueing systems • Alexey Rasskazov (at the School of Information and Communication Technologies, Univ. of Paisley, Scotland) Research Interests: data mining, machine learning, neural networks, anomalous behaviour and fraud detection, expert systems, decision making support; hyperbolic geometry, knot theory, low dimensional topology, combinatorial group theory, Schottky groups • Craig W. Rasmussen (at the Dept. of Math, Naval Postgraduate School, Monterey, California, USA) Research Interests: graph theory, combinatorics, optimization • André Raspaud (at the Lab for Research in CS (LaBRI), Bordeaux, France) Research Interests: graph theory and its applications, esp. cycles and circuits • K. Brooks Reid (at the Dept. of Math, California State Univ. San Marcos, USA) Research Interests: combinatorics and graph theory, esp. tournament theory, centrality and anticentrality in graphs, and aspects of voting theory • Gerhard Ringel (Link outdated! Has become Professor emeritus.) (at the Dept. of Math, Univ. of California, Santa Cruz, USA) Research Interests: combinatorics and graph theory, esp. the map color theorem and embeddings of graphs into surfaces • Neil Robertson (at the Ohio State Univ., Worthington, USA) Research Interests: graph theory, esp. graph minors and consequences of the graph minor theory • Vojtech Rödl (at the Dept. of Math and CS, Emory Univ., Atlanta, Georgia, USA) Research Interests: graph theory and combinatorics • Juan Alberto Rodríguez Velázquez (at the Dept. d'Enginyeria Informàtica i Matemàtiques, Univ. Rovira i Virgili, Tarragona, Spain) Research Interests: algebraic graph theory, topology of complex networks, combinatorial optimization • Cecil C. Rousseau (at the Dept. of Math Sciences, Univ. of Memphis, Tennessee, USA) Research Interests: graph theory, probabilistic and asymptotic methods in combinatorics • Gordon Royle (at the Dept. of CS, Univ. of Western Australia, Nedlands) Research Interests: catalogues of interesting combinatorial objects Remarkable Features: Combinatorial Catalogues, a collection of informations on small instances of certain classes of combinatorial objects • Zdeněk Ryjáček (at the Dept. of Math, Faculty of Applied Sciences, Univ. of West Bohemia, Plzen, Czech Republic) Research Interests: graph theory, esp. 
Hamiltonian graph theory, matchings and factors, local properties of graphs, claw-free graphs and their generalizations • Sabu Ryokawa (an amateur mathematician living in Hiroshima, Japan) Research Interests: anti-mainstream graph theory Remarkable Features: Ryokawa states he may have proved Ulam's Conjecture. His short paper got rejected by some society. I have not the time to read it, hence I can't say whether reading it is worth it or a waste of time. If you want to risk it: You can download it from here. • Daniel P. Sanders (at the Dept. of Math, Princeton Univ., New Jersey, USA) Research Interests: graph theory, esp. graph colourings Remarkable Features: The Graph Theory White Pages (part of www.graphtheory.com), a list of graph theorists and tons of information about them (like name, URL, email address, photo, snail mail address, publications... whenever possible! The aim is to give lists of publications as complete as possible.) • Edward R. Scheinermann (at the Dept. of Math, Johns Hopkins Univ., Baltimore, Maryland, USA) Research Interests: discrete mathematics, esp. graph theory, partially ordered sets, random graphs, and combinatorics • Richard Schelp (at the Dept. of Math Sciences, Univ. of Memphis, Tennessee, USA) Research Interests: graph theory, esp. Ramsey theory, extremal graph theory, Hamiltonian graph theory • Irene Sciriha Aquilina (at the Dept of Math, Faculty of Science, Univ. of Malta, Malta) Research Interests: graph theory, combinatorics and linear algebra, esp. graph spectra, the polynomial reconstruction conjecture and chemical applications; further outerplanar graphs and group • Stephen E. Shauger (at the Dept. of Math and Stats, Coastal Carolina Univ., Conway, South Carolina, USA) Research Interests: graph theory and combinatorics • James B. Shearer (at the IBM T. J. Watson Research Center, Westchester, New York, USA) Research Interests: combinatorics, graph theory, and discrete mathematics, esp. Golomb rulers, binary codes, and Paley graphs Remarkable Features: page containing some computation results on combinatorial computing problems Shearer has worked on • John Sheehan (at the Dept. of Math Sciences, Univ. of Aberdeen, UK) Research Interests: graph theory, esp. finite Ramsey theory, Hamiltonian circuits and symmetry of graphs • Michael Somos (at the Dept. of Computer and Information Science, Cleveland State Univ., Ohio, USA) Research Interests: algebra, combinatorics and graph theory, esp. the Four Color Theorem, knot theory, group theory • Jerry Spinrad (at the CS Dept., Vanderbilt Univ., Nashville, Tennessee, USA) Research Interests: graph algorithms, esp. recognition algorithms for classes of graphs with interesting representations (for instance, permutation graphs, comparability graphs, circular-arc graphs, circle graphs, trapezoid graphs, and two dimensional partial orders) Remarkable Features: list of some open problems • Joel Spencer (at the Dept. of CS, New York Univ., USA) Research Interests: applications of probabilistic methods in discrete mathematices and theoretical computer science Remarkable Features: Chapter 1 from the book The Probabilistic Method by Noga Alon and Joel Spencer • Saul Stahl (at the Dept. of Math, Univ. of Kansas, Lawrence, USA) Research Interests: graph theory, esp. topological graph theory and graph colorings • Lorna Stewart (at the Dept. of Computing Science, Univ. of Alberta, Edmonton, Canada) Research Interests: graph theory and algorithm design, esp. 
cographs, series parallel graphs, chordal graphs, bipartite permutation graphs, (co)comparability graphs, permutation graphs, polygon • Michael Stiebitz (at the Dept. of Math, Faculty of Math and Natural Sciences, TU Ilmenau, Germany) Research Interests: combinatorics, cryptology, linear algebra, and graph theory, esp. graph coloring problems • Ian Stobert (at the Dept. of Math, Vanderbilt Univ., Nashville, Tennessee, USA) Research Interests: graph theory, esp. topological graph theory, relationships between graph theory and other branches of mathematics (for instance, algebra, topology, geometry), crossing numbers of graphs Remarkable Features: Graph Theory on the Web, a big collection of links concerning graph theory and to homepages of graph theoretical people • Paul K. Stockmeyer (at the Dept. of CS, College of William and Mary, Williamsburg, Virginia, USA) Research Interests: graph theory and combinatorics, esp. reconstruction of graphical structures, towers of Hanoï, bandwidth reduction of sparse matrices • Paul J. Tanenbaum (at the U. S. Army Research Laboratory (ARL), Aberdeen Proving Ground (APG), Maryland, USA) Research Interests: graph theory, partially ordered sets, computational geometry • Jan Arne Telle (at the Dept. of CS, Univ. of Bergen, HIB, Bergen, Norway) Research Interests: graph algorithms and computational complexity • John-Tagore Tevet (at the Research Group of Structure Semiotics , Eurouniversity, Tallinn, Estonia) Research Interests: constructive approach to graph theory: structural and semiotic problems in graph theory, structural ascertaining of cliques, orbits and isomorphism of graphs, structural changes of graphs, constructive reconstructions of graphs, systems of constructive reconstructions etc. (for better finding: distance in graphs (05C12), paths and cycles (05C38), connectivity (05C40), isomorphism and reconstructions (05C60), structural graph representation (05C62), random graphs (05C80), graphs algorithms (05C85), cliques) • Murat Tezer (at the Dept. of Math, Eastern Mediterranean Univ., G. Magosa, Turkey) Research Interests: graph theory, esp. Hamiltonian cycle decompositions • Stefan Thienel (at the Inst. of CS, Univ. of Köln, Germany) Research Interests: combinatorial optimization, LP-based branch and bound algorithms, object oriented programming (C++), literate programming, computational geometry Remarkable Features: software package ABACUS, a "framework for the implementation of branch-and-bound algorithms" (together with Michael Jünger and Gerhard Reinelt) • Robin Thomas (at the School of Math, Georgia Inst. of Technology, Atlanta, USA) Research Interests: graph theory (including infinite graphs, combinatorics, combinatorial optimization, algorithms Special Features: The Four Color Theorem (summary of a new proof given by R. Robertson, D.P. Sanders, P. Seymour,and R. Thomas) • A. G. Thomason (at the Dept. of Pure Math & Math Statistics, Univ. of Cambridge, UK) Research Interests: combinatorics • Bjarne Toft (at the Dept. of Math & CS, Odense Univ., Denmark) Research Interests: graph theory, esp. graph colourings • Marián Trenkler (at the Inst. of Math, Univ. of P. J. Safarik, Kosice, Slovakia) Research Interests: graph theory and geometry, esp. magic graphs, magic hypercubes, and convex polytopes • Michael I. Trofimov (Link outdated! Does anyone know what happened to him or his page?) (at the Lab of Computer Chemistry, N.D.Zelinsky Inst. 
of Organic Chemistry, Russian Academy of Sci., Moscow, Russia) Research Interests: applications of graph theory for programming (esp. compiler design and implementation) and for organic chemistry (topological indices, etc.) Remarkable Features: • William T. Trotter (at the School of Math, Georgia Inst. of Technology, Atlanta, USA) Research Interests: combinatorics and graph theory, esp. extremal problems, on-line and approximation algorithms, Ramsey theory, discrete geometry, and discrete optimization. • Daniel Ullmann (at the Dept. of Math, George Washington Univ., Washington DC, USA) Research Interests: graph theory and combinatorics, esp. chromatic theory of graphs, combinatorial game theory, theory of fractional graph invariants • Lucas van der Merwe (at the Northeast State Technical Community College, Blountville, Tennessee, USA) Research Interests: graph theory, esp. domination • Andrew Vince (at the Dept. of Math, Univ. of Florida, Gainesville, USA) Research Interests: combinatorics and graph theory, esp. polytopes and tilings, combinatorial optimization and network algorithms, long range aperiodic order, and discrete geometry • Alex Vinokur (at the Tadiran Scopus Digital Video Compression, Holon, Israel) Research Interests: design and optimization of algorithms and data structures, complexity analysis of algorithms, artificial intelligence, coding theory, system analysis and mathematical modelling, database design, software quality assurance, design methodology, computer and information technologies • Margit Voigt (at the Dept. of Math, Faculty of Math and Natural Sciences, TU Ilmenau, Germany) Research Interests: graph theory and combinatorics, esp. graph coloring, list coloring of graphs, choosability, ranking, graph algorithms and their complexity, probabilistic methods in combinatorics, hereditary properties • Vitaly Voloshin (at the Inst. of Math and CS, Chisinau, Moldova) Research Interests: mixed hypergraph coloring, graph and hypergraph theory, combinatorics and discrete optimization, algorithms, complexity, data structure and software for solving problems of discrete optimization, applications of graphs and hypergraphs in CS and other sciences. Remarkable Features: If you want a glint on some of his ideas, look at his Mixed Hypergraph Coloring Web Site. • Hansjoachim Walther (at the Dept. of Math, Faculty of Math and Natural Sciences, TU Ilmenau, Germany) Research Interests: graph theory, esp. walks on polyhedral graphs and complexity of combinatorial algorithms • Ping Wang (at the Math and Computing Science Dept., St. Francis Xavier Univ., Antigonish, Nova Scotia, Canada) Research Interests: graph theory, combinatorics, and algorithm analysis • Douglas B. West (at the Math Dept., Univ. of Illinois at Urbana-Champaign, USA) Research Interests: discrete mathematics, esp. extremal and structural problems on graphs and partial orders • Tom Whaley (at the Dept. of CS, Washington and Lee Univ., Lexington, Virginia, USA) Research Interests: formal development of programs, correctness of programs, Steinhaus graphs, parallel computing Remarkable Features: Washington and Lee University Steinhaus Research Web Site about Steinhaus graphs • Arthur White (at the Dept. of Math and Statistics, Western Michigan Univ., Kalamazoo, USA) Research Interests: topological graph theory, esp. modelling various finite structures such as groups, block designs, and geometries via graph embeddings in surfaces, probability spaces on graph embeddings in surfaces • Herbert S. Wilf (at the Dept. 
of Math, Univ. of Pennsylvania, Philadelphia, USA) Research Interests: combinatorics, esp. eigenvectors, eigenvalues, eigenfunctions etc. • Todd G. Will (at the Dept. of Math, Davidson College, North Carolina, USA) Research Interests: graph theory, combinatorics and commutative algebra • Gerhard J. Woeginger (at the Faculty of Math Sciences, Univ. of Twente, Enschede, The Netherlands) Research Interests: combinatorial optimization, approximation algorithms, online algorithms, scheduling • David Wood (at the School of CS, Carleton Univ., Ottawa, Canada) Research Interests: packet routing, 3-dimensional graph drawings • Douglas R. Woodall (at the Dept. of Math, Univ. of Nottingham, U.K.) Research Interests: combinatorics and graph theory. • Nick Wormald (at the Dept. of Math, Univ. of Melbourne, Parkville, Australia) Research Interests: combinatorics, probabilistic combinatorics, graph theory, and combinatorial algorithms • Jun-Ming Xu (at the Dept. of Math, Univ. of Science and Technology of China, Hefei, Anhui, P.R. China) Research Interests: graph theory, combinatorial theory in interconnection networks, esp. connectivity and diameter of graphs • José Luís Andrés Yebra (at the Dept. de Matemàtica Aplicada i Telemàtica, Univ. Politènica de Catalunya, Barcelona, Spain) Research Interests: large distance regular graphs • Xingxing Yu (at the School of Math, Georgia Inst. of Technology, Atlanta, USA) Research Interests: graph theory, algorithms, low dimensional topology, coding theory, and discrete optimization • Thomas Zaslavsky (at the Massachusetts Inst. of Technology, USA) Research Interests: combinatorics and graph theory Remarkable Features: • Xiaoya Zha (at the Math Dept., Vanderbilt Univ., Nashville, Tennessee, USA) Research Interests: combinatorics and graph theory, esp. graph embeddings • Nick Zhao (at the Dept. of Math, Tulane, Univ., New Orleans, Louisiana, USA) Research Interests: combinatorics • Bing Zhou (at the Dept. of Math, Trent Univ., Peterborough, Ontario, Canada) Research Interests: graph theory and combinatorics, esp. graph coloring, extremal problems involving chromatic numbers and planar graphs, hypergraphs, partially ordered sets • Günter M. Ziegler (at the Dept. of Math, TU Berlin, Germany) Research Interests: polytopes, discrete and algebraic geometry combinatorics and topological methods, linear and integer programming
{"url":"http://www.joergzuther.de/math/graph/homes.html","timestamp":"2014-04-19T08:03:54Z","content_type":null,"content_length":"186035","record_id":"<urn:uuid:29d9476f-9c1a-48af-ad87-aaa41a21586e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Answers to some of Gonzales Cabillon's questions
JOE SHIPMAN, BLOOMBERG/ SKILLMAN jshipman at bloomberg.net
Wed Mar 18 15:18:02 EST 1998
1) Mathematics on other planets may be very different *BUT* the alien mathematicians will *NOT* have any theorems formulable in the language of arithmetic (equivalently in ZF without the axiom of infinity) which contradict any of our theorems. They may well contradict our theorems about real numbers, for example if they work from the Axiom of Determinacy instead of AC.
2) I would not argue with this as far as arithmetical statements are concerned.
3) Conway constructed the surreals; nobody knows who discovered pi.
4-5) Existence need not be localizable, space and time are physics not math; even if I were an atheist it wouldn't bother me that there was no mind to think about the mathematical concepts "before" we arrived.
6) The first is (subjectively) extremely high, the second very low; did you reverse something?
7) I'm working on a posting on this; computers are practically very important to mathematics, and will become increasingly important.
8) Math activity must be in principle communicable, it doesn't matter when.
- JS
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-March/001592.html","timestamp":"2014-04-20T08:19:05Z","content_type":null,"content_length":"3592","record_id":"<urn:uuid:9bdfbeb0-d1db-4797-97ff-7920b56a9076>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
The University of Texas at Arlington - UT Arlington - UTA The Department of Mathematics 478 Pickard Hall · Box 19408 · 817-272-3261 Academic Advising: 478 Pickard Hall · 817-272-3261 Bachelor's Degrees in Mathematics The Department of Mathematics offers programs leading to the Bachelor of Science Degree in Mathematics and the Bachelor of Arts Degree in Mathematics. The Bachelor of Science degree may also be acquired with the explicit addition of one of these options: actuarial science, industrial and applied mathematics, mathematical biology, pure mathematics, statistics, or management science/ operations research. The Bachelor of Science (no option) is primarily intended for students wishing to pursue graduate work in mathematics. The industrial and applied mathematics option is aimed at students seeking careers as mathematicians in the emerging high-tech industries. The mathematical biology option is aimed at those seeking careers in that emerging field. The statistics, management science/operations research, and actuarial science options are intended for students with an interest in a career involving various applications of mathematics to the world of business. The Bachelor of Arts is intended for those students desiring to teach mathematics at the elementary and secondary school level and for those seeking a traditional liberal arts education with an emphasis on mathematics. All students seeking a bachelor's degree in mathematics must take at least two mathematics sequences. A sequence is defined as a 3300-level course followed by a 4300-level course in the same general area of mathematics. The approved sequences are as follows: MATH 3321-4321 (Abstract Algebra), MATH 3335-4335 or MATH 3335-4334 (Analysis), MATH 3345-4345 (Numerical Analysis), MATH 3335-4303 (Analysis and Topology), MATH/STATS 3313-4313 (Probability and Statistics), MATH/STATS 3313-4311 (Probability and Random Processes), MATH 3314-4314 (Discrete Mathematics), MATH 3318-4324 (Differential Equations), and MATH 3318-4318 (Mathematical Methods for Sciences). For the statistics option, the second sequence must be MATH/STATS 3313-4311 or MATH/STATS 3313-4313. For the actuarial science option, the second sequence must be MATH 3335-4335 , MATH 3335-4334 or MATH 3345-4345. It is strongly recommended that mathematics majors take MATH 3330 (Intro to Matrices and Linear Algebra) and MATH 3300 (Intro to Proofs) as early as possible, since these courses are prerequisites for many other 3000/4000-level courses. It is suggested to take MATH 3330 simultaneously with Calculus III. Mathematics majors must take MATH 3300 before attempting the required courses MATH 3321 and MATH 3335. It is strongly recommended that mathematics majors with little or no computer programming experience satisfy the computer programming requirement as early as possible with MATH 1319, CSE 1311, or 1320. Back to top Teacher Certification Students interested in earning a Bachelor of Arts degree with a major in mathematics with secondary teacher certification should refer to the "Bachelor of Arts in Mathematics with Secondary Teaching Certification" degree plan for teacher certification requirements. Students should also see an advisor in the UTeach Arlington department. Back to top Requirements for a Bachelor of Science Degree in Mathematics Six hours of composition. Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective Technical Writing (ENGL 3373). 
Six hours from 1311, 1312, and 3364. Political Science 2311, 2312. Social/Cultural Studies Three hours of designated courses in social or cultural anthropology, archaeology, social/political/cultural geography, psychology, economics, sociology, classical studies, or linguistics. Fine Arts Three hours from architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages Eight hours (Levels I and II or higher) in one language. Natural Science PHYS 1443, 1444, and three hours from 2311, 3313, 3445. Eight hours in one other science; the choices are: CHEM 1441 and 1442, or BIOL 1441 and 1442, or GEOL 1425 and 1426. Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Computer Programming Three hours from MATH 1319, CSE 1311, 1320, 1325. Computer Literacy Three hours from MATH 1319, CSE 1301, INSY 2303, or equivalent course approved by Undergraduate Advisor or competency test. Oral Communication Competency This is satisfied by the required course, MATH 3300. MATH 1426, 2425, 2326, 3300, 3318, 3330, 3321, 3335, 3313, 3345. One course from 4321, 4335, 4334. Nine additional advanced hours (3301 or above, except for capstone mathematics courses specifically for prospective middle grades or secondary grades mathematics teachers), including a second sequence (see paragraph three in the opening section). Capstone mathematics courses specifically for prospective middle grade mathematics teachers do not count toward a degree in mathematics. Capstone mathematics courses for secondary mathematics teachers will count only for those working on the BA in Mathematics with Teaching Certification. The student should consult the appropriate section in this catalog for the exact requirements for a minor in a given department or contact that department's undergraduate advisor. The minor must be in the College of Science or College of Engineering. Sufficient number of hours to complete the total hours required for a degree. A minimum of 120 hours, of which at least 39 must be 3000/4000 level. Suggested Course Sequence Freshman Year First Semester: MATH 1426; INSY 2303; ENGL 1301; HIST 1311; Liberal Arts Elective, 3 hours - Total Credit 16 hours. Second Semester: MATH 2425; MATH 3314; PHYS 1443; ENGL 1302; HIST 1312 - Total Credit 17 hours. Sophomore Year First Semester: MATH 2326; MATH 3330; PHYS 1444; English Literature, 3 hours; Social and Cultural Studies, 3 hours - Total Credit 16 hours. Second Semester: MATH 3318; MATH 3300; Physics, 3 hours; Natural Science, 4 hours; Fine Arts, 3 hours - Total Credit 16 hours. Junior Year First Semester: MATH 3321 or MATH 3335; Minor, 3 hours; Natural Science, 4 hours; POLS 2311 - Total Credit 13 hours. Second Semester: MATH 4321 or MATH 4335; Mathematics, 6 hours; Minor, 3 hours; POLS 2312 - Total Credit 14 hours. Senior Year First Semester: MATH 3335 or MATH 3321; Mathematics, 3 hours; Minor, 6 hours; Modern Language I, 4 hours - Total Credit 16 hours. Second Semester: Mathematics, 6 hours; Minor, 6 hours; Modern Language II, 4 hours - Total Credit 16 hours. Back to top Requirements for a Bachelor of Arts Degree in Mathematics Six hours of composition. Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective Three hours above the freshman level of literature, or social and cultural studies designated as taught in the College of Liberal Arts, or fine arts or philosophy, or technical writing. 
Six hours from 1311, 1312, and 3364. Political Science 2311, 2312. Social/Cultural Studies The Social and Cultural Studies requirement will be satisfied by three hours of designated courses that have been approved by the Undergraduate Assembly. For a list of approved courses, contact the University Advising Center or see http://www.uta.edu/universitycollege/current/academic-planning/core-courses.php. Fine Arts Three hours from architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages 14 hours (Level I, II, III, and IV) in one language, or Level I and II and 6 hours of approved cultural studies. (See information in College of Science section). Natural Science A total of 14 hours is required. Eight hours including laboratory in one science; the choices are: PHYS 1443 and 1444; or CHEM 1441 and 1442; or BIOL 1441 and 1442; or GEOL 1425 and 1426. Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Six additional hours of science from the above science courses or from science courses that have above science courses as prerequisites. Computer Literacy Three hours from MATH 1319, CSE 1301, INSY 2303, or equivalent course approved by Undergraduate Advisor, or competency test. Oral Communication Competency MATH 3300. Computer Programming Three hours from MATH 1319, CSE 1311, 1320, 1325. MATH 1426, 2425, 2326, 3300, 3314, 3330, 3321, 3335. One course from 4321, 4335, 4334. Nine additional advanced hours (3301 or above, except for capstone mathematics courses specifically for prospective middle or secondary grades mathematics teachers), including a second sequence (see paragraph three in the opening section). Capstone mathematics courses specifically for prospective middle grade mathematics teachers do not count toward a degree in mathematics. Capstone mathematics courses for secondary mathematics teachers will count only for those working on the BA in Mathematics with Teaching Certification. The student should consult the appropriate section in this catalog for the exact requirements for a minor in a given department or contact that department's undergraduate advisor. Sufficient number of hours to complete the total hours required for a degree. A minimum of 120 hours, of which at least 39 must be 3000/4000 level. Suggested Course Sequence Freshman Year First Semester: MATH 1426; ENGL 1301; HIST 1311; INSY 2303; Modern Language I, 4 hours - Total Credit 17 hours. Second Semester: MATH 2425; ENGL 1302; Natural Science, 4 hours; Modern Language II, 4 hours - Total Credit 15 hours. Sophomore Year First Semester: MATH 2326; MATH 3314; English Literature, 3 hours; Natural Science, 4 hours; Modern Language III, 3 hours - Total Credit 16 hours. Second Semester: MATH 3300; MATH 3330; Liberal Arts Elective, 3 hours; Natural Science, 4 hours; Modern Language IV, 3 hours - Total Credit 16 hours. Junior Year First Semester: MATH 3321; Mathematics, 3 hours; Minor 3 hours; Natural Science, 4 hours; Social and Cultural Studies, 3 hours - Total Credit 16 hours. Second Semester: MATH 4321; Mathematics, 3 hours; Minor, 3 hours; Fine Arts, 3 hours; Elective, 3 hours - Total Credit 15 hours. Senior Year First Semester: MATH 3335; Mathematics, 3 hours; Minor, 6 hours; POLS 2311; Elective, 3 hours - Total Credit 18 hours. Second Semester: Mathematics, 3 hours; Minor, 6 hours; MATH 4180; HIST 1312; POLS 2312 - Total Credit 15 hours. 
Back to top Requirements for a Bachelor of Science Degree in Mathematics (Actuarial Science Option) Six hours of composition (1301 and 1302). Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective Technical Writing (ENGL 3373). Six hours from 1311, 1312, and 3364. Political Science 2311, 2312. Social/Cultural Studies Econ 23051 Fine Arts Three hours from architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages Eight hours (Levels I and II or higher) in one language. Natural Science A total of 14 hours is required. Eight hours including laboratory in one science; the choices are: PHYS 1443 and 1444; or CHEM 1441 and 1442; or BIOL 1441 and 1442; or GEOL 1425 and 1426. Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Six additional hours of science from the above science courses or from science courses that have above science courses as prerequisites. Computer Literacy Three hours from MATH 1319, CSE 1301, INSY 2303, or equivalent course approved by Undergraduate Advisor, or competency test (see http://www.uta.edu/uac/testing/computer-skills). Oral Communication Competency MATH 3300. Computer Programming Three hours from MATH 1319, CSE 1311, 1320, 1325. MATH 1426, 2425, 2326, 3300, 33022, 33133, 3314, 3316, 3330, 3345, 3321, 3335, 43123, 43132 One course from 4335, 4334, 4345. Capstone mathematics courses specifically for prospective middle grade mathematics teachers do not count toward a degree in mathematics. Capstone mathematics courses for secondary mathematics teachers will count only for those working on the BA in Mathematics with Teaching Certification. ECON 23061,4,ACCT 23024, FINA 33134,5, FINA 33155, FINA 43185, FINA 43195. A minimum of 120 hours, of which at least 39 must be 3000/4000 level. 1. ECON 2305 and ECON 2306, passed with a B or better, together satisfy the Society of Actuaries requirement for VEE certification in Economics. 2. MATH 3302 and MATH 4313, passed with a B or better, together satisfy the Society of Actuaries requirement for VEE certification in Applied Statistical Methods. (Pending approval from the Society of Actuaries.) 3. MATH 3313 and MATH 4312 should prepare a student to pass Exam P of the Society of Actuaries Associateship Course Catalog. 4. FINA 3313, passed with a B or better, satisfies the Society of Actuaries requirement for VEE certification in Corporate Finance. This course has prerequisites: ACCT 2302 and ECON 2306. 5. FINA 3313, FINA 3315, FINA 4318, and FINA 4319 should prepare a student to pass Exam FM of the Society of Actuaries Associateship Course Catalog. See www.soa.org for more details about VEE Certification and the Associateship Course Catalog. Back to top Requirements for a Bachelor of Science Degree in Mathematics (Statistics Option) Six hours of composition. Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective Technical Writing (ENGL 3373). Six hours from 1311, 1312, and 3364. Political Science 2311, 2312. Social/Cultural Studies The Social and Cultural Studies requirement will be satisfied by three hours of designated courses that have been approved by the Undergraduate Assembly. For a list of approved courses, contact the University Advising Center or see http://www.uta.edu/universitycollege/current/academic-planning/core-courses.php. 
Fine Arts Three hours in architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages Eight hours (Levels I and II or higher) in one language. Natural Science A total of 14 hours is required. Eight hours including laboratory in one science; the choices are: PHYS 1443 and 1444; or CHEM 1441 and 1442; or BIOL 1441 and 1442; or GEOL 1425 and 1426. Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Six additional hours of science from the above science courses or from science courses that have above science courses as prerequisites. Computer Literacy Three hours from MATH 1319, CSE 1301, INSY 2303, or equivalent course approved by Undergraduate Advisor, or competency test. Oral Communication Competency MATH 3300. Computer Programming Three hours from MATH 1319, CSE 1311, 1320, 1325. MATH 1426, 2425, 2326, 3300, 3302, 3303, 3313, 3314, 3316, 3330, 3345, 3321, 3335, 4311. One course from 4321, 4335, 4334. Three additional advanced hours (3301 or above, except for capstone mathematics courses specifically for prospective middle or secondary grades mathematics teachers) in mathematics. Capstone mathematics courses specifically for prospective middle grade mathematics teachers do not count toward a degree in mathematics. Capstone mathematics courses for secondary mathematics teachers will count only for those working on the BA in Mathematics with Teaching Certification. BSTAT 3322, IE 4308, and either IE 3315 or MATH 3304. Sufficient to give the total number of hours required for a degree. A minimum of 120 hours, of which at least 39 must be 3000/4000 level. Suggested Course Sequence Freshman Year First Semester: MATH 1426; MATH 1319; ENGL 1301; HIST 1311; Liberal Arts Elective, 3 hours - Total Credit 16 hours. Second Semester: MATH 2425; MATH 3314; Natural Science, 4 hours; ENGL 1302; HIST 1312 - Total Credit 17 hours. Sophomore Year First Semester: MATH 2326; MATH 3330; English Literature, 3 hours; Social and Cultural Studies, 3 hours; Natural Science, 4 hours - Total Credit 16 hours. Second Semester: MATH 3313; MATH 3316; Natural Science, 4 hours; MATH 3300; Fine Arts, 3 hours - Total Credit 16 hours. Junior Year First Semester: MATH 3335; MATH 3302; Natural Science, 4 hours; POLS 2311 - Total Credit 13 hours. Second Semester: MATH 4335; MATH 4313; MATH 3303; Elective, 3 hours; POLS 2312 - Total Credit 15 hours. Senior Year First Semester: MATH 3345; Mathematics, 3 hours; STAT 3322; Modern Language I, 4 hours; Elective, 3 hours - Total Credit 16 hours. Second Semester: MATH 3321; MATH 3304 or IE 3315; IE 4308; Modern Language II, 4 hours - Total Credit 13 hours. Back to top Requirements for a Bachelor of Science Degree in Mathematics (Management Science/Operations Research Option) Six hours of composition. Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective Technical Writing (ENGL 3373). Six hours from 1311, 1312, and 3364. Political Science 2311, 2312. Social/Cultural Studies ECON 2305. Fine Arts Three hours from architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages Eight hours (Levels I and II or higher) in one language. Natural Science A total of 14 hours is required. Eight hours including laboratory in one science; the choices are: PHYS 1443 and 1444; or CHEM 1441 and 1442; or BIOL 1441 and 1442; or GEOL 1425 and 1426. 
Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Six additional hours of science from the above science courses or from science courses that have above science courses as prerequisites. Computer Programming Three hours from MATH 1319, CSE 1311, 1320, 1325. Computer Literacy Three hours from MATH 1319, CSE 1301, INSY 2303, or equivalent course approved by Undergraduate Advisor, or competency test. Oral Communication Competency MATH 3300. MATH 1426, 2425, 2326, 3300, 3303, 3304, 3313, 3314, 3330, 3321, 3335. One course from 4321, 4335, 4334. Nine additional advanced hours (3301 or above, except for capstone mathematics courses specifically for prospective middle or secondary grades mathematics teachers), including a second sequence (see paragraph three in the opening section). Capstone mathematics courses specifically for prospective middle grade mathematics teachers do not count toward a degree in mathematics. Capstone mathematics courses for secondary mathematics teachers will count only for those working on the BA in Mathematics with Teaching Certification. OPMA 3306, OPMA 3308, and three additional hours in Operations Management, ECON 2305, 2306, and ACCT 2301,2302. Sufficient hours to complete the total hours required for a degree. A minimum of 120 hours, of which at least 39 must be 3000/4000 level. Suggested Course Sequence Freshman Year First Semester: MATH 1426; MATH 1319; ENGL 1301; HIST 1311; Liberal Arts Elective, 3 hours - Total Credit 16 hours. Second Semester: MATH 2425; MATH 3314; Natural Science, 4 hours; ENGL 1302; HIST 1312 - Total Credit 17 hours. Sophomore Year First Semester: MATH 2326; MATH 3330; English Literature, 3 hours; ECON 2305; Natural Science, 4 hours - Total Credit 16 hours. Second Semester: MATH 3313; MATH 3304; Natural Science, 4 hours; MATH 3300; ECON 2306 - Total Credit 16 hours. Junior Year First Semester: MATH 3335; MATH 3303; ACCT 2301; Natural Science, 3 hours; POLS 2311 - Total Credit 15 hours. Second Semester: MATH 4335; OPMA 3306; ACCT 2302; Fine Arts, 3 hours; POLS 2312; MATH 4180 - Total Credit 16 hours. Senior Year First Semester: MATH 3321; Mathematics, 3 hours; OPMA 3308; Advanced Bus., 3 hours; Modern Language I, 4 hours - Total Credit 16 hours. Second Semester: Mathematics, 6 hours; OPMA, 3 hours; Advanced Bus., 3 hours; Modern Language II, 4 hours - Total Credit 16 hours. Back to top Requirements for a Bachelor of Science Degree in Mathematics (Industrial and Applied Mathematics Option) This degree option is for students seeking immediate employment after graduation. Additional course work may be required for admission to graduate school. Six hours of composition. Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective Technical Writing (ENGL 3373). Six hours from 1311, 1312 and 3364. Political Science 2311, 2312. Social/Cultural Studies The Social and Cultural Studies requirement will be satisfied by three hours of designated courses that have been approved by the Undergraduate Assembly. For a list of approved courses, contact the University Advising Center or see http://www.uta.edu/universitycollege/current/academic-planning/core-courses.php . Fine Arts Three hours from architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages Eight hours (Levels I and II or higher) in one language. 
Natural Science PHYS 1443, 1444 and three hours from 2311, 3313, 3445, 2321. Eight hours in one other science; the choices are: CHEM 1441 and 1442, or BIOL 1441 and 1442, or GEOL 1425 and 1426. Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Computer Programming CSE 1311 or MATH 1319. Computer Literacy Three hours from MATH 1319, CSE 1301, INSY 2303, or equivalent course approved by Undergraduate Advisor, or competency test. Oral Communication Competency MATH 3300. MATH 1426, 2425, 2326 MATH 3300, 3330, 3318 MATH 3345, 4345 MATH 3314, 4314 MATH 3335 Capstone mathematics courses specifically for prospective middle grade mathematics teachers do not count toward a degree in mathematics. Capstone mathematics courses for secondary mathematics teachers will count only for those working on the BA in Mathematics with Teaching Certification. MATH 3313 and MATH 4311 MATH 3316 and MATH 3302 MATH 3315 MATH 3304 and MATH 4304; or IE 3315 and IE 4315 Sufficient to bring total hours to 120 of which at least 39 must be 3000/4000 level. Back to top Requirements for a Bachelor of Science Degree in Mathematics (Mathematical Biology Option) Six hours of composition. Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective Technical Writing (ENGL 3373). Six hours from 1311, 1312, and 3364. Political Science 2311, 2312. Social/Cultural Studies The Social and Cultural Studies requirement will be satisfied by three hours of designated courses that have been approved by the Undergraduate Assembly. For a list of approved courses, contact the University Advising Center or see http://www.uta.edu/universitycollege/current/academic-planning/core-courses.php. Fine Arts Three hours from architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages Eight hours (Levels I and II or higher) in one language. Natural Science BIOL 1441 and 1442. Eight hours including laboratory in one other science; the choices are: PHYS 1443 and 1444; or CHEM 1441 and 1442; or GEOL 1425 and 1426. Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Computer Programming Three hours from MATH 1319, CSE 1311, 1320, 1325. Computer Literacy Three hours from MATH 1319, CSE 1301, INSY 2303, or equivalent course approved by Undergraduate Advisor, or competency test. (see http://www.uta.edu/uac/testing/computer-skills ) Oral Communication Competency MATH 3300. MATH 1426, 2425, 2326, 3300, 3314, 3318, 3330, 3335, 3345, 4324, 4335, 4345 Capstone mathematics courses specifically for prospective middle grade mathematics teachers do not count toward a degree in mathematics. Capstone mathematics courses for secondary mathematics teachers will count only for those working on the BA in Mathematics with Teaching Certification. MATH/BIOL 2350, 3350, 3351, 4150; MATH 4311; BIOL 2343 and three additional hours from 3000/4000 level Biology courses. Sufficient hours to complete the total hours required for a degree. A minimum of 120 hours, of which at least 39 must be 3000/4000 level. Back to top Requirements for a Bachelor of Science Degree in Mathematics (Pure Mathematics Option) Six hours of composition. Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective PHIL 2311 (Logic). Six hours from 1311, 1312, and 3364. 
Political Science 2311, 2312. Social/Cultural Studies The Social and Cultural Studies requirement will be satisfied by three hours of designated courses that have been approved by the Undergraduate Assembly. For a list of approved courses, contact the University Advising Center or see http://www.uta.edu/universitycollege/current/academic-planning/core-courses.php. Fine Arts Three hours from architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages Eight hours (Levels I and II or higher) in one language. Natural Science PHYS 1443, 1444, and three hours from 2311, 3313, 3445. Eight hours including laboratory in one other science; the choices are: CHEM 1441 and 1442, or BIOL 1441 and 1442, or GEOL 1425 and 1426. Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Computer Programming Three hours from MATH 1319, CSE 1311, 1320, 1325. Computer Literacy Three hours from MATH 1319, CSE 1301, INSY 2303, or equivalent course approved by Undergraduate Advisor or competency test. Oral Communication Competency This is satisfied by the required course, MATH 3300. MATH 1426, 2425, 2326, 3300, 3313, 3318, 3330, 3321, 3335, 3345, 4321, 4335. Twenty one additional advanced hours (3301 or above, except for capstone mathematics courses specifically for prospective middle grades or secondary grades mathematics teachers). Capstone mathematics courses specifically for prospective middle grade mathematics teachers do not count toward a degree in mathematics. Capstone mathematics courses for secondary mathematics teachers will count only for those working on the BA in Mathematics with Teaching Certification. Sufficient number of hours to complete the total hours required for a degree. A minimum of 120 hours, of which at least 39 must be 3000/4000 level. Back to top Bachelor of Arts in Mathematics with Secondary Teaching Certification Six hours of composition (1301 and 1302). Three hours of English or modern and classical languages literature or other approved substitute at the 2000 level or above. Liberal Arts Elective PHIL 2314. Six hours from 1311, 1312 and 3364. Political Science 2311, 2312. Social/Cultural Studies The Social and Cultural Studies requirement will be satisfied by three hours of designated courses that have been approved by the Undergraduate Assembly. For a list of approved courses, contact the University Advising Center or see http://www.uta.edu/universitycollege/current/academic-planning/core-courses.php. Fine Arts Three hours from architecture, art, DNCE 1300, music, or theatre arts. Modern and Classical Languages 10 hours (Levels II, III and IV) in one language, or Level II and 6 hours of approved cultural studies (see http://www.uta.edu/universitycollege/current/academic-planning/core-courses.php for a complete list of approved cultural studies). Natural Science A total of 15 hours is required. One of BIOL 3310, CHEM 4392, GEOL 4305, or PHYS 4391. Eight hours including laboratory in one science; the choices are: PHYS 1443 and 1444; CHEM 1441 and 1442; or BIOL 1441 and 1442; or GEOL 1425 and 1426. Each course may be replaced by another course in the same field that requires the original course as a prerequisite. Four additional science hours taken from the above science courses. Oral Communication Competency MATH 3300. Computer Programming Three hours from MATH 1319, CSE 1311, 1320 or 1325. MATH 1426, 2425, 2326, 2330, 3300, 3301, 3307, 3314, 3330, 3321, 3335. One course from 4321, 4335, 4334. 
Six additional advanced hours (3302 or above, except MATH 4350 and MATH 4351), including either a second sequence or a capstone course specifically for prospective secondary mathematics teachers. Education Requirements Certification requirements are subject to change; consult with an advisor in UTeach Arlington to verify current requirements. SCIE 1101, SCIE 1102, EDUC 4331, EDUC 4332, EDUC 4333, SCIE 4607, SCIE 4107 A minimum of 120 hours, of which at least 39 must be 3000/4000 level. Back to top Students in non-engineering majors may minor in mathematics by taking 18 hours of mathematics courses with an average GPA in mathematics courses of 2.0, and with at least six hours of 3000/4000 level courses. The courses that may be counted toward a math minor are MATH 1426 and above, except for capstone mathematics courses specifically for prospective middle or secondary grades mathematics teachers. Nine hours of the minor must be taken in residence. Engineering majors seeking a math minor should refer to the College of Engineering section of this catalog for the requirements for the engineering math minor. Back to top Second Major A student who satisfies the requirements for any other baccalaureate degree qualifies for having mathematics named as a second major upon completion of nine mathematics courses at 3000/4000 level (except for capstone mathematics courses specifically for prospective middle or secondary grades mathematics teachers). The following courses are required: 3300, 3314, 3330, 3321, 3335, and one from 4321, 4335, 4334. Besides the sequence 3321-4321 or the sequence 3335 and (4335 or 4334), a second sequence must be part of the second major. The GPA requirements on the mathematics courses for a second major are identical to those listed below under the heading Graduation Requirements. Back to top First-time Admission Requirements Students who wish to apply for major status in mathematics must first complete the University and College of Science requirements and the specific requirements of the Department of Mathematics listed • Overall GPA of 2.25; • Minimum GPA of 2.25 in at least nine hours of mathematics courses in residence at the level of MATH 1426 or above, excluding capstone mathematics courses specifically for prospective middle or secondary grades mathematics teachers; • At least six hours from the science or computer science courses listed in the mathematics degree plans; and • Twelve hours of courses of the University core curriculum in disciplines other than science and mathematics. Students currently enrolled at the University may qualify to change their major to mathematics by meeting the requirements listed above. Back to top Satisfactory Academic Standard Requirement Majors whose overall GPA or GPA in major courses falls below 2.25 will be required to change their major. To re-enter as a mathematics major, the student must meet the requirements listed in the First-time Admissions Requirements section. Back to top Non-Credit Courses The following courses will not be counted for credit (as mathematics or electives) toward a bachelor's degree in mathematics: MATH 1301, 1302,1308, 1315, 1316, 1330, 1331, 1332, 4359, 4351, 3319, BSTAT 3321. Capstone mathematics courses specifically for prospective secondary grades mathematics teachers can be counted for credit only by those pursuing a B.A. with Secondary Teaching Back to top Department of Mathematics Faculty Professor Su Aktosun, Chen-Charpentier, Han, Kojouharov, Korzeniowski, Kribs-Zaleta, R. C. Li, Liao, C. Liu, Y. 
Liu, Nestell, Su, Sun-Mitchel, Vancliff Associate Professors Cordero, Epperson, Gornet, Grantcharov, Hawkins, D. Jorgensen, Shipman Assistant Professors Ambartsoumian, T. Jorgensen, Y. Li Senior Lecturer Baker, Campbell, Hamilton, Krasij, Mitchell, Smith Professors Emeritus Corduneanu, Dragan, Dyer, Heath Back to top Course Descriptions View Course Descriptions for: Mathematics (MATH) Students who wish to register for Math 1301, 1302, 1303, 1308, 1315, 1322, 1323, 1324, 1325, 1421, and 1426 are required to take the Math Aptitude Test (MAT). Students with SAT Math scores of 600 or higher, or ACT Math scores of 26 or higher, within the past 5 years may be exempted from placement testing. See http://www.uta.edu/math/pages/main/mpt.htm for details regarding math placement Prerequisite requirements should be checked in the course listings below or in the online catalog. Students wishing to enroll in MATH 1330, 1331 and 1332 must be education majors and have the appropriate prerequisites. Back to top
{"url":"http://wweb.uta.edu/catalog/content/academics/department.aspx?college=SCIE&dept=MATH","timestamp":"2014-04-21T12:07:31Z","content_type":null,"content_length":"138156","record_id":"<urn:uuid:965b4e8b-72ae-4bf3-909e-bda2e4e0927e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture 23: continuous random variables

Random variables can be classified as either discrete or continuous.
◦ Discrete: mostly counts
◦ Continuous: time, distance, etc.
1. They are used to describe different types of quantities.
2. We use distinct values for discrete random variables but continuous real numbers for continuous random variables.
3. Numbers between the values of a discrete random variable make no sense; for example, if P(0)=0.5 and P(1)=0.5, then P(1.5) has no meaning at all. But that is not true for continuous random variables.

Both discrete and continuous random variables have a sample space. For a discrete r.v., there may be a finite or infinite number of sample points in the sample space. For a continuous r.v., there are always infinitely many sample points in the sample space.
*** For a discrete r.v., given the pmf, we can find the probability of each sample point in the sample space.
*** But for a continuous r.v., we DO NOT consider the probability of each sample point in the sample space because it is defined to be ZERO!
In other words: for discrete random variables, only the values listed in the PMF have positive probabilities; all other values have probability zero. We can find the probability of a specific value or of an interval of values. For continuous random variables, the probability of every specific value is zero. Probability only exists for an interval of values for a continuous r.v.

Let X be the number of stops for a citybus going from downtown Lafayette to Purdue campus. Is X discrete or continuous? Let Y be the distance from the train station to where a citybus stops when it comes from downtown Lafayette to Purdue campus. Is Y discrete or continuous? P(X = 3 stops) = ? P(Y = 150 yards) = ?

PDF and CDF. PDF is the Probability Density Function; it is similar to the PMF for discrete random variables, but unlike the PMF, it does not tell us a probability directly. CDF is the Cumulative Distribution Function; it has a counterpart for discrete random variables, but for continuous random variables it is the only way we can find a probability.
For discrete random variables:
◦ PMF: P(X=K)
◦ CDF: P(a < X < b) = ∑_{a < K < b} P(X=K)
For continuous random variables:
◦ PDF: f(x)
◦ CDF: F(x) = P(a < X < b) = ∫_a^b f(x) dx
For discrete random variables, both the PMF and the CDF can tell us probabilities. For continuous random variables, ONLY the CDF can tell us probabilities.

Given X is a continuous random variable with sample space Ω and its PDF is f(x), f(x) must satisfy the following conditions:
◦ 1. 0 ≤ f(x)
◦ 2. ∫_Ω f(x) dx = 1
◦ The same as the conditions for discrete random variables.

A continuous random variable X has the pdf f(x) = c(x-1)(2-x) over the interval [1, 2] and 0 elsewhere. What value of c makes f(x) a valid pdf for X? What is P(X > 1.5)?

Think about the citybus example and simplify it. Suppose the citybus starts at point A and goes toward point B. If this bus can stop at will, that is, stop at each point between A and B with equal probability, we let X be the distance between where the bus stops and point A. Then X is a random variable and it is said to follow a Uniform distribution. We will talk about several continuous distributions; for each we need to know:
◦ Their PDF
◦ How to calculate probability under those distributions
◦ How to find the mean and variance for those random variables
For the Uniform distribution:
◦ PDF: f(x) = 1/(B−A) for A ≤ x ≤ B, and 0 elsewhere
In order to calculate the probability, we need to know the distance between A and B.
In other words, the parameters for a uniform distribution are A and B in this case, where A and B are defined as the distance marks for the two endpoints.

For example, if B is 2000 yards away from A, then B−A = 2000, and the probability that the bus stops within 200 yards from A would be ∫_0^200 f(x) dx.

Then what is the probability that the bus stops somewhere between 400 yards away from A and 600 yards away from A?
∫_400^600 f(x) dx = ∫_400^600 (1/2000) dx = 200/2000 = 0.1

What is the probability that the bus stops within 200 yards of point B?
∫_1800^2000 f(x) dx

What is the probability that the bus stops halfway between A and B?
∫ f(x) dx

Given that a continuous r.v. follows a uniform distribution with pdf:
f(x) = 1/(b−a) for a ≤ x ≤ b, and 0 elsewhere
E(X) = (a+b)/2
Var(X) = (b−a)²/12

Let T be the time when a STAT225 student turned in his/her exam 1 hour after the exam started. Suppose this time is uniformly/evenly distributed between 9pm and 9:30pm.
What is the pdf of T?
What is the probability that a student turned in the exam between 9:10pm and 9:25pm?
What are the mean and standard deviation of T?
What is the probability that a student turned in the exam at 9:30pm?
What is the probability that a student turned in the exam by 9:30pm?
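The calculations in these slides are easy to check numerically. The following short Python sketch is my own addition, not part of the lecture notes; it uses scipy.stats.uniform with the citybus and exam-time numbers from the examples above.

from scipy.stats import uniform

# Citybus example: X ~ Uniform(A=0, B=2000) yards, i.e. loc=0, scale=B-A=2000
bus = uniform(loc=0, scale=2000)
print(bus.cdf(200) - bus.cdf(0))       # P(0 < X < 200)         = 0.1
print(bus.cdf(600) - bus.cdf(400))     # P(400 < X < 600)       = 0.1
print(bus.cdf(2000) - bus.cdf(1800))   # P(within 200 yd of B)  = 0.1

# Exam example: T ~ Uniform(0, 30) minutes after 9pm
exam = uniform(loc=0, scale=30)
print(exam.cdf(25) - exam.cdf(10))     # P(turned in between 9:10 and 9:25) = 0.5
print(exam.mean())                     # E(T) = (0+30)/2 = 15 minutes, i.e. 9:15pm
print(exam.std())                      # sd(T) = 30/sqrt(12), about 8.66 minutes
print(exam.cdf(30) - exam.cdf(30))     # P(T = exactly 9:30pm) = 0
print(exam.cdf(30))                    # P(T <= 9:30pm, "by 9:30") = 1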
{"url":"http://www.docstoc.com/docs/157698816/Lecture-23-continuous-random-variables","timestamp":"2014-04-19T18:02:45Z","content_type":null,"content_length":"56335","record_id":"<urn:uuid:a41fae47-4502-482e-91dd-1ebad0a60a25>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Bounded factorization property for Frechet spaces
Terzioğlu, Tosun and Zahariuta, Vyacheslav (2003) Bounded factorization property for Frechet spaces. Mathematische Nachrichten, 253 (1). pp. 81-91. ISSN 0025-584X
Official URL: http://dx.doi.org/10.1002/mana.200310046
An operator T ∈ L(E, F) factors over G if T = RS for some S ∈ L(E, G) and R ∈ L(G, F); the set of such operators is denoted by L_G(E, F). A triple (E, G, F) satisfies the bounded factorization property (shortly, (E, G, F) ∈ BF) if L_G(E, F) ⊂ LB(E, F), where LB(E, F) is the set of all bounded linear operators from E to F. The relationship (E, G, F) ∈ BF is characterized in the spirit of Vogt's characterisation of the relationship L(E, F) = LB(E, F) [23]. For triples of Köthe spaces the property BF is characterized in terms of their Köthe matrices. As an application we prove that in certain cases the relations L(E, G_1) = LB(E, G_1) and L(G_2, F) = LB(G_2, F) imply (E, G, F) ∈ BF, where G is a tensor product of G_1 and G_2.
Item Type: Article
Uncontrolled Keywords: Fréchet and Köthe spaces • continuous and bounded linear operators • projective tensor products
Subjects: Q Science > QA Mathematics
ID Code: 11446
Deposited By: Tosun Terzioğlu
Deposited On: 10 Apr 2009 16:10
Last Modified: 25 May 2011 14:14
{"url":"http://research.sabanciuniv.edu/11446/","timestamp":"2014-04-19T00:44:56Z","content_type":null,"content_length":"16240","record_id":"<urn:uuid:8e122376-d233-4a8e-969a-a886b49d1bf2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
About the intrinsic definition of the Weyl group of complex semisimple Lie algebras up vote 10 down vote favorite It may be a easy question for experts. The definition of the Weyl group of a complex semisimple Lie algebra $\mathfrak{g}$ is well-known: We first $\textbf{choose}$ a Cartan subalgebra $\mathfrak{h}$ and we have the root space decomposition. The Weyl group now is the group generated by the reflections according to roots. Naively this definition depends on the choices of the Cartan subalgebra $\mathfrak{h}$. Of course we can prove that for different choices the resulting Weyl groups are isomorphic. My question is: can we define the Weyl group intrinsically such that we don't need the do check the unambiguity. One thought is: we have the abstract Cartan subalgebra $\mathfrak{H}:=\mathfrak{b}/[\mathfrak{b},\mathfrak{b}]$ of $\mathfrak{g}$ (which is in fact not a subalgebra of $\mathfrak{g}$). Can we define the Weyl group along this way? Again is there any references for this? lie-algebras weyl-group lie-groups reference-request 6 Don't you end up proving the result does not depend on the Borel you picked? – Mariano Suárez-Alvarez♦ Aug 2 '12 at 3:00 @ Mariano You are right. It can be proved that for two Borel subalgebra $\mathfrak{b}$ and $\mathfrak{b}' $ , the resulting quotient $\mathfrak{b}/[\mathfrak{b}, \mathfrak{b}]$ and $\mathfrak{b}'/ [\mathfrak{b}', \mathfrak{b}']$ are canknically isomorphic. It's in Representation Theory and Complex Geometry Chapter 3 by Chriss/Ginzburg. Oop, still there are choices. – Zhaoting Wei Aug 3 '12 at 6:12 1 @Zhaoting: I think all the approaches suggested in the answers tend to show the impossibility of giving the desired intrinsic definition "such that we don't need to check the unambiguity". Under the surface the conjugacy theorems and other scaffolding are concealed. – Jim Humphreys Aug 3 '12 at 14:18 @Jim: I see. Thank you very much! – Zhaoting Wei Aug 4 '12 at 5:25 add comment 5 Answers active oldest votes Probably the earliest intrinsic definition of Weyl group occurs in section 1.2 of the groundbreaking paper "Representations of Reductive Groups Over Finite Fields" by Deligne and Lusztig (Ann. of Math. 103, 1976, available at JSTOR). This is done elegantly in the closely related but more general setting of a reductive algebraic group $G$ over an arbitrary algebraically closed field (though their interest is mainly in prime characteristic). Letting $X$ denote the set of all Borel subgroups of $G$, the set of $G$-orbits on $X \times X$ provides a natural model for a universal Weyl group of $G$ (or its Lie algebra). [ADDED] In the algebraic group setting, this intrinsic definition depends just on knowing what a connected reductive (or semisimple) group is and what a Borel subgroup is (maximal closed connected solvable subgroup). But obviously one can't exploit the "Weyl group" without knowing more of the structure theory: conjugacy theorems, Bruhat decomposition. (Is it a group? finite?) In the easier characteristic 0 Lie algebra theory, where $X$ becomes the set of Borel subalgebras (whose definition requires some theory) with conjugation action by the adjoint group, this abstract notion of "Weyl group" similarly needs unpacking. But the Deligne-Lusztig definition is a good conceptual one for their purposes and sneaks in the underlying set $X$ up vote 12 of the flag variety of $G$. Any intrinsic definition of the Weyl group needs serious background in Lie theory. 
down vote accepted In the treatment by Chriss and Ginzburg, even when one is primarily interested in the Lie algebra picture, the group in the background tends to play an important role. Indeed, in the early work of Borel and Chevalley on semisimple algebraic groups, the Weyl group appears most naturally in the guise of the finite quotient $W_G(T) :=N_G(T)/T$ for a fixed maximal torus $T$. Then one sees $W$ as generated by reflections relative to roots, etc. As in the parallel Lie algebra setting in characteristic 0, the maximal tori (or Cartan subalgebras) are all conjugate under the adjoint group action, but this falls short of giving an intrinsic definition of the sort provided by Deligne-Lusztig. [Weyl himself gave the group an awkward name, but was mainly concerned with its use in the context of a compact Lie group. The notion basically originates earlier in the work of Cartan, but it took a while to see the root system and Weyl group as combinatorial objects including the Coxeter presentation of the group as a reflection group (carried over by Witt to Lie Of course, this elegant "intrinsic definition" rests on all of the usual conjugacy results, though it isn't clear what the OP is really seeking by asking for a definition which avoids the need to check "the unambiguity". – user22479 Aug 2 '12 at 12:34 @quasi (if I may call you that): See my added paragraph, where I emphasize that the definition itself uses no more than basic definitions. The price for that is having to figure out what it actually means in concrete terms; then you do need more theory. – Jim Humphreys Aug 2 '12 at 17:36 @Jim, a related approach defining an abstract "Weyl group" for probably the most general category of groups that should have Weyl groups, appeared in the recent work by Bader and Furman related to Margulis' superrigidity, see for example homepages.math.uic.edu/~furman/preprints/sr-note-published.pdf, and after your introduction, I see it inspired from the Deligne-Lusztig definition. – Asaf Aug 2 '12 at 18:14 1 @Jim Maybe we can look at the set of $G$ -orbits of $X \times X$ and say that "this is the Weyl group". But can we define a multiplication just on this set of $G$-orbits? If we can, then this is what I am seeking for: an intrinsic definition of Weyl group. – Zhaoting Wei Aug 2 '12 at 23:10 1 @Zhaoting: This is all worked out carefully by Deligne-Lusztig in their section 1.2. But I'd emphasize that it uses most of the deep structure theory (including conjugation theorems and Bruhat decomposition in the group version) to reach the intrinsic formulation. – Jim Humphreys Aug 3 '12 at 14:14 show 1 more comment Let $g_r$ be the set of regular semi-simple elements of the Lie algebra, and $\tilde g_r$ be the set of these elements with a choice of Borel containing it. The Weyl group is the group of up vote 15 deck transformations of the cover $\tilde {g}_r\to {g}_r$. down vote This is a great point! Maybe the remaining problem to me is that can we relate this deck transformation with the set of $G$-orbits on $X \times X$, as Jim Humphreys pointed out in his answer. – Zhaoting Wei Aug 2 '12 at 23:13 1 @Zhaoting: Like the other intrinsic descriptions, this requires a lot of the structure theory to relate it to the concrete Weyl group attached to a Cartan subalgebra (or maximal torus) of a semisimple Lie algebra (or group). Here you also need regular elements (Kostant/Steinberg; cf. Bourbaki Ch. 
7-8): regular semisimple elements are dense and each lies in exactly $|W |$ Borel subalgebras (or subgroups), corresponding to positive systems of roots or Weyl chambers. – Jim Humphreys Aug 3 '12 at 14:07 @Ben: This is a very nice topological viewpoint, which I guess goes back to work on the topology of compact Lie groups (Adams, Bott, Samelson, ...)? Is there a good reference for the translation to semisimple Lie algebras and Borel subalgebras? – Jim Humphreys Aug 3 '12 at 14:11 add comment Sometimes when you define a group using an arbitrary choice of object and then show the choice of object doesn't matter, you could have defined a groupoid without making an arbitrary choice. For example, to define the fundamental group $\pi_1(X,x)$ of a path-connected space $X$ we need to choose a basepoint $x \in X$, but then we can show we get isomorphic groups no matter what basepoint we choose, with an isomorphism given by a homotopy class of paths between the basepoints. To avoid this maneuver we can work with the fundamental groupoid of $X$, whose objects are points of $X$ and whose morphisms are homotopy classes of paths. If $X$ is path-connected all objects in this groupoid are isomorphic, and thus the automorphism groups of all objects are isomorphic. The automorphism group of $x$ is just $\pi_1(X,x)$. The fundamental groupoid is thus equivalent, as a category, to the one-object groupoid corresponding to the group $\pi_1(X,x)$. But the advantage of the fundamental groupoid is that we can define it without choosing a basepoint, and it makes sense and works well even when $X$ is not path-connected. Similarly, I think we can define the Weyl groupoid of a compact semisimple Lie group $G$ in a way that gives a groupoid equivalent to the usual Weyl group, but doesn't require a choice of up vote maximal torus. The idea should go like this. The objects of the Weyl groupoid are maximal tori. A morphism $f : T \to T'$ in the Weyl groupoid is a Lie group isomorphism of the form 7 down vote $$ t \mapsto g t g^{-1} \textrm{ for all } t \in T $$ for some $g \in G$. If I did this right, the automorphism group of any object $T$ in the Weyl groupoid is the usual Weyl group $$ W_G(T) = N_G(T) / T ,$$ that is, the normalizer of $T \subset G$ modulo the centralizer of $T \subset G$, which is $T$ itself. If this is true, the Weyl groupoid will be equivalent, as a groupoid, to the usual Weyl group $W_G(T)$ for any maximal torus $T$. 1 Isn't this just pushing the bump under the rug? That is, if one wants to know that there is one Weyl group associated to a given semisimple Lie group, then one has to show that the resulting groupoid is connected, which in this case is the same as showing that the maximal tori are all conjugate... right? – Joshua Grochow Sep 12 '13 at 17:14 You need to show all maximal tori are conjugate to show this groupoid is equivalent to a group, and to get a specific group that it's equivalent to, you may need to pick a specific maximal torus... but if you're happy working with groupoids instead of groups (as I am), you can avoid this arbitrary choice - and it's this arbitrary choice that was annoying the original questioner, not the difficulty of proving all maximal tori are conjugate. – John Baez Sep 13 '13 at 5:34 add comment Yes: this is the approach to defining the 'abstract Weyl group' introduced in "Representation Theory and Complex Geometry" by Chriss/Ginzburg on p. 135 (2nd Edition, Birkhauser). 
up vote 4 down vote add comment I have heard that, originally, the Weyl group was designed (and worked out e.g. by Chevalley) as some Galois group which are then intrinsic. up vote 3 down vote add comment Not the answer you're looking for? Browse other questions tagged lie-algebras weyl-group lie-groups reference-request or ask your own question.
{"url":"http://mathoverflow.net/questions/103751/about-the-intrinsic-definition-of-the-weyl-group-of-complex-semisimple-lie-algeb?sort=votes","timestamp":"2014-04-18T14:06:05Z","content_type":null,"content_length":"89600","record_id":"<urn:uuid:82597b0f-da3b-4ae8-94a5-4392df2923fd>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
percent of a solution... August 29th 2011, 09:09 PM #1 Senior Member Aug 2009 percent of a solution... ok here is the question.. how many quarts of alcohol must be added to 10 quarts of a 25% alcohol solution to produce 40% alcohol solution? i dunno i jus feel like i have too many unknowns.. i tried to use barrels to make sense of the problem.. in the first barrel i have 10 gallons of 25% solution + X gallons of Y% which will yield (10 +X) gallons of 40% solution? ok where did i go wrong with this problem, because its not working out. thanks in advance Re: percent of a solution... ok here is the question.. how many quarts of alcohol must be added to 10 quarts of a 25% alcohol solution to produce 40% alcohol solution? i dunno i jus feel like i have too many unknowns.. i tried to use barrels to make sense of the problem.. in the first barrel i have 10 gallons of 25% solution + X gallons of Y% which will yield (10 +X) gallons of 40% solution? ok where did i go wrong with this problem, because its not working out. thanks in advance Let the amount of alcohol added be x quarts. Then you have 10/4 + x = (10 + 4x)/4 quarts of pure alcohol in a volume of 10 + x quarts. You require (10 + 4x)/[4(10 + x)] = 0.4 => x = ..... Re: percent of a solution... Hi slapmaxwell1, Let x = the qts of 100% alc to be added to 10 qts of 25% alc to make x+10 qts of 40% alc equation of alc balance x*1 + 10*(.25) = (10+x)*(.4) Solve for x August 30th 2011, 04:19 AM #2 August 30th 2011, 04:26 AM #3 Super Member Nov 2007 Trumbull Ct
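Neither reply carries the algebra through to a number, so as a quick check (my own addition, not part of the thread) both equations can be handed to sympy and give the same answer:

from sympy import symbols, Eq, solve

x = symbols('x', positive=True)
eq1 = Eq((10/4 + x) / (10 + x), 0.4)       # reply #2: fraction of pure alcohol
eq2 = Eq(x*1 + 10*0.25, (10 + x)*0.4)      # reply #3: alcohol balance
print(solve(eq1, x), solve(eq2, x))        # both give x = 2.5

So 2.5 quarts of pure alcohol must be added; the result is 12.5 quarts of 40% solution containing 5 quarts of alcohol.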
{"url":"http://mathhelpforum.com/algebra/186945-percent-solution.html","timestamp":"2014-04-17T23:16:41Z","content_type":null,"content_length":"36119","record_id":"<urn:uuid:4b64283d-264d-4d4c-97da-611adb5718c0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
Write a variable expression. help please..The length of a rectangle is 5 m more than twice the width. Express the length of the rectangle in terms of the width.
• one year ago
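For reference (an editorial addition, not part of the original thread): if w stands for the width in metres, the requested expression is length = 2w + 5, for example as a tiny Python helper:

def length(width_m):
    # length is 5 m more than twice the width
    return 2 * width_m + 5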
{"url":"http://openstudy.com/updates/4ffd11aae4b00c7a70c5d1b5","timestamp":"2014-04-18T10:36:32Z","content_type":null,"content_length":"46553","record_id":"<urn:uuid:49a114ec-21f6-477a-8be4-07f84d7959ba>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
College Hill, PA Algebra 2 Tutor Find a College Hill, PA Algebra 2 Tutor ...I currently hold an Elementary Education Certificate from New Jersey and passed my Praxis II test in Elementary Education. I currently teach math and science in grades five to seven with the anticipation that I will be also teaching fifth grade social studies next academic year. Additionally, I... 19 Subjects: including algebra 2, geometry, biology, algebra 1 ...I can provide references. I got back into tutoring within the past few years purely by accident. I went to pick up my order of Chinese food at a local restaurant and saw a sign in the window requesting a math tutor for the owners' son. 11 Subjects: including algebra 2, Spanish, calculus, geometry ...SAT math classes spend too much time, and too much of your money, on subjects that your child may not need as these programs cater to much larger groups. Allow an experienced PA certified math teacher to make your college admissions officers take notice!Algebra is often a stumbling block for the... 12 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I am a head tutor at Muhlenberg, meaning I lead tutor trainings, attend weekly tutoring meetings, and tutor individual students at my school. I have experience tutoring psychology and math courses. I have experience teaching in a variety of school settings ranging from pre-kindergarten through fourth grade. 26 Subjects: including algebra 2, reading, calculus, statistics ...I'd love to teach you in any of my listed academic subjects. I favor a dual approach, focused on both understanding concepts and going through practice problems. Let me know what concepts you're struggling with before our session, so I can streamline the session as much as possible! 26 Subjects: including algebra 2, English, calculus, physics Related College Hill, PA Tutors College Hill, PA Accounting Tutors College Hill, PA ACT Tutors College Hill, PA Algebra Tutors College Hill, PA Algebra 2 Tutors College Hill, PA Calculus Tutors College Hill, PA Geometry Tutors College Hill, PA Math Tutors College Hill, PA Prealgebra Tutors College Hill, PA Precalculus Tutors College Hill, PA SAT Tutors College Hill, PA SAT Math Tutors College Hill, PA Science Tutors College Hill, PA Statistics Tutors College Hill, PA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Alpha, NJ algebra 2 Tutors Butztown, PA algebra 2 Tutors Chapmans, PA algebra 2 Tutors Delaware Park, NJ algebra 2 Tutors Easton, PA algebra 2 Tutors Forks Township, PA algebra 2 Tutors Hokendauqua, PA algebra 2 Tutors Lehigh Valley algebra 2 Tutors Longswamp, PA algebra 2 Tutors Lopatcong, NJ algebra 2 Tutors Phillipsburg, NJ algebra 2 Tutors Stockertown algebra 2 Tutors Tatamy algebra 2 Tutors West Easton, PA algebra 2 Tutors Willow Grove, NJ algebra 2 Tutors
{"url":"http://www.purplemath.com/college_hill_pa_algebra_2_tutors.php","timestamp":"2014-04-18T14:18:16Z","content_type":null,"content_length":"24349","record_id":"<urn:uuid:b12affea-0f6f-4217-80ef-aca016f1bc21>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Mission Viejo Prealgebra Tutor Find a Mission Viejo Prealgebra Tutor ...Perfect shapes. Even though in the real world we have to deal with imperfect shapes, the base to be able to deal with them is perfect shapes. I passed more than 20 units of mathematics including trigonometry, algebra, and calculus which all are very helpful in better understanding the geometry. 11 Subjects: including prealgebra, calculus, statistics, geometry ...We agree on what we both understand about the matter at hand. We take the time necessary to discuss the definitions of the terms we use and the principles of the subject area in which we're working. We work together through as many specific questions and problems as necessary to make the client comfortable enough to work without me. 13 Subjects: including prealgebra, English, calculus, physics ...I currently home school my 2nd and 6th graders, one of whom has ADHD and Aspergers. I have also worked with special needs children from preschool through 8th grade. If you are looking for an understanding and patient tutor, than stop looking! 48 Subjects: including prealgebra, reading, English, writing ...Trigonometry is one of my stronger subjects as well. I believe I am really good with trigonometry and also skillful in teaching it as well. I have taken the SAT myself and has done well on the Math section of it. 12 Subjects: including prealgebra, calculus, geometry, algebra 1 ...I am proficient in Algebra, Geometry, Pre-Calculus, Calculus, Probability & Statistics, and more. I am very passionate about educating students by teaching them mathematics. I value the mastery and fluency of computational techniques and I stress the understanding of the conceptual underpinning and examination of the reasonableness of results. 10 Subjects: including prealgebra, geometry, statistics, algebra 1
{"url":"http://www.purplemath.com/Mission_Viejo_Prealgebra_tutors.php","timestamp":"2014-04-19T02:49:11Z","content_type":null,"content_length":"24289","record_id":"<urn:uuid:128d62d4-af23-4a2b-bead-17b1776dce3e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
What is Log?

Date: 26 Feb 1995 22:46:28 -0500
From: charley
Subject: Math questions

My name is Yutaka Charley and I'm in the 5th grade at PS150Q in NYC. What's 4 to the half power? What does log mean? Thank you.

Date: 27 Feb 1995 21:54:12 -0500
From: Dr. Ken
Subject: Re: Math questions

Hello there! I'll address your second question, the one about Logs; and my colleague and buddy Ethan has promised to answer your first question, the one about 4 to the 1/2 power.

Here's the definition of Log: If a^b = x, then Log_a(x) = b. When you read that, you say "if a to the b power equals x, then the Log (or Logarithm) to the base a of x equals b." Log is short for the word Logarithm.

Here are a couple of examples: Since 2^3 = 8, Log_2(8) = 3. For the rest of this letter we will use ^ to represent exponents - 2^3 means 2 to the third power. To find out what Log_5(25) is, we'd ask ourselves "what power do you raise 5 to to get 25?" Since 5^2 = 25, the answer to this one is 2. So the Logarithm to the base 5 of 25 is 2.

Whenever you talk about a Logarithm, you have to say what base you're talking about. For instance, the Logarithm to the base 3 of 81 is 4, but the Logarithm to the base 9 of 81 is 2.

Here are a couple of examples that you can try to figure out: What is the Logarithm to the base 2 of 16? What is the Logarithm to the base 7 of 343? How would you express the information, 4^3 = 64, in terms of Logarithms?

Now that you have done Logarithms I will take over for my buddy Ken and talk about fractional exponents. To help explain fractional exponents I need to teach you one neat fact about exponents: 3^4 times 3^5 equals 3^(4+5) or 3^9. This will be very important so I will show a few more examples. 4^7 times 4^10 equals 4^17. 5^2 times 5^6 equals 5^8.

Now let's get to fractional exponents. Let's start with 9^(1/2). We know from our adding rule that 9^(1/2) times 9^(1/2) is 9^(1/2 + 1/2), which is 9^1; so whatever 9^(1/2) is, we know that it times itself has to equal nine. But what times itself equals 9? Well 3, so 9^(1/2) is 3.

All fractional exponents work this way. Let's look at 8^(1/3). Again, 8^(1/3) times 8^(1/3) times 8^(1/3) is 8^(1/3 + 1/3 + 1/3), which is 8; so we need to know what times itself three times is 8. That is 2.

So now look at your problem, 4^(1/2). We know from experience that this means what number times itself is 4? That is 2, so 4^(1/2) equals 2.

Hope that helps,
- Ken "Dr." Math and Ethan Doctor On Call
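The examples in this exchange are easy to confirm with a few lines of Python (my addition, not part of the original Dr. Math answer); math.log(x, base) computes the logarithm to a given base, up to small floating-point rounding:

import math

print(math.log(8, 2))     # 3.0   Log_2(8)  = 3
print(math.log(25, 5))    # 2.0   Log_5(25) = 2
print(math.log(16, 2))    # 4.0   first practice question
print(math.log(343, 7))   # 3.0   second practice question, since 7^3 = 343
print(math.log(64, 4))    # 3.0   4^3 = 64 expressed as a Logarithm
print(4 ** 0.5)           # 2.0   4^(1/2)
print(9 ** 0.5)           # 3.0   9^(1/2)
print(8 ** (1/3))         # 2.0 up to rounding   8^(1/3)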
{"url":"http://mathforum.org/library/drmath/view/58046.html","timestamp":"2014-04-19T09:54:33Z","content_type":null,"content_length":"7630","record_id":"<urn:uuid:d85419ee-1f1c-476a-916d-48db7791e3ac>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Probability distribution using the Maxwell Boltzmann formula
Both sides were squared in that step. So on the right you have the integral times the integral. Remember that the variable you integrate over is a "dummy variable", which means it can be changed to a different character and it doesn't matter. By changing the dummy variable from "x" to "y", it is better illustrated how you are going from Cartesian to polar coordinates.
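For readers landing on this single post without the rest of the thread: the step being described is the standard device for evaluating the Gaussian integral that appears when normalizing the Maxwell-Boltzmann distribution. Schematically (my reconstruction of the step, not the original poster's notation):

I = \int_{-\infty}^{\infty} e^{-a x^2}\,dx
I^2 = \int_{-\infty}^{\infty} e^{-a x^2}\,dx \int_{-\infty}^{\infty} e^{-a y^2}\,dy
    = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-a (x^2+y^2)}\,dx\,dy
    = \int_{0}^{2\pi}\int_{0}^{\infty} e^{-a r^2}\, r\,dr\,d\theta
    = \frac{\pi}{a}

so I = \sqrt{\pi/a}. Renaming the second dummy variable to y is exactly what turns the squared integral into a double integral over the plane, which is then easy in polar coordinates.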
{"url":"http://www.physicsforums.com/showpost.php?p=4251853&postcount=2","timestamp":"2014-04-17T04:03:25Z","content_type":null,"content_length":"7252","record_id":"<urn:uuid:65d2f569-2e78-48ce-963a-d6447be4b012>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
An Example of MBH "Robustness"

In the MBH source code, they apply steps that purport to weight the temperature PCs in their regression calculations proportional to their eigenvalues. Comments on their code say:

c set specified weights on data
c weights on PCs are proportional to their singular values

This is one of two weighting procedures in MBH for the regression, the other being the weighting of the proxy series. Wahl and Ammann tout one of the key results of their study as showing that the MBH algorithm is "robust" against "important" simplifications and modifications. They claimed:

A second simplification eliminated the differential weights assigned to individual proxy series in MBH, after testing showed that the results are insensitive to such linear scale factors.

This claim is repeated in slightly different terms on several occasions:

Additionally, the reconstruction is robust in relation to two significant methodological simplifications-the calculation of instrumental PC series on the annual instrumental data, and the omission of weights imputed to the proxy series.

Our results show that the MBH climate reconstruction method applied to the original proxy data is not only reproducible, but also proves robust against important simplifications and modifications.

Indeed, our analyses act as an overall indication of the robustness of the MBH reconstruction to a variety of issues raised concerning its methods of assimilating proxy data, and also to two significant simplifications of the MBH method that we have introduced.

I'll discuss this claim in respect to weights on proxies in detail on another occasion, but today will note that it can trivially be seen to be untrue, merely through considering the acknowledged impact of differing weights on the NOAMER PC4. Obviously results are not "insensitive" to the "linear scale factors" (weight) of the PC4. Low weights remove the HS-ness of the result and yield a low RE statistic. So this particular claim is patently false.

However, it is true that the MBH result is "robust" to the use or non-use of weight factors on the temperature PCs. This can be seen through some trivial (though long-winded) linear algebra as follows.
Denoting the matrix of rescaled RPCs by $\hat{U}$, we then have: $\hat{U}_{,i}= \tilde{U}_{,i} ||U_{,i}}|| / ||\tilde{U}_{,i}||$ for i=1,…,k Or stacking: $\hat{U}=\tilde{U} ||U|| / ||\tilde{U}||$ where $||.||$ applied to a matrix $X$ is the square root of the diagonal of $X^TX$ – which, when divided by $\sqrt{N-1}$ yields the standard deviation and, for these ratio calculations, equivalent The expression for $\tilde{U}$ can be used to expand $\hat{U}$ as follows: $\hat{U}=YP^2C_{uy}^T (C_{uy} P^2 C_{uy}^T)^{-1}C_{uu}L * diag^{1/2}(U^TU)/ diag^{1/2} (\tilde{U}^T \tilde{U})$ We obtain the following long (but easily calculable expression) for $\hat{U}$ by substituting for $\tilde{U}$: $\hat{U}=YP^2C_{uy}^T AC_{uu}L * diag^{1/2}(C_{uu}) / diag^{1/2}(LC_{uu}ABAC_{uu}L)$ where $A= (C_{uy} P^2 C_{uy}^T)^{-1}$ and $B= (C_{uy}P^2 C_{yy}P^2 C_{uy}^T)$ In the 1-dimensional case (one reconstructed temperature PC), these long matrix expressions reduce to a scalar and a very simple expression – as I discussed a long time ago. Even in the multiple PC cases, because the normalization is being done one by one and L is a diagonal matrix, one can apply the simple identity: $||LXL|| = ||L|| ||X|| ||L|| = L ||X||$ where $||.||$ is the diagonal matrix from the square root of the diagonal. Thus all uses of the weight matrix L cancel out in the above expression yielding: $\hat{U}= YP^2 C_{uy}^T AC_{uu} diag^{1/2}(C_{uu})/ diag^{1/2} (C_{uu} ABA C_{uu}$) Because the matrix L cancels out because of the underlying linear algebra, one would expect that any Wahl and Ammann “experiments” with varying this procedure would be “robust” to this particular 29 Comments 1. Am I correct in assuming that Mann provides a procedure that does nothing and that his colleagues do not understand this. 2. I’m also curious about how likely it is that Mann’s hockey team doesn’t even realize that this is a problem. But I’m sure it doesn’t matter and everybody should just move on. BTW, nice catch. 3. Closing parentheses on last equation? 4. Macbeth comes to mind: “..it is a tale Told by an idiot, full of sound and fury, Signifying nothing.” 5. ..a poor player, who struts and frets his hour upon the stage, and then is heard no more. (From the same soliloquy, just before your quotation.) 6. Steve, check your email 7. Please, what must I do to be able to submit a comment? Have tried with two longish contributions and one short one, but all failed :-(( 8. However, a short attempt has just loaded successfully. Mysterious – must be that my others were too long. 9. OT: The Shakespeare quotation by Michael Jankowski and Pat Keating is from King Lear ;-) 10. 9 Pedro Wrong king. Macbeth, but he wasn’t yet king at the time he said it. 11. Re #10 Yes he was king, it was said when he received the news of his wife’s death as the English backed army attacked his castle at Dunsinane. 12. 11 After I posted, I realized that I had confused that soliloquy with the “If ’tis done,..” soliloquy, so your post was not unexpected. However, wasn’t it “My life has fallen into the sere, the yellow leaf” at the point you are talking about? I thought “Tomorrow” was earlier than Dunsinane. 13. at some point between the matrix algebra and the transition to the bard I got lost. And I’m tangent man! 14. 13 Steve M Yes, you are. But we can try to compete…. I think the tele-connection is “tale told be an idiot”. 15. Re Robin, #7, your longish post is under the related “Squared Weights” thread: http://www.climateaudit.org/?p=2962#comment-232458“>. 16. 
It might be that I’m not reading the notation correctly, but it looks like the second to last equation equates a scalar and a matrix. It seems to me that diag^1/2^(LXL) a number, while L*diag^1/2 ^(X) is a matrix. I can’t claim to be well-versed in the terminology, so I’m not sure how this affects the math logic. Steve: EVerything is a matrix in this expression. The notation is a little idiosyncratic as diag^{1/2}(X) is the diagonal matrix consisting of the square roots of diag(X). EVerything in this line is a diagonal matrix. 17. Re #12 However, wasn’t it “My life has fallen into the sere, the yellow leaf” at the point you are talking about? I thought “Tomorrow” was earlier than Dunsinane. No, it’s Act V, scene v in Dunsinane: ” Seyton The Queen, my lord, is dead. Macbeth She should have died hereafter; There would have been a time for such a word. To-morrow, and to-morrow, and to-morrow, Creeps in this petty pace from day to day To the last syllable of recorded time; And all our yesterdays have lighted fools The way to dusty death. Out, out, brief candle! Life’s but a walking shadow, a poor player That struts and frets his hour upon the stage And then is heard no more. It is a tale Told by an idiot, full of sound and fury, Signifying nothing.” The line about the “sere, the yellow leaf” comes 2 scenes earlier. 18. Re #16: Didn’t notice the update. It’s not the second to last equation anymore; it is the second to last before the note. Steve: I’m tidying the nomenclature a little offline. 19. Thanks for the notes; I understand now (I think). 20. Steve, you seem to be implying that the product of symmetric matrices must be symmetric. Not so, I’m afraid – but I haven’t delved far enough into this piece to see if it affects the result. e.g. A = {{1,3},{3,2}}, B={{7,4},{4,9}} AB = {{19,29},{31,30}} Steve: Thanks. Brain cramp on my part. I was using diagonal matrices here and the results rely on properties of diagonal matrices. I’ve corrected the text accordingly. 21. Anybody, When GISS does their monthly update to the historical reports, do they update the entire history or do the just tack on the latest month’s result to the previous month’s record? 22. wkkruse: When I visited the monthly data in march, several previous monthly values had changed. So, at least occasionally, they do more than just tack on the new month’s value. I think some HadCrut numbers also change. But I’ve only checked once, so it’s possible that the change at HadCrut was me looking at different files. In any case, if you are using this data, it seems wise to check for the latest data fairly regularly. Don’t assume you can just go in, add the most recent value, and use that only. 23. re 22,21 In any professionally managed engineering program, no previous version of data would just be discarded or overwritten. The older version would be archived and a change notice issued detailing the reason(s) for the update. This change notice would have to be approved by an authorized official before any modification to the current data could be done. Perhaps GISS is doing this and has the older versions safely archived with their change notices. 24. “It might be that I’m not reading the notation correctly, but it looks like the second to last equation equates a scalar and a matrix. It seems to me that diag^1/2^(LXL) a number, while L*diag^1/ 2^(X) is a matrix.” I have absolutely NO idea what you just said. 
You are taking this entire controversy to a place outside of my areas of expertise, and thereby marginalizing the worth of my contributions to the discussion. As I am a person of worth and intellect and moral correctness, to exclude my input in favor of a technocrat’s formulas is entirely wrong, and works to show only that you have valued and weighted the wrong inputs and influences in your results-driven propogandizing. Clearly, your ad hominum, math-centric attacks on Mr. Mann’s work cannot detract from the essential TRUTH of what he tells us. Mann shows us our evil, and your attempt to use these outlandish, inscrutable MATH problems to discredit him fails to address, much less impeach, the core of his work.Mann is post-math! 25. I have the Jan GHCN+ERSST numbers, and the ones now up for Apr. The anomaly for a number of months in 2007 are different. Jan version: 2007 87 63 59 66 55 53 51 56 50 55 46 39 Year 57 Apr version: 2007 86 63 60 64 55 53 51 56 50 55 49 40 Year 57 I didn’t check other years. 26. Rather odd; the first quarter is unaffected, the second quarter .02 lower (average 0 for the quarter) the third quarter unaffected, and the fourth quarter .04 warmer (average .1 for the quarter) and the year unaffected. Why bother; The climate only matters on 30 year time scales. Somebody ought to tell the people working with the monthly and yearly anomalies to not bother with this weather stuff. 27. OOOps, I meant .01 for the quarter 3rd. 28. re 27. you made a mistake. you made a mistake. you made a mistake. Its my tamino impression. 29. A reader posted this nice characterization of this particular MBH “robustness” at his blog. In his analysis of his hockey stick temperature reconstruction, Michel Mann claimed that his results were robust to changes in certain weighting factors. Humorously, Steven McIntyre demonstrates that it is robust because when you do the math, the weighting factors actually cancel out of all the equations. In effect, Mann was saying that y =3x/x gives the answer “3″ robustly for all values of x (well, except zero). True, but scientifically meaningless. But worrisome when a scientist has to run numerous simulations to discover the fact. I presume he thought his weighting factors were actually doing something in his model. “3x/x” captures the situation exactly. To be fair, Mann himself did not claim that the changes were “robust” to weighting factors; his claims were to things like all dendroclimatic indicators; he himself never mentioned that he used weighting factors. This particular issue was raised by Wahl and Ammann. Again to be precise, they didn’t make the claim of robustness about the weighting factor L (using the nomenclature of the post) where the claim is trivial but true, but about the weighting factor P where the claim is false. This is the Team after all. One Trackback 1. [...] To MBH’s credit, they at least employed a form of the CCE calibration approach, rather than LNA’s direct regression of temperature on proxies. There are, nevertheless, serious problems with their use (or non-use) of the covariance matrix of the residuals. See, eg, “Squared Weights in MBH98″> and “An Example of MBH ‘Robustness’”.> [...] Post a Comment
{"url":"http://climateaudit.org/2008/04/05/an-example-of-mbh-robustness/?like=1&source=post_flair&_wpnonce=eb9f8a7f00","timestamp":"2014-04-21T04:36:22Z","content_type":null,"content_length":"111258","record_id":"<urn:uuid:5b503fe0-11ee-464f-9758-74daacedcbc3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Blog Archives

Hidden Curriculum
10 Comments

My grad school advisor keeps telling me that I need to write more about my thoughts and observations in the classroom for my action research project. I figure I might as well multi-task here and just blog about my research along the way. We are now a month into the new school year, but I collected some data from them during the first week that I never had time to analyze and write about. There are some interesting (but not all that surprising) things that I found. First, I gave them a journal prompt that asked: In math class, what is the role of the teacher and what is the role of the student? Of my 72 students, 79% identified the teacher as the authority (status and epistemic), the student as passive recipient, and/or the role of school as knowledge transmission. Here are some of their responses.

Role of Teacher
All of the following are pieces of direct quotes from students:
Grade the work
Impart knowledge on the students
Teach the material
Share their knowledge
Teach clearly
Help solve problems and do them over and over again
Pass on knowledge
Teach math concepts so they are simple and easy to understand
Show examples of a problem
Teach students how to do the assigned work
Give and deliver information

Role of Student
All of the following are pieces of direct quotes from students:
Listen to the teacher
Take all the knowledge the teachers have to offer
Learn from the teacher
Take notes
Do worksheets
Pay attention
Study diligently
Be quiet
Absorb the knowledge
Absorb information
Learns what the teacher teaches

To be honest, this wasn't surprising...but it is alarming. It's alarming because the way in which we teach math inevitably (and implicitly) simultaneously teaches students things about themselves as mathematicians. Here is the evidence (responses from beginning of the year survey): 80% of my students think they can't do a math problem unless I tell them how to do it first... 85% think they need to memorize things... and about half of them don't think they can create mathematical ideas, formulas, and rules.

All of this is further support that, as I cited in my research proposal (bold added for discussion here):
1. "our classrooms are the primary experiences from which students abstract both their definition of mathematics (Schoenfeld, 1994) and their sense of self as an active participant in the authoring of mathematics (Lawler, 2010)."
2. "Identity is a model for self-direction and, as a result, a possibility for mediating agency (Holland et al., 1998). Many students have established their identity as receivers of knowledge, with no active role in creating or critiquing mathematical claims. As a result, their sense of agency is surrendered. Research supports the view that such environments cause students to surrender their sense of thought and agency in order to comply with the procedural routines outlined by the teacher/authority figure (Boaler, 2000). Signs of this include negative attitudes towards math, lack of connected knowing, and the belief that mathematics is absorbed rather than created."

I'm interested in the idea of agency (mathematical and otherwise). I'm interested in the hidden curriculum in our classes and how it impacts students' definition of math, students' formation of self, the mediation (or perpetuation) of status/race/economic/power issues, and the recognition of their own ways of thinking and being mathematical in the world.
Creating Rich Discussion by Honoring Individual Thinking
6 Comments

As I have posted about before, I really want to do a problem-based unit this year in which students attempt to answer the question, "How far away is the horizon line?" I'm getting ready to start the unit in about a week, so I have been thinking about it a lot lately. Mostly, I have been thinking a lot about the idea of a line that is tangent to a circle and how students might conceptualize that. The "visual" that I get in my head when I think about this horizon line question is this:

This morning I was sitting around the house with my girlfriend, and I decided to see what visual she might come up with and what ideas she might have about tangent lines. First, I asked her to draw the visual that comes to mind when she thinks about the horizon line problem. This is what she drew:

Pretty fantastic, right?!?! Certainly more artistic than my picture! It was interesting to me how differently two people might be thinking about the same scenario. Then I asked her to draw a circle. Then I asked her to draw a line that touched that circle in only one point. She drew line #1 below (I added the numbering to make some distinctions here). Our conversation went something like this:

ME: Tell me why you decided to draw it that way.
HER: Well, cause it would only touch the circle in one point.
ME: What would happen if you continued your line?
HER: It would cross the circle on the other side.
ME: So would that work?
HER: I guess not....Well, I was thinking about this (draws line #2).
ME: Why did you decide not to draw that one?
HER: It just seems like it would touch the circle in more than one point.
ME: What if we zoomed in? (I drew the picture on the right)
HER: Hmmm...not sure. I still feel more certain that line #1 would only touch in one spot.

To me, this was really interesting. I wonder about how students think. I wonder about their mental models. And, mostly, I wonder how much we actually listen to them and respond to how THEY think. It can be tempting to tell students about a tangent line in the context of this problem, but that would be a missed opportunity for rich discussion. Perhaps more importantly, it would be imposing a way of thinking on them that is incompatible with how they are currently thinking. You hear a lot of people say that they don't like math. I wonder how much of that is due to the fact that they have learned that math doesn't care about their ideas, that math is always right, and that they need to learn to think more like math.
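An editorial aside, not from the original post: once students do settle on the tangent-line picture, the distance to the horizon for an observer whose eye is h above a sphere of radius R is the tangent length sqrt((R+h)^2 - R^2) = sqrt(2Rh + h^2). A quick Python check with assumed values (Earth radius of roughly 6371 km and an eye height of 1.7 m, both my own choices):

import math

R = 6_371_000   # approximate radius of the Earth in metres (assumed)
h = 1.7         # assumed eye height in metres
print(math.sqrt((R + h)**2 - R**2) / 1000)   # about 4.65 km to the horizon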
In the process of all this, here is what I noticed: - nearly ALL of the students were engaged and playing for the whole time - students were having fun with each other - we had a few early conjectures in place about what strategy might be best - students uncovered structure in the problem, used it to win every time, & were able to clearly explain it - after the game was "solved," a few students were curious: "what if we added another column?" and "what happens with other board sizes?" I work hard to bring this same spirit of playfulness to other lessons. I work hard to make every day feel like a puzzle in our class. For some reason, I can never quite bridge that gap in the way I would like. I think I get pretty close most days, but for some reason "how many burger combinations are possible?" still feels more like a math problem and less like a puzzle to students. Maybe it has to do with our intent as teachers? Do we place too much emphasis on students "knowing" something specific by the end of the lesson? Could we set up the task better (slower?) so that it emerges as a puzzle? I have a lot of questions, but I do know that I value what students are learning about themselves as mathematicians and thinkers from a lesson just as much, if not more than, I value students knowing some piece of the thing we call "mathematics." Agency, Power, and Classroom Community 6 Comments I should start this post by saying that I work in a school with "inclusive classrooms." Students are in the same class based on grade level alone, no tracking by "ability level" or any other metric. I suppose I should also say that I am doing an action research project on mathematical agency. Agency is a slippery word to define, but for the sake of simplicity for now let's say mathematical agency is defined by a positive self-concept as a mathematician (as one who believes in their ability to make sense of mathematical tasks and situations and to judge the validity of those responses). Lately, I have been thinking most closely about how the set-up and discussion of tasks in the classroom can affect student agency and power - for good and bad. This all came about the other day when some of Brian Lawler's credential students were in observing my 10th grade class working on "Consecutive Sums." Basically, the prompt is: "explore consecutive sums and see what you discover" (where equations such as 1+2+3=6 and 7+8=15 are considered consecutive sums). Eventually, the conversation turned into trying to figure out which numbers can (or cannot) be written as consecutive sums. I put a table up on the board with the numbers 1-25 on it and asked groups to send people up to fill in the chart once they had found one. What ended up happening is that about 5 students dominated this part of the lesson while others sat and watched. I know there are plenty of suggestions about better ways to handle this particular part of the lesson, but I think the implications are greater than that. I have read several articles and books for my action research (a couple good ones if you are interested) that outline an amazing vision for a classroom community in which students present ideas, challenge each other, and construct meaning together. Most times when I try this, one of two things happens: 1. I select and sequence student share outs so that certain voices are heard that are usually silenced. Mostly, because I have created the conversation, there isn't much to talk about and students seem disinterested. 
They aren't debating anything; they aren't solving things collaboratively. OR... 2. I'll select one or two pieces of work to get a conversation started and then step out of the way. This usually gets students talking and debating. The only problem is, it's usually no more than 10 students out of a class of 20-30. I'm not sure I have any answers to this yet, or even that an answer exists that will work for all groups of students. But, I am really interested by the intricacies of teaching...by the tasks we choose, by how we set up those tasks, by how we get students talking about those tasks, by how we conclude those tasks, and, especially, how ALL of those moves inevitably make a difference in what students are learning about themselves as capable mathematicians. This last bit, to me, is far more important than the "mathematics" that they learn.
{"url":"http://www.doingmathematics.com/2/archives/09-2012/1.html","timestamp":"2014-04-21T15:18:18Z","content_type":null,"content_length":"40291","record_id":"<urn:uuid:ba70b6fc-f2b5-4f31-a137-76d916cbd6af>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
West Easton, PA Prealgebra Tutor Find a West Easton, PA Prealgebra Tutor ...The SAT Writing section is my favorite component of the test. I've consistently scored 800, often without a single error. The dreaded GMAT. 34 Subjects: including prealgebra, English, physics, calculus ...Tutoring with me will be a positive experience for you and/or your child. I enjoy working with students of all ages and abilities who are serious about learning. I have teaching and tutoring experience with students in both public and private schools in New Jersey and Pennsylvania. 22 Subjects: including prealgebra, English, reading, ESL/ESOL ...I have seen many very good Algebra 2 and Pre-Calculus students flounder in Trigonometry. My one-on-one method is to show the student that Trig is quite understandable and not as overwhelming as they might believe. I am a certified PA math teacher and have taught all levels of Math, including Algebra I, Geometry and Algebra II. 12 Subjects: including prealgebra, calculus, geometry, algebra 1 ...I will work with you to develop these habits. Drawing diagrams of laws and problems you encounter can help to solve even those problems you have not seen previously demonstrated. My background in visual illustration helps me to help you develop this skill. 13 Subjects: including prealgebra, reading, physics, writing ...I enjoy all the different topics covered by this course. I have taught trig for several years, including helping to develop the curriculum for my district. I love that by the time to you get to trig, many more real world applications become accessible. 12 Subjects: including prealgebra, calculus, statistics, geometry Related West Easton, PA Tutors West Easton, PA Accounting Tutors West Easton, PA ACT Tutors West Easton, PA Algebra Tutors West Easton, PA Algebra 2 Tutors West Easton, PA Calculus Tutors West Easton, PA Geometry Tutors West Easton, PA Math Tutors West Easton, PA Prealgebra Tutors West Easton, PA Precalculus Tutors West Easton, PA SAT Tutors West Easton, PA SAT Math Tutors West Easton, PA Science Tutors West Easton, PA Statistics Tutors West Easton, PA Trigonometry Tutors Nearby Cities With prealgebra Tutor Alpha, NJ prealgebra Tutors Durham, PA prealgebra Tutors Easton, PA prealgebra Tutors Forks Township, PA prealgebra Tutors Freemansburg, PA prealgebra Tutors Glendon, PA prealgebra Tutors Martins Creek prealgebra Tutors Milford, NJ prealgebra Tutors Nazareth, PA prealgebra Tutors Palmer Township, PA prealgebra Tutors Phillipsburg, NJ prealgebra Tutors Riegelsville prealgebra Tutors Stewartsville, NJ prealgebra Tutors Stockertown prealgebra Tutors Tatamy prealgebra Tutors
{"url":"http://www.purplemath.com/West_Easton_PA_prealgebra_tutors.php","timestamp":"2014-04-19T09:31:50Z","content_type":null,"content_length":"24212","record_id":"<urn:uuid:d2c80135-0320-4caa-87dd-b4299db35a7a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00582-ip-10-147-4-33.ec2.internal.warc.gz"}