content
stringlengths
86
994k
meta
stringlengths
288
619
Using phi in a Dif EQ Prob November 6th 2007, 03:01 PM #1 Junior Member Sep 2007 Using phi in a Dif EQ Prob Solve the following system: X' = ([[1,-1],[1,-1]])X + ([1/t],[1/t]), X(1) = ([2],[-1]) using X = phi(t)*phi^(-1)(t_0)X_0 + phi(t)*integral[from t_0 to t](phi^(-1)(s)F(s)ds) Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/calculus/22160-using-phi-dif-eq-prob.html","timestamp":"2014-04-18T04:08:27Z","content_type":null,"content_length":"28527","record_id":"<urn:uuid:e96de550-6bb0-4fb9-9102-e242d7c66744>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure This Math Challenges for Families - Did You Know? Mathematicians use the word combination differently than it is used in a "combination lock." In a mathematical combination, the order in which an item occurs is not important. Some old safes can be opened using more than one combination. A disc lock consists of a sequence of discs numbered on their outer edges. To open the lock, you turn the discs (usually three) to the appropriate numbers. Many apartment buildings, businesses, and airports-even restrooms-use some form of keypad lock.
{"url":"http://www.figurethis.org/challenges/c22/did_you_know.htm","timestamp":"2014-04-20T08:27:03Z","content_type":null,"content_length":"14782","record_id":"<urn:uuid:dbabd3f4-2f0c-4f25-9916-128dc1c7c316>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the volume of the solid generated when the region bounded by e^x and the x-axis on the interval [0,1] is rotated around the y-axis, using (a) the washer method (b) the shell method Cut the solid in two at y = 1. The region looks like this: and the solid looks like this: (a) If we try to use the washer method, we have to deal with the solid in two separate pieces. From y = 0 to y = 1, all slices perpendicular to the y-axis are disks of radius 1. The volume of this part of the solid is From y = 1 to y = e, slices perpendicular to the y-axis are washers with outer radius 1 and inner radius x = ln y. The volume of this part of the solid is To get the volume of the entire solid, we add the volumes of its two pieces: (b) If we use the shell method we can deal with the whole solid at once. The cylinder at position x has radius x and height y = e^x. The volume of the solid is
{"url":"http://www.shmoop.com/area-volume-arc-length/washer-shell-exercises.html","timestamp":"2014-04-18T11:05:47Z","content_type":null,"content_length":"28421","record_id":"<urn:uuid:2b23bf56-9bb2-4df8-81c0-7162b6ecd121>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement of Length Lesson Plan: Standard and Nonstandard Units Submitted by: Stefanie Sheridan In this lesson plan, which is adaptable for grades K-3, students use BrainPOP Jr. resources to explore non-standard and standard units for measuring length. Students then use non-standard and standard units to measure the length of various objects and themselves/their classmates. Students will: 1. Learn about non standard and standard units for measuring length 2. Use non-standard and standard units to measure the length of various objects and themselves • Internet access for BrainPOP Jr. • Interactive whiteboard (or LCD projector) • Sidewalk chalk • Paper and pencil for each student • Non-standard measurement units, such as DVD cases • Paperclips • Rulers • Class set of the Activity length; width; height; inch; foot; compare; predict; summarize Preview the Inches and Feet movie and prepare pause points for discussion. Lesson Procedure: 1. Project the Talk About It feature for the class. Demonstrate how to measure in standard and non-standard units and record the results by typing directly into the form. 2. Show the Inches and Feet movie to the class. 3. Pass out the Activity and have students practice measuring with the paperclip (nonstandard unit) and ruler (standard unit.) 4. Take students outside. Have a student lay down on the sidewalk and demonstrate how to measure from his or her feet to his or her head. Use both nonstandard and standard measures. 5. Pair students up and have them take turns measuring one another using both measurement tools. Have students record the measurements on a chart or on their own paper (perhaps the back of the activity paper they completed earlier.) 6. Talk with students about the measurements they recored. Who is taller? Who is shorter? 7. Bring students back to the classroom and have them write about what they learned. They could write, for example: "John is 10 bricks long. I am 6 bricks long. 10 is 4 more than 6 so John is 8. As a review or assessment, have students play the Game or take the Easy Quiz or Hard Quiz. Extension Activity: This activity could also be completed at home with a parent or caregiver. Share the link to this lesson with students' family members and have them measure one another! Students can write about their findings and compare the height of each family member, then bring their research into school to discuss and share. You must be logged in to post a comment.
{"url":"http://www.brainpop.com/educators/community/lesson-plan/measurement-of-length-lesson-plan-standard-and-nonstandard-units/?bp-topic=metric-units","timestamp":"2014-04-18T19:01:53Z","content_type":null,"content_length":"66285","record_id":"<urn:uuid:227c15a3-9c26-4728-acf9-c66230ec919f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with Transient Circuits I know you have a rule requiring an attempt at the problem before posting here. The trouble is, I have a department required simulation project due this Wednesday and my professor is very behind in her lectures and has yet to even mention transient circuits. I dont expect this problem to be done for me, but I would REALLY appreciate some tips or preliminary help in getting started. I am tasked to: Design a 1st-order circuit having a time constant of 1 ms. Simulate the output voltage for a square-wave input voltage. (Using Microcap to simulate) This is all the info we are given...with this little of information, I am really at a loss on how to proceed. Thanks in advance,
{"url":"http://www.physicsforums.com/showthread.php?p=3614361","timestamp":"2014-04-24T14:36:54Z","content_type":null,"content_length":"28536","record_id":"<urn:uuid:e7cbb105-591e-4e45-9322-1e65ef785bb9>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 4 Resolution: standard / high Figure 4. CAGE replicate from THP-1 cells after 8 hours of lipopolysaccharide treatment. For each position with mapped tags, the logarithm of the number of tags per million (TPM) in the first replicate is shown on the horizontal axis, and the logarithm of the number of TPM in the second replicate on the vertical axis. Logarithms are natural logarithms. Balwierz et al. Genome Biology 2009 10:R79 doi:10.1186/gb-2009-10-7-r79 Download authors' original image
{"url":"http://genomebiology.com/2009/10/7/R79/figure/F4","timestamp":"2014-04-19T07:37:06Z","content_type":null,"content_length":"11676","record_id":"<urn:uuid:6b4d9489-cff6-491f-86bc-e84bbf1af616>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
"Spectral decomposition" action on the unitary group up vote 3 down vote favorite Consider a matrix $U$ from the unitary group $U_N(\mathbb{C})$ and consider the map $f:U_N(\mathbb{C})\rightarrow U_N(\mathbb{C})$ where $f(U)$ is the matrix of the eigenvectors of $U$. What is known for the orbits of this map, i.e. $(f^n(U):\;n\in\mathbb N)$ ? And what can we say about the eigenvalues of each $f^n(U)$ ? One observation is that if you endow $U_N(\mathbb{C})$ with its Haar measure, then its image by $f$ is still Haar distributed : it kind of preserves disorder. An other observation is that if you embed $U_{N-1}(\mathbb{C})$ in an obvious way in $U_N(\mathbb{C})$, then $f$ preserves that subgroup. I guess this has been studied somewhere, but I'm not able to find any reference (but maybe it is just that I don't know a proper name for it). linear-algebra gr.group-theory group-actions ds.dynamical-systems 2 Your map is not well-defined. Even if $U$ has distinct eigenvalues we are still free to multiply the eigenvectors by complex numbers of absolute value one, or to permute them. There is even more freedom if eigenvalues are repeated. Different choices of $f(U)$ will give completely unrelated answers for $f^2(U)$. – Neil Strickland Jun 5 '13 at 21:00 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged linear-algebra gr.group-theory group-actions ds.dynamical-systems or ask your own question.
{"url":"http://mathoverflow.net/questions/132842/spectral-decomposition-action-on-the-unitary-group","timestamp":"2014-04-16T13:35:33Z","content_type":null,"content_length":"47352","record_id":"<urn:uuid:3a5239dc-4da4-47ea-8b0f-d45de4e25cbe>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Snow White and the Seven Pixels I began the semester just playing with the different mapping software while trying to find where my math and me fit into the class. I was very unsure of how math could be related to mapping and how mapping could and would be interesting to me, a science geek. As I worked with the first software introduced, Arcview, I found myself drawn to the color scheme for the map. As I changed the colors manually, I noticed that the color had numbers related to them. These numbers were broken down into three categories, with the final color depending on the setting of each category. I found it intriguing that you could actually choose your color by putting numbers into the computer. It reminded me of painting by number as a child. I also had a side thought about mapping ski resorts, since that is a hobby of mine. I worked with this idea, first using Atlas to plot the points of several ski resorts in Colorado, but then changed to digital maps of Colorado and plotted the ski resorts using their latitudes and longitudes. In this part of my project I learned how to maneuver and work my way through a new program, testing different buttons to find what I was looking for. I also learned how to edit my web page, make points in the map clickable using Map Edit, and put text into my map using Adobe Photo Shop. I became more comfortable with a computer and authoring tools. Now I needed to somehow link together or relate my two interests of colors as numbers (the math) and skiing (the mapping). With Sandi’s help, I decided to try and relate these interests using group theory from Modern Algebra. For the remainder of the semester I have been researching and reading about group theory. Group Theory Group: a set of 1:1 functions on finite sets that could be grouped together to form a closed set satisfying the following properties: associativity- the operation is associative; (ab)c=a(bc) for all a,b,c in the set. identity- there is an element, e (the identity), in the set such that ae=ea=a for all a in the set. inverses- for every element a in the set, there is an element b in the set (the inverse) such that ab=ba=e. closure- any pair of elements in the set can be combined without going outside the set, which means that the final product is already a part of the set. *note: the group is abelian if it also is commutative, ab=ba, but not all groups satisfy this property. Subgroup: a subset, H, of the original group satisfying the group conditions under the operation of the original group, G. Proper subgroup: if H is a subgroup of G, but does not equal G, totally contained within the group. Trivial subgroup: the identity alone. Nontrivial subgroup: any subgroup that is not the identity. Order of a Group: the number of elements the group contains. Order of an Element: the smallest positive integer n such that g^n=e, g is the element. The abstract approach to determining if a set is a group is that the set has only one identity element, the right and left cancellation laws hold, ba=ca implies b=c, and ab=ac implies b=c, and the inverses are unique. Since I was hopefully going to relate these mathematical concepts to skiing, Sandi and I decided to work with the symmetries of a square to explain the mathematical concepts in hopes that it could then be related to the various snow colors of the slopes. The symmetry group of a plane figure includes the set of all symmetries of the figure, which are rotations and reflections. First one must imagine a square with the four corners numbered from 1 to 4. Next you look at all the different rotations and reflections that can happen. From here you write down where the number moved to during the rotation or reflection. For example: when you don't rotate the square at all you indicate this by writing (1)(2)(3)(4), which means that 1 goes to 1, 2 goes to 2, 3 goes to 3, and 4 goes to 4. When you rotate the square clockwise 180 degrees: (13)(24), this means 1 goes where the 3 was and the 3 goes where the 1 was, the 2 goes where the 4 was and the 4 goes where the 2 was. When you reflect it over the x-axis (12)(34), 1 goes to the 2 place, 2 goes to the 1 place, 3 goes to the 4 place, and 4 goes to the 3 place. The following table gives the eight possible symmetries of the square. No matter what combination of rotations and/or reflections you do, you will always end up with one of these eight products listed in the first column and row of the table. Another way of saying this is, the table is filled in without introducing any new elements. This shows that the set I am working with is closed. Table 1: Multiplication table of the eight possible outcomes from various movements of the square. │ 8 Outcomes │(1)(2)(3)(4)│ (1234) │ (1432) │ (13)(24) │ (12)(34) │ (14)(23) │ (1)(3)(24) │ (2)(4)(13) │ │(1)(2)(3)(4)│(1)(2)(3)(4)│ (1234) │ (1432) │ (13)(24) │ (12)(34) │ (14)(23) │ (1)(3)(24) │ (2)(4)(13) │ │ (1234) │ (1234) │ (13)(24) │(1)(2)(3)(4)│ (1432) │ (1)(3)(24) │ (2)(4)(13) │ (14)(23) │ (12)(34) │ │ (1432) │ (1432) │(1)(2)(3)(4)│ (13)(24) │ (1234) │ (2)(4)(13) │ (1)(3)(24) │ (12)(34) │ (14)(23) │ │ (13)(24) │ (13)(24) │ (1432) │ (1234) │(1)(2)(3)(4)│ (14)(23) │ (12)(34) │ (2)(4)(13) │ (1)(3)(24) │ │ (12)(34) │ (12)(34) │ (2)(4)(13) │ (1)(3)(24) │ (14)(23) │(1)(2)(3)(4)│ (13)(24) │ (1432) │ (1234) │ │ (14)(23) │ (14)(23) │ (1)(3)(24) │ (2)(4)(13) │ (12)(34) │ (13)(24) │(1)(2)(3)(4)│ (1234) │ (1432) │ │ (1)(3)(24) │ (1)(3)(24) │ (12)(34) │ (14)(23) │ (2)(4)(13) │ (1234) │ (1432) │(1)(2)(3)(4)│ (13)(24) │ │ (2)(4)(13) │ (2)(4)(13) │ (14)(23) │ (12)(34) │ (1)(3)(24) │ (1432) │ (1234) │ (13)(24) │(1)(2)(3)(4)│ So far I have shown that the set is closed. Next the set has a unique identity of (1)(2)(3)(4). If you notice from the table that when any element in the set is multiplied by (1)(2)(3)(4) the product is the original element. Therefore by definition of identity, (1)(2)(3)(4) is the identity. For example when you multiple (1234) x (1)(2)(3)(4) you get (1234). The next property is a unique inverse for each element. This means that for each of the 8 elements in the set there is another element in the set that when multiplied together gives the identity as the product. If you look at table 1 you will see that for every element there is another element, or itself, that when multiplied together gives the identity as the product. For example: when (1234) x (1432)=(1)(2)(3)(4). The last property is associativity and one cannot see from the table, but if 3 elements are multiplied together, the order in which they are multiplied does not matter, a(bc)=(ab)c. For example: b=(1234), c=(1432),a= (13)(24) do (bc) first which gives (1)(2)(3)(4) then multiply by (a) which gives (13)(24). Now we do the other side: do (ab) first which gives (1432) multiply by c=(1432) which gives, as table 1 shows, (13)(24). Therefore the set is associative, closed, has a unique identity, and has unique inverses. This set is a group. This group is not abelian because it does not satisfy the commutative property with all the elements of ab=ba: (12)(34) x (1234)=(13)(2)(4), whereas (1234)x(12)(34) = (1)(3)(24). These are not the same and therefore the property is not satisfied. Since the group has eight elements its order is eight, by the definition noted earlier. The way these elements are multiplied together is a bit confusing, so I will try to explain it here. Take for example (1234) x (1234) 1. Look at the 1 in the first element, it goes to 2, now look at the 2 in the second element, it goes to the 3 You can write: (13 2. Look at the 3 in the first element it goes to the 4, look at the 4 in the second element it goes to the 1, ( the parenthesis show that it is closed and therefore almost like a circular pattern within, so the number before the parenthesis, goes back to the first number of the element in the same parenthesis) You can write :(13) 3. Look at 2 in the first element, it goes to the 3, look at the 3 in the second element, it goes to the 4 You can write: (13)(24 4. Look at the 4 in the first element, it goes to the 1, look at the 1 in the second element it goes to the 2 You can write: (13)(24) and you are done. Within this group there is the possibility that there are subgroups. Using the definition from above, I was able to find eight subgroups and the identity. Five are of the order 2 and three are of the order 4 because of how many elements are in the subgroup. I=(1)(2)(3)(4) which is the trivial subgroup Order 2: (12)(34), I (13)(24), I (14)(23), I We know these are subgroups because they are their own inverse. (Proper) (1)(3)(24), I (2)(4)(13), I Order 4: (1234),(1432),(13)(24), I (12)(34),(13)(24),(14)(23),I These subgroups are all closed, have inverses, an identity, and are associative. (Non-trivial) The symmetries of a square are a dihedral group which appear frequently in art and nature, which is why we chose to analyze it. Now I had done some math and could try and relate it to the snow covered mountains. At first I labeled the corners of the square using four different shades of white to represent the different colors of snow a skier might see depending on the shadow being made, due to placement of the sun and trees. This was quite simple because now instead of numbers as the group, I had specific colors where the number previously existed. I decided to look at it from a shadow perspective and try to define each time a day by one of the eight elements. I was trying to prove that with any movement, no matter how complicated, the outcome was equivalent to one of the eight elements of the Dihedral group above, numbers or colors. The part I had a problem with, in this model, was the fact that the ski slope is not a square and therefore does not have the symmetry I needed it to. I had tried many ways to relate the shadows from the sun hitting the ski slope at different angles, or with different terrain, but because the symmetry is different I couldn't go any farther. Many times I would get to a conclusion and realize that I couldn't put trees in an area for one shadow, and not the other. The other aspect that was hard to put in was when an element was closed. At first I was unsure how to show this, but then I realized I might be able to illustrate the parenthesis as leveling out along the slope. I again ran into trouble modeling all eight elements according to this group. So instead of dwelling on this issue we decided to look at the mapping in another way. We noticed that we could look at the elevations of a mountain using Visual Terrain. I labeled parts of Steamboat Colorado on the map below and then the program interpolated the rest. Map: Elevation of Steamboat Colorado, (700 means 7000) From this map I was able to use Visual Animator and make a 3-D map of the terrain, as shown below. Map: 3-D Steamboat Colorado From this map you can see some of the shadowing I was talking about. Depending on such factors as where you were on the mountain, what time of day, and the terrain, different shadows could be seen. In this program I was able to change the view by moving the target and the camera, so I could shift the shadows that are illustrated in this map. To remember the elevations I had to choose colors for the contour lines that were visible and definable, so I went with as many basic colors as I could. Then when I was choosing to color the 3-D map I chose the summer color schema. Can I relate this all back to what I originally was thinking about, colors? Yes, I can. I went back to my group of the square symmetry and made a lattice. A lattice is another way to show relationships between the various elements or subgroups in a group. The lattice shows that as you proceed downwards those elements are subgroups of the ones connected above them. Since I was able to make a lattice for this group, why not try it for the way colors are chosen. First I hold one aspect of the color at 0 so I am only dealing with two colors, then I make a lattice by choosing the max number between the x component or y component of the ones I am combining. This is best understood through an illustration. This illustration looks like something we have all seen before, but probably don't remember, and that is Pascal's Triangle. If I were to overlay Pascal's triangle over my lattice of two colors it would fit perfectly because they are mathematically related through binomials. Pascal's triangle gives the coefficients while the color lattice gives the powers for the x and y coordinates. For example: the third row of Pascal's triangle is 1 2 1, and the third row of the 2-color lattice is (2,0) (1,1) (0,2). Now using a form of the equation x^2 + 2xy + y^2, where the powers change according to the elements in the 2-color lattice. (2,0) is the 1 in Pascal's triangle, and is therefore x^2, (1,1) is the 2 in Pascal's triangle, and is therefore 2x^1y^1, and (0,2) is the 1 on the right side of Pascal's triangle, therefore it is y^2. If we skip down and look at row 4, the equation above will work its way in from x^4 to y^4. The 2-color lattice is a way of tracking the coefficients. This shows that we could use a Binomial Color guide where numbers are listed rather than colors, like on the color formula guide. The binomial equation used to determine an element in Pascal's triangle is n!/(r!(n-r)!) where n is the row number starting at 0 and r is the position of the element in each row starting at 0, which is now related to the 2-color lattice I have made. Since the color schemes are split into 3 main categories, in some programs it is blue, green, and red, while in others it is hue, saturation, and value, each of which can range from 0 to 255, the example above is a simplification of this where one category is held at 0 and the other two can range from 0 to 255. At the point 000 the color is black and as long as one stays at 0 the other two have to change by almost 100 to get a significant color change. Adobe Photo shop uses Pantone colors, which I was able to obtain a guide for. Through a function one could choose a four color mode or a three color mode. Since the color guide I had was four colors I chose to the CYMK mode, where C is cyan, Y is yellow, M is magenta, and K is black. I then tested out this theory of choosing a color by numbers. I chose a simple one that holds in this case two colors constant, but I still had on the screen R,G,B so I assumed since I no longer had green that I could replace it with the yellow number. Pantone gave the percentage of yellow as 62.5 and Blue as 37.5. I had to round up or down because the computer would only take integers, but I put in for R =0, G=159, and B=96, and I came up with a match for the color of Pantone 361U. I found this project to be very interesting and helpful. I had fun doing the project so I spent more time learning about the tools, making mistakes, and learning the concepts that I previously had trouble with. My confidence working with computers has greatly increased. I would have never thought that through the semester I would have integrated skiing, math, mapping, and colors, but I did. Text References Childs, L. (1995). A Concrete Introduction to Higher Algebra. (2nd Ed.) New York, NY: Springer. Gallian, J. (1998). Contemporary Abstract Algebra. (4th Ed.) Boston: Houghton Mifflin Co. Mackiw, G. (1985). Applications of Abstract Algebra. New York, NY: John Wiley & Sons. Computer Programs Digital Mapping software Map Edit Adobe Photo shop Visual Terrain and Animator Netscape Communicator
{"url":"http://www-personal.umich.edu/~sarhaus/courses/NRE530_F1998/bblloyd/skimath.html","timestamp":"2014-04-20T23:48:38Z","content_type":null,"content_length":"23267","record_id":"<urn:uuid:09880faa-6d2d-4cdc-9471-7f0489a6ffd9>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Hollywood, CA Algebra 2 Tutor Find a Hollywood, CA Algebra 2 Tutor My services are great for students and prospective students studying for standardized tests (ACT, SAT, GRE, GMAT) and /or who need help with school coursework. I was educated at West Point and Cornell University where I was in the top 5% of my class and received a degree in Economics and Mandarin C... 30 Subjects: including algebra 2, reading, Spanish, English ...I love tutoring because it gives me a chance to focus on one person at a time and most people just need that extra attention to excel. I have had great results with all of my clients and I have a good sense of humor, so the time we spend together will not be boring. I have often been told that ... 11 Subjects: including algebra 2, physics, geometry, algebra 1 ...I earned my MBA in June 2009 from the UCLA Anderson School of Management (#1 Fully-Employed MBA Program 2007, Business Week). I also have a Master’s of Science in Biomedical Engineering from the University of Southern California (December 1999) and Bachelor’s of Science in Biomedical Engineering... 30 Subjects: including algebra 2, chemistry, physics, calculus ...In addition to helping students prepare for standardized tests (SAT, ACT, ISEE, SSAT), I provide tutoring in history, mathematics, English, and other subjects. I also specialize in helping students write essays for college applications, teasing out what specific aspects of their lives will make them interesting to admissions officers. Aside from tutoring, I'm a professional 42 Subjects: including algebra 2, reading, English, German ...For better understanding and less frustration, it's a good idea to start sketching drawings and to start building models immediately, whether they are right or wrong. As a tutor, I believe that students who are willing to use the above learning techniques have a great chance to have their study ... 17 Subjects: including algebra 2, chemistry, organic chemistry, ESL/ESOL
{"url":"http://www.purplemath.com/hollywood_ca_algebra_2_tutors.php","timestamp":"2014-04-16T07:51:47Z","content_type":null,"content_length":"24347","record_id":"<urn:uuid:1627ffe6-8133-4f79-85e5-b84a86c0646e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Elmwood Park, NJ Statistics Tutor Find an Elmwood Park, NJ Statistics Tutor ...I will tailor our sessions together to reflect your abilities, goals, and time-constraints. I also provide much more individual feedback than could be possible in a large classroom, so you will find that time spent with me is much more productive. Whether you have taken a class before, or if yo... 55 Subjects: including statistics, English, reading, writing ...Most importantly, I am personable and easy to talk to; Lessons are thorough but generally informal. I also make myself available by phone and e-mail outside of lessons--My goal is for you to succeed on your tests. My expertise is in basic and advanced math: algebra 1/2, trigonometry, geometry, precalculus/analysis, calculus (AB/BC), and statistics. 10 Subjects: including statistics, calculus, physics, geometry ...Finally, I studied mathematics at Princeton University where I again encountered this material. I'm very familiar with it. I studied A-level mathematics and further mathematics in the UK and received As in both subjects. 40 Subjects: including statistics, chemistry, reading, physics ...This program includes grammar as well, and is not just geared for the younger grades, but scaffolds higher-level phonics & vocabulary up through 12th grade. It is an individualized, naturally differentiated program. Through my education and teaching experience, I've gained deep insight into how to best motivate students to create strong habits and schedules for themselves. 22 Subjects: including statistics, English, reading, algebra 1 I have been teaching statistics and tutoring for almost 10 years and in that time have never found a student I could not teach statistics to, no matter how much they hate/fear math, provided they are willing to put in some work and practice. As a trained psychologist and neuroscientist, I can also ... 5 Subjects: including statistics, algebra 1, psychology, Microsoft Excel
{"url":"http://www.purplemath.com/elmwood_park_nj_statistics_tutors.php","timestamp":"2014-04-18T23:44:58Z","content_type":null,"content_length":"24477","record_id":"<urn:uuid:22c43131-d931-4a61-a6b6-727502451c8e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US7971082 - Method and system for estimating power consumption of integrated circuit design The present invention relates generally to an Integrated Circuit (IC) design, and in particular, to a method and system for estimating the power consumption of the IC. In recent years, power has become a key design metric for System-on-Chip (SOC) designs implemented in deep sub-micron (DSM) technologies. There are several known methods for estimating the power consumption of SOCs. Accurate power estimation is essential for minimizing the power consumption of a circuit. Power estimation is also needed for various functions, such as in the process of designing a robust power grid. Moreover, accurate power estimation of parameterized reusable circuits (also known as cores or intellectual property (IP) cores) is required at the early stages of the analyses, to analyze power versus performance trade offs among the different components. Commonly used power estimation methodologies include both analytical and simulation methods. In the analytical method, the power consumption of the circuit is estimated from switching at different circuit nodes and the capacitances associated with those nodes. However, for this method, detailed knowledge of the internal implementation of the circuit is required in order to obtain capacitances of circuit nodes. Moreover, the method has practical application only to circuits with a very regular structure, such as memory arrays. In the simulation method, a power model of the circuit is constructed from circuit simulations, the functions of which include switching activity, circuit state and the input vectors applied to the circuit. However, with an increase in the number of inputs and the complexity of today's circuits, the complexity of the process of creating such power macro models has increased exponentially. Although clustering can be used to reduce the complexity of power macro models, clustering is practical only for small circuit structures. Further, these techniques require substantial computation and storage resources. An exceptionally high level of integration is possible in SOCs these days. However, due to the large size of the circuit and the computational complexity of the power estimation process, power estimation of real-world applications and workloads is not feasible at the gate-level or lower levels of abstraction. Previous work on instruction-based system-level power evaluation of SOC peripheral cores is based on power characterization of instructions or operations executed by the block. However, these prior art methods are applicable only to estimate average power of the blocks and cannot be used for cycle accurate power consumption of a block. Further, these methods do not provide a clear methodology for modeling power consumption of multiple overlapping operations, which is very common on complex blocks. It would be advantageous to be able to accurately estimate the power consumption of a circuit design without the need for excessive computational power and memory. The following detailed description of the preferred embodiments of the present invention will be better understood when read in conjunction with the appended drawings. The present invention is illustrated by way of example and not limited by the accompanying figures, in which like references indicate similar elements. FIG. 1 is a schematic block diagram of a System-on-Chip (SOC), in accordance with an embodiment of the present invention; FIG. 2 illustrates a flow diagram depicting a method for estimating the power consumption for at least one IP block in an integrated circuit (IC) design, in accordance with an embodiment of the present invention; FIGS. 3 and 4 illustrate a flow diagram depicting a method for estimating the power consumption for at least one IP block in an IC design, in accordance with another embodiment of the present FIG. 5 is a schematic block diagram of a system for estimating the power consumption for at least one IP block in an IC design, in accordance with an embodiment of the present invention; and FIG. 6 is a schematic block diagram of a characterizing module, in accordance with an embodiment of the present invention. The detailed description, in connection with the appended drawings, is intended as a description of the presently preferred embodiments of the present invention, and is not intended to represent the only form in which the present invention may be practiced. It is to be understood that the same or equivalent functions may be accomplished by different embodiments that are intended to be encompassed within the spirit and scope of the present invention. In one embodiment, the present invention provides a method for estimating the power consumption for at least one Intellectual Property (IP) block in an integrated circuit (IC) design. The power consumption estimation method includes identifying at least one port in the at least one IP block. The at least one port is associated with at least one operation. Further, the method includes identifying a sequence of micro-operations for the at least one operation and identifying a set of micro-operations per cycle. The set of micro-operations constitute an operation pipeline. An energy per cycle for each cycle of the operation pipeline is determined based on the set of micro-operations per cycle, using one or more of an idle energy value, a micro-operation isolated energy (MIE) value, an overlap energy value (OE), and a micro-operation overlap energy (MOE) value. The power consumption of the at least one IP block is then determined using the energy per cycle for each cycle of the operation pipeline. In another embodiment, the present invention provides a system for estimating the power consumption for at least one IP block in an IC design. The power consumption estimation system includes a port identification module, a micro-operation sequence identifier module, a micro-operation cycle identifier module, an energy-calculator module, and an operation power calculator module. The port identification module identifies at least one port in the at least one IP block. The at least one port is associated with at least one operation. The micro-operation sequence identifier module, in communication with the port identification module, identifies a sequence of micro-operations of the at least one operation. The sequence of micro-operations constitutes the operation pipeline. The micro-operation cycle identifier module, in communication with the micro-operation sequence identifier module, identifies a set of micro-operations per cycle. The set of micro-operations constitute an operation pipeline. The energy-calculator module, in communication with the micro-operation cycle identifier module, determines an energy per cycle for each cycle of the operation pipeline based on the set of micro-operations per cycle, using one or more of an idle energy value, a micro-operation isolated energy (MIE) value, an overlap energy (OE) value, and a micro-operation overlap energy (MOE) value. The operation power calculator module, in communication with the energy-calculator module, determines power consumption of the operation pipeline using the energy per cycle for each cycle of the operation pipeline. Various embodiments of the present invention provide a method and system for estimating power consumption for at least one IP block in an IC design. The estimation of the power consumption is based on the functional operations performed by the at least one IP block. The energy consumption of each operation pipeline can be based on variations of data and control signal toggling. The estimation of energy consumption is dependent on whether an operation pipeline overlaps another operation pipeline. However, estimating the energy consumption of each cycle of an operation pipeline enables an accurate calculation of the power consumption of the at least one IP block. Further, different energy values are stored in predefined tables before calculating the energy consumption. When the energy values are characterized for storage in predefined tables, a number of reduction steps are applied, which helps to reduce the complexity of creation of the predefined tables and estimating the energy consumption, while maintaining the desired level of accuracy. Referring now to FIG. 1, a schematic block diagram of a system-on-chip (SOC) 100 is shown, in accordance with an embodiment of the present invention. The SOC 100 includes a processor 102, a first intellectual property (IP) block 104, a second IP block 106 and a third IP block 108. In embodiment, the first IP block 104 is a co-processor, the second IP block 106 is a memory sub-system, and the third IP block 108 is a data input-output block. The first, second and third IP blocks 104, 106 and 108 are connected to the processor 102. The processor 102 can receive instructions, process the instructions and provide a desired output. The first IP block 104 performs a specialized task to reduce the load on the processor 102. The second IP block 106 is a memory sub-system and provides additional memory to the processor 102 to store instructions and data. The third IP block 108 controls the transfer of information between a main memory of the processor 102 and the external components to which the processor 102 is connected, for example, the first, second and third IP blocks 104, 106 and 108. It should be apparent to those skilled in the art that the SOC 100 can include one or more of the IP blocks 104, 106 and 108. Further, it should be apparent to those skilled in the art that other blocks apart from those shown may be present in the SOC 100. FIG. 2 illustrates a flow diagram depicting a method for estimating the power consumption for at least one IP block in an IC design, in accordance with an embodiment of the present invention. The at least one IP block can be, for example, the second IP block 106. In one embodiment, the power consumption of the second IP block 106 is estimated. The second IP block 106 can be configured to execute transactions or operations, based on the inputs provided to it. The inputs are sent to the second IP block 106 via ports. After executing the operations, the output of the second IP block 106 is obtained at the different ports allocated to receive inputs and provide outputs. In another embodiment, the same ports can be used to receive inputs and provide outputs. In one embodiment, the power consumption is estimated by determining the energy per cycle for each cycle of an operation pipeline of the second IP block 106. The energy per cycle is determined by identifying a set of micro-operations per cycle in an operation pipeline. At step 202, at least one port is identified in the second IP block 106, which is associated with at least one operation. In one embodiment, one or more input/output ports of the second IP block 106 are identified using the technical specifications of the IC design. The technical specifications provide block diagrams and interface specifications of the second IP block 106. Further, the technical specifications can also be used to identify the operations that can be received at, and executed from, each port. A set of ports P , present in the second IP block 106, can be denoted as {P[1], P [2], . . . , P[n], . . . , P[N]}. Each port P[n ]of the set P can include a group of control and data signals required by the operations the port is designed to support. The power consumption of the second IP block 106 is calculated based on operations or transactions received at one or more identified ports. At step 204, a sequence of micro-operations of the at least one operation is identified. The sequence of micro-operations constitutes an operation pipeline. The second IP block 106 can perform a set of operations O , wherein the operations are denoted as {O[1], O[2], . . . , O[k], . . . , O[K]}. Each operation O[k ]of set O can include a sequence of micro-operations. The sequence of micro-operations forms an operation pipeline with stages S[k], where the stages are denoted as {S[k1], S[k2], . . . , S[k3], . . . , S[kI]}. An ith stage S[ki ]of the operation O[k ]can execute in the ith cycle relative to the start of operation O[k]. The sequence of micro-operations is executed in successive cycles of the operation pipeline. At step 206, a set of micro-operations per cycle is identified based on the possible overlap of operation pipelines of concurrently executing operations on the second IP block 106. At step 208, energy per cycle is determined for each cycle, based on the set of micro-operations per cycle, using an idle energy value, a micro-operation isolated energy (MIE) value, an overlap energy (OE) value, and/or a micro-operation overlap energy (MOE) value. The idle energy value is the base energy consumption of the second IP block 106, and typically includes the clock network power and leakage power components of the second IP block 106. The MIE value is the energy consumption of a single micro-operation of an operation pipeline, wherein the single micro-operation is being executed on a specific port. The OE value is the energy consumption of the set of micro-operations for different operations executed on multiple ports in the same cycle. The MOE value is the energy consumption of a micro-operation when there is an overlap of that micro-operation with at least one other micro-operation in the same cycle. The MOE energy cost of a micro-operation is by definition independent of the other micro-operations which overlap with it in the same cycle. Calculations of the MIE value, the OE value and the MOE value are explained in detail in conjunction with FIG. 6. Further, the energy of the operation pipeline can be calculated using the energy per cycle of each cycle of the operation pipeline. Similarly, the energy of all the operation pipelines executed in the second IP block 106 are calculated and used to calculate the total energy consumption of the second IP block 106. Further, the power consumption of the second IP block 106 is calculated using the total energy consumption of the second IP block 106. The power consumption of the other IP blocks can be similarly calculated. At step 210, the power consumption of the second IP block 106 is determined using the energy per cycle for each cycle of the operation pipeline. Similarly, power consumption of one or more IP blocks in the IC design can be estimated. Further, by estimating the power consumption of the one or more IP blocks in the IC design, power consumption of the IC design can be calculated. FIGS. 3 and 4 illustrate a flow diagram depicting a method for estimating the power consumption for at least one IP block in an IC design, in accordance with another embodiment of the present Referring now to FIG. 3, in one embodiment, the second IP block 106 is configured to process specific instructions. Input data is provided to the second IP block 106 through input ports, wherein the second IP block 106 executes the operations according to the data provided. The input data is provided through input ports. After executing the operations, the output is provided through output ports. In another embodiment, the same port can act as an input port and an output port. At step 302, at least one port in the second IP block 106 is identified, wherein the at least one port is associated with at least one operation. One or more ports of the second IP block 106 are capable of receiving and executing either a subset or all the operations of the second IP block 106. In one embodiment, energy per cycle is calculated for the at least one operation of the at least one port. At step 304, a sequence of micro-operations is identified for the at least one operation on the at least one port in the second IP block 106. In one embodiment, the sequence of micro-operations constitutes the at least one operation. Further, the sequence of micro-operations is executed in a pipeline. The identification of the sequence of micro-operations facilitates the process of identifying the overlap among the micro-operations of two or more operations in the same cycle. At step 30G, a set of micro-operations executed in one cycle is identified. Different operations have a different sequence of micro-operations. The operations in the IC design are executed in parallel. This implies that the micro-operations of the operations that fall under the same cycle are executed first, after which the micro-operations in the next cycle are executed. In one embodiment, the second IP block 106 performs a cache controller operation. A cache controller operation of a read-hit access and a read-miss access includes a sequence of micro-operations. For a read-hit operation in a first cycle, a tag comparison check for a hit or miss in the cache is carried out and the speculative memory read is performed. A hit implies that a read operation is possible for a given cycle. A speculative read operation is executed before it is known when the read operation will occur and from what address will it occur. If there is a hit, it reduces the latency time taken to read data from the cache. In a second cycle, the memory read is completed and data is sent to the second IP block 106. In a third cycle, the cache usage state is updated, such as, for a least recently used (LRU) state. In one cycle, one micro-operation is executed. Similarly, for a read-miss operation in a first cycle, a tag-comparison check for a hit or a miss in the cache is carried out and the speculative memory read is performed. For this operation, the result is a miss. In a second cycle, a line to be replaced due to the resulting miss is identified and a new tag is set for it. In a third cycle, a cache usage state is updated, such as an LRU state, and the thrashing of dirty data is followed by the fetch request for the required data from a lower-level memory subsystem. From the fourth cycle to the kth cycle, a variable number of wait cycles are required for a requested data arrival, depending on the memory access time and system latency. In the k+1th cycle, the fetched data is sent out to the second IP block 106 and is also written into the cache. Similarly, other operations, along with their sequence of micro-operations, can be identified for any other IP block. Further, information on operations and their sequence of micro-operations for an IP block is available in technical reference manuals. There is a difference in the nature of the pipeline between a programmable processor and a genetic peripheral IP Block. The pipeline of a programmable processor is generally of a regular nature and is identical for most of the instructions. Typically, a pipeline of a programmable processor can include stages such as fetch, decode, memory access, execute and write-back. Further, the micro-operation pipelines of the different operations of an IP block can have a varying pipeline depth and different micro-operations performed in them, as described in the case of the read hit/miss of a cache controller. At step 308, the idle energy is determined when the at least one operation in the second IP block 106 is not being executed. The idle energy is referred to as E[idle]. In one embodiment, the E[idle ] constitutes the base energy of the second IP block 106, and typically includes the clock network power and the leakage power components of the second IP block 106. The idle energy value is used in further calculations. Idle energy is taken into account to enable accurate energy estimation of the second IP block 106. Apart from the idle energy, the MIE values, the MOE values and the OE values are needed to calculate the energy consumption of second IP block 106. The MIE value is the energy consumption of a single micro-operation of an operation pipeline, where the single micro-operation is being executed on a specific port. The OE value is the energy consumption of the set of micro-operations for different operations that are executed on multiple ports in the same cycle. The MOE value is the energy consumption of a micro-operation when there is an overlap of that micro-operation with at least one other micro-operation occurring in the same cycle. The MOE energy cost of a micro-operation is by definition independent of the other micro-operations which overlap with it in the same cycle. The calculation of the above mentioned energy values is explained in detailed in conjunction with FIG. 6. Referring now to FIG. 4, at step 402, whether there is more than one micro-operation per cycle in the operation pipeline is checked. At step 404, the energy per cycle of the operation pipeline is determined using the idle energy value and the MIE values when there is only one micro-operation per cycle in the operation pipeline. The MIE value is the energy consumption of a single micro-operation of an operation pipeline, where the single micro-operation is executed on a specific port. The calculation of MIE values is explained in detail in conjunction with FIG. 6. At step 406, it is determined whether the MOE reduction of the table size is applicable when there is more than one micro-operation per cycle in the operation pipeline. In one embodiment, the reduction of the table size is related to the reduction in the number of the OE values stored in one or more predefined tables. This reduction of the table size is based on the assumption that when at least more than one micro-operation occurs in a cycle, there is an overlap between the energy values by virtue of control or data path logic sharing. Further, in one embodiment, a suitable algorithm is applied, by which a fewer number of OE energy values are needed to be stored in the predefined table. Reducing the table sizes simplifies the calculation of the energy per cycle while maintaining a desired accuracy level. The calculation of the MIE values, the MOE values and the OE values is explained further in conjunction with FIG. 6. In one embodiment, the reduction of the table size is based on the port independence and operation independence of one or more of the micro-operation isolated energy and the overlap energy of the set of micro-operations per cycle. The reduction of the table size helps in reducing the number of MIE and OE values that need to be stored and used to calculate the energy per cycle. The reduction of the MIE values is divided into two categories. The first category involves the reduction of the MIE values, based on the port independence of the micro-operation isolated energy. The reduction of the MIE values, based on the port independence of the micro-operation isolated energy involves verifying the hypothesis that for a given operation O[k], the micro-operation isolated energy of stage S[ki ]is independent of the port P[n ]that executes the operation, within an acceptable range of accuracy. This implies that the MIE values of the micro-operation occurring on any of the ports P[n1 ]or P [n2 ]is almost of an identical value and is independent of the port on which it is being executed. Consequently, any one of the MIE values can be stored in the first predefined table. The second category involves the reduction of the MIE values, based on the operation independence of the micro-operation isolated energy. In one embodiment, two operations, O[k1 ]and O[k2], have different sequences of micro-operations, even if they have only a few micro-operations in common. For example, in the case of a specific cache controller implementation, both the read-hit operation and the read-miss operation can have a common micro-operation for performing tag comparison in the first cycle of the operations. The reduction of the MIE values, based on the operation independence of the micro-operation isolated energy, can include the testing of the hypothesis that the energy consumption of micro-operation M[i] is independent of the operation for which the micro-operation M [i ]occurs. This implies that the same micro-operation Mi occurs during different stages of operations O[k1 ]and O[k2], and hence, only a single value can be stored for the MIE value in the first predefined table. This reduction reduces the number of MIE values that need to be stored in the first predefined table. Further, the reduction of the OE values is divided into two categories. In one embodiment, the first category can involve the reduction of the OE values, based on the port independence of the overlap energy. The reduction of the OE values based on the port independence of the overlap energy, verifies the hypothesis that the overlap energy of a set of overlapping micro-operations is independent of the ports on which these operations are executing, within an acceptable range of accuracy. This implies that the OE value for a set of micro-operations being executed from different ports in the circuit is almost of an identical value, and hence, any one value of the overlap energy can be stored in the third predefined table. This reduction reduces the number of OE values that needs to be stored in the third predefined table. The second category involves the reduction of the OE values, based on the overlap vector independence of the micro-operation overlap energy. The reduction of the OE values, based on the overlap vector independence of micro-operation overlap energy, verifies the hypothesis that there is a micro-operation overlap energy for a given micro-operation that is independent of the overlapping micro-operation vector. The overlapping vector is a set of micro-operations that occur in the same cycle. This micro-operation overlap energy is stored in the second predefined table. This reduction reduces the number of OE values that need to be stored in the third predefined table. Accordingly, if at step 406 it is determined that the MOE modification step is not applicable, then at step 408, the energy per cycle for each cycle of the operation pipeline is determined using the idle energy value, the MIE value and the OE value. However, if at step 406, it is determined that the MOE modification step is applicable, then at step 410, the energy per cycle for each cycle of the operation pipeline is determined using the idle energy value, the MIE value and the MOE value, when the MOE reduction step (the second OE reduction step mentioned above) is applicable. Referring now to FIG. 5, a schematic block diagram of a system for estimating the power consumption for at least one IP block in an IC design is shown, in accordance with an embodiment of the present invention. In one embodiment, the power consumption of the second IP block 106 is calculated. The second IP block 106 is configured to execute operations, based on the inputs provided to the second IP block 106 via the input ports. Similarly, after executing the operations, the output is obtained at output ports. In one embodiment, a single port acts as both an input port and an output port. The power consumption is calculated based on the transactions received at the ports. The system for estimating the power consumption includes a port identification module 502, a micro-operation sequence identifier module 504, a micro-operation cycle identifier module 506, an energy-calculator module 50S, and an operation power calculator module 510. The port identification module 502 identifies at least one port in the second IP block 106, where the at least one port is associated with at least one operation. The second IP block 106 includes a set of ports P , where the ports can be denoted as {P[1], P[2], . . . , P[n], . . . , P[N]}. Each port P[n ]of the set P can include a group of control and data signals, as required by the operations the port is designed to support. The micro-operation sequence identifier module 504, in communication with the port identification module 502, identifies a sequence of micro-operations for the at least one operation, wherein the sequence of micro-operations constitutes an operation pipeline. The micro-operation cycle identifier module 506, in communication with the micro-operation sequence identifier module 504, identifies the set of micro-operations per cycle in the operation pipeline. The set of micro-operations per cycle is identified to accurately estimate the energy consumption of the second IP block 106. The energy-calculator module 508, in communication with the micro-operation cycle identifier module 506, determines energy per cycle of each cycle of the operation pipeline. The energy per cycle is based on the set of micro-operations per cycle, using one or more of, an idle energy value, an MIE value, an OE value, and an MOE value. The idle energy value is the base energy consumption of the second IP block 106, and typically includes the clock network power and leakage power components of the second IP block 106. The MIE value is the energy consumption of a single micro-operation of an operation pipeline, where the single micro-operation is executed on a specific port. The OE value is the energy consumption of the set of micro-operations for different operations that are executed on multiple ports in the same cycle. The MOE value is the energy consumption of a micro-operation when there is an overlap of that micro-operation with at least one other micro-operation in the same cycle. The MOE energy cost of a micro-operation is by definition independent of the other micro-operations which overlap with it in the same cycle. In one embodiment, the MIE values, the MOE values and the OE values are stored in respective first, second and third predefined tables. Further, the first, second and third predefined tables can be included in a characterizing module. The characterizing module provides the first, second and third predefined tables as inputs to the energy calculator module 508. The characterizing module is explained in detail in conjunction with FIG. 6. The operation power calculator module 510, in communication with the energy-calculator module 508, determines the power consumption of the operation pipeline using the energy per cycle of each cycle of the operation pipeline. The operation power is used to determine the average power of the second IP block 106. The operation power of the other IP Blocks is calculated in a similar manner. The operational power of the IC design is calculated by combining the operational power of the second IP block 106 and the other IP blocks. The power consumption estimation system can also include a total energy calculation module 514. The total energy calculation module 514 determines the total energy consumption of the second IP block 106 using the energy consumption of one or more operation pipelines. In one embodiment, the total energy-calculation module 514 uses the operation power calculated for the second IP block 106. The power consumption estimation system can also include an average power calculation module 516 that determines the average power consumption of the second IP block 106 using the total energy consumption of the second IP block 106. The power consumption of the other IP blocks is calculated in a similar manner. The power consumption of the IC design is calculated by combining the power consumption of all of the IP blocks. FIG. 6 is a schematic block diagram of a characterizing module 600, in accordance with an embodiment of the present invention. The characterizing module 600 includes a MIE value calculator module 602 , a MOE value calculator module 604, and an OE calculator module 606. The characterizing module 600 may also include first, second and third predefined tables 608, 610 and 612, respectively. However, it will be understood by those of skill in the art that the predefined tables 608, 610 and 612 may comprise registers or a local memory formed in the characterizing module 600 or that the predefined tables 608, 610 and 612 may be defined in an external memory coupled to the energy calculator module 508, in which case the tables 608, 610 and 612 would provide inputs to the energy calculator module 508. In one embodiment, the characterizing module 600 stores the MIE values, the MOE values and the OE values in the first, second and third predefined tables 608, 610 and 612, respectively. In one embodiment of the invention, the characterizing module 600 also includes a MIE reduction module 620, a first OE reduction module 622 for overlap vector independence and a second OE reduction module 624 for port independence. The MIE values, the MOE values and the OE values are required for the calculation of the energy consumption of the second IP block 106. A set of ports P , present in the second IP block 106, can be denoted as {P[1], P[2], . . . , P[n], . . . , P[N]}. Each port P[n ]of the set P can include a group of control and data signals, as required by the operations the port is designed to support. In one embodiment, the power consumption of the second IP block 106 is calculated based on the operations executed at the ports. The second IP block 106 can perform a set of operations O , where the operations are denoted as {O[1], O[2], . . . , O[k], . . . , O[K]}, where each operation O[k ]of the set O includes a sequence of micro-operations. The sequence of micro-operations is executed in the successive cycles of an operation pipeline. The sequence of micro-operations forms the operation pipeline with stages S[k] , where the stages are denoted as {S[k1], S[k2], . . . , S[k3 ]. . . S [kI]}. Stage S[ki] of the set S[k ]executes in the ith cycle that is relative to the start of operation O[k]. In one embodiment, W[k,i,n] can represent the occurrence of micro-operation stage S [ki ]of an operation O[k], when the operation O[k ]is executed from port P[n]. A T-length set of micro-operations W[k,i,n ]that occur on a given cycle in the second IP block 106 can be denoted as V[Tm] A group of all such possible T-length sets V[T] can be denoted as {V[T1], V[T2], . . . , V[Tm], . . . , V[TN]}, where V[Tm]=[W[k1,i1,n1], W[k2,i2,n2], . . . , W[kl,il,nl], . . . , W [kT,iT,nT]], where O[k1]εO, S[ki1]εS[k], P[nl]εP. The complete set of all the possible length event micro-operations sets V is denoted as {V[1], V[2], . . . , V[T], . . . , V[L]}. Further, all the possible sets of overlaps of the set of micro-operations W[k,i,n ]of the second IP block 106 can be included in the set V . The single length overlap vector V[lm ]can represent the occurrence of the isolated micro-operation {W[k,i,n]}. This is required to calculate the micro-operation isolated energy (MIE) E[v1m] [ ] [MIE] of all the possible single-length overlap vectors V[1m ]of all the O[k]εO, S[ki]εS[k], P[n]εP. The isolated energy consumption of each micro-operation W[k,i,n ]denoted as E[v1m] [ ] [MIE ]is calculated by applying functional vectors which execute a single operation O[k ]on a specific port P[n ]in isolation, where isolation implies that prior to and after the selected operation execution on the selected port, the circuit is in idle state and that during the execution of operation O[k], there is no other operation executing on the circuit. E[v1m] [ ] [MIE ]represents the energy consumption of the micro-operation set V[1m ]that exceeds the base idle energy consumption of the IP block. The idle energy consumption of the IP block can be denoted as E[idle]. In one embodiment, the micro-operation isolated energy, when a micro-operation of stage S[ki ]of operation O[k ]occurs when operation O[k ]is being executed from port P[n], is given as: E [V1m] [ ] [MIE] =E [V1m] −E [idle](1) In the equation (1), E[V1m ]represents the measured energy consumption of stage S[ki ]of operation O[k ]that is executing from port P[n]. In one embodiment, the calculation of E[V1m] [ ] [MIE ]can be based on different data toggling under different functional scenarios. For example, when W[k,i,n ]corresponds to a micro-operation stage comprising data read from a memory and driven on a data bus in a memory controller circuit, the variation in isolated energy consumption can be taken into account, depending on the number of data bits and address bits toggling. In one embodiment, the MIE value calculator module 602 in conjunction with MIE reduction module 620 creates the data stored in the first predefined table 608. The MIE value calculator module 602 calculates the MIE value using the equation (1) for each micro-operation W[k,i,n], where W[k,i,n ]represents micro-operation stage S[ki ]of operation O[k], when operation O[k], is executed from port P[n]. The MIE reduction module 620 checks for both port and operation independence reduction steps. The calculated MIE values are stored in the first predefined table 608 and are used to determine the energy per cycle. The MOE value calculator module 604, in communication with the MIE value calculator module 602, calculates the MOE value. The micro-operation overlap energy represents the overlap energy consumption of this micro-operation with other micro-operations. To calculate the MOE value of a micro-operation W[k1,i1,n1 ]a functional vector that causes occurrence of two micro-operation W[k1,i1,n1 ]and W [k2,i2,n2 ]is applied, and the MOE value can be determined using the MIE values of the micro-operations W[k1,i1,n1 ]and W[k2,i2,n2]. The MOE value of the micro-operation W[k1,i1,n1], can be given as: E [v1m1] [ ] [MOE] =E [idle] +E [v1m2] [ ] [MIE] +E [v1m1] [ ] [MIE] −E [j](2) In the equation (2), E[V1m1] [ ] [MIE ]and E[v1m2] [ ] [MIE ]represent the MIE values of the micro-operations W[k1,i1,n1 ]and W[k2,i2,n2], respectively. Further, E[j ]represents the measured energy consumption of the cycle j in which these two micro-operations overlap. Equation (2) assumes that E[v1m2] [ ] [MIE ]is greater than or equal to E[v1m1] [ ] [MIE]. This requirement comes because the MOE formulation is based on the premise that that when multiple micro-operations overlap in a given cycle, the effective energy contribution of a micro-operation is its (MIE-MOE) value, except for the micro-operation with the largest value of MIE. The MOE values of the different micro-operation events can be calculated by the MOE value calculator module 604. In one embodiment, the calculated MOE values are stored in the second predefined table 610. The OE value calculator module 606; in communication with the MIE value calculator module 602, and in conjunction with the first OE Reduction Module 622 (OE overlap vector independence) checks for applicability of overlap-vector independence reduction step; and if found, then invokes the MOE value calculator module 604 to add the MOE energy value for the specific micro-operation to the data stored in the second predefined table 610. If applicability of the overlap-vector independence reduction step is found to be not true, then the OE value calculator module 606 in conjunction with the second OE Reduction Module 624 (port independence reduction module) adds the OE energy value for the specific overlap-vector to data stored in the third predefined table 612. The overlap energy is calculated when a set of micro-operations of different operations are executed on multiple ports in the same cycle. The set of T overlapping micro-operations {W[k1,i1,n1], W [k2,i2,n2], . . . , W[kT,iT,nT]} can be denoted as V[Tm]. The overlap energy can be calculated for all possible T -length micro-operation sets by using the MIE values, where E[V11] [ ] [MIE], E [V12] [ ] [MIE], . . . , E[V1T] [ ] [MIE ]are the MIE energies of micro-operations comprising V[Tm]. For one embodiment, the overlap energy of an event set V[Tm ]can be given as: E [VTm] [ ] [OE] =E [idle]+(E [V11] [ ] [MIE] +E [V12] [ ] [MIE] + . . . +E [V1T] [ ] [MIE])−E [VTm](3) In the equation (3), E[VTm ]represents the measured energy consumption of the overlap cycle. Further, for one embodiment, the overlap energy can be a positive quantity, a negative quantity or zero. During the execution of a set of micro-operations in a cycle, when some portion of the control or data path logic is shared by multiple micro-operations, that portion would be counted multiple times while adding the MIE of the micro-operations in the set of micro-operations. This would result in the overlap energy being a positive quantity. However, when the overlap of multiple micro-operations causes extra controller or data path toggling, the overlap energy would be a negative quantity. Further, if there is no dependence between the control and data paths between the overlapping micro-operations, the overlap energy would be zero. As explained above, the characterizing module 600 calculates the MIE, MOE and OE values, and stores them in the first, second and third predefined tables 608, 610 and 612, respectively. These energy values are used to estimate the power consumption of the IP block. Characterization including the reduction steps is a one-time activity leading to the creation of the first, second and third predefined tables 608, 610 and 612. Subsequently, power estimation may be carried out on any number of activity profiles (i.e., operation sequences) using the previously created predefined tables 608 , 610 and 612. While various embodiments of the present invention have been illustrated and described, it will be clear that the present invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the present invention, as described in the claims.
{"url":"http://www.google.com/patents/US7971082?ie=ISO-8859-1","timestamp":"2014-04-19T12:38:42Z","content_type":null,"content_length":"111711","record_id":"<urn:uuid:0ade6649-7f50-4f8d-88b3-741ecd05b794>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Lazier functional programming, part 2 In Lazier functional programming, part 1, I suggested that some of the standard tools of lazy functional programming are not as lazy as they might be, including even if-then-else. Generalizing from boolean conditionals, I posed the puzzle of how to define a lazier either function, which encapsulates case analysis for sum types. The standard definition: data Either a b = Left a | Right b either :: (a → c) → (b → c) → Either a b → c either f g (Left x) = f x either f g (Right y) = g y The comments to part 1 fairly quickly converged to something close to the laxer definition I had in mind: which is a simple generalization of Luke Palmer‘s laxer if-then-else (reconstructed from memory): bool c a b = (a ⊓ b) ⊔ (if c then a else b) My thanks to Dave for “laxer” as a denotational alternative to the operational term “lazier”. Note in the either definition that the argument ordering allows computing (f ⊥ ⊓ g ⊥) just once and reusing the result for different values of the third argument. Similarly, cond :: a → a → Bool → a cond a b = const (a ⊓ b) ⊔ (λ c → if c then a else b) Let’s look at some examples: cond 6 8 ⊥ ≡ (6 ⊓ 8) ⊔ ⊥ ≡ 6 ⊓ 8 ≡ ⊥ cond 7 7 ⊥ ≡ (7 ⊓ 7) ⊔ ⊥ ≡ 7 ⊓ 7 ≡ 7 cond (3,4) (4,5) ⊥ ≡ ((3,4) ⊓ (4,5)) ⊔ ⊥ ≡ (⊥,⊥) cond (3,4) (3,5) ⊥ ≡ ((3,4) ⊓ (3,5)) ⊔ ⊥ ≡ (3,⊥) cond [2,3,5] [1,3] ⊥ ≡ ([2,3,5] ⊓ [1,3]) ⊔ ⊥ ≡ ⊥ : 3 : ⊥ These results are more useful than they might at first appear. Monotonicity implies that the following information-inequalities also hold for an arbitrary boolean c: cond 6 8 c ⊒ ⊥ cond 7 7 c ⊒ 7 cond (3,4) (4,5) c ⊒ (⊥,⊥) cond (3,4) (3,5) c ⊒ (3,⊥) cond [2,3,5] [1,3] c ⊒ (⊥ : 3 : ⊥) In other words, given only a and b, we can already start extracting some information about the value of cond a b c. If c takes a long time to compute (forever in the extreme case), we may be able to get some useful information out of cond before cond gets any information out of c. Moreover, the presence of “⊔” hints at parallelization. Evaluation of a ⊓ b can proceed in one thread, while another computes the standard conditional. At the function level, cond 6 8 ⊒ const ⊥ cond 7 7 ⊒ const 7 cond (3,4) (4,5) ⊒ const (⊥,⊥) cond (3,4) (3,5) ⊒ const (3,⊥) cond [2,3,5] [1,3] ⊒ const (⊥ : 3 : ⊥) This observation is handy when cond is partially applied to two arguments and the resulting function is reused. We can even compute a ⊓ b once and share the results among many calls. These same considerations apply more generally to the laxer either definition above. Does the definition of either above really satisfy the conditions of the puzzle, and how did I come up with it? A key insight is that monotonicity implies the following two inequalities for all a and eitherL f g ⊥ ⊑ eitherL f g (Left a) eitherL f g ⊥ ⊑ eitherL f g (Right b) Consistency with the defining equations then imply that eitherL f g (Left a) ≡ f a eitherL f g (Right b) ≡ g b eitherL f g ⊥ ⊑ f a eitherL f g ⊥ ⊑ g b and hence eitherL f g ⊥ ⊑ f a ⊓ g b Specializing to a ≡ b ≡ ⊥ gives us an single least of these upper bounds: eitherL f g ⊥ ⊑ f ⊥ ⊓ g ⊥ similarly to Albert’s reasoning for the similar case of if-then-else. I asked for a information-maximal version of eitherL so as to get as much information out with as little information in, in the spirit of lax (non-strict) functional programming. For the maximal version, choose equality: eitherL f g ⊥ ≡ f ⊥ ⊓ g ⊥ There are three fairly simple proofs remaining, namely that (a) this equality holds for the definition of eitherL above, (b) the use of (⊔) is legitimate in that definition, and (c) the definition as a whole is consistent with the defining equations. Finally, note that lub (⊔) is used in a restricted way here. In this use u ⊔ v, not only are the arguments u and v information-compatible (the semantic pre-condition of lub), but also, u ⊑ v. This property shows up in several applications of lub, and I suspect it can be useful in efficient implementation. 10 Comments 1. Paul Brauner: So, if I understand, the ideal version cannot be reached effectively without solving the halting problem. The best you can do is to approximate it as much as you want by choosing a timeout as long as you want on the computation of (f ⊥ ⊓ g ⊥). Am I right? 20 September 2010, 2:04 am 2. conal: Paul: No. I meant the definitions in the post as actual, computable, ideal implementations, except that I left out some class constraints, which I’ll add in an edit. 20 September 2010, 1:16 pm 3. David Sankel: Dimtar’s implementation, eitherL' f g x = ( f ⊥ ⊓ g ⊥ ) ⊔ either f g x , seems to generally have the same properties as your implementation. On the other hand, I'm not certain if it "allows computing ( f ⊥ ⊓ g ⊥ ) just once and reusing the result for different values of the third argument". Couldn't an evaluator easily detect ( f ⊥ ⊓ g ⊥ ) as a common subexpression in two different applications of eitherL' f g? 21 September 2010, 7:08 am 4. conal: Couldn’t an evaluator easily detect ( f ⊥ ⊓ g ⊥ ) as a common subexpression in two different applications of eitherL’ f g? What do you mean by an “evaluator” in this question? Something that structurally analyzes and transforms thunks at run time? If you’re asking about compilers, then my understanding is a compiler could perform the same code transformation that I applied, and that GHC intentionally does not, so as to avoid space leaks. 21 September 2010, 9:09 am 5. David Sankel: I guess the thing that’s tripping me up is that I don’t know what is being allowed to compute (who is computing?) in the sentence “Note in the either definition that the argument ordering allows computing (f ⊥ ⊓ g ⊥) just once and reusing the result for different values of the third argument”. Maybe the subject here is a Haskell interpreter with certain assumed optimizations (or lack 21 September 2010, 10:30 am 6. conal: David: Ah, thanks. Now I get the ambiguity. I’ll try to clarify. Suppose you define h = eitherL f g (for some functions f and g), and then apply h to several Either-valued arguments. Then under GHC or GHCi, the work of computing f ⊥ ⊓ g ⊥ will be done just once, not repeatedly. Now suppose instead that eitherL were defined as eitherL f g x = (f ⊥ ⊓ g ⊥) ⊔ either f g x and again, define h = eitherL f g (for some functions f and g). In this case f ⊥ ⊓ g ⊥ will get redundantly computed every time h is applied, because it is expressed within the scope of the “λ x → ...” (after desugaring). The same sort of operational difference holds between the two following denotationally-equivalent expressions: const (nthPrime 100000) \ _ -> nthPrime 100000 21 September 2010, 2:28 pm 7. David Sankel: Thanks. That clears it up. For those interested, I’ve asked haskell-cafe for more information on that particular optimization. 22 September 2010, 9:42 am 8. conal: The new version of the lub package has modules for glb and for lazier ‘if-then-else’ and ‘either’. 22 September 2010, 11:41 am 9. conal: Make that laxer if-then-else and either. Just fixed module name. Thanks to Luke Palmer for the suggestion, via Twitter. luqui: “@conal, lazier? Not laxer?”. conal: @luqui For the module name? “Laxer” would be more fitting than “Lazier”, wouldn’t it? luqui: @conal, well according to the motivating blog posts, yes. I’m wondering why you went to the trouble to find a good term then didn’t use it. conal: @luqui simple reason: i forgot. just fixed. thanks! 22 September 2010, 12:24 pm 10. Douglas McClean: @Paul, I thought so too at first, but now I think I understand why that isn’t the case. If you alternate between taking steps to evaluate f \bot and taking steps to evaluate g \bot, one of three things will happen. Either the partial answers you start to get will differ at some point (in which case you are done), or you will get to the end of both without encountering a difference (in which case you are done), or you will keep evaluating forever (in which case the answer was \bot anyway). (Is this the gist of it? Or am I missing a different reason?) 22 December 2010, 5:03 pm
{"url":"http://conal.net/blog/posts/lazier-functional-programming-part-2","timestamp":"2014-04-21T07:03:25Z","content_type":null,"content_length":"85555","record_id":"<urn:uuid:570f83b1-23d3-45d8-8a80-fcd91e495793>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Prolegomena to a Theory of Kinds "... re are two places U and V, subject A lives in U and subject B lives in V, and U- and V-environments are phenomenally indiscernible for A and B. Yet, similar things called `N' by both A and B are, really, examples of the natural kind K in U and K' in V, with 34 investigated in AI (as frames, scrip ..." Add to MetaCart re are two places U and V, subject A lives in U and subject B lives in V, and U- and V-environments are phenomenally indiscernible for A and B. Yet, similar things called `N' by both A and B are, really, examples of the natural kind K in U and K' in V, with 34 investigated in AI (as frames, scripts, etc.) are only a parasitic example of basic prototipical schemes. Already in logical analysis of language variational patterns and related thresholds have become manifest in counterfactuals --- in view of their ceteris paribus clauses. Even limiting ourselves to the logical point of view, research on non-monotonicity (implicit in the diagram above) has shown that it can be dealt with rigorously -- or rigorously explained away. Finally, the very applicability of mathematics, with its modular architecture, excludes a globally holistic heuristics. To summarize, the frame problem demands a finer mathematical description of symbolic Gestalten, a problem addressed in Peruzzi (1992). Optimal f
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=8925728","timestamp":"2014-04-16T22:24:18Z","content_type":null,"content_length":"12203","record_id":"<urn:uuid:8e6cdb78-bf2c-4f74-a322-90c785c325aa>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
DAC Ladder DAC Ladder Networks Choose Subtopic │Ladder Networks │Coding Schemes│Speed and Glitches │ Anyone who has turned on a light switch has run a one bit digital to analog converter (DAC). When the switch is off, output is zero. When it's on, the full potential of the power system (battery or power grid) is applied to whatever is beyond the switch. With a multibit DAC, we throw switches, and some fraction of a reference potential or current is transmitted to the device output. There are two common networks used in DACs: resistive ladders and capacitive ladders. We will deal mainly with the former; the latter are used mostly in high speed DACs whose low frequency performance isn't particularly important (an example would be a DAC in an audio device. People can't hear ultra-low frequencies, so an offset in the DC or low-frequency performance is irrelevant). How can we understand ladders? A step at a time. In all cases, we'll have some reference voltage V[ref] that is the full scale range of the DAC. For purposes of this section, presume that the DAC encodes straight binary from 0 to V[ref]. Other coding schemes aren't hard to understand once this first one is clear. The first circuit to consider is a simple voltage divider. For two equal resistors, we have 3 points we can monitor: zero, the mid-point of the resistive ladder (where V = V[in]/2), and the top of the ladder (where V = V[in]). If we had an electronic switch, we could choose among 3 outputs. Clearly, given N resistors, we can select among N+1 outputs. Unfortunately, there are problems with this simple design. Leaving out details of electrical engineering, we need N-1 resistors to get N output. If we want 12 bits of resolution, 2^12 = 4096, and that's a lot of resistors. Isn't there a way to use fewer resistors? It turns out that there's a method that uses only 2N resistors. It's called an R/2R ladder. Start at the far right. How does V[3] compare to V[2]? By analogy to the previous figure, V[3] = V[2]/2. At first, it looks like computing how V[2] and V[1] relate could be ugly. However, there are 2 routes to ground from V[2]. Both of them have a total resistance of 2R. The effective resistance of two resistors in parallel relate as 1/R[effective] = 1/R[path 1] + 1/R[path 2]. The effective resistance from V[2] to ground via the two parallel paths is R, the same as the resistance of the individual resistors! This means that each numbered voltage is 1/2 the potential of the next lower numbered voltage. Suddenly, we have a way to get 1/2^n fractions of a reference potential for any n, just by using a large enough number of identical resistors. Actually, there's a limit; the precision of the resistors has to be such that the errors in each stage are smaller than the smallest fractional voltage. That means that, if we have a 12 bit ladder, all resistances must be precise to 1 part in 2^12 = 1/4096. That's about 0.025%. That sounds difficult to achieve, but resistors can be trimmed to 0.1% fairly easily, 0.01% with effort, and even 0.001% if temperature is carefully controlled and adequate standards are available. We thus have a way to look at a reference potential and 1/2, 1/4, 1/8, ... of that reference. How do we combine those potentials so we can get potentials that vary in small, equal steps from 0 to V [in]? We need to combine the current in the R/2R network with one amplifier. Let's take a moment to examine the Inverting Operational Amplifier With Gain, one of the most common analog electronic Inset A above is the symbol used for the collection of transistors that are, collectively, an operational amplifier. There are two inputs, + and -. The output voltage is equal to a gain A times the difference between the potentials of the two inputs. Typically, A is large, at least 10^4 and usually 10^6. Thus, a change of a few microvolts between the inputs can change the output by several volts. If we feed back some of the output to one of the inputs and anchor one input at a fixed potential, the only stable behavior is to have the potentials of the two inputs very close to each other. Thus, if we connect the output to the inverting (-) input, V[out] = A(V^+ - V[out]) which, with a little algebra, gives V[out] = A/(A+1) V^+. Because A is big, A/(A+1) is close to 1, so the output potential equals the input. This re-enforces the point that the operational amplifier, properly wired, drives the potentials of its two inputs to nearly the same potential. So now look at inset B. The inverting (-) input will be driven to ground potential. Because the inverting input is at ground potential but not physically wired to ground, it is said to be a "virtual ground." If the output is at a potential V[out], the current through the resistor R is just V[out]/R. But where does that current come from? It all must come from the input current I[in] since ideal operational amplifiers draw no current through their inputs. Thus, the current through R must be I[in]. Using a consistent sign convention, if the input current comes from a positive voltage source, then passes the inverting input at V=0, the output must have a negative potential, so V[out] = -I[in] R. We now combine the ladder network with the circuit in inset B. All but one of the grounds in the ladder network is connected via switches either to a real ground or to the virtual ground of the operational amplifier circuit. Here's a drawing. As far as the R/2R ladder is concerned, the operational amplifier circuit isn't even there. Either the points that were formerly grounded still are (the switches S1, S2, and S3 switched to the left) or they're connected to virtual ground (one or more of the switches switched to the right). Thus, the potentials we computed previously are unchanged. What happens to V[out] as the switches are switched? V[out] = -I[in] R, but I[in] comes from whichever branches of the resistive ladder are switched to the right. If all of the switches are to the left, no current comes to the inverting input of the amplifier, and output = 0 V. What if only S3 is to the right? We already know that V[2] = V[in]/4. This voltage drops across 2R to virtual common, so I[in] = V[in]/(4*2R) = V[in]/(8R). So V[out] = -V[in]/8. Now fill in the following table. A "0" for a switch means it is connected to physical ground, while a "1" means it is connected to virtual common on the amplifier. Clicking anywhere in the table to pop up a window showing the full set of values. │S1│S2│S3│ V[out] │ │0 │0 │0 │0 │ │0 │0 │1 │-V[in]/8 │ │0 │1 │0 │ │ │0 │1 │1 │ │ │1 │0 │0 │ │ │1 │0 │1 │ │ │1 │1 │0 │ │ │1 │1 │1 │ │ Voila! Straight binary coding of the switches gives a voltage proportional to that binary number! In fact, if (in this example) V[in]=-8 V, then V[out] is the binary number, expressed in volts. It is now clear how a straight binary DAC works -- one simply uses an R/2R network and an appropriate reference potential.
{"url":"http://www.asdlib.org/onlineArticles/elabware/Scheeline_ADC/ADC_DAC_ladder.html","timestamp":"2014-04-17T19:07:45Z","content_type":null,"content_length":"14396","record_id":"<urn:uuid:d3ac2283-2f10-4995-af2b-8aef2fcdf5bf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Milmay Math Tutor Find a Milmay Math Tutor ...The undergraduate course that I took focused mostly on techniques to solve basic ordinary differential equations. The two courses that I took at the graduate level placed a significant emphasis on proving the properties of various dynamical systems. I also worked as a teaching assistant for a C... 19 Subjects: including algebra 1, algebra 2, calculus, grammar ...As we get closer to the test, my schedule we become filled with students and taking on new students will be difficult. I also will be increasing my rate in late August. Students who get started now will be able to lock in my current rate. 26 Subjects: including prealgebra, reading, English, SAT math It's a great day to learn algebra! I am a certified high school math teacher here in Atlantic County with three years of experience teaching pre-algebra and algebra 1. I enjoy working with students 1-on-1 and ensuring that everyone I work with genuinely learns the material. 2 Subjects: including algebra 1, prealgebra ...First by ensuring a solid foundation in Basic Math and Algebra I, we will then work on "new" material as necessary. I will try to find new and intuitive ways to explore problems, ideas and relationships ... leading to HOW to think about solving the problem at hand. The goal will be to "solve" problems by understanding them, instead of just "doing the math" to get an answer. 9 Subjects: including trigonometry, algebra 1, algebra 2, geometry ...My area of expertise is in math, science, social studies, reading, writing, and speech and language therapy. Please feel free to contact me at anytime. Thank you Victoria D.I currently hold a bachelors degree in speech and language therapy. 15 Subjects: including prealgebra, algebra 1, writing, grammar Nearby Cities With Math Tutor Buena Vista Township, NJ Math Tutors Cologne, NJ Math Tutors Corbin City, NJ Math Tutors Dorchester, NJ Math Tutors Dorothy, NJ Math Tutors Elwood, NJ Math Tutors Franklinville, NJ Math Tutors Malaga, NJ Math Tutors Marmora Math Tutors Mauricetown Math Tutors Newtonville, NJ Math Tutors Norma Math Tutors Port Elizabeth Math Tutors Richland, NJ Math Tutors Tuckahoe, NJ Math Tutors
{"url":"http://www.purplemath.com/milmay_nj_math_tutors.php","timestamp":"2014-04-19T07:19:03Z","content_type":null,"content_length":"23575","record_id":"<urn:uuid:fcc56758-203d-4a0b-8de1-332f3dde5527>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching stacks to differential geometry students up vote 6 down vote favorite Does anyone have any experience teaching stacks over the category of manifolds to students whose background is, say, a semester-long course on manifolds? Does anyone know of any publicly available notes on the subject, preferably in English? [My French is limited to the knowledge of the alphabet :). I can read Russian.] I am aware of a paper by Behrend and Xu, Metzler's paper in the arxiv, and notes by Heinloth. Hepworth has a nice exposition of vector fields on stacks, but his papers are rather terse. Vistoli's notes on descent are quite nice, but are clearly aimed at algebraic geometers. And there differences between the categories of manifolds and schemes --- fiber products of manifolds are badly behaved, for one thing. The challenges in teachign such a course seem many. For one thing I don't know how to talk about stacks without getting into 2-category theory. And most differential geometers don't know much of 1-category theory. But I don't want to start with a crash course on category theory. stacks teaching dg.differential-geometry 3 One suggestion would be to limit yourself to orbifolds; then there are more resources available and teaching this to geometric topology students is not too difficult (in my experience). – Misha Jan 10 '13 at 18:03 Have looked at Weimin Chen's paper "A homotopy theory of orbispaces"? front.math.ucdavis.edu/0102.5020 – Liviu Nicolaescu Jan 10 '13 at 18:57 2 @Misha Thank you for the suggestion. But presenting orbifolds as topological spaces with extra structure kind of defeats the purpose of explaining how to think of them as stacks, doesn't it? – Eugene Lerman Jan 10 '13 at 19:23 @Liviu I had, when it first came out. I don't understand it. – Eugene Lerman Jan 10 '13 at 19:25 Hi Eugene. I have given some informal lectures about differentiable stacks to differential geometers a couple of times. I might be able to find some handwritten notes of mine from this. I also spend a good 100 pages or so giving a careful introduction to them in my thesis. (You can find a copy on my webpage: people.mpim-bonn.mpg.de/carchedi) – David Carchedi Jan 25 '13 at 0:50 add comment 1 Answer active oldest votes I had a good experience with Heinloth's notes. I tried to explain the two-categorical stuff in the example of the stack of principal $G$-bundles. For example, a nice way to understand 2-pull-backs is to calculate $G\cong *\times_{BG}*$ explicitly. And of course, orbifolds and gerbes, e.g. of $Spin^{c}$-reductions of a $Spin^{c}$-principal bundle a provide examples up vote 2 accessible to differential geometers. down vote add comment Not the answer you're looking for? Browse other questions tagged stacks teaching dg.differential-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/118548/teaching-stacks-to-differential-geometry-students","timestamp":"2014-04-21T10:17:14Z","content_type":null,"content_length":"56436","record_id":"<urn:uuid:d9c2a547-b2b9-4d41-9c9b-4e7d88311f98>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Rational number if n is even proof January 13th 2009, 09:07 AM Jason Bourne Rational number if n is even proof Prove that the number ( $\sqrt{5} + 1)^n + (\sqrt{5} - 1)^n , (n \in \mathbb{N})$ is rational if and only if n is even. (I think going one way if n is even then its clear that its rational, what about assuming that n is even then the number is rational? Is it inductive?) January 13th 2009, 09:37 AM You can use Newton's formula to develop $(\sqrt{5} + 1)^n$ and $(\sqrt{5} - 1)^n$ then study the 2 cases (n even and n odd). When n is even all the terms involving $\sqrt{5}$ disappear. January 13th 2009, 10:17 AM Jason Bourne Newton's formula? January 13th 2009, 10:42 AM Sorry I don't know how it is called in English I am talking about this formula $(a + b)^n = \sum_{k=0}^{n} \binom{n}{k}\: a^k \:b^{n-k}$ January 15th 2009, 11:19 AM Consider the sequence $a_0,a_1,a_2,a_3,...$ defined as: $\left\{ \begin{array}{c} a_0 = a_1 = 2 \\ a_{n+2} = 2a_{n+1} + 24a_n \text{ for }n\geq 0 \end{array} \right.$ Certaintly, the terms of the sequence $\{ a_n \}$ are integers. Furthermore, the solution to this recurrence relation is given by: $a_n = \left( 1 + \sqrt{5} \right)^n + \left( 1 - \sqrt{5} \right)^n$. If $n$ is even then $a_n = \left( 1 + \sqrt{5} \right)^n + \left( \sqrt{5} - 1 \right)^n \in \mathbb{N}$ Consider the Fibonacci sequence $f_0,f_1,f_2,...$ it satifies $f_n = \frac{1}{\sqrt{5}} \left[ \left( \frac{1+\sqrt{5}}{2} \right)^n - \left( \frac{1-\sqrt{5}}{2} \right)^n \right]$ Define $b_n = \left( 1+\sqrt{5}\right)^n + \left( \sqrt{5}-1\right)^n$. If $n$ is odd then $\frac{b_n}{2^n\sqrt{5}} = \frac{1}{\sqrt{5}} \left[ \left( \frac{1+\sqrt{5}}{2} \right)^n - \left( \frac{1-\sqrt{5}}{2} \right)^n \right] = f_n$. Therefore for odd $n$ we have $b_n = \sqrt{5}\cdot 2^n \cdot f_not \in \mathbb{Q}$.
{"url":"http://mathhelpforum.com/number-theory/68017-rational-number-if-n-even-proof-print.html","timestamp":"2014-04-24T17:21:02Z","content_type":null,"content_length":"10252","record_id":"<urn:uuid:0d1542af-7a67-4c1a-bf36-34ad00719630>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Program help with counting even and odd integers in input value. July 3rd, 2010, 07:10 PM Program help with counting even and odd integers in input value. Code Java: import java.util.Scanner; public class Integer { // Read a value from the user and counts the number of odd and even integers in the value // ------------------------------------------------------------------- public static void main (String[ ] args) { int integer, oddinteger, eveninteger; Scanner scan = new Scanner (System.in); System.out.print (&#8220;Enter an integer: &#8221;) What I need the program to do is to read the value input by the user and identify the number of even and odd integers in the value. For example, if the user inputs 403 I need the program to output : Odd Integers: 2 Even Integers: 1. Does that make sense? Please help me! :) Thank you in advance. I'm fairly new at this. July 3rd, 2010, 07:46 PM Re: Program help with counting even and odd integers in input value. Do you have the algorithm yet for extracting a digit from a number? You need to figure out how to get 3 digits out of 403. This will involve modulus and division. Then look at the digits one by one to determine if they are odd or even. July 6th, 2010, 07:02 PM Re: Program help with counting even and odd integers in input value. To be exact, the code to do this would look like this: Code : int num1 = 6; System.out.print("Even Number"); System.out.print("Odd Number"); Doing "int % 2" will give you either the number 0 or 1. If the result is 0, it means it is even. If the result is 1, it means it is odd. Basically, mod ( % ) will grab the two number, divide them, and return the remainder. If you divide 4 by 2, you get 2 with a remainder of 0, so it returns 0. If you divide 3 by 2, you get 1 with a remainder of 1, so it returns 1. You can do the same thing if you needed to find out if a number was divisible by something. For instance, if you needed to know if 40 was divisible by 10, you would simply code "40%10" and it would return 0, which would indicate that it is divisible by 0. If you did "45%10" it would return 5, because 5 is the remainder of 45/10. I dont think I can explain it any more detailed then that. Once you get used to using mod, you will find out how extremely useful it is.
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/4703-program-help-counting-even-odd-integers-input-value-printingthethread.html","timestamp":"2014-04-20T01:43:25Z","content_type":null,"content_length":"9254","record_id":"<urn:uuid:d06995c1-8197-4708-a752-685fb1dcd156>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability axioms From Wikipedia, the free encyclopedia In Kolmogorov's probability theory, the probability P of some event E, denoted $P(E)$, is usually defined in such a way that P satisfies the Kolmogorov axioms, named after the famous Russian mathematician Andrey Kolmogorov, which are described below. These assumptions can be summarised as: Let (Ω, F, P) be a measure space with P(Ω)=1. Then (Ω, F, P) is a probability space, with sample space Ω, event space F and probability measure P. An alternative approach to formalising probability, favoured by some Bayesians, is given by Cox's theorem. First axiom The probability of an event is a non-negative real number: $P(E)\in\mathbb{R}, P(E)\geq 0 \qquad \forall E\in F$ where $F$ is the event space. In particular, $P(E)$ is always finite, in contrast with more general measure theory. Theories which assign negative probability relax the first axiom. Second axiom This is the assumption of unit measure: that the probability that some elementary event in the entire sample space will occur is 1. More specifically, there are no elementary events outside the sample space. $P(\Omega) = 1.$ This is often overlooked in some mistaken probability calculations; if you cannot precisely define the whole sample space, then the probability of any subset cannot be defined either. Third axiom This is the assumption of σ-additivity: Any countable sequence of disjoint (synonymous with mutually exclusive) events $E_1, E_2, ...$ satisfies $P(E_1 \cup E_2 \cup \cdots) = \sum_{i=1}^\infty P(E_i).$ Some authors consider merely finitely additive probability spaces, in which case one just needs an algebra of sets, rather than a σ-algebra. Quasiprobability distributions in general relax the third From the Kolmogorov axioms, one can deduce other useful rules for calculating probabilities. $\quad\text{if}\quad A\subseteq B\quad\text{then}\quad P(A)\leq P(B).$ The probability of the empty set The numeric bound It immediately follows from the monotonicity property that $0\leq P(E)\leq 1\qquad \text{∀} E\in F.$ The proofs of these properties are both interesting and insightful. They illustrate the power of the third axiom, and its interaction with the remaining two axioms. When studying axiomatic probability theory, many deep consequences follow from merely these three axioms. In order to verify the monotonicity property, we set $E_1=A$ and $E_2=B\backslash A$, where $\quad A\subseteq B \text { and } E_i=\emptyset$ for $i\geq 3$. It is easy to see that the sets $E_i$ are pairwise disjoint and $E_1\cup E_2\cup\ldots=B$. Hence, we obtain from the third axiom that $P(A)+P(B\backslash A)+\sum_{i=3}^\infty P(\emptyset)=P(B).$ Since the left-hand side of this equation is a series of non-negative numbers, and that it converges to $P(B)$ which is finite, we obtain both $P(A)\leq P(B)$ and $P(\emptyset)=0$. The second part of the statement is seen by contradiction: if $P(\emptyset)=a$ then the left hand side is not less than $\sum_{i=3}^\infty P(E_i)=\sum_{i=3}^\infty P(\emptyset)=\sum_{i=3}^\infty a = \begin{cases} 0 & \text{if } a=0, \\ \infty & \text{if } a>0. \end{cases}$ If $a>0$ then we obtain a contradiction, because the sum does not exceed $P(B)$ which is finite. Thus, $a=0$. We have shown as a byproduct of the proof of monotonicity that $P(\emptyset)=0$. More consequences Another important property is: $P(A \cup B) = P(A) + P(B) - P(A \cap B).$ This is called the addition law of probability, or the sum rule. That is, the probability that A or B will happen is the sum of the probabilities that A will happen and that B will happen, minus the probability that both A and B will happen. This can be extended to the inclusion-exclusion principle. $P(\Omega\setminus E) = 1 - P(E)$ That is, the probability that any event will not happen is 1 minus the probability that it will. Simple example: coin toss Consider a single coin-toss, and assume that the coin will either land heads (H) or tails (T) (but not both). No assumption is made as to whether the coin is fair. We may define: $\Omega = \{H,T\}$ $F = \{\emptyset, \{H\}, \{T\}, \{H,T\}\}$ Kolmogorov's axioms imply that: $P(\emptyset) = 0$ The probability of neither heads nor tails, is 0. $P(\{H,T\}) = 1$ The probability of either heads or tails, is 1. $P(\{H\}) + P(\{T\}) = 1$ The sum of the probability of heads and the probability of tails, is 1. See also This article includes a list of references, related reading or external links, but its sources remain unclear because it lacks inline citations. (November 2010) Further reading External links
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Probability_axioms","timestamp":"2014-04-16T19:06:05Z","content_type":null,"content_length":"84751","record_id":"<urn:uuid:94770a8c-1f55-4377-80c0-30933623052b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Why do inner products require conjugation? up vote 11 down vote favorite For Hermitian matrices and operators, the most "natural" inner product is $f^H \cdot g$ or $\int f^* g\; dx$. A similar situation holds interpreting Fourier transforms as the inner product of functions with complex exponential functions. My question is, why is this the most "natural" choice? Is there something deeper to this choice (other than making $< f,f>$ a norm) related to some kind of duality of vector spaces? 2 It has something to do with the fact that the Hom functor is contravariant in the first variable. Baez's Higher-Dimensional Algebra II: 2-Hilbert spaces (arxiv.org/abs/q-alg/9609018) might be a good place to start reading, but I'm not sure it's a complete explanation. – Qiaochu Yuan Jul 26 '11 at 0:28 1 I always wondered about this myself... – Igor Rivin Jul 26 '11 at 0:32 1 This kind of inner products makes orthogonality a symmetric relation. If you have a bilinear form B on some vector space V, call two vectors v and w orthogonal if B(v,w) = 0. One would like this to be a symmetric relation: if B(v,w) = 0 then B(w,v) = 0. One can show (see Grove, Classical Groups and Geometric Algebra, Prop. 2.7) that having orthogonality w.r.t. B being symmetric is equivalent to B being a symmetric or alternating bilinear form (which lead to orthogonal and symplectic groups, respectively). [continued...] – KConrad Jul 26 '11 at 4:26 On a complex vector space, with sesquilinear forms B in place of bilinear forms, one can ask when the orthogonality relation B(v,w) = 0 is symmetric in v and w. If B is non-degenerate in a 5 suitable sense, then orthgonality w.r.t. B is a symmetric relation if and only if B is a scalar multiple of a Hermitian form. This, to me, explains in an interesting why why symmetric bilinear forms, alternating bilinear forms, and Hermitian sesquilinear forms are special: they are essentially the interesting ways of making a geometry in which perpendicularity is a symmetric relationship. – KConrad Jul 26 '11 at 4:28 add comment 2 Answers active oldest votes Bi- (or sesqui-) linear forms are nicer if they're nondegenerate. But they can always be restricted to subspaces. So, they're even nicer if they're nondegenerate on all subspaces. For symmetric forms on ${\mathbb R}^n$, that forces definiteness (positive or negative). The usual bilinear form on ${\mathbb C}^n$ doesn't have this inherited-nondegeneracy property, but the Hermitian one does. up vote 8 down vote accepted Anyway that's a practical issue, rather than a naturality statement. One way to decide it is natural is to embed $M_n({\mathbb C})$ into $M_{2n}({\mathbb R})$ by using the obvious $\ mathbb R$-basis of ${\mathbb C}^n$. This fills the $2n\times 2n$ matrix with lots of $2\times 2$ real matrices whose transposes correspond to complex conjugate. Transposes come up in dot products if you notice that $\langle v,w\rangle$ is the unique entry in the $1\times 1$ matrix $v^T w$. 2 Looking up the definition of a sesquilinear form led to its Wikipedia article, which gives a nice naturality argument for both bilinear and sesquilinear forms. – Victor Liu Jul 26 '11 at 3:49 Maybe I am wrong, but I think nondegeneracy does not imply definiteness. A direct sum of a positive and a negative definite bilinear form is still nondegenerate (using the def from wikipedia) and it is not definite (if both forms were nontrivial). – HenrikRüping Jul 26 '11 at 15:35 2 Indefiniteness of the space implies that some subspace will be degenerate. – Allen Knutson Jul 26 '11 at 23:12 add comment Not all inner products do in fact require conjugation. There are 8 elementary types of inner products on modules over the associative real division algebras: $\mathbb{R}$, $\mathbb{C}$ and $ \mathbb{H}$. Over $\mathbb{R}$ and $\mathbb{C}$ you can consider bilinear inner products: symmetric and symplectic, whereas over $\mathbb{C}$ and $\mathbb{H}$ you can consider sesquilinear inner products: hermitian and skew-hermitian. (Over $\mathbb{C}$ the distinction between hermitian and skew-hermitian is just a factor of $i$, but over $\mathbb{H}$ it is really a different type of structure.) You can associate a classical group to each such type of inner product, corresponding to the transformations which leave the inner product invariant and having unit 1. $\mathrm{SO}$ for symmetric bilinear over $\mathbb{R}$ and $\mathbb{C}$ 2. $\mathrm{Sp}$ for skew-symmetric bilinear over $\mathbb{R}$ and $\mathbb{C}$ and also hermitian over $\mathbb{H}$ 3. $\mathrm{SU}$ for hermitian (and skew-hermitian) over $\mathbb{C}$ up vote 4. $\mathrm{SO}^\ast$ for skew-hermitian over $\mathbb{H}$ 6 down vote Of these, only the hermitian and skew-hermitian require conjugation, but the skew-hermitian over $\mathbb{H}$ is not positive definite, hence does not give rise to a norm. The existence of the inner product says that as a representation of corresponding symmetry group $G$, (one of $\mathrm{SO}$, $\mathrm{Sp}$, $\mathrm{SU}$, $\mathrm{SO}^\ast$) the module $V$ is isomorphic either to the dual module $V^\ast$ (in the case of $\mathrm{SO}$, $\mathrm{Sp}$ and $\mathrm{SO}^\ast$) or to the conjugate dual module $\overline{V}^\ast$ (in the case of $\ So one possible "high level" explanation (although it feels more like a rephrasing) of the fact that the hermitian inner product on a complex vector space requires conjugation is that the defining representation of the (special) unitary group is isomorphic to its conjugate dual and not to its dual. And by the way, all 8 classes of inner products appear as invariant inner products on the representations of Clifford algebras. This is explained in the book Spinors and Calibrations by Reese Harvey. – José Figueroa-O'Farrill Jul 26 '11 at 1:06 I've never seen the notation $SO^*$. And it is a classical group?! – Mariano Suárez-Alvarez♦ Jul 26 '11 at 1:15 Your answer is a bit above my level of understanding (I come from an engineering background, but dabble in mathematical physics). It seems that your enumeration of various inner products is motivated by symmetries they possess (e.g. hermiticity), and perhaps my question is then: why are these symmetries the most natural? Maybe there is just no answer to this. I come about this question looking at inner products for quasi-normal modes (in the context of open systems in optics or gravitation) which extend self-adjointness to open systems by defining a "better" inner product. – Victor Liu Jul 26 '11 at 3:58 @Mariano: I'm not sure if $\mathrm{SO}^\ast$ is standard notation, but it is used in some books on Lie groups, e.g., Rossmann's. It is the defined as the group of $\mathbb{H}$-linear transformations in $\mathbb{H}^n$, thought of as a right $\mathbb{H}$-module, which preserves the skew-hermitian inner product given by $z^\ast j w$ for $z,w \in \mathbb{H}^n$. – José Figueroa-O'Farrill Jul 26 '11 at 9:59 add comment Not the answer you're looking for? Browse other questions tagged inner-product or ask your own question.
{"url":"http://mathoverflow.net/questions/71277/why-do-inner-products-require-conjugation","timestamp":"2014-04-18T01:08:29Z","content_type":null,"content_length":"70378","record_id":"<urn:uuid:4893505c-4486-4b1d-87c1-b972cdae1260>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Marginal Effects Hi there, I use a probit model and my dependent variable has two categories. My question: Is there an easy way to calculate the marginal effects of my independent variables? If you have an solution, please describe it as simple as possbile, since I'm not so familiar with all this technical stuff PS: I use Eviews 6 Re: Marginal Effects The manual has a good discussion of this. Look under "Procedures for Binary Equations" in Chapter 30. Re: Marginal Effects Thanks for your answer! I’ve looked it up in the handbook. Just to get sure: 1) I have my estimation output 2) Then I click on “Proc”, Forecast 3) There I click on “Index – where Prob = 1-F(-Index)” 4) Then “Proc” and “Make Residual Series”, let’s say name xy 5) Afterwards I multiply each single independent variable with @dnorm(-xy) Is that right? I’m so sorry for that stupid article, but as I’ve mentioned it, I need it simple Thanks again, Re: Marginal Effects In step 5 you multiply by the coefficient, not the independent variable. Re: Marginal Effects Just to get that right: let's say one of my exogenous variables is "age" an its value in my estimation output is 0.5 So I mulitply 0.5*@dnorm(-xy) and the result is the marginal effect. Is that correct? Thanks again! Re: Marginal Effects Re: Marginal Effects Once again thanks for your patience!! Re: Marginal Effects I am also working on a probit model, and I would like to see the marginal effect of a categorical variable on the dependent variable, in particular, the marginal effect of higher education on the probability of being poor. I checked the "Procedures for Binary Equations" which QMS Gareth suggested but for some reason, the advice on this part couldn't work in Eviews. In other words, when I click Proc, I don't see anything like "Forecast (Fitted Probability/Index)" or "Make Residual Series" button. By the way, I am using Eviews 6. So I couldn't calculate with that way and I am sort of stuck. I tried another way to calculate it but it didn't give expected results. I mean the results are by far higher than expected. Let me share my code here, and I would appreciate if somebody advises me to overcome this problem. Thanks in advance. This is my model; probit(h) poor4e c reg1 reg2 reg3 reg5 eth2 eth3 eth4 dem2 n_adult n_child ed12 edu5 This is what I tried to calculate the marginal affect of edu5, by keeping all other variables constant at their mean values; table (2,4)education education (1,1)="!z_highedu" education (1,2)="!p_highedu" education (2,1)=!z_highedu education (2,2)=!p_highedu education (1,3)="!z_nonhighedu" education (1,4)="!p_nonhighedu" education (2,3)=!z_nonhighedu education (2,4)=!p_nonhighedu show education By comparing !z_highedu and !z_nonhighedu or !p_highedu and !p_nonhighedu, I was hoping to see the marginal effect, but it is way higher than expected Re: Marginal Effects I am using Probit estimation and want to get marginal effects. previous posting suggest using command of @dnorm(-xy) and then multiplying with the estimated coefficients. I cannot see this series @dnorm(-xy) in the working file, how to get this series and even If I have it then how to get a single value of mariginal effect for each estimated coefficeint? look forward to someone's suggestions, thanks. Re: Marginal Effects I would also like to know how to get one value for marginal effects. I've multiplied this @dnorm(-xy) with my coefficient for a certain variable; but now i get a whole column of values. How do I turn it into one value for the marginal effect of this variabele? Re: Marginal Effects I am trying to estimate marginal effects for a logit model. I have followed the instructions of several prior blogs: - estimate the logit - forecast the index and save as indexF - create scalar: scalar xb = @mean(indexF) (this value is 0.4166) - create scalar: scalar l_xb = @dlogistic(-xb) (this value is 0.239) The problem is that some of the coefficient estimates in the logistic regression are quite large, eg 39.8 and 25.2. If I multiply these coefficient estimates times 0.239 I get a number well above 1 which doesn't make sense. Thanks for your help. Re: Marginal Effects O-kay. I have had a follow on thought. The values I get from those calculations are the change in probability for a 1 unit change in the explanatory variable. I get the large coefficients for variables with relatively small standard deviations. So, is it accurate to take the standard deviation for an explanatory variable and multiply it by the value I get from the above calculations and conclude that this is the effect of a one standard deviation change in x? eg, the 39.8 coefficient estimate I reference above is for a variable with a standard deviation of 0.03. So, (39.8 * 0.23) * 0.03 = 0.275. This tells me a one st dev increase in this variable is associated with a higher probability of 27.5% Is this a correct interpretation? Many thanks. Re: Marginal Effects Those marginal effects are derivatives, not unit change differences.
{"url":"http://forums.eviews.com/viewtopic.php?f=4&t=659","timestamp":"2014-04-19T04:20:07Z","content_type":null,"content_length":"34680","record_id":"<urn:uuid:9f014932-389c-458b-9414-8781d20c6e0b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
300 grams equals how many pounds You asked: 300 grams equals how many pounds Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/300_grams_equals_how_many_pounds","timestamp":"2014-04-20T03:24:09Z","content_type":null,"content_length":"57352","record_id":"<urn:uuid:10304c56-1b8d-41f9-a6a0-771a67e770e2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Calibration help required [Date Prev][Date Next][Thread Prev][Thread Next] - [Date Index][Thread Index][Author Index] Calibration help required • Subject: Calibration help required • From: Clive Wallis <clivew@xxxxxxxxxxxx> • Date: Sun, 10 May 1998 12:49:39 +0100 I wonder if someone coluld help. am looking fo a mathmetical procedure to adjust the calibrations of the OSCAR-11 magnetometer so that the mean square of the error between the measured field and the theoretical field is a minimum. I would really like to be able to do this on a spread sheet. I have a table of about 3000 values of the theoretical field BT, and corresponding measurements. The measurements consist of two values, N and BH. The relationship between the measured field BM and the measurements is - BM = SQRT(BZ*BZ + I*BH*BH) BZ = K*N + J I have initial values of I, J, and K which give a fairly good fit. I believe that there may be an iterative procedure which enables optimum values of I, J,and K to be calculated. I think that the approach may be to obtain some linear equations by partial differentation, and solve these by substituting statistical values obtained from the table of data. Any help would be greatly appreciated. Clive G3CWV Hitchin, North Hertfordshire, UK.
{"url":"http://www.amsat.org/amsat/archive/amsat-bb/199805/msg00124.html","timestamp":"2014-04-20T03:19:19Z","content_type":null,"content_length":"3766","record_id":"<urn:uuid:50372950-41b7-424c-868c-66cadf4a710c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Citizendium - building a quality free general knowledge encyclopedia. Click here to join and contribute—free Many thanks December donors; special to Darren Duncan. January donations open; need minimum total $100. Let's exceed that. Donate here. By donating you gift yourself and CZ. Galileo's paradox From Citizendium, the Citizens' Compendium Galileo's paradox is a demonstration of one of the surprising properties of infinite sets. In his final scientific work, the Two New Sciences, Galileo made two apparently contradictory statements about the positive whole numbers. First, some numbers are perfect squares, while others are not; therefore, all the numbers, including both squares and non-squares, must be more numerous than just the squares. And yet, for every square there is exactly one number that is its square root, and for every number there is exactly one square; hence, there cannot be more of one than of the other. (This is an early use, though not the first, of a proof by one-to-one correspondence of infinite sets.) Galileo concluded that the ideas of less, equal, and greater applied only to finite sets, and did not make sense when applied to infinite sets. In the nineteenth century Cantor, using the same methods, showed that while Galileo's result was correct as applied to the whole numbers and even the rational numbers, the general conclusion did not follow: some infinite sets are larger than others, in that they cannot be put into one-to-one correspondence.
{"url":"http://en.citizendium.org/wiki/Galileo's_paradox","timestamp":"2014-04-21T02:01:09Z","content_type":null,"content_length":"25623","record_id":"<urn:uuid:127678ba-0c5d-4101-995c-03de07741be4>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Lab 1 Math 341 Modern Geometry Computer Lab I 1. Construct an equilateral triangle that stays equilateral no matter how you move the points around. (To do this, start with a segment and use the construction that Euclid describes in his Prop. 1.) Likewise, construct a square that remains a square. (You'll need to think about how to do this. You'll want to start with another segment, and construct a square on the segment. One useful command for this is "Construct Perpendicular Line," which is in the construct menu. ) Save your sketch as “sym1”. 2. For each of these, see how many symmetries you can find. Construct a new arbitrary line, and reflect the figures over it. To do this, select the line, then go to "Mark Mirror" in the Transform menu. Then, select the things you want to reflect, go to the Transform menu, and choose "reflect." After you have reflected the figures over the line, you can move the line around, and see what reflection symmetries you can find. When you find one, the figure will coincide with its reflection. Also, create a new point and a new angle (made from two line segments), and rotate the figures about the point by the angle, using the transform menu. Again, see if you can find symmetries by making the figure coincide with its rotation. 3. Now go back to your saved sketch sym1. For each symmetry that you found, construct the line/point of symmetry directly from the original figure, and reflect/rotate the figure about that line/ point. If you’ve constructed it correctly, the image should stay on top of the original figure no matter how you move it. 4. Finally, for each symmetry that you found, try to see what kinds of triangles or quadrilaterals have that symmetry. You can do this by starting with a line or point of symmetry and a point or segment of a figure, reflecting/rotating it about the point or line, and seeing how much of the figure is determined. What happens if you require two or more symmetries? Homework: Write up a lab report explaining your results. You can do this by yourself or in pairs. Your report should include: 1) A description of each of the symmetries that you found, including explicit instructions for how to find/construct the points, lines, and angles of symmetry; and 2) For each set of symmetries on your list, a complete list of all of the types of triangles and quadrilaterals that have that symmetry.
{"url":"http://www.unco.edu/nhs/mathsci/ClassSites/hoppercourse/math341/lab1.html","timestamp":"2014-04-19T13:12:06Z","content_type":null,"content_length":"3451","record_id":"<urn:uuid:a5d455ae-fa78-47cb-bde6-788a2fec076b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
How many milliseconds is in 10000000000 years? You asked: How many milliseconds is in 10000000000 years? 315,569,520,000,000,000,000.0 milliseconds Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/how_many_milliseconds_is_in_10000000000_years","timestamp":"2014-04-16T04:25:12Z","content_type":null,"content_length":"59514","record_id":"<urn:uuid:1b3240bd-eb85-4d12-ab7b-afeb87a344a9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
How Many Feet in a Mile and miles are widely used units of length. There are different types of these units of length, and the most common ones are the international foot and the statue miles. In order you get the picture clearly, let’s talk about them a bit. It will help you to understand how many feet there are in a mile. History of the Foot According to some historians, the first foot was used in Sumer. Its definition was found in one of the statues, dated to about 2600 BC. Later feet were adapted by the Egyptians, and then by the Greeks and the Romans. So it is an ancient unit of length. Now there are various types of feet in the world, but most of them are obsolete. In 1959, the USA and the Commonwealth countries stated that the length of one yard is 0.9144 meters. As the yard comprises three feet, it is easy to calculate that the foot is equal to 0.3048 meters. This definition has become official for almost every country, and if you come across a foot, you can be sure that this is the international foot, which is 0.3048 meters. Types of Miles Miles also have various definitions. First of all, you should understand the difference between land and nautical miles. The land mile (also called the statue mile, American mile, Imperial mile, British mile, European mile, or English mile) is the most widely spread variant. It was defined in 1592 by the Act of Parliament as equal to 1760 yards, which is 5280 feet. The international nautical mile is longer. It has 6076 feet. You should also remember that in some countries they use other types of miles, like the Roman mile (5000 Roman feet), the metric mile (1500 or 1600 meters), the Scots mile (5920 feet), the Russian milya (7468 meters), the Irish mile (6720 feet), etc. Thus, the most common answer to the question about how many feet in a mile is 5280 (about 1609 meters), if you mean international feet and statue miles. How to Convert Feet to Miles In order to convert feet to miles, you should divide the number of miles by 5280. A calculator or the table below will help you to calculate quickly and without mistakes.
{"url":"http://1howmany.com/how-many-feet-in-a-mile","timestamp":"2014-04-18T15:38:50Z","content_type":null,"content_length":"37863","record_id":"<urn:uuid:03a6a708-5551-4e20-9ae0-66c920a0dd50>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Haymarket Math Tutor Find a Haymarket Math Tutor ...I received classical training at Brigham Young University and there obtained a minor in music. I am a cellist who has played for 18 years. I have experience teaching first and second-year cello students and tutoring fourth-year students. 19 Subjects: including prealgebra, reading, Spanish, English ...She also felt comfortable with taking her AP exam at the end of the year. I love education, and I think it is essential to being a success in this country. This is true even of elementary and middle school education, which lays the foundation of a persons' high school, collegiate, and graduate education. 20 Subjects: including precalculus, algebra 1, prealgebra, reading Dear Parents and Students, My name is Anwar. I am a certified teacher with many years of teaching and tutoring experience in math and science at college and secondary level. As an educator who instructs and mentors others, the most rewarding aspect of my teaching career is the ability to guide a struggling student by helping him/her solve problems and grasp new concepts. 10 Subjects: including prealgebra, algebra 1, algebra 2, geometry ...I received an A in both my Advanced grammar and composition course and my creative writing course. I have a B.S. in English Education and currently work as an ESL teacher. I currently working as an ESL teacher. 16 Subjects: including prealgebra, Spanish, reading, English As a certified Elementary School Teacher and a certified Middle School Mathematics teacher, I am familiar with the content that students are expected to learn in Virginia. I have ten years of experience and have taken pride in finding ways to explain Math to people of all ages in a way that they wi... 11 Subjects: including discrete math, prealgebra, SAT math, writing
{"url":"http://www.purplemath.com/Haymarket_Math_tutors.php","timestamp":"2014-04-18T08:57:40Z","content_type":null,"content_length":"23626","record_id":"<urn:uuid:49db5b69-d376-4132-aaf6-55ea5411534b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 20 Part 1: Home and School Investigation Send the Letter to Family (PDF file) home with each child. Once all the children have brought in their numbers from newspapers or magazines, put the children in groups of two. Ask them to talk about the two numbers and about what is alike and different about them. For example, they might talk about the fact that both numbers have three digits and that the digit in one of the places in both numbers is the same. They might also talk about how the digit in one of the places in both numbers is different. After the children have done this, have some of the partners share what they talked about with the whole class. Part 2: Be an Investigator A good time to do this investigation is after Lesson 8 on ordering three-digit numbers. Introducing the Investigation Put the digits 1, 7, and 9 on the board. Ask the children for different ways to write a three-digit number using those digits. Write two or three of their suggestions on the board. Show the children the worksheet and tell them that they are going to find as many different ways as they can to use the digits 2, 3, and 8 to write a 3-digit number. Tell them that they can use each digit only one time in a number. Tell them when they are finished to write the numbers in order from least to greatest. Put the children in groups of two for the investigation. Doing the Investigation This investigation provides an excellent opportunity for children to practice the strategy of making an organized list. For example, they could list all the three-digit numbers that have 2 in the hundreds place, then those that have 3 in the hundreds place, and finally those that have 8 in the hundreds place. Have the children share their results when they are finished. Numbers in order from least to greatest: 238, 283, 328, 382, 823, 832 Extending the Investigation Have the children do the same investigation with a different set of three digits. For more of a challenge, give them a set of four digits and have them make as many three-digit numbers as they can.
{"url":"http://www.eduplace.com/math/mw/minv/2/minv_2c20.html","timestamp":"2014-04-16T10:47:46Z","content_type":null,"content_length":"5174","record_id":"<urn:uuid:41371eb4-4baf-4f44-8424-ef4354561c9c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Professors: Askew (Chair), Cole, Érdi, McDowell, Tobochnik The physics curriculum at Kalamazoo College provides preparation for the potential physicist as well as a solid background for students in the other sciences. A student majoring in physics can pursue further study in physics, engineering, computer science, astronomy, medical physics, or environmental science. Other opportunities include teaching at the high school level and working in a business that involves modern technology, and other careers such as finance, patent law, and technical editing. Students interested in majoring in one of the physical sciences should plan to take CHEM 110, MATH 112-113, and PHYS 150 during the first two quarters of the first year. Students with an AP score of 4 or 5 on the Physics C-Mechanics exam will also be granted credit in PHYS 150 and should begin their sequence with PHYS 152. Students with the same score on the Physics C-E&M exam will also be granted credit in PHYS 152 and should begin their sequence with PHYS 220. Students may also receive credit for PHYS 152 by receiving a 5,6, or 7 on the IB Physics HL exam. Students planning on a major in physics should achieve at least "B" level academic work in the department by the time they complete PHY 220. Students interested in engineering should consider the combined curriculum in engineering. This typically follows the program of the physics major during the first three years. (See the 3/2 Engineering Program description.) Requirements for the Major in Physics Number of Units Eight courses in physics, numbered 150 and higher, with a minimum grade of C- are required for the major. A SIP in physics is not required for the major and, if completed, does not count toward the eight courses. A maximum of one AP, IB, dual enrollment, transfer, or study abroad credit may be counted toward the eight courses. Any number of required cognates may be met with AP, IB, dual enrollment credit, or local placement exam results. Departmental approval is required for all use of AP, IB, dual enrollment credit, and transfer credit toward major requirements. Required Courses PHYS 150, 152, Introductory Physics I and II, with Lab PHYS 220 Intro to Relativity and Quantum Physics with Lab PHYS 340 Classical Dynamics with Lab PHYS 360 Thermal Physics with Lab PHYS 370 Electronics and Electromagnetism with Lab PHYS 380 or PHYS 410, Semiconductors and Magnetism with Lab or Advanced Electricity and Magnetism Required Cognates MATH 112, 113, and 214 Calculus I, II, and III MATH 240 Linear Algebra and Vectors MATH 280 Differential Equations and Numerical Methods All cognates in math must be at C- or above. Successful completion of the major requires taking a departmental comprehensive exam, normally offered in late January of the senior year. The Advanced Physics GRE exam may be used in place of the locally administered departmental exam. A least one course in Computer Science, one course in Complex Systems, and MATH 310, Complex and Vector Variables, are recommended for all students in the major. Students planning on graduate study in Physics, Applied Physics, or Electrical Engineering should take both PHYS 380 and 410, and PHYS 420, Quantum Mechanics. Students interested in further study in environmental engineering or related programs should take CHEM 110 and 120, and consider additional coursework in chemistry and biology. Students interested in biological physics or neuroscience should explore the concentrations available in those Requirements for the Minor in Physics Number of Units Six units, exclusive of lab credit, in Physics are required, with a minimum grade of C-. Required Courses PHYS 150, 152 Introductory Physics I, II with Lab PHYS 220 Intro to Relativity and Quantum Physics with Lab Three additional physics courses, two at the 200 level or above and at least one at the 300 level or above. Students may not major in 3/2 engineering and minor in physics. Physics courses PHYS102AstronomyStudy of modern astronomy beyond the solar system: stars, galaxies, pulsars, quasars, black holes, and cosmology. Emphasis on fundamental physics and its application to understanding the structure and evolution of astronomical objects. PHYS105Energy and the EnvironmentApplication of scientific concepts and analyses to the study of the production, conversion, and consumption of energy, and an understanding of the associated environmental and societal implications. Designed primarily for students not majoring in the physical sciences; especially appropriate for those in the environmental studies concentration. PHYS150Introductory Physics I with LabConceptual and practical study of the basic conservation laws (momentum, energy, and angular momentum) and the Newtonian world view. Prerequisite: MATH-111 or MATH-112 All course prerequisites must be met with a minimum grade of C-. PHYS152Introductory Physics II with LabStudy of the fundamental and practical concepts associated with electric and magnetic fields and their unification. Prerequisite: PHYS-150 and MATH-111 or MATH-112 All course prerequisites must be met with a minimum grade of C-. PHYS205Applications of Physics in the BiosciencesHow can we observe nano-scale biological systems? How does the flexibility of a molecule contribute to its biological function? How can we make sense of vast amounts of complex and sometimes "messy" biological data? This course is an introduction to the advantages and limitations of using physical techniques and models to address biological questions. We will focus on molecular-scale systems and dynamics, with topics to include optics and microscopy, physical properties of biomolecules, and modeling dynamic molecules and systems. Current biophysical research and interdisciplinary communication skills will be emphasized through periodic discussion of articles from the primary literature.Prerequisite: BIOL-112 and PHYS-150 or Instructor Permission All course prerequisites must be met with a minimum grade of C-. PHYS210Nuclear and Medical Physics with LabEmphasis on application of physics to medicine, focusing on radioactivity, radiation therapy, and diagnostic and imaging techniques.Prerequisite: PHYS-152 All course prerequisites must be met with a minimum grade of C-. PHYS/IDSY215Introduction to Complex SystemsStudy of how collective behavior emerges from the interaction between a system's parts and its environment. Model systems from the natural sciences and social sciences will be used as examples. Both historical and contemporary approaches will be discussed. PHYS220Introduction to Relativity and Quantum Physics with LabStudy of light, special relativity, and quantum physics with applications. Prerequisite: PHYS-152 and MATH-113. (MATH-214 & 240 recommended.) All course prerequisites must be met with a minimum grade of C-. PHYS/COMP255Computer Programming and SimulationComputer modeling of physical phenomena. Programming skills will be developed in the context of doing physics. Topics include numerical integration of Newton's equations, cellular automata, and random walks, including Monte Carlo methods.Prerequisite: PHYS-150 All course prerequisites must be met with a minimum grade of C-. PHYS/MATH270Nonlinear Dynamics and ChaosDynamical systems are mathematical objects used to model phenomena of natural and social phenomena whose state changes over time. Nonlinear dynamical systems are able to show complicated temporal, spatial, and spatiotemporal behavior. They include oscillatory and chaotic behaviors and spatial structures including fractals. Students will learn the basic mathematical concepts and methods used to describe dynamical systems. Applications will cover many scientific disciplines, including physics, chemistry, biology, economics, and other social sciences. Appropriate for Math or Physics Majors. Either MATH 305 or this course, but not both, may be counted towards the major in mathematics.Prerequisite: MATH-113 All course prerequisites must be met with a minimum grade of C-. PHYS340Classical Dynamics with LabStudy of classical dynamics emphasizing physical reasoning and problem solving. The Newtonian, Lagrangian, and Hamiltonian formulations are discussed, and applications are made to planetary motion, oscillations, stability, accelerating reference frames, and rigid body motion. Prerequisite: PHYS-152 and MATH-280 All course prerequisites must be met with a minimum grade of C-. PHYS360Thermal Physics with LabIntroduction to thermal physics with emphasis on a statistical approach to the treatment of thermodynamic properties of bulk material. Prerequisite: PHYS-220. (MATH-280 recommended.) All course prerequisites must be met with a minimum grade of C-. PHYS370Electronics and Electromagnetism with LabBasic concepts of analog and digital electronics are taught along with intermediate level electrostatics and electrodynamics. Mathematical topics include introductory vector calculus and field theory. The laboratory portion emphasizes circuit analysis, measurement technique, and the skillful use of modern digital instrumentation. Prerequisite: PHYS-220 and co-enrollment in or completion of MATH-280. All course prerequisites must be met with a minimum grade of C-. PHYS380Semiconductors and Magnetism with LabThe relationship between electricity and magnetis is studied through the introduction of Maxwell's equations. Semiconductor material properties are studied, along with device structions for diodes, transistors, and simplle integrated circuits. The laboratory portion emphasizes circuit construction techniques, device characterization, amplifier design and feedback, and signal/noise analysis. Prerequisite: PHYS-370 and MATH-280 All course prerequisites must be met with a minimum grade of C-. PHYS410Advanced Electricity and Magnetism with LabStudy of electromagnetic field theory, electrostatics, potential theory, dielectric and magnetic media, Maxwell's field equations, and electromagnetic waves; vector calculus developed as needed. Prerequisite: PHYS-370 and MATH-280 All course prerequisites must be met with a minimum grade of C-. PHYS420Quantum Mechanics with LabStudy of the principles and mathematical techniques of quantum mechanics with applications to barrier problems, the harmonic oscillator, and the hydrogen atom. Prerequisite: PHYS-340 and MATH-280 All course prerequisites must be met with a minimum grade of C-. PHYS480Special Topics TechniquesSpecial Topics offerings focus on a physics topic not addressed in the department's regular offerings. Possible topics include general relativity and cosmology, solid state physics, particle physics, soft condensed matter physics, biological physics, advanced laboratory techniques, and fluid mechanics. Check the course schedule to see when Special Topics courses are being offered. PHYS481Special Topics: General Relativity & CosmologyGeneral relativity is a geometric theory of gravity which has significant implications upon cosmology from gravitational redshift and bending of light rays to black holes and the largescale structure of the universe. We will learn to use tensors to perform calculations and study the implications of the Einstein equation. PHYS482/MATH 305/IDSY 305Special Topics: Dynamic Models in Social SciencesThe study of why mathematical and computational methods are important in understanding social phenomena, and how different social phenomena can be described by proper mathematical models. Specifically, applications of the theory of dynamical systems will be presented. Designed for math/science and social science students. Either MATH/PHYS 270 or this course, but not both, may be counted towards the major in mathematics. PHYS593Senior Individualized ProjectEach program or department sets its own requirements for Senior Individualized Projects done in that department, including the range of acceptable projects, the required background of students doing projects, the format of the SIP, and the expected scope and depth of projects. See the Kalamazoo Curriculum -> Curriculum Details and Policies section of the Academic Catalog for more details.Prerequisite: Permission of department and SIP supervisor required.
{"url":"http://kzoo.edu/catalog/?id=940","timestamp":"2014-04-17T21:34:00Z","content_type":null,"content_length":"39010","record_id":"<urn:uuid:576a14d0-c436-4223-a865-a47e4f07ef08>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Laura Pakaln Laura Pakaln; Group 2B Money: First Grade Commencement Content Standards Standard 1: Analysis, Inquiry, and Desipn: Students will use mathematical analysis, scientific inquiry, and engineering design, as appropriate, to pose questions, seek answers, and develop solutions. Standard 2: Information Systems: Students will access, generate, process, and transfer information using appropriate technologies. Standard 3: Mathematics: Students will understand mathematics and become mathematically confident by communicating and reasoning mathematically, by applying mathematics in real-world settings, and by solving problems through the integrated study of number systems, geometry, algebra, data analysis, probability and trigonometry. Standard 6: Interconnectedness: Common Themes: Students will understand the relationships and common themes that connect mathematics, science, and technology and apply the themes to these and other areas of learning. Benchmark Standards: Elementary Level • Abstraction and symbolic representation are used to communicate mathematically. Deductive and inductive reasoning are used to reach mathematical conclusions. Critical thinking skills are used in the solution of mathematical problems. • Information technology is used to retrieve, process and communicate information and as a tool to enhance learning. • Students use mathematical reasoning to analyze mathematical situations, make conjectures,gather evidence, and construct an argument. • Students use number sense and numeration to develop an understanding of the multiple uses of numbers in the real world, the use of numbers to communicate mathematically, and the use of numbers in the development of mathematical ideas. • Students use mathematical operations and relationships among them to understand mathematics. • Students use mathematical modeling/multiple representation to provide a means of presenting, interpreting, communicating, and connecting mathematical information and relationships. • Models are simplified representations of objects, structures, or systems used in analysis, explanation, interpretation, or design. • Solving interdisciplinary problems involves a variety of skills and strategies, including effective work habits; gathering and processing information; generating and analyzing ideas; realizing 'ideas; making connections among the common themes of mathematics, science and technology; and presenting results. Performance Standards • Students use special mathematical notation and symbolism to communicate in mathematics and to compare and describe quantities, express relationships, and relate mathematics to their immediate • Students explore and solve problems generated from school, home, and community situations, using concrete objects or manipulative materials when possible. • Students use a variety of equipment and software. packages to enter, process, display, an communicate information in different forms using text, tables, pictures and sound. • Students telecommunicate a message to a different location with teacher help. • Students use models, facts, and relationships to draw conclusions about mathematics and explained their thinking. • Students justify their answers and solution processes. • Students use logical reasoning to reach simple conclusions. • Students relate counting to grouping and to place value. • Students add subtract whole numbers. • Students construct tables, charts and graphs to display and analyze real-world data. • Students use multiple representations (simulations, manipulative materials, pictures, and diagrams) as tools to explain the operation of everyday procedures. • Students use different types of models, such as graphs, sketches, diagrams, and maps, to represent various aspects of the real world. • Students make informed consumer decisions by applying knowledge about the attributes of particular products and making cost/benefit tradeoffs to arrive at an optimal choice. • Students work effectively; gather and process information; generate and analyze ideas; observe common themes; realize ideas; present results. Content Standards • will learn that a penny is worth one cent. • will add pennies and determine how many cents they have altogether. • will understand that when a penny is tossed, it will not always land on the same side. • will understand that I 00 pennies has the same value as $ 1.00, a dollar bill. • will predict and count the number of pennies that fit inside of their footprints. • will learn the value of a dime. • will determine the value of objects to be sold in a class store. • will learn to add together coins of different value. • will group coins by value. • will examine coins closely and investigate qualities that are the same and different. • will make a coin graph in more than one way. • will add together coins of different values to make $ 1.00. • will learn how to make change. • will shop in and run their own class store. Performance Measures Level 1: Beginner Student has difficulty grasping the concept of coin value. The student may be able to identify coins by name, but unable to explain its value. This student will have more success in enabling activities that emphasize one-to-one correspondence such as counting pennies. Level II: Intermediate Student grasps concepts of coin value and can identify values of different coins, but has difficulty accurately finding sums of mixed coins. Student will have more success in adding coins that are the same, such as only nickels or only dimes. Level III: Mastery Student grasps all concepts presented 'tn unit and is able to accurately add and subtract when using coins of different value. Pocket Pennies Objectives: Children will learn that a penny is worth one cent. Children will add pennies and determine bow many cents they have altogether. containers of pennies for each child (approx. 10 pennies) overhead projector and transparency extra pennies for overhead overhead markers Peter's Pockets, by Eve Rice portion cups chart paper spare shirts or smocks Read the story Peter's Pockets to the children. Discuss the ways that Peter used his pockets. Ask the children bow many objects be could carry in each pocket. Could they determine how many pockets Peter had? Explain to the children that they are going to be using their pockets to put pennies in. Ask if they know what a penny is? How much is it worth? Use the overhead and place one penny on the transparency. Ask the children how much money is there. After they answer, write "I cent" below the penny. Rewrite the amount, using the cent sign, and discuss why we use the symbol in place of the Put two pennies on the overhead. Ask how much money there is now. Write the amount. Continue in this manner until you reach 10 cents. Explain to the children that they are going to place only one penny in each of their pockets. If a child does not have any pockets, allow the child to wear a jacket, sweater, smock, etc. After they complete this task, tell the children you want them to count out how many pennies they have altogether. Demonstrate by putting your pennies on the overhead and counting them. Write down the amount. Next, have the children work in small groups to combine their pennies, putting 10 pennies into each portion cup. When they're finished, ask the children how they could find out how many pennies the class has altogether. If counting by tens is not suggested, you can suggest that it would be the fastest way. On a piece of chart paper, write the days of the week (Monday-Friday) in a column. Next to Monday, write the amount of pennies that you and the children counted. Explain that they will do this every day for one week. Have them discuss how they might end up with more pennies on another day. Children will correctly identify a penny and its monetary value. Children will accurately count pennies: Every day of the week, have three or four children take turns counting out their individual pennies on the overhead. The Great Penny Toss Objective: Children will understand that when a penny is tossed, it will not always land on the same side. 1 hand lens for each pair of students one penny for each pair one recording sheet for each pair two different colored crayons overhead projector and transparency of recording sheet two vis a vis markers, different colors chart paper Children will work in pairs. Tell the children to look carefully at both sides of their penny with the band lens. Ask them what they notice about the different sides. On chart paper, list the children's observations using one side of the paper for each side of the penny Tell the children that we call the side of the penny with Abraham Lincoln's head, the "heads" side. The other side we call the "tails" side. Label the chart paper accordingly. Explain to the children that they are going to play a probability game, or a game of chance. Using the overhead and the recording sheet, ask for two volunteers to come up. Ask one child to toss the coin onto the overhead. Ask which side is showing. Have the child color in the first penny on either the head's side or tail's side (use different colors for each side). Player two tosses the penny and colors in the corresponding penny on the recording sheet. Tell the partners to repeat this four more times (two tosses per child), but before they do, ask the class to predict which side they think will show up more times. Have the children play in pairs. Tell them to toss the penny and record, 10 times altogether. Have them predict after their first or second turn. When done, have the pairs sit in a circle with the class to share their findings. Ask the children what they discovered. Ask them what might happen if they tossed the penny more than IO times? Would they get the same results? If there is time, let the children continue playing to see what would happen. Put this game in a center so that children can play this game as many times as they wish. Encourage them to find out what happens when they toss the penny more times. Assessment: Children can accurately explain and show the "head's and "tail's" side of a penny, Children can correctly describe what will happen if you toss a penny more than one time. Children can accurately record observations on recording sheet. Race to $1.00 Objectives: Children will understand that 100 pennies has the same value as $1 .00. Children will understand that a $ 1. 00 bill has the same value as I 00 pennies. Arthur's Funny Money, by Lillian Hoban large photograph of 100 pennies, in rows of 10 x 10 a $ 1. 00 bill "Race to $1.00" gameboard I pair dice for each group numbered 0-5 chart paper overhead projector gameboard for overhead Read the story Arthur's Funny Money. Have the children recall what items Arthur bought with his money. Ask the children what they would buy if they bad $1.00? Explain and show that 100 pennies is the same as $1.00. Tell the children that they will be playing a game called "Race to a Dollar". The object of the game is for your group to get 100 pennies as quickly as they can. Using an overhead projector and copy of the gameboard, ask for three volunteers to play a demonstration game. Player one rolls the dice, adds the numbers, and puts that number of pennies on the gameboard. Players continue taking turns until they have filled the board. A single die may be used to modify the game. Have the children play in groups of three. If time allows, they can play the game again. They can also play "Race to 0", going backwards. Use this game in a center so children can play it again. Assessment: Children will accurately add numbers and count out the same number of pennies. Children will correctly place pennies on the gameboard. Children will correctly state that 100 pennies is the same as $1.00, or a dollar bill. How Much is Your Footprint Worth? Objectives: Children will predict and count the number of pennies that fit inside of their footprints. colored construction paper, 9x 1 2 drawing or writing materials overhead projector overhead transparency of a footprint Graph Club CD ROM Procedure: Explain to the children that they are going to work in pairs to help each other trace their footprints. They can take their shoes off or leave them on. Ask for two volunteers to demonstrate for the class. Then, ask the children to predict how many pennies they think will fit inside of their footprints. Put the transparency up and make a prediction for your own footprint. Write the number somewhere in the border of the page. Explain that after they make their own predictions, they will fill their footprints with as many pennies as they can, without going over the outline. Pennies should be placed side by side. Show on the overhead how they will do this. Then, count the number of pennies and write the number in the border. Discuss with the children beforehand whether they should count the pennies that go over the edge of the footprint. Circle the actual number of pennies. When the children finish this task, they can share their predictions and results with the class. Ask what might happen if they did this again with or without shoes on. Would there be a difference? Help the children enter the results on the computer, using a graphing program such as Tom Snyder's "The Graph Club". Create different types of graphs to show the results in a variety of ways. Leave materials in a center for the children to try again. Assessment: Children will correctly count the number of pennies in outlines of their -feet. Adding Coins Objective:. Children will use a variety of coins to make a dollar. Jelly Beans for Sale, by Bruce McNfillan large paper coins coin paper money real coins (100 pennies, 2 half-dollars, 3 quarters, 5 dimes, 5 nickels per pair of students) 3x5 index cards on which are written values to be used (for example, show 25 cents; show 25 cents using only nickels; show 25 cents using no nickels) 81/2 x I I boards on which are written the words, "Put It Here" Read and discuss Jelly Beans for Sale. Attach large paper coins to strips of paper and place the strips at various places along the floor. Select a sum of coins from one of the strips and ask the children to find the strip that matches that sum. Try this with some of the other strips. Give each pair a bag of assorted coins. 1. Have the students sort the coins by value. 2. Have the students select one coin from each pile and arrange them from smallest to largest. 3. Have students select another coin from each pile. Beside each coin use pennies to demonstrate the value of each coin. 4. Have the students arrange the second set of coins in a column from largest to smallest. Using the index cards, read one to the class. Children will place that card's value on their desk. Explain to the children that they are going to play a game. When you read one of the index cards, you want the students to place coins equal to that value on the "Put It Here" board. When each space on the board has been filled, a new caller can be chosen for a new game. Assessment: Children will identify specific coin values and add a random set to determine their value. Observing Coins Students will examine pennies, nickels, and dimes closely and Investigate the qualities that are the same and different among them. One for each student: 8 1/2 x 14 photocopy paper, folded in half (short way) a penny, a nickel and a dime magnifying lens pencil chart paper, ruled into two columns Place student materials at each desk. Have children work in small groups. Give the children a few minutes to use the hand lenses to explore the coins. Have them stop, and explain that when you say "Go!" you want them to continue their exploration of the coins in a special way. Tell them that you want them to look at the coins and find out all of the ways they are the same. As an example, you could say that you noticed all of the coins had the word "liberty" written on them. On the chart paper, write the word "same" at the top of the left side of the paper. Underneath, write "all say liberty". Tell the children that you also want them to find ways in which the coins are different. As an example, explain that you noticed the coins each have different dates on them. Write "different" at the top of the right column. Below, write "all have different numbers". Explain to the children that as they are working, they should share their discoveries with others in their small groups. Tell the children to "Go!" Let the children explore for a little while. Then, ask them if anyone found something that was the same about all three of the coins. Continue with things that are the same, then ask about differences. Write the children's responses on the chart paper. After this task, have the children unfold their paper. tell them to make their own chart of things that are the same and different about the coins. They can write things from the class list or they can write other things that they discovered. Assessment: Children can accurately describe different ways in which the three coins were alike or different. People Coins Children learn to add together coins of different values. ChiIdren will learn how to group coins by value to make them easier to count. large paper coins in respective sizes, labeled I cent, 5 cent, and IO cents Ask for seven volunteers to come up in front of the class. Pass out pennies and nickels in a random order to the children. Ask the rest of the class to find the sum of the coins. Ask the children if they could think of an easier way to count the coins? Direct their discussion so that they try their different ideas, but ultimately group the nickels together, followed by the pennies. Ask for a different group of volunteers and repeat this process, gradually adding the dimes. One variation of this activity Is to have the children record the counting patterns on chalkboards. They can work individually or with a partner. To make the activity more challenging, add quarters. Children will accurately group coins by value and correctly find the sums of mixed values. Benny's Pennies Children will learn the value of a dime. Children will begin to think about the value of objects that might be sold in a store. Benny's Pennies, by Pat Bn'sson chart paper coin stamps: pennies, nickels, dimes an enlarged photograph of a dime an enlarged Photograph of 10 pennies chart paper drawing paper Show the coin photographs to the children. Explain that 10 pennies has the same value as one dime. Read the story Benny's Pennies to the children. As you read, ask them to predict what they think Benny will buy on each page. Re-read the story, asking how much Benny has left after each purpose. Write a number sentence on the chart paper to show the amount. After the story, ask the children to brainstorm a list of things that they would like to buy if they had their own money. After a list is compiled, ask the children to think about how much each item costs and why they think so. Write the amount next to the item on the list. Encourage discussion before writing down the final amount next to each object. Tell the children they can choose one thing from the list to "buy". Model the activity first by choosing an item from the list. Take a piece of paper and draw a picture of the item on the paper. Write the value. Fold the paper into fourths. Unfold the paper. Rewrite the value. Ask the children to help you use the coin stamps to show how much the item costs in one box. Then, have them help you show the amount in a different way with different coin combinations. Continue until all four boxes are done. Tell the children to pick one item that they would I Iike to buy and have them complete the task. Share when finished. Children will correctly identify the value of a dime. Children will show one amount of money using different coin combinations. Let's Go Shopping!! Objectives: Students will be able to: 1. Determine how much money they have in hand. 2. Find and read the price of a product. 3. Determine which product they would like to buy. 4. Determine if they have enough money for the item. 5. Count out the exact change or determine how much change they are due. Sheep in a Shop, by Nancy Shaw Clean trash (empty soup cans, cereal boxes, vegetable cans, etc.) stickers to use as price tags calculators computers with an internet server Read and discuss Sheep in a Shop. Explain to the children that they will be going shopping in their classroom. If possible, involve the children beforehand in collecting and pricing items to sell in their store. Involve all of the children in discussing how the store should be set up. Make a rotating schedule so that all of the children have a chance to work in the store. Have them brainstorm a list of different jobs that need to be done in the store. While the class is working on individual projects, send two or three students at a time to go "shopping". Give them a set amount of money to shop with. The shoppers need to figure out what items can be purchased with the amount of money they have. They need to tell the cashier how much change, if any, they should get. Students can check their figures using calculators. Take photographs of the children while they are shopping and working in the store. Use the pictures to make a class book. If you have the capability, use the Photographs on a school web page. Have the children write about their store. Young children love to play store. Leave your store up as a center. Have a supply of real or plastic coins for the children to use. Have the children think of a name for their store. Children can communicate with children from other schools via the internet, telling them about their store. Children will correctly add and subtract using a variety of coins. Students will correctly use a calculator to check their work. Hoban,Tana. 26 Letters and 99 cents. 1987. WilliamMorroNv. N.Y. Lewis,Brenda Ralph. Coins and Currency. 1993. RandomHouse. N.Y. Additional Children's Literature Adams,BarbaraJohnston. The Go-Around Dollar. 1992. MacMillan PublishingCo. N.Y. Caple,Kathy. The Purse. 1986. Houghton Miffllin. Boston. Daley,Niki. Papa's Lucky Shadow. 1992. MargaretK.McEiderryBooks. N.Y. Dumbleton,Mike. Dial-A-Croc. 1991. Orchard Books. N.Y. Hoban, Lillian. Arthur's Funny Money. 1981. Harper Collins. N.Y. Rodriguez, Anita. Aunt Martha and the Golden Coin. 1993. Clarkson Potter. N.Y. Schwartz,DavidM. If You Made A Millon. 1989. Lothrop,Lee,andShepard. Sharmat, Marjorie Weinman. Nate the Great Goes Down in the Dumps. 1989 Exploring Measurement, Math and Money: Edmark Corp. The Graph Club: Tom Snyder Productions Mighty Math.for Zillion: Edmark Corp. Money Challenge: Gamco Money Town: Davidson and Assoc., Inc. Money Works: MECC
{"url":"http://www.stac.edu/mcc/pakaln.htm","timestamp":"2014-04-20T13:19:14Z","content_type":null,"content_length":"73304","record_id":"<urn:uuid:8657f333-46cb-4b43-b81c-a53eeefb609d>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Forcing the existence of a Condorcet Winner up vote 2 down vote favorite Suppose that there is an election with three candidate and an infinite number of voters whose opinion lie in a two-dimensional issue space according to some distribution, and that voter's candidate preferences are based on the euclidean distance of the voter from each candidate. If the distribution is rotationally symmetrical, it's easy to show that there will be a Condorcet Winner with probability 1 (Basically by showing that two candidates equidistant from the center will split the vote equally in a run-off). Are there any broader conditions on the opinion distribution in which we can force the probability of the existence of a Condorcet Winner to be 1? pr.probability voting-theory social-choice You could demand that codimension one subspaces have no mass. – S. Carnahan♦ Sep 25 '11 at 17:27 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged pr.probability voting-theory social-choice or ask your own question.
{"url":"http://mathoverflow.net/questions/76324/forcing-the-existence-of-a-condorcet-winner","timestamp":"2014-04-20T03:46:57Z","content_type":null,"content_length":"46868","record_id":"<urn:uuid:c8cb32ba-c6ba-4b23-8eda-e30bb0cddec6>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Primes in Arithmetic Progression February 16th 2010, 01:19 PM #1 Nov 2009 Primes in Arithmetic Progression Hey guys, in our NT course we have the topic arithmetic progressions and we had to prove that there are infinitely many primes in the progressions $4n + 3$ and $4n + 1$. Now the task says to find more arithmetic progressions containing infinitely primes. Our professor said at some point that every arithmetic progression(a. p.) contains infinitely many primes, but is it also possible to find a non-difficult proof that there are infinitely many a. p. with infinitely many primes? I thought about $8n + 1$, $8n + 3$ ... but the proof for the cases $4n + 1$ and $4n + 3$ were a bit longer, is there a possibility to generalize them for $2^k*n + 1$? I would be thankful for any help I can get. here's a little proof for you Assume your arithmetic progression has form $x_n = an + b$, $a > 0$, $b > 0$. $\Rightarrow$ If $a$ and $b$ are not coprime, call $d$ their common divisor greater than one. Then $x_n = an + b = d \left ( \frac{an}{d} + \frac{b}{d} \right )$ Which is obviously not prime (divisible by $d$). Thus if $a$ and $b$ share a common divisor, the arithmetic progression does not contain any prime. $\Rightarrow$ Now, if $a$ and $b$ are coprime. (do not mind what I wrote here before, there is something terribly wrong with it). Dirichlet's Theorem states that there are infinitely many primes of the form $an + b$ with $a$ and $b$ coprime. Thus the arithmetic progression $x_n = an + b$ contains infinitely many primes if and only if $a$ and $b$ are coprime. Therefore, there are infinitely many primes in the arithmetic progression $x_n = an + b$, $a > 0$, $b > 0$, if and only if $a$ and $b$ are coprime. but is it also possible to find a non-difficult proof that there are infinitely many a. p. with infinitely many primes? My previous proof makes this easy : since any arithmetic progression $x_n = an + b$ with $gcd(a, b) = 1$ contains infinitely many primes, there are infinitely many such progressions because there are infinitely many numbers $a$, $b$ that are coprime (quick example : every power of two is coprime to an odd number, this can be shown fairly easily). Last edited by Bacterius; February 16th 2010 at 06:42 PM. We defined (non-trivial) arithmetic progressions as sequences $a_n = b*n + c$ with $(c,b) = 1$ $\Rightarrow$ Now, if $a$ and $b$ are coprime. (do not mind what I wrote here before, there is something terribly wrong with it). Dirichlet's Theorem states that there are infinitely many primes of the form $an + b$ with $a$ and $b$ coprime. Thus the arithmetic progression $x_n = an + b$ contains infinitely many primes if and only if $a$ and $b$ are coprime. The problem is that we are not allowed to use Dirichlets theorem, then the problem would be trivial. Any other suggestions to show that there are infinitely many a.p. containing infinitely many primes? Any other suggestions to show that there are infinitely many a.p. containing infinitely many primes? Wanna prove Dirichlet's Theorem ? At least you would deserve to use it Seriously, I don't know if there is an easy way to prove that a nontrivial arithmetic sequence contains infinitely many primes Well I proved it for 4n+1 and 4n+3, and it was kinda difficult, i thought many one can generalize this result to $2^k*n + 1$, but I couldn't see how. well, actually it's quite easy to prove that for any given integer $k \geq 1$, the arithmetic progression $A=\{2^k n + 1 \}_{n \geq 0}$ contains infinitely many prime numbers. here is a proof: suppose $p_1, \cdots , p_r$ are the only prime elements of $A$ and let $m=(2p_1p_2 \cdots p_r)^{2^{k-1}} + 1$ (you might ask what if there's no prime in $A$? well, then we let $m=2^{2^{k-1}} + 1$ let $p$ be a prime divisor of $m.$ clearly $p eq p_i,$ for all $1 \leq i \leq r.$ so we only need to prove that $p \in A$ to get a contradiction. so we have $(2p_1p_2 \cdots p_r)^{2^{k-1}} \equiv -1 \mod p$ and thus $(2p_1p_2 \cdots p_r)^{2^k} \equiv 1 \mod p.$ therefore the order of $2p_1p_2 \cdots p_r$ modulo $p$ is $2^k.$ but, by the Fermat's little theorem, we also have $(2p_1p_2 \cdots p_r)^{p-1} \ equiv 1 \mod p.$ hence we must have $2^k \mid p-1,$ which means $p \in A. \ \Box$ Last edited by NonCommAlg; February 19th 2010 at 05:44 PM. Nice proof, I was looking for something like this. much more generally, it's quite elementary and not hard at all to prove that, given any integer $k \geq 1,$ the arithmetic progression $\{kn + 1 \}_{n \geq 0}$ contains infinitely many prime numbers. some basic knowledge of cyclotomic polynomials is needed for understanding the proof though. it'd make a nice undergrad level presentation in number theory! much more generally, it's quite elementary and not hard at all to prove that, given any integer $k \geq 1,$ the arithmetic progression $\{kn + 1 \}_{n \geq 0}$ contains infinitely many prime numbers. some basic knowledge of cyclotomic polynomials is needed for understanding the proof though. it'd make a nice undergrad level presentation in number theory! Proof: Consider $\Phi_m(a)$ for some $a \in \mathbb{N}$ such that $p\mid \Phi_m(a)$ and $p ot| m$. In $\mathbb{F}_p$ we have $a^m-1=\prod_{d\mid m} \Phi_d(a) = 0$. Now suppose $a^d-1=0$ for some $d\mid m$, then from above we see that $x^m-1$ has a double root which is impossible since $(x^m-1,mx^{m-1})=1$ (f(x) and f'(x) are relatively prime). This implies Note $pot| m$, otherwise $mx^{m-1} = 0 \implies (x^m-1,mx^{m-1})eq 1$. Therefore $m\mid p-1 \implies p=mk+1$ for some $k\in \mathbb{N}$. Now let $f(x)\in \mathbb{Z}[x]$ be monic, and suppose $\{f(a) | a\in \mathbb{N} \}$ has only a finite amount of prime divisors $p_1,p_2, \cdot\cdot\cdot, p_k$. Choose $a$ such that $f(a)=ceq 0$. Define $g(x) = c^{-1} f(a+cy)$ such that $y=p_1 p_2 \cdot\cdot\cdot p_k x$. $g(x)=c^{-1}\left(f(a)+f'(a)cy+\frac{f''(a)}{2}(cy)^2+\cdot\ cdot\cdot+\frac{f^{(n)}(a)}{n!}(cy)^n\right)$ $=1+f'(a)y+\frac{f''(a)}{2}cy^2+\cdot\cdot\cdot+\fr ac{f^{(n)}(a)}{n!}c^{n-1}y^n \in \mathbb{Z}[x]$ since $\frac{f^{(n)}(a)}{n!} \in \mathbb{Z}$. So we have $g(b)\equiv 1 \mod{p_1p_2\cdot\cdot\cdot p_k}$ for any $b\in \mathbb{Z}$. Pick $b$ such that $|g(b)|>1$ and let $p$ be a prime factor of $g(b)$. This means $peq p_i$ and $p\mid f(a+cp_1p_2\cdot\cdot\cdot p_kb)$ which is a contradiction to the hypothesis. So looking back at what's been done... I've shown for any monic polynomial $f(x)\in\mathbb{Z}[x]$, there are an infinite amount of primes dividing $f(x)$. Also $p\mid \Phi_m(a)$ and $pot| m \ implies p\equiv 1 \mod{m}$. Therefore $\{mk+1|k\in\mathbb{Z}\}$ contains an infinite amount of primes! you didn't state this part properly. you proved that for any monic polynomial $f(x) \in \mathbb{Z}[x]$ there are infinitely many primes dividing at least one element of the set $\{f(1), f(2), \ cdots \}.$ anyway, i would present the proof in this order: first i'd prove the above result. then i'd fix a positive integer $m.$ then, since $m$ has a finite number of prime divisors, we conclude from our first that for any monic polynomial $f(x)\in\mathbb{Z}[x]$, there are an infinitely many primes which do not divide $m$ but divide at least one element of the set $\{f(1), f(2), \cdots \}.$ then i would prove that any prime numer which does not divide $m$ but divides at least one element of the set $\{\Phi_m(1), \Phi_m(2), \cdots \}$ is equivalent to $1$ modulo $m.$ finally i'd put $f(x)=\Phi_m(x)$ to finish the proof. February 16th 2010, 02:46 PM #2 MHF Contributor Apr 2008 February 16th 2010, 06:16 PM #3 February 17th 2010, 12:07 AM #4 Nov 2009 February 17th 2010, 12:15 AM #5 February 17th 2010, 12:39 AM #6 Nov 2009 February 17th 2010, 01:49 AM #7 MHF Contributor May 2008 February 17th 2010, 03:52 AM #8 Nov 2009 February 17th 2010, 07:14 AM #9 MHF Contributor May 2008 March 20th 2010, 08:07 PM #10 March 21st 2010, 10:58 PM #11 MHF Contributor May 2008
{"url":"http://mathhelpforum.com/number-theory/129134-primes-arithmetic-progression.html","timestamp":"2014-04-16T14:07:00Z","content_type":null,"content_length":"91431","record_id":"<urn:uuid:419c35a3-c886-48b1-a339-adf995dc775d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Stable graphs: Feynman diagrams and Deligne-Mumford space up vote 7 down vote favorite I do not know very much about quantum field theory, but I have seen, in my reading, that stable graphs can appear in QFT in the form of, I think, Feynman diagrams. By stable graph I mean a "graph with tails", whose vertices are labelled by nonnegative integers, and such that each vertex with labeling 0 has valence at least 3, and each vertex with labeling 1 has valence at least 1. Algebraic geometers of course know that stable graphs also give a stratification of the Deligne-Mumford spaces $\overline{M}_{g,n}$: Vertices with label $g$ correspond to genus $g$ curves; edges correspond to nodes; tails correspond to marked points. Valency conditions correspond to finitude of automorphism group of the nodal curve. Is there an explanation for this coincidence? I guess there is probably some kind of explanation via Gromov-Witten theory. But I get the impression that stable graphs show up in QFTs more generally, and beyond Gromov-Witten theory. Do they? If so, how? And where? quantum-field-theory ag.algebraic-geometry moduli-spaces mp.mathematical-physics One can "explain" why such graphs show up in path-integral QFT in the following way: they exactly parametrize the $\hbar \to 0$ asymptotics of finite-dimensional integrals of the form $\int\exp\ 2 bigl((i\hbar)^{-1}f(x)\bigr)dx$, where $f(x)\in C^\infty(\mathbb R^n)[[\hbar]]$. (More generally, the asymptotics only depend on the Taylor coefficients of $f$ near critical points of $[f]\in C^\ infty(\mathbb R^n)=C^\infty(\mathbb R^n)[[\hbar]]/\hbar$.) But this isn't the kind of explanation you are looking for, since it does not explain the coincidence with geometry. And I don't know any geometry. – Theo Johnson-Freyd Apr 23 '10 at 5:34 add comment 3 Answers active oldest votes From the sound of it, you are reading Costello's book. In point particle QFT the stable graphs you are referring to can be thought of as the paths of particles through spacetime the vertices are where the particles interact. Now in the Deligne-Mumford case you can think of taking the stable graph and "thickening" it and replacing the particles with little loops of string to obtain a Riemann surface. The up vote 5 interactions then can be thought of as corresponding to the joining and splitting of these little loops of string. down vote It happens to be the case that in bosonic string theory one is in this second "thickened" case and the symmetries of the theory are such that in doing the path integral one ends up integrating over the Deligne-Mumford spaces of such Riemann surfaces. Thus, the Deligne-Mumford stratification is of use there. Furthermore, you can relate Deligne-Mumford spaces of such Riemann surfaces to point particle QFT by simply taking the limit of infinite string tension. Yep, I'm reading Costello's book. Yes, I am familiar with the string theory/Gromov-Witten theory story you describe. But I am more wondering about QFTs outside of string theory/GW theory. – Kevin H. Lin Apr 23 '10 at 8:39 1 I don't mean to be contrary, but I am not sure you understand the string theory case very well. One can think of point particle theory as string theory in the limit of infinite string tension. This fact, along with my comment above, should give the relation you seek. – Kelly Davis Apr 23 '10 at 17:42 1 Kelly, IIRC, not all QFTs are obtainable as limits of string theories. However, any perturbative expansion in $\hbar$ can be organized using stable graphs. (This is a quibble, of course. The string case did inspire the QFT construction.) – userN Apr 26 '10 at 18:17 @A.J. I did not say all point-particle QFT are obtainable as limits of string theories, and hope I didn't imply such is the case. Also, yes, any perturbative expansion in $\hbar$ can be organized using stable graphs. However, the problem with saying "any perturbative...stable graphs" and only this to a mathematician is that they then have no idea why this is the case, the fountainhead of this question. The example I gave from string theory, I think we both agree, motivates why stable graphs play a role. – Kelly Davis Apr 28 '10 at 20:24 add comment I'm not sure exactly what the question is, but let me comment that lots of Feynman graphs with lots of different rules for labelling the vertices come up in QFT, as explained by Theo. From the QFT point of view, the labelling you're describing is not particularly common; you have infinitely many terms in your Lagrangian, for instance. So I would say that the answer is no, up vote 5 stable graphs do not come up much beyond the cases that are clearly related to Gromov-Witten theory or other string theories. down vote From a physicist point-of-view this labeling is non-standard. However, for the context in which this appears, Kevin Costello's book on perturbative renormalization for mathematicians, it makes sense as a way of easing mathematicians in to Feynman digrams. – Kelly Davis Apr 25 '10 at 19:23 I don't think we're disagreeing with each other... – Dylan Thurston Apr 25 '10 at 23:20 What Grassmannian? – Kevin H. Lin Apr 26 '10 at 0:00 @Dylan I agree that we agree :-) – Kelly Davis Apr 26 '10 at 6:28 @Kevin: Sorry, typo for "Lagrangian". Fixed now. – Dylan Thurston Apr 26 '10 at 7:56 show 3 more comments To amplify Theo's comment slightly: The graphs that show up in Feynman diagram perturbation theory are stable because the physicists use a different accounting system for the genus zero vertices with 0,1, or 2 edges. The diagrams with these graphs don't show up in the perturbation series because the physical effects they represent are the situation one is perturbing away up vote from. A genus zero graph with two edges is an order $\mathcal{O}(\hbar^0)$ correction to the propagator. Likewise, one edge gives a tadpole correction to the expectation value of the field 4 down (usually gotten rid of by redefining the field), and zero edges a correction to the vacuum energy (usually set to zero by convention). add comment Not the answer you're looking for? Browse other questions tagged quantum-field-theory ag.algebraic-geometry moduli-spaces mp.mathematical-physics or ask your own question.
{"url":"http://mathoverflow.net/questions/22291/stable-graphs-feynman-diagrams-and-deligne-mumford-space/22348","timestamp":"2014-04-19T17:25:12Z","content_type":null,"content_length":"72463","record_id":"<urn:uuid:0e29c962-d350-4356-a998-d1b95ab9c7f2>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Survival of the Fittest: A New Model for NCAA Tournament Prediction (Editor’s note: You can see the model’s 2012 1-68 rankings here and full bracket here) Every year, millions of Americans tune in to watch the NCAA Men’s Basketball Tournament, colloquially known as March Madness. And every year, millions participate in Bracket Pools, where they attempt to outwit their peers by picking more of the 63 tournament games correctly. There are quite a few very good prediction models available to people that predict NCAA tournament results, but every publicly available approach—including Las Vegas casino futures markets—leaves significant room for improvement. Other prognosticators lean on traits like experience, confidence, and performance under duress that are much harder to quantify. Of course, individual basketball games can often be determined by chance events, so no system will be perfect, but network analysis provides promising tools for quantifying some “intangible” traits. What all of these prediction systems have in common is that they treat NCAA tournament games as similar to regular season games, despite ample theoretical and anecdotal evidence that the two are quite different. NCAA tournament games carry much more added pressure and added attention, and for quite a few schools, are played in far bigger arenas in front of far bigger crowds than any regular season game that they will play. This evidence suggests that rating systems that attempt to predict the NCAA tournament as if it were a collection of regular season games can be improved. 2006 NCAA Tournament Network The purpose of this paper is to create a model that will rank NCAA tournament teams—and thus provide a basis for predicting the NCAA tournament—that uses network characteristics, principally a measure of degree centrality. The network characteristics in the model will serve to quantify traits that specifically apply to the tournament and the other teams in it, which form a network as they play each other over the course of the season. Ideally, this model will perform better in pseudo-out of sample testing than other available prediction systems. To my knowledge, this is the first attempt of its kind. Research Design: For each NCAA tournament from 2004 until 2011, I took schedule data from the entire season for every team in the field. I created an adjacency matrix where the a[ij] entry = 1 if Team[i] beat Team [j], 0 if they did not play. This matrix describes the directed network of the NCAA tournament teams from that season where the nodes are teams, and the links are games played between two teams. Thus a team’s out-degree represents the number of other NCAA tournament teams it defeated during the season, and its in-degree represents the number of other NCAA tournament teams it was defeated by during the season. A diagram showing the 2006 network is shown above. The NCAA tournament is a different arena, with much brighter lights and perhaps different determinants of success. Analyzing the NCAA tournament network of games played between NCAA teams can provide insights into how teams within the Tournament are affected by their history of interactions with top-tier teams. More subtle psychological factors, like confidence and performance under pressure, can be measured via a network approach. Imagine this hypothetical: Team A has played a very tough schedule, facing thirteen teams that are in the NCAA tournament field and beating seven of them. Team B has played an easier schedule, only having played three Tournament teams, but it has defeated all of them. Through the vagaries of the season, and through games against non-tournament opponents, Team A and Team B have very similar Pythag ratings and consistency metrics. But Team A and Team B inhabit very different parts of the NCAA tournament network. General statistical models might predict Team A and Team B to have equal tournament success, but we might hypothesize that Team A will do better because they have confidence from having played and defeated quite a few NCAA tournament teams. Any model needs to have some statistics to control for team strength. I believe Ken Pomeroy’s excellent Pythagorean rankings, which can be found at KenPom.com, are the best measures of team strength. As such, its constituent parts, Adjusted Offensive Rating and Adjusted Defensive Rating, form the backbone of my control variables in the NCAA tournament prediction model. Another important factor to take into account is Consistency. Consistency is a measure of a team’s variance in point spread at the conclusion of a game. For example, winning by five points = +5, losing by twenty = -20. This can be used as a measure of a team’s in-season strength because under the assumption that the teams being analyzed are winning teams (fair for the NCAA tournament bracket), consistency represents a measure of variance of performance. There are no “consistently bad” teams in the NCAA tournament. Strength of Schedule (SOS) also needs to be included in any predictive model. SOS is an index of the strength of opponents played by the team in-season. Additionally, because variables like seed and strength of schedule are correlated with other predictors, I included (and tested) interaction terms between variables. Another “intangible” variable often cited by experts as important for NCAA tournament success is experience. To quantify experience, specifically experience in the NCAA tournament, I created a dataset of minutes played at an individual level for every tournament team from 2003 until 2011, and aggregated them at the team level to create Returning Minutes Percentage for each team. Returning Minutes % for Team I was then multiplied by the number of NCAA tournament games Team I had played in the previous year. I chose games rather than wins because there should be some credit given to teams who simply make the NCAA tournament and get the experience of playing in it, even if they do not win a game. Sensibly, this makes the experience term proportional to both past NCAA tournament success and the percentage of players returning who contributed to that success. Survival Analysis Model: A major challenge in predicting the NCAA tournament is specifying an appropriate model. The best way to judge NCAA tournament success is how many games a team wins in the tournament. The goal is to win six games and be champion; the closer a team gets to that goal, the better they have performed. Thus tournament wins should be the dependent variable in any predictive model. But NCAA tournament wins is an ordinal discrete variable that takes on values from {0,6}. This type of dependent variable violates the assumptions of normality and continuousness that underlay ordinary least squares (OLS) regression, making it an inappropriate model specification. Additionally, the vast majority of observations would be teams who lost early, potentially drowning out the signal of the teams that survive. In statistical terms, the dependent variable is not normally distributed. To solve these problems, I borrow a concept from sociology known as time-to-event analysis (henceforth called survival analysis). Survival analysis is the name for a class of models that deal with time series data and generally attempt to measure time to failure in a system. In this case, we will treat the NCAA tournament as the system, use each round as a time step, and treat losing, and thus falling out of the tournament, as “failure.” I chose the Cox Proportional Hazards model, as it is the most general and non-parametric. It is based off of a baseline hazard function that is estimated for the population. The survival analysis model solves the major concern of not being able to capture the signal of teams that succeed, as it recognizes the additional length of time to failure for teams that win multiple games, and the difficulty of attaining that success, and thus generates coefficients that reflect that success. This model, which to my knowledge has never before been applied to the NCAA tournament, is superior to the others I considered because it best addresses the particular and unique challenges involved in estimating NCAA tournament success. I) Model Fit: The Cox proportional hazard model was fit in STATA, and the final model fit is summarized in Table(1). As discussed above, the choice of which measure of centrality to use was made by the fit of the model. The best fit was found using simple out-degree centrality, modified to give increased weight to road and neutral site wins. Out-degree was the only measure of centrality that was significant on its own and significant when interacted with the Experience term. Table 1: The specified Cox Proportional Hazard Model: As the table shows, all coefficients were significant at at least the 10 percent level. The coefficients of the Cox Proportional Hazard model can be interpreted as increasing or decreasing the risk of failure, holding all else constant. For instance, increasing Offensive Rating by one percent, which corresponds to increasing a team’s scoring by one point per hundred possessions, decreases the hazard of failure by 6.6 percent, relative to the baseline.[1] The network portion of the model is the out-degree term, and the interaction with Experience. A logistic ratio test was used in STATA to determine whether this model, with the interaction term of Experience and out-degree, was significantly better than the nested model without the interaction. This test was significant at the 95 percent level (p=0.003), and confirmed that the interaction term is a significant predictor of success. Interestingly, the interaction term is positive, implying that there might be decreasing returns to experience and in-season centrality. The individual terms are negative, showing that holding all else constant, increasing in-season wins over other NCAA tournament teams and having more NCAA tournament experience decrease the hazard of NCAA failure. When these two are increased together, however, while the effect is net positive for NCAA tournament success, it is not as large as the two on their own. Fifty percent of the NCAA tournament teams have zero NCAA tournament experience from the previous year. For those teams, increasing out-degree by winning more games against NCAA tournament teams is strictly positive in increasing NCAA success expectancy. For teams that do have NCAA experience, however, increasing these factors in concert yields diminishing returns to the probability of Regardless of the diminishing returns finding, it is clear that network centrality is an important predictor of NCAA tournament success. The interaction with Experience is based on strong theoretical grounds, and is significant in the model. Even when controlling for statistical team strength, this network component is a strong predictor of NCAA tournament success. The control variables also deserve some mention. Increasing Offensive Rating and decreasing Defensive Rating (becoming more efficient on defense) both lower the hazard of failure. The natural log transformation of Strength of Schedule is the best fit, illustrating decreasing marginal returns to schedule strength. Interestingly, consistency is significant at the five percent level and positively correlated with risk of failure. This supports the hypothesis that more consistent teams perform better in the NCAA tournament. II) Out of Sample Testing: The ultimate judge of model fit, however, is how it performs in out of sample testing. The goal of this prediction model should be to predict the NCAA tournament as correctly as possible. This prediction needs to be judged on a relative basis—relative, that is, to other available prediction systems. To test the model out of sample, I simply removed one year’s worth of teams from the dataset, estimated the model, then estimated a ranking based on the covariate values for each team and the model coefficients. This ranking is known as the Prognostic Index, and represents the log odds of an individual team surviving—by winning six games and thus wining the tournament—compared to some baseline hazard function unspecified in the Cox model. We can raise these log odds using the mathematical constant e to find the relative risk of an individual team winning the NCAA tournament, compared to the baseline hazard. This interpretation is nice in that it allows us to easily determine whom the Prognostic Index determines as the favorites in the tournament. I have used the Index to fill out a blank tournament bracket for each of the last five years.[1] To fill out the bracket and predict each game, I am constrained to using a very simple decision rule: if Team A is more highly ranked in the Prognostic Index than its opponent, Team B, I predict Team A to advance. This limitation is frustrating when two very evenly ranked teams meet each other, but in fact mirrors the real process of filling out a bracket. Even if you believe the true odds of either team winning the game are 50-50, a coin flip, you must pick one team to advance. I chose two other prediction systems that I believe are some of the best currently available models to compare my model’s out of sample performance with. These models are Ken Pomeroy’s Pythagorean Expectation model, which has been extensively discussed above, and TeamRankings’ model, which combines its own unique power rankings with public picking trends data to create a bracket that maximizes the chances of winning a bracket pool, and thus winning prize money or prestige. To score the bracket, I used ESPN.com and Yahoo! Sports’ scoring system. In this system, points for correct picks increase exponentially by a factor of two in each round of the tournament. Thus first round correct selections are worth one point, and picking the national champion is worth 32 points. This may not be the best system—especially because it undervalues correctly picking upsets in the first two rounds—but it is the most common system, and so it is used to score the brackets here. The results of the out of sample testing are summarized in the table below: Table 2: Out of Sample Performance, 2007-2011 As the table shows, the Network Model does better in out of sample prediction than either the TeamRankings model or the Pomeroy model. It predicts slightly more games correctly and significantly more points correctly, illustrating that its strength is in predicting which teams will make the later rounds of the tournament (the Final Four and Championship Game). Indeed, the Network model predicted three of the five National Champions correctly. One thing to note is the performance of all three models in the 2011 NCAA tournament. This tournament saw lower seeded teams (a 3 seed, a 4 seed, an 8 seed, and an 11 seed) make the Final Four. All three of the models performed well below their levels for the other four years in the sample, predicting fewer games correctly and garnering fewer points. It remains to be seen if 2011 was simply an anomaly, or an inflection point in NCAA tournament outcomes. Predicting NCAA tournament success is a subtler problem than simply identifying the “best team” from the regular season. The tournament’s single-elimination format makes results far more random than most other playoff formats, which use multiple-game series. This means that ranking systems that do a very good job of classifying team strength over the course of the regular season may not be the best strategy for predicting postseason success. This analysis shows that network analysis is a valuable tool for quantifying traits that lead specifically to NCAA tournament success. The position of a team in the network, as measured by degree centrality and when interacted with the level of a team’s previous NCAA tournament experience, is a significant positive predictor of NCAA tournament wins, controlling for team strength. The strength of the model in the out of sample testing lends added credence to the importance of network analysis in predicting the tournament. What this model ultimately does is magnify the importance games played against teams of the quality that a team will face in the tournament. The assumption that NCAA tournament games should be predicted in the same manner as regular season games does not seem to hold. This finding, which to my knowledge has no antecedent, should lead to a new interest in improving NCAA tournament prediction models, perhaps even leading to a whole new method for quantifying traits that predict NCAA tournament success. [1] Obviously when I do this for multiple years, I use different models that do not have the data for that year included. For instance, the 2009 bracket will be filled out using a model that does not have the 2009 data, the 2008 bracket with a model that does not have the 2008 data, etc. 20 Responses to Survival of the Fittest: A New Model for NCAA Tournament Prediction 1. So what does this model tell us about 2012! 2. Sorry, see now that it was in the previous post. 3. Pingback: Bad Seeds : baseballmusings.com 4. Are you going to post the rankings in your Prognostic Index 5. I look forward to further work on this topic, but is this truly out of sample for your model? I mean, had your model performed horribly out of sample, my guess is that you wouldn’t have published the article. I.e., this effect: http://marginalrevolution.com/marginalrevolution/2005/09/why_most_publis.html. I like the work you’ve done to be sure, but I just don’t think it’s quite fair to compare test results from it vs. those from models that actually existed prior to the test period. □ I really like the idea of incorporating the “intangibles” that aren’t so “intangible” any more. Though hhohw’s point is a common controversy….almost like looking at meta-studies and making claims about overarching p-values when so many studies were left unreported. Type II error is not often considered in collective study analysis. That article basically shows that with a type II error rate of 40%, only 75% of reportedly “true” studies are actually true (as in the alternative accurately reported as being true). But even a more acceptable type-II error rate of 5% still means that just 82.6% of reported studies are actually true. It will be interesting to see how this method does in 2012 versus Pomeroy and “TeamRankings.” ☆ …that was assuming the 800/200 false/true alternative hypothesis ratio. □ That is a fair point, and I do think “past-the-post” significance is wrong and effect sizes are often overstated. I would, however, disagree that the testing I did was not out of sample. When the model “trains” itself on data that is missing a year, and then predicts that year as if it were the week before that Tournament, that is truly out of sample. You might argue that it is unfair to compare to prior systems, but if I had published this last year, I would have looked silly given the 2011 Tournament. I still would have published it, however. Maybe I’m just missing your claim? 6. I think it was HSAC that did an article last season about how 8 and 9 seeds get screwed relative to 10-12 seeds. Based on Pomeroy’s rankings and your results, I think Memphis really got hit hard with that 8 seed. A team that perhaps deserved a 5-7 seed now has to face Michigan St. in the second round. 7. I noticed your bracket had Syracuse in the Elite 8. Does this take in to account their loss of Fab Melo for the tournament? Would the way to adjust for his absence be removing his minutes played from the experience factor and his numbers that contribute to the offensive and defensive ratings? If you do adjust for that loss, assuming you haven’t already, doesn’t if affect significantly Syracuse’s predicted success in the tournament? 8. I love your work, just wondering if you could update your analysis with tonight’s results of the Cal/USF play-in game. Do you predict Temple would beat USF? or Ohio? 9. Based upon these interpretations the ’12 Final Four will consist of Kentucky vs Marquette and Ohio State vs Temple. 10. This model takes into account the player’s tournament experience, but does it take into account a coach’s tournament experience. Coach’s like Tom Izzo and Billy Donovan have continued tournament success despite have different players on a year-in, year-out basis. 11. Where are the results? How does this compare with LMRC (Bayesian)?? Let’s see the results? Did you remove them? 12. Pingback: Monday Medley « No Pun Intended 13. Pingback: March Madness and Employment Litigation - It's All About the Numbers ... and a Little Luck : Michigan Employment Law Advisor 14. Hey, Thanks for your work! I used your exact bracket for my pool which consisted of 40 participants. I finished in 1st place. In the past I was always near the bottom of the pool no matter how much research I did. Your model sounded like as good a way as any to pick teams so I just copied it. Please do this again next year so perhaps I can win again! :) 15. Pingback: The RPI is Not the Real Predictive Indicator | The Harvard College Sports Analysis Collective 16. Pingback: Survival of the Fittest: Predicting the 2013 NCAA Tournament | The Harvard College Sports Analysis Collective This entry was posted in NCAA Basketball and tagged Bracketology. Bookmark the permalink.
{"url":"http://harvardsportsanalysis.wordpress.com/2012/03/14/survival-of-the-fittest-a-new-model-for-ncaa-tournament-prediction/","timestamp":"2014-04-16T22:15:57Z","content_type":null,"content_length":"107059","record_id":"<urn:uuid:1f76e8ba-2a71-4531-acef-1057581b5b09>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Differential forms, metrics, and the reflectionless absorption of electromagnetic waves. (English) Zbl 0941.78001 Summary: Two classes of formulations are prevalent on the perfectly matched layer (PML) concept for the reflectionless absorption of electromagnetic waves. In the first, additional degrees of freedom modify the curl and div operators of Maxwell’s equations. This results in the so-called non-Maxwellian PML. The original Berenger formulation belongs to this class. The non-Maxwellian PML can be systematically derived by an analytic continuation of the coordinate space to a complex variables coordinate space (complex-space) which permits the extension of the PML to general geometries and In the second class of PML formulations, the additional degrees of freedom are entirely incorporated into modified constitutive tensors and the usual Maxwell’s equations are recovered. This results in a Maxwellian PML. Interestingly enough, for all cases where the non-Maxwellian PML was derived, a Maxwellian PML was also later derived. This suggests a duality between the formulations and the possibility of a fundamental reason behind the existence of the Maxwellian PML. In this work, we review the PML concept using the language of differential forms to (i) explain the deeper reason allowing for the ubiquitous presence of the Maxwellian PML; (ii) to provide the general framework which unifies the various PML formulations; and (iii) to show that, in principle, many other classes (hybrid) of PML formulations can be derived in the frequency-domain. This is done by introducing a novel, geometrical interpretation of the PML in terms of a change on the metric of space and exploring the metric independence of Maxwell’s equations unfolded by such a 78A40 Waves and radiation (optics) 78M20 Finite difference methods (optics)
{"url":"http://zbmath.org/?q=an:0941.78001","timestamp":"2014-04-16T07:24:43Z","content_type":null,"content_length":"22080","record_id":"<urn:uuid:c186ea57-9ac3-44a1-8600-cd95f5cc0314>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2010/599 Secure Multiparty Computation with Partial FairnessAmos Beimel, Eran Omri, and Ilan OrlovAbstract: A protocol for computing a functionality is secure if an adversary in this protocol cannot cause more harm than in an ideal computation where parties give their inputs to a trusted party which returns the output of the functionality to all parties. In particular, in the ideal model such computation is fair – all parties get the output. Cleve (STOC 1986) proved that, in general, fairness is not possible without an honest majority. To overcome this impossibility, Gordon and Katz (Eurocrypt 2010) suggested a relaxed definition – 1/p-secure computation – which guarantees partial fairness. For two parties, they construct 1/p-secure protocols for functionalities for which the size of either their domain or their range is polynomial (in the security parameter). Gordon and Katz ask whether their results can be extended to multiparty protocols. We study 1/p-secure protocols in the multiparty setting for general functionalities. Our main result is constructions of 1/p-secure protocols when the number of parties is constant provided that less than 2/3 of the parties are corrupt. Our protocols require that either (1) the functionality is deterministic and the size of the domain is polynomial (in the security parameter), or (2) the functionality can be randomized and the size of the range is polynomial. If the size of the domain is constant and the functionality is deterministic, then our protocol is efficient even when the number of parties is O(log log n) (where n is the security parameter). On the negative side, we show that when the number of parties is super-constant, 1/p-secure protocols are not possible when the size of the domain is polynomial. Category / Keywords: cryptographic protocols / Date: received 23 Nov 2010Contact author: amos beimel at gmail comAvailable format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation Version: 20101125:045605 (All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2010/599","timestamp":"2014-04-16T19:16:40Z","content_type":null,"content_length":"3432","record_id":"<urn:uuid:1c0aa473-566f-4460-961a-75c91a22c0da>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
This page collects tips and tricks to increase the speed of your code using numpy/scipy. For general tips and tricks to improve the performance of your Python programs see http://wiki.python.org/moin/PythonSpeed/PerformanceTips. Python built-ins vs. numpy functions Note that the built-in python min function can be much slower (up to 300-500 times) than using the .min() method of an array. I.e.: use x.min() instead of min(x). The same applies to max. This is also true for the new any and allfunctions for Python >=2.5. Beyond pure Python Sometimes there are tasks for which pure python code can be too slow. Possible solutions can be obtained via: • hand-written C extensions • psyco • pyrex • ctypes • f2py • weave • swig • boost • SIP • CXX For a full discussion with examples on performance gains through interfacing with other languages see this article. Tips and tricks for specific situations. Finding the row and column of the min or max value of an array or matrix A slow, but straightforward, way to find the row and column indices of the minimum value of an array or matrix x: import numpy as np def min_ij(x): i, j = np.where(x == x.min()) return i[0], j[0] This can be made quite a bit faster: def min_ij(x): i, j = divmod(x.argmin(), x.shape[1]) return i, j The fast method is about 4 times faster on a 500 by 500 array. Removing the i-th row and j-th column of a 2d array or matrix The slow way to remove the i-th row and j-th column from a 2d array or matrix: import numpy as np def remove_ij(x, i, j): # Remove the ith row idx = range(x.shape[0]) x = x[idx,:] # Remove the jth column idx = range(x.shape[1]) x = x[:,idx] return x The fast way, because it avoids making copies, to remove the i-th row and j-th column from a 2d array or matrix: def remove_ij(x, i, j): # Row i and column j divide the array into 4 quadrants y = x[:-1,:-1] y[:i,j:] = x[:i,j+1:] y[i:,:j] = x[i+1:,:j] y[i:,j:] = x[i+1:,j+1:] return y For a 500 by 500 array the second method is over 25 times faster. Linear Algebra on Large Arrays Sometimes the performance or memory behaviour of linear algebra on large arrays can suffer because of copying behind the scenes. Consider for example, computing the dot product of two large matrices with a small result: >>> N = 1e6 >>> n = 40 >>> A = np.ones((N,n)) >>> C = np.dot(A.T, A) Although C is only 40 by 40, inspecting the memory usage during the operation of dot will indicate that a copy is being made. The reason is that the dot product uses underlying BLAS operations which depend on the matrices being stored in contiguous C order. The transpose of operation does not effect a copy, but places the transposed array in Fortran order: >>> A.flags C_CONTIGUOUS : True F_CONTIGUOUS : False >>> A.T.flags C_CONTIGUOUS : False F_CONTIGUOUS : True If we give dot two matrices that are both C_CONTIGUOUS, then the performance is better: >>> from time import time >>> AT_F = np.ones((n,N), order='F') >>> AT_C = np.ones((n,N), order='C') >>> t = time();C = np.dot(A.T, A);time() - t >>> t = time();C = np.dot(AT_F, A);time() - t >>> t = time();C = np.dot(AT_C, A);time() - t This is not ideal, however, because it required us to start with two arrays. Is there not some way of computing dot(A.T, A without making an extra copy? There is support for this in the BLAS through the method _SYRK method (see http://www.netlib.org/blas/blasqr.pdf). SciPy exposes this through scipy.linalg.blas (which can be used with C_CONTIGUOUS arrays and F_CONTIGUOUS arrays respectively). Alas, this function is not yet implemented, however, _GEMM is **it is now (scipy 0.13.0)**. Here we use it: >>> import scipy.linalg.blas >>> t = time();C = scipy.linalg.blas.dgemm(alpha=1.0, a=A.T, b=A.T, trans_b=True);time() - t This gives the same performance as dot but with the advantage that we did not need to make an extra copy. Note that we passed A.T here so that the array was in Fortran order because blas is the Fortran BLAS. One must be careful about this: passing the wrong type of array will not realize the performance gains. Here, for example, we pass the C_CONTIGUOUS arrays for which copies must me made. >>> t = time();C = scipy.linalg.blas.dgemm(alpha=1.0, a=A, b=A, trans_a=True);time() - t There is not much documentation about this, but some useful discussions can be found in the mailing list archives. Another related question is about inplace operations (again, avoiding copying). Some functions provide an option for this, and again, some of the BLAS functions can be used for this. Here are some related discussions:
{"url":"http://wiki.scipy.org/PerformanceTips?action=subscribe","timestamp":"2014-04-18T10:36:35Z","content_type":null,"content_length":"41108","record_id":"<urn:uuid:86498939-c9b2-49a7-9fbe-e7fbfe1f34d9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
18 Embedded Systems Embedded Systems frequently Asked Questions by expert members with experience in Embedded Systems. So get preparation for the Embedded Systems job interview 18 Embedded Systems Questions and Answers: Fifo(First In Last Out) is a memory sturcture where datas can be stored and retrived (in the order of its entery only). This is a queue,wheras Memory is a storage device which can hold datas dynamically or at any desired locations and can be retrived in any order. Post Your Answer "Normally you are at liberty to give functions whatever names you like, but ``main'' is special - your program begins executing at the beginning of main. This means that every program must have a main somewhere." Kernighan & Ritchie - The C Programming Language 2ed. p.6 Post Your Answer %program for FIR filters disp('choose the window from the list'); ch=menu('types of rp=input('enter the passband ripple in db'); rs=input('enter the stopband ripple in db'); wsample=input('enter sampling frequency in hertz'); wp=input('enter the passband frequency in hertz'); ws=input('enter the stopband frequency in hertz'); wp=2*wp/wsample; ws=2*ws/wsample; switch ch case 1 case 2 case 3 case 4 case 5 beta=input('enter beta for kaiser window'); case 6 disp('enter proper window number'); disp('select the type of filter from the list'); type=menu('types of switch type case 1 case 2 case 3 b=fir1(N,[wp ws],'bandpass',y); case 4 b=fir1(N,[wp ws],'stop',y); disp('enter type number properly'); subplot(2,1,1); plot(w,magn),grid on;title('magnitude plot'); subplot(2,1,2); plot(w,phase),grid on;title('phase Post Your Answer Before reading the data if you are giving the stop bit then the communication is stopped.so after sending the data you will give the stop bit. Post Your Answer Anti aliasing filter reduces errors due to aliasing. If a signal is sampled at 8 kS/S, the max frequency of the input should be 4 kHz. Otherwise, aliasing errors will result. Typically a 3.4kHz will have an image of 4.6 khz, and one uses a sharp cut off filter with gain of about 1 at 3.4kHz and gain of about 0.01 at 4.6 kHz to effectively guard against aliasing. Thus one does not quite choose max frequency as simply fs/2 where fs is sampling frequency. One has to have a guard band of about 10% of this fmax, and chooses max signal frequency as 0.9*fs/2 Post Your Answer Add New Question
{"url":"http://www.globalguideline.com/interview_questions/Questions.php?sc=Embedded_Systems","timestamp":"2014-04-18T10:44:12Z","content_type":null,"content_length":"31492","record_id":"<urn:uuid:3104b9e2-8685-4270-b6a9-c220a910a152>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Option Gamma The gamma of an option indicates how the delta of an option will change relative to a 1 point move in the underlying asset. In other words, the Gamma shows the option delta's sensitivity to market price changes. Gamma is important because it shows us how fast our position delta will change as the market price of the underlying asset changes. Remember: One of the things the delta of an option tells us is effectively how many underlying contracts we are long/short. So, the Gamma is telling us how fast our "effective" underlying position will change. In other words, Gamma shows how volatile an option is relative to movements in the underlying asset. So, by watching your gamma will let you know how large your delta (position risk) changes. The above graph shows Gamma vs Underlying price for 3 different strike prices. You can see that Gamma increases as the option moves from being in-the-money reaching its peak when the option is at-the-money. Then as the option moves out-of-the-money the Gamma then decreases. Note: The Gamma value is the same for calls as for puts. If you are long a call or a put, the gamma will be a positive number. If you are short a call or a put, the gamma will be a negative number. When you are "long gamma", your position will become "longer" as the price of the underlying asset increases and "shorter" as the underlying price decreases. Conversely, if you sell options, and are therefore "short gamma", your position will become shorter as the underlying price increases and longer as the underlying decreases. This is an important distinction to make between being long or short options - both calls and puts. That is, when you are long an option (long gamma) you want the market to move. As the underlying price increases, you become longer, which reinforces your newly long position. If being "long gamma" means you want movements in the underlying asset, then being "short gamma" means that you do not want the price of the underlying asset to move. A short gamma position will become shorter as the price of the underlying asset increases. As the market rallies, you are effectively selling more and more of the underlying asset as the delta becomes more negative. Long Gamma Trading Take a look at this video from Options Unversity. It provides an overview of the concept of Gamma Trading. Comments (21) April 30th, 2012 at 7:25pm Hi Darryl, The Position Delta = Delta x Contract Size x Number of Option Contracts. In the post by Charlie, he mentioned that the contract size was 10mt (btw, I don't know what mt is, but let's just treat it as some standard unit) with the option delta being -0.2138 on 100 short call options. So, -0.2138 x 10 x 100 = 213. For your put option question - in both cases, whether you're long or short the option the delta approaches 0 as the underlying price increases. Your long/short position just determines whether your delta will approach 0 from a positive number or a negative number. Also, the concept of ITM/OTM doesn't depend on your long/short position either. If a put has a strike price of 90 and the stock is expires at 100 then the put option is OTM. If you are long at the expiration date your position is worthless and you loss is the premium. If you are short the position is still worthless, however, you make a profit - being the premium received for selling the option. Let me know if anything is unclear. April 30th, 2012 at 4:37pm Hi Peter, I'm new to the greeks and wanted to pick up on the last comment. Why if delta is -0.2138, why would you buy 213 lots rather than 21? In addition, 'I.e. for a long put if the underlying price increase from 50 to 60 the delta will go from -0.40 to -0.20 (longer)'. - i undertsand this as the option moves further otm and so delta moves closer to zero. 'For a short put the delta is reversed. So as the underlying price goes from 50 to 60 the short put delta will go from +0.40 to +0.20 (shorter)'. - If i'm short a put and the underlying price increases then doesn't my option move closer to/become more itm, so doesn't my delta increase? Many thanks March 26th, 2012 at 8:58pm Hi Charlie, If the contract size is 10mt then your position delta, which is the amount that you need to hedge is 213 contracts (shares). The value change to your position according to the delta isn't with "every" 1 point move - just with the first. Because, as you've indicated the delta itself will change as the underlying changes, which is given by the gamma. So after the underlying has moved 1 point you will have a new delta and gamma value. Also, gamma is basis a 1 point move unless it is specified as 1% gamma. March 23rd, 2012 at 12:54am Hello Peter, i have new software for options portfolio/prcing, and with a postion i have on the delta is -0.2138 (short a call) and the gamma -0.0158. If I am 100 lots short of the call (contract size = 10mt) then to be delta hedged i would need to buy 21 lots. As i understand correctly, the delta also means that the value of my position would decrease by $213 with every tick upwards. However in this instance the gamma of -0.0158 would mean that my delta goes shorter by -1.58 lots every % point higher NOT every tick higher (otherwsie gamma would be too big). The gamma value of the trade says -$15.8 which i think means the delta will decrease by this value for every % move upwards? Can you help. You agree confusing that delta is basis one tick move and gamma basis 1% move? February 10th, 2012 at 12:53am Great work Peter November 5th, 2011 at 4:06am Hi Rick, Yes, that's correct. Both calls and puts have the same gamma value, which will decrease either side of ATM. November 4th, 2011 at 8:45am Hi I want your feedback If a call, initially otm, and then the stock price approaching the exercise price, the gamma would increase, when the call is in the money, gamma would decrease? If a put, initially OTM, then if a stock price decreases, gamma would increase, and when the put is in the money, gamma would decrease if the stock still going down? Am I saying the right things June 5th, 2011 at 5:55am Hi Peter87, it might help to take a look at the delta graphs on the option delta page. Take a look at the Put Delta vs Underlying Price graph. This represents a long put - so just reverse the numbers for a short put. I.e. for a long put if the underlying price increase from 50 to 60 the delta will go from -0.40 to -0.20 (longer). For a short put the delta is reversed. So as the underlying price goes from 50 to 60 the short put delta will go from +0.40 to +0.20 (shorter). June 4th, 2011 at 8:06am Thanks for your detailed and fast answer! I got the part concerning calls and long puts but not the part with short puts: A short put has a concave (and negative) pay off profile. So, the higher the underlying value gets, the more approaches the pay off line the x-axis which implies that the slope (<=> delta) becomes bigger (= less negative = approaches 0). That's why I don't get it that the position in a short put becomes shorter when the underlying price increases. In my opinion the position becomes LESS shorter (it becomes longer). But I guess there must be some reasoning errors in my argumentation!? :-) June 4th, 2011 at 6:36am The Delta depends on the option; call options have a position Delta and put options have a negative Delta. So, if you "sell" an option the call with have a negative Delta and the put a positive Now, given that Gamma is positive for both calls and puts, if you sell an option your Gamma with therefore be negative. When you're short an option and hence short Gamma both a short call and short put will "lose" Delta as the underlying price rises - this is also refered to as being "shorter". For a call option, as the underlying price rises the option itself becomes more in-the-money and hence the Delta will move from 0 to 1. But if you are "short" the call the opposite happens meaning that the option Delta of your position will move from 0 to -1 (getting shorter). For a put option, as the underlying price rises the option itself becomse more out-of-the-money and hence the Delta will move from -1 to 0 (getting longer). But if you are "short" the put the opposite happens meaing that the option Delta of your position will move from 1 to 0 (getting shorter). Let me know if this is not clear. June 3rd, 2011 at 2:47pm I'm a beginner in options but understand almost the whole article. What I just don't understand is this: "Conversely, if you sell options, and are therefore "short gamma", your position will become shorter as the underlying price increases [...]" Delta (as first derivative) is negative and grows with increasing Underlying price, so it becomes LESS negative which means "less short" <=> "more long" !?!?!?!? I would appreciate your feedback! November 3rd, 2010 at 4:54am Are you talking about the video on this site above? That's where I say "take a look at this video". Then I provide a link to the OU site. The video above on "this" site does indeed do more than "describe" what gamma is and elaborates on gamma trading. Please let me know if I have missed something or if you think the video above is incorrect. November 2nd, 2010 at 11:20pm i clicked on the "options university" link under the long gamma trading heading, it says you can see a video that gives an overview of gamma trading. instead of a video that gives an over view of gamma, it is a 100 percent sales video for options university. NO GAMMA EVEN MENTIONED. rip off. July 28th, 2010 at 9:06pm you are right. delta of put is decreasing function from -1 to 0 as the stock price increases. I was thinking in terms of absolute value of delta... July 28th, 2010 at 6:10pm Hi Sam, it's a good question. You have to remember that a put's delta is negative so with a positive gamma and an increasing stock price the delta of a put becomes less negative - or "longer". The more the stock rallies the closer the put's delta approaches zero as more gamma is added to it. Call options, with a positive delta and positive gamma will also "get longer" as the stock price rises. The higher the stock moves away from the strike price the closer the call option's delta approaches 1. July 28th, 2010 at 4:14pm May be I am missing something. Mathematically, gamma is always positive for both call and put. But as the stock price increases, shouldn't the put have negative gamma as the graph of put delta vs stock price is decreasing? Please someone clarify Seth Baker February 9th, 2010 at 3:04pm This is interesting stuff. I use google to help me find stuff about options. One cool site has a different approach - they claim to not have an opinion on the market. Rather, they work with you on which type of trade to make, based on the Greeks, etc. I may spell this wrong, but I think it's http://www.timeforoptions.com October 8th, 2009 at 7:05pm Hi Anthony, I agree that the video doesn't get off to a good start...I link directly to the video on the OU site. They've changed the video to what they've had previously, which provided a longer At the start of the video Ron has already begun discussing "short gamma", where if you are short gamma and the market is going down your position gets "longer" i.e. your delta position grows. That's what he means when he says "buying deltas" on the way down. Do you think my description (not the video) above differs from what you've read elsewhere? If so, let me know where the contradiction is and if I'm wrong I'll correct the content accordingly. Thanks for the feedback! October 7th, 2009 at 10:39pm I am learning to trade options by the greeks (delta, gamma, theta, vega) but have traded options for many years. I have looked up several definitions and am doing an online course. This definition here and the subsequent video are by far the most confusing I have ever come across. The video begins with "In a sense on the way down, our short gamma position is buying deltas for us...". How in the heck can someone trying to understand Gamma as a definition begin to understand this. September 20th, 2009 at 8:09pm Thanks for the suggestion...much appreciated. I'll write up something on delta neutral trading and a bit more on gamma scalping. September 20th, 2009 at 10:38am I have basic knowledge of options buying and selling calls and puts. I would appreciate it if more detailed explanation is added in for gamma and delta trading. I am still confused as to how gamma trading works. Add a Comment
{"url":"http://www.optiontradingtips.com/greeks/gamma.html","timestamp":"2014-04-19T22:06:19Z","content_type":null,"content_length":"21922","record_id":"<urn:uuid:53b3332d-216a-4c10-9b68-3fea386b37b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Equidistant points in negatively curved metric spaces up vote 0 down vote favorite Suppose that $X$ is a simply connected metric space, with a non-positively curved metric (for example, Euclidean or hyperbolic space). Let $A,B,C$ be disjoint, convex sets in $X$, and suppose that the shortest path from $A$ to $B$ passes through $C$. Under these hypotheses, it should follow that there does not exist a point in $X$ that is equidistant to $A$, $B$, and $C$. In the special case where $A,B,C$ are points, this statement amounts to checking inequalities between the sides of a triangle. That is, for any $D \in X$, one of the triangles $ACD$ or $BCD$ -- say, $ACD$ -- will have an obtuse angle at $C$. Then the side $AD$ is longer than $CD$, hence $D$ is not the equidistant point. But I'm stumped about how to show this for more general convex sets. My hunch is that geometers should have encountered this question before. Does anyone have a reference, an argument, or (gasp) a counterexample? dg.differential-geometry mg.metric-geometry add comment 1 Answer active oldest votes Hello Dave, up vote 4 down vote accepted Three disks of equal radius in Euclidean plane with centers on a circle of sufficiently large radius seems to be an easy counter-example. Right you are! Apparently, it was a brain fart on my part to believe that this is true. – Dave Futer Sep 26 '10 at 20:17 add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry mg.metric-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/40046/equidistant-points-in-negatively-curved-metric-spaces?answertab=oldest","timestamp":"2014-04-18T05:47:45Z","content_type":null,"content_length":"52043","record_id":"<urn:uuid:1d2ec1ee-b1f0-451f-93b3-cc1c699172f9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: exponential regression Replies: 1 Last Post: Feb 3, 2013 8:19 PM Re: exponential regression Posted: Feb 3, 2013 8:19 PM > I entered Clear[a, b, x]; FindFit[{{1, 4.5}, {3, 14.0}, {5, 28.6}, {7, 54.1}, >{8, 78.6}}, a*b^x, {a, b}, x] as a text of exponential regression. The input >returned {a->4.66625, b->1.42272} > Fine. However, a student of mine entered the same data in a TI-84 calculator >and it returned 3.947506 (x^1.334589). One problem is that you and your student fitted different curves. You fitted the data to a*b^x and your student fitted the data to a*x^b. This fits the data to the same curve as your student: Clear[a, b, x]; FindFit[{{1, 4.5}, {3, 14.0}, {5, 28.6}, {7, 54.1}, {8, 78.6}}, a x^b, {a, b}, x] The result will be different: Your student's calculator probably did a modified linear regression; Mathematica did a more sophisticated fit. If you plot your student's 3.9475*x^1.3346 and Mathematica's 1.17537*x^2.00353 along with the origional data, you'll see that the Mathematica version gives a much better fit. The Mathematica answer for your first try, 4.66625*1.42272^x, also gives a better fit.
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2432717&messageID=8247401","timestamp":"2014-04-17T22:04:40Z","content_type":null,"content_length":"15031","record_id":"<urn:uuid:706c72e7-9281-40af-9e2d-971c414444c5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
continuous metric spaces May 7th 2011, 07:25 AM #1 continuous metric spaces Show that 1. the projection map p : R^2 → R given by p(x; y) := x is continuous. My proofs: We want to show that $\forall$$\epsilon$ > 0, $\exists$$\delta$>0 such that: d(x,y)< $\delta$$\Rightarrow$ d(f(x),f(y)) < $\epsilon$ (x1,y1) (x2,y2) are points in R^2 1) d(x1,x2) < $\epsilon$ |x1-x2| < $\epsilon$ We can define $\delta$ = the square root of ( $\epsilon$^2 - (y2-y1)^2) Since there exists a delta for any epsilon we choose, implies that p is continuous. Is this correct... Looks quite correct to me. May 7th 2011, 07:44 AM #2 Junior Member Oct 2010
{"url":"http://mathhelpforum.com/differential-geometry/179775-continuous-metric-spaces.html","timestamp":"2014-04-19T14:41:10Z","content_type":null,"content_length":"33467","record_id":"<urn:uuid:57d9dae7-5cc9-46a1-b14a-0b4eef11a9c3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
NSGA ii algorithm implementation January 9th, 2013, 01:12 PM #1 Junior Member Join Date Jan 2013 Thanked 0 Times in 0 Posts Hi Guys Very sorry if I am posting this in the wrong forum! At the minute I am working on implementing the nsga ii algorithm like so http://www.cs.nott.ac.uk/~mvh/teachi...tedSorting.pdf At the minute I am using the same figures in the example provided and am looking to generate the first front, otherwise known as the undominated front. I have started to program this and I have got the part working where either 1 dominates the other but the problem comes in when I try program the code that works when neither solution dominates the other, this is severely hindering progress on my project and any help would greatly obliged. Please tell me this makes sense to some1 out there Code Provided: package dominationcheck; import java.util.ArrayList; * @author 09523642 public class DominationCheck { * @param args the command line arguments public static void main(String[] args) Solution s1 = new Solution(13,35); Solution s2 = new Solution(20,18); Solution s3 = new Solution(18,5); Solution s4 = new Solution(8,10); Solution s5 = new Solution(10,25); Solution s6 = new Solution(5,30); ArrayList<Solution> Undominated = new ArrayList<Solution>(); ArrayList<Solution> Unsorted= new ArrayList<Solution>(); int z; System.out.println("First List:"); for(int i =0;i<=Unsorted.size()-1;i++) int endIdx = 0; for(int x = 0; x <= Undominated.size()-1; x++) for(int i=0; i <= Unsorted.size()-1; i++) Solution unsorted,undominated; unsorted = Unsorted.get(i); undominated = Undominated.get(x); int p = DominationCheck(unsorted,undominated); switch (p) { case 1: case 2: case 3: if(x==Undominated.size()-1) default: System.out.println("Error"); System.out.println("Second List:"); for(int a =0;a<=Undominated.size()-1;a++) public static int DominationCheck(Solution unsorted, Solution undominated) int ans=0; if(unsorted.f1<undominated.f1 && unsorted.f2<undominated.f2) ans = 1; else if(unsorted.f1>undominated.f1 && unsorted.f2>undominated.f2) //s2 dominates s1 ans = 2; else //neither dominate ans = 3; return ans; the problem comes in when I try program the code that works when neither solution dominates the other Can you explain the problem? If you don't understand my answer, don't ignore it, ask a question. Sorry if I didn't explain this well If you can see when neither dominates the other the solution taken from the unsorted list is added if the element it's tested against is the last in the undominated list, other wise if there are more elements to be tested against we move on to the next to be tested against . I hope I have this cleared up, the link above shows what I'm aiming for How can the posted code be tested? How would the problem be shown when the program is executed? What does the current program print out when executed? What should it print out? How can anyone tell if the algorithm is correctly implemented? If you don't understand my answer, don't ignore it, ask a question. It's a main class it should run as it is, all I'm missing from above is the solution class which all contains 2 int attributes, f1 and f2 The link I have left in my 1st should describe what I'm trying to do A the min my program prints the f1 value of the unsorted list then when it goes to find the dominant front it hits a loop it cannot get out of. If it was correct it should print out the f1 values of the undominated solutions, so it should print out 18, 8, 5 If it's implemented properly I should get that output The posted code does not compile without errors. What does the current program print? Post its output. What should it print? Post what it should output. If you don't understand my answer, don't ignore it, ask a question. January 9th, 2013, 01:34 PM #2 Super Moderator Join Date May 2010 Eastern Florida Thanked 1,958 Times in 1,932 Posts January 9th, 2013, 05:09 PM #3 Junior Member Join Date Jan 2013 Thanked 0 Times in 0 Posts January 9th, 2013, 05:13 PM #4 Super Moderator Join Date May 2010 Eastern Florida Thanked 1,958 Times in 1,932 Posts January 9th, 2013, 05:30 PM #5 Junior Member Join Date Jan 2013 Thanked 0 Times in 0 Posts January 9th, 2013, 06:15 PM #6 Super Moderator Join Date May 2010 Eastern Florida Thanked 1,958 Times in 1,932 Posts
{"url":"http://www.javaprogrammingforums.com/algorithms-recursion/21732-nsga-ii-algorithm-implementation.html","timestamp":"2014-04-19T07:30:41Z","content_type":null,"content_length":"90564","record_id":"<urn:uuid:e115f5f0-ec0a-463e-a076-8eca4190fb95>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
How Do We Determine When To Take The Current Source ... | Chegg.com How do we determine when to take the current source in writing a nodal equation is positive or negative. I have looked over the solutions in chp 4 question 24 and 25. The direction of the current source in both are the same however the current source in prob 24 is taken as positive and in prob 25 is taken as negative. Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/determine-take-current-source-writing-nodal-equation-positive-negative-looked-solutions-ch-q121941","timestamp":"2014-04-17T09:19:32Z","content_type":null,"content_length":"20853","record_id":"<urn:uuid:156bc7a6-8704-4097-9209-b6170535e90f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-dev] one more scipy.optimize.line_search question dmitrey openopt@ukr.... Mon Aug 6 15:44:33 CDT 2007 Alan G Isaac wrote: > On Mon, 06 Aug 2007, dmitrey apparently wrote: >> I wonder what does the parameter amax=50 mean (in >> optimize.line_search func)? Seems like this parameter is >> never used in the func > Is there something wrong with the minpack2 documentation? > c stpmax is a double precision variable. > c On entry stpmax is a nonnegative upper bound for the step. > c On exit stpmax is unchanged. the line_search func from scipy.optimize has no any relation to the fortran routine you have mentioned. Maybe you mean line_search func from scipy.optimize.linesearch, but as I mentioned, scipy has 2 funcs line_search, one python-written (from optimize.py), other from /optimize/linesearch.py. So, as I have mentioned, amax is unused in scipy.optimize.linesearch >> moreover, amax is defined in scipy.optimize module as >> a func > I think you are confusing this with the numpy array method? >> Also, in the middle of the func it has line >> (optimize.py) >> maxiter = 10 >> this one seems to be very small to me. >> don't you think it's better to handle the param in input args? > That is just for the bracketing phase. > Are any troubles resulting from this value? I have troubles all over the func. Seems like sometimes Matthieu's func that declines same goal (finding x that satisfies strong Wolfe conditions) works much more better (in 1st iteration, but makes my CPU hanging on in 2nd iter), but in other cases scipy.optimize provides at least some decrease, while Matthieu's - not (as i mentioned above, also, sometimes Matthieu's func return same x0 after 2nd iter and makes my alg stop very far from x_optim). 1) this test, Matthieu: itn 0: Fk= 8596.39550577 maxResidual= 804.031293334 (between these lines I call Matthieu's optimizer) objfun: 227.538805239 max residual: 2.17603712827e-12 (because now I use test where only linear inequalities are present) (CPU hanging on) 2) Same test, scipy.optimize: itn 0: Fk= 8596.39550577 maxResidual= 804.031293334 itn 100 : Fk= 8141.04226717 maxResidual= 784.640128722 itn 200 : Fk= 7708.62739973 maxResidual= 765.716680829 as you see, objFun decrease, as well as max constraint decrease, is very If I provide gradient of my func numerically obtaining, nothing changes except of some calculation speed decrease. I tried to modify sigma (Matthieu notation) = c2 (scipy notation) = 0.1...0.9, almost nothing changes (Matthieu's default val = 0.4, scipy - 0.9, as it is in afaik c1 is default 0.0001 in both mentioned. So now I'm trying to learn where's the problem. Regards, D. >> Also, don't you think that having 2 line_search funcs is ambiguous? >> (I mean one in scipy.optimize, python-written, and one in >> scipy.optimize.linesearch, binding to minpack2) > It would be nice to have some history on this. > I expect we'll have to wait until Travis has time to look > this discussion over, which may not be soon. > Cheers, > Alan Isaac > _______________________________________________ > Scipy-dev mailing list > Scipy-dev@scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev More information about the Scipy-dev mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2007-August/007618.html","timestamp":"2014-04-16T04:53:27Z","content_type":null,"content_length":"6714","record_id":"<urn:uuid:bea54d44-6e26-4746-9079-9a11a95849ca>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
East Hampstead Math Tutor ...A's here also) Trigonometry is simply the study of triangles and the relationships between sides and angles. It is frequently taught as a part of Precalculus. Trig functions must be well understood before the student moves on to Calculus. 13 Subjects: including calculus, SAT math, trigonometry, ACT Math ...Currently teaching math in the Nashua School District. Certified New Hampshire Math teacher. Currently teaching pre-algebra in the Nashua School District. 8 Subjects: including calculus, ACT Math, algebra 1, algebra 2 ...I have performed many teaching sessions including those with my own staff, company clients, and the general public as well. I also have successfully tutored my own children in math and science during middle school and high school. I have always enjoyed teaching others and am easy going and patient with my students. 13 Subjects: including trigonometry, statistics, linear algebra, logic My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. 36 Subjects: including logic, ACT Math, reading, probability ...I have been teaching physics as an adjunct faculty at several universities for the last few years and very much look forward to the opportunity of offering personalized support to those seeking it, so please don't hesitate to contact me! My schedule is extremely flexible and am willing to meet y... 9 Subjects: including algebra 1, algebra 2, calculus, geometry
{"url":"http://www.purplemath.com/east_hampstead_math_tutors.php","timestamp":"2014-04-17T07:52:03Z","content_type":null,"content_length":"23717","record_id":"<urn:uuid:bb9024e9-7921-4b2e-b319-20a575365d21>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
slice-ribbon for links (surely it's wrong) up vote 10 down vote favorite The slice-ribbon conjecture asserts that all slice knots are ribbon. This assumes the context: 1) A `knot' is a smooth embedding $S^1 \to S^3$. We're thinking of the 3-sphere as the boundary of the 4-ball $S^3 = \partial D^4$. 2) A knot being slice means that it's the boundary of a 2-disc smoothly embedded in $D^4$. 3) A slice disc being ribbon is a more fussy definition -- a slice disc is in ribbon position if the distance function $d(p) = |p|^2$ is Morse on the slice disc and having no local maxima. A slice knot is a ribbon knot if one of its slice discs has a ribbon position. My question is this. All the above definitions have natural generalizations to links in $S^3$. You can talk about a link being slice if it's the boundary of disjointly embedded discs in $D^4$. Similarly, the above ribbon definition makes sense for slice links. Are there simple examples of $n$-component links with $n \geq 2$ that are slice but not ribbon? Presumably this question has been investigated in the literature, but I haven't come across it. Standard references like Kawauchi don't mention this problem (as far as I can tell). knot-theory 2-knots gt.geometric-topology 2 Can't d(saddle) > d(min) be attained trivially by extending fingers of the disk toward the center of the ball? If so, then the essence of ribbonness of a slice knot is just eliminating the local maxima on the disk. – Greg Kuperberg Jan 14 '10 at 1:31 Ah, right. I'll clean up that statement. – Ryan Budney Jan 14 '10 at 1:37 add comment 2 Answers active oldest votes Ryan, I think this is an open problem. The best related result I know is a theorem of Casson and Gordon [A loop theorem for duality spaces and fibred ribbon knots. Invent. Math. 74 (1983)] saying that for a fibred knot that bounds a homotopically ribbon disk in the 4-ball, the slice complement is also fibred. More precisely, they are assuming that the knot K bounds a disk R in the 4-ball such that the inclusion up vote 9 down $S^3 \smallsetminus K \hookrightarrow D^4 \smallsetminus R$ vote accepted induces an epimorphism on fundamental groups. If one glues R to a fibre of the fibration $S^3 \smallsetminus K \to S^1$ to obtain a closed surface F, then the statement is that the monodromy extends from F to a solid handlebody which is a fibre of a fibration $D^4 \smallsetminus R \to S^1$ extending the given one on the boundary. Thanks Peter. This is surprising to me. – Ryan Budney Mar 22 '10 at 9:16 add comment As far as I know, extension of slice and ribbon to links is not unique. There are "strong slice", "weak slice", "strong ribbon" and "weak ribbon" for links. up vote 1 down "CHARACTERIZATION OF SLICES AND RIBBONS" (by H.FOX) mentioned these concepts. I'm referring to the specific generalization above, but any results positive or negative on any generalization would be nice, I suppose. It doesn't appear that he has a result of this form though. – Ryan Budney Jan 19 '10 at 21:13 add comment Not the answer you're looking for? Browse other questions tagged knot-theory 2-knots gt.geometric-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/11713/slice-ribbon-for-links-surely-its-wrong?sort=oldest","timestamp":"2014-04-20T03:45:06Z","content_type":null,"content_length":"58928","record_id":"<urn:uuid:c7505d43-fb92-4f07-9d15-2828a8bd1fee>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
The Science of Sound 3rd Edition Chapter 9 Solutions | Chegg.com The ratio of the frequencies in the semitone of equal temperament is given as follows: The ratios in major and minor third can be calculated by direct multiplication. For a major third: A major third has four semitones. Then the ratio of frequency of major third in equal temperament can be obtained by multiplying the ratio of the frequency in the semitone of equal temperament 4 times as,
{"url":"http://www.chegg.com/homework-help/the-science-of-sound-3rd-edition-chapter-9-solutions-9780805385656","timestamp":"2014-04-17T20:43:29Z","content_type":null,"content_length":"32140","record_id":"<urn:uuid:8a64c90d-62a5-484f-82f8-f360eb36818b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
SV: RE: Re: st: Linear Trend Tests of ORs [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] SV: RE: Re: st: Linear Trend Tests of ORs From "Kim Lyngby Mikkelsen (KLM)" <klm@ami.dk> To <statalist@hsphsun2.harvard.edu> Subject SV: RE: Re: st: Linear Trend Tests of ORs Date Tue, 23 May 2006 12:32:21 +0200 Young Hee Rho wrote: I have encountered many "trend tests" of linearity concerning odds ratios (OR) of a categorical variable. For example, I am modeling a logistic model Y=b1x1 + b2x2 + b3x3 +b4. x2 is a 5-level categorical variable, for example the level of drinking (while Y is the presence/absence of hyperuricemia). When the results are displayed, the ORs of the 5 levels are shown and the linear trend is shown as a single p value. The individual ORs may not have significance, however the overall trend does. To do a test for linear trend, you may use the log likelihood ration test! First you run the logistic regression with the categorical variable expanded by 'xi' (getting 4 estimates relative to the reference category of b2) and store the log likelihood in 'A': Model 1 xi:logistic y b1 i.b2 b3 b4 estimate store A then you repeat the regression without the 'xi' expansion of the categorical variable (Now you only one estimate of b2, which is the linear effect of b2). Model 2 xi:logistic y b1 b2 b3 b4 (Note: the 'i.' in front of b2 is removed). You then simply need to se if the reduced model (model 2) is as good as your previous model (Model 1). You do that using the likelihood ration test: lrtest A To conclude that you have a linear trend the p-value of the lrtest needs to be insignificant (Model 2 is not significantly worse than Model 1) AND the estimate for b2 (the linear effect per category in model 2) must be significant! Kim Lyngby Mikkelsen Cand.med. Ph.D. -----Oprindelig meddelelse----- Fra: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] På vegne af Rho YH Sendt: 23. maj 2006 05:23 Til: statalist@hsphsun2.harvard.edu Emne: RE: RE: Re: st: Linear Trend Tests of ORs I have just found out that the tabodds command may meet what I wanted - linear trend of ORs, however making multivariate adjustments is not easy (I tried and it gave no results after adjusting for >2 or 3 variables.) Is there any "immediate command" by just inputing the OR (and CI, if needed) and the independent variable category and produces a p value? > ---- Original Message ---- > From : Rho YH [mania1@korea.ac.kr] > To : statalist@hsphsun2.harvard.edu > Date : 2006년 5월 23일(화) 09:43:23 > Subject : RE: Re: st: Linear Trend Tests of ORs >Hmm.. It looks like the aformentioned Cochrane-Armitage Test, however I'll check it out. >> ---- Original Message ---- >> From : Suzy [scott_788@wowway.com] >> To : statalist@hsphsun2.harvard.edu >> Date : 2006년 5월 22일(월) 21:24:22 >> Subject : Re: st: Linear Trend Tests of ORs >>Perhaps Szklo and Nieto's book can help: Epidemiology. Beyond the >>Basics, discusses test for trend (dose reponse) in Appendix B (pp 459-462). >>Formula is from Mantel: >>Mantel N. Chi square tests with one degree of freedom: etensions of the >>Manetel-Haenszel procedure. J Am Stat Assoc. 1963;58: 690-700. >>Hope this helps. >>Young Hee Rho wrote: >>>I have encountered many "trend tests" of linearity concerning odds ratios (OR) of a >>>categorical variable. >>>For example, I am modeling a logistic model Y=b1x1 + b2x2 + b3x3 +b4. x2 is a 5-level >>>categorical variable, for example the level of drinking (while Y is the presence/absence of >>>hyperuricemia). When the results are displayed, the ORs of the 5 levels are shown and >>>the linear trend is shown as a single p value. The individual ORs may not have significance, >>>however the overall trend does. It is said that it was tested through regressing the median of >>>the levels on the ORs. Otherwise in other cases, there are many trend tests of linearity >>>expresed in many papers, however, the actual method is not explained in detail. (It does not >>>apear to come from polynomial contrasts of ANOVA nor from categorical trend tests >>>(Cochrane-Armitage) since the arformentioned test is from values coming from >>>one categorical variable having several estimates. How is this done and how much methods >>>exsist on this topic? Are there any useful references? >>>** For those who got twice this article, I sent this article again since it did not seem to register on >>>Statalist. Many apologies if there was a duplicate delivery. >>* For searches and help try: >>* http://www.stata.com/support/faqs/res/findit.html >>* http://www.stata.com/support/statalist/faq >>* http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2006-05/msg00813.html","timestamp":"2014-04-16T13:57:50Z","content_type":null,"content_length":"10618","record_id":"<urn:uuid:f252fe57-bad9-4110-ba58-d06e75830702>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: The Center for Control, Dynamical Systems, and Computation University of California at Santa Barbara Spring 2009 Seminar Series Linear Iterative Strategies for Information Dissemination and Processing in Distributed Systems Shreyas Sundaram University of Illinois at Urbana - Champaign Monday, June 1, 2009 10:00 - 11:00 AM HFH 4164 Abstract: A core requirement in distributed systems and networks is the ability to disseminate information from some or all of the nodes in the network to the other nodes. In this talk, we describe a linear iterative strat- egy for information dissemination, where each node repeatedly updates its value to be a weighted linear com- bination of its previous value and those of its neighbors. We show that this strategy can be compactly modeled as a linear dynamical system, and use control-theoretic tools (such as observability theory, structured system theory, and linear system theory) to characterize its capabilities. First, we show that in connected networks with time-invariant topologies, the linear iterative strategy allows every node to obtain the values of all other nodes after a finite number of iterations (or time-steps). The number of time-steps required is determined by the net- work topology and, in fact, may be minimal over all possible strategies for information dissemination. Next, we demonstrate the ability of the linear iterative strategy to handle a set of malicious (and possibly coordinated) nodes that update their values arbitrarily at each time-step. It has been established in the literature that when
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/1008/4181950.html","timestamp":"2014-04-18T22:35:56Z","content_type":null,"content_length":"8722","record_id":"<urn:uuid:b5367373-dbfb-4ed8-b751-2a25ff4dc9cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
double precision math builtins on OSX 05-01-2011, 02:38 PM double precision math builtins on OSX I am trying to use math builtin functions (exp, cos) with double precision on OSX (macbook pro, nvidia GT 330M). My sample code looks like #pragma OPENCL EXTENSION cl_khr_fp64 : enable __kernel void g( __global double* input, __global double* output, const unsigned int count) int i = get_global_id(0); if(i < count) { output[i] = exp(.1*input[i]); i get a failure when trying to build the opencl executable with a message like: error: more than one matching function found in __builtin_overload output[i] = exp((.1*input[i])); i have tried exp, native_exp, and have also tried various typecastings as was suggested on some other posts. it seems like i am not giving the compiler enough info to pick out the double precision routine? i am able to make this all work using float, but need to move to double. any ideas? thanks. 05-02-2011, 09:34 AM Re: double precision math builtins on OSX Can a GT 330M actually do double precision? I didn't think it could. 05-02-2011, 09:51 AM Re: double precision math builtins on OSX ah...you may well be right...http://en.wikipedia.org/wiki/CUDA seems to support this. appreciate the quick response, i don't need to beat my head against this if it is never going to work! 05-02-2011, 03:30 PM Re: double precision math builtins on OSX You should always check which extensions supported using clGetDeviceInfo(CL_DEVICE_EXTENSIONS). In this case you are looking for "cl_khr_fp64". 05-02-2011, 03:33 PM Re: double precision math builtins on OSX thanks. checked this out and indeed this device doesn't support double. thanks for the pointers. moving on towards better hardware...
{"url":"http://www.khronos.org/message_boards/printthread.php?t=7411&pp=10&page=1","timestamp":"2014-04-16T22:10:21Z","content_type":null,"content_length":"6346","record_id":"<urn:uuid:15200c2f-f464-4154-bf61-de88bd12af4b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
San Anselmo Geometry Tutor ...I received an A in both classes. It was a very interesting subject and I'll be more than happy to help students with any difficulty they have. I took many pharmacy-related courses and have solid and extensive knowledge in pharmacology. 22 Subjects: including geometry, calculus, statistics, biology ...These students were mainstreamed in general education classes at an elementary school in Davis, CA. After earning my credential, I worked as a literacy instructor with students K-8 in Washington DC. After doing that for a year with an education non-profit, they promoted me to coordinator and then again to program director. 16 Subjects: including geometry, English, algebra 2, special needs ...I also participated in several undergraduate and graduate student mentoring programs at UC Berkeley, which included group seminar presentations as well as one-on-one tutoring sessions of students from science and engineering majors, including minority students. I actually co-founded the Bioengin... 24 Subjects: including geometry, chemistry, physics, algebra 1 ...I also teach time management and organization skills so that students can use their time more efficiently. Teaching these study skills to my students has helped them dramatically increase their grades. Feel free to check out my feedback for positive testimonials. 59 Subjects: including geometry, English, chemistry, physics ...I provide tools that work from techniques for multiple choice questions, to methods for translating equations to graphs and vice-versa, to creative ways to tackle word problems. Besides studying World History on the side at university and then doing my own study and reading ever since, I have ex... 18 Subjects: including geometry, calculus, statistics, GRE
{"url":"http://www.purplemath.com/san_anselmo_ca_geometry_tutors.php","timestamp":"2014-04-19T09:54:08Z","content_type":null,"content_length":"23903","record_id":"<urn:uuid:4f8a79a4-b1ae-4606-860e-e14bfe16e14e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
Double Digit Bias in Lotto A double digit bias is a lotto trend to look for in the Odd/Even Bias Tracker Chart Advantage Plus . A double digit bias is when there are 2 lotto numbers of equal probability, but one is odd and the other is even and the past ten games produced an E=+10, or a different double digit difference, you would choose the odd number because the bias would favor the off number being drawn. Screenshot of Advantage Plus, Odd Even Bias Tracker Double digit bias example is highlighted.
{"url":"https://www.smartluck.com/lotteryterms/double-digit-bias.htm","timestamp":"2014-04-20T05:47:32Z","content_type":null,"content_length":"4920","record_id":"<urn:uuid:f5b3a5c1-f878-47ee-8c8a-85a22a2778d0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Purpose: Find the relationship between the position (x) of an object as it rolls down an incline and the time taken (t) to reach each position. Picture represents equal position intervals. If so, what t1,x1 would happen to the time? Procedure: 1) Set up the ramp. 2) Control the ramp angle and the What’s required: mass of the ball. 3) Perform 3 trials for each of the 5 1) Data Table 2) Original Graph positions along the ramp. 3) Modified graph. 4) Derivation of Equation. 5) Answer Questions #1-3 4) Plot Position vs. time and then modify. (Plot time on x axis) 5) Derive an equation for the One could speak of the average velocity of the object in the graph, but since the object started very slowly and steadily increased its speed, the term average velocity has little meaning. t t What happens as you shrink the time interval t over which you calculate the average velocity? x x x x x x t t t t As one shrinks the interval, t to zero, the secant becomes a tangent; the slope of the tangent is the average velocity at this instant, or simply the instantaneous velocity at that clock reading. t t Instantaneous velocity (v1, v2, vf, vi, etc.) the velocity at a given point (instant) in time. It is equal to the slope of the tangent to the x vs. t graph. x Δv t t Slope = = acceleration The slope of a v vs. t graph is the acceleration. Acceleration is a vector quantity that is defined as the rate at which an object changes its velocity. An object is accelerating if it is changing its velocity (magnitude or Acceleration values are expressed in units of velocity/time. Typical acceleration units include the Changing speed, constant direction Acceleration exists Constant speed, changing direction Acceleration exists Changing speed, constant direction Acceleration exists Direction of Motion Action/Sign of Sign of Velocity Acceleration New Graphs Written Description Motion Map (x vs t) (v vs t) (a vs t) Starting Dir (v) (a) t t2 • Position α time2 • y = mx2 + b • Final Equation: x (m) = 2.5 m/s2 x time2 (s2) • How does the position vs time graph differ from the previous unit’s lab? There is no longer a direct relationship between position and time. • Explain how the position vs. time graph shows the motion exhibited by the ball? The slope is no longer constant, therefore the velocity is changing. CONSTANT ACCELERATION Galileo (1564 – 1642) displacement 3 1 second = 1 displacement 2 second = 1 + 3 displacement = 2 2 3 second = 1 + 3 + 5 displacement = 32 4 second = 1 + 3 + 5 + 7 displacement = 42 5 second = 1 + 3 + 5 + 7 + 9 displacement = 52 Therefore, displacement a time2 x Time Group Ave. (m/s) (m/s) 1 2.5 1.5 5.63 2.5 13.6 3.0 22.5 0 3.6 t (s) Read Directions: 1. Draw tangent lines to the curve below at all but the last point. 2. Find the slope of each tangent line. Then slope of the tangent line is the instantaneous velocity at that point in time Record the value of the velocity in the table above. 3. Average the slope values within your lab group for each point. Use Graphical analysis to obtain a velocity vs. time graph using your groups average values. Draw a picture of the graph on another sheet of paper. Label axis w/ units and record stats. Compare Graphs Same graph t • What is the difference between a X vs t2 graph & the V vs t graph. • Derive the equation for both. t2 t 4. Write the mathematical model for this 5. The equation for the curve is x=2.5m/s^2(t^2). graph underneath your V vs t graph. How does the slope of your velocity vs. time graph compare with the slope of the curve above. Y = mX + b Y = mX2 + b vf = 5 m/s2 (time) + vi Final Equation: X = 2.5 m/s2 (time2) Since slope = ave. accel. So, for X vs t2 graph slope = ½ a vf = a Δ t + vi or vf = vi + a Δ t vf - vi X = ½ at2 a= Motion Graphs x v a t t t Motion Graphs x v a t t t Starting Point Direction Velocity Acceleration 0 + forward + V (speeding up) +constant Acceleration Example #1 V (m/s) a (m/s/s) 0 +5 +5 +5 +10 +5 15 +5 Δv from +5 to +10 m/s requires a +5 m/s/s acceleration! Motion Graphs x v a t t t Starting Point Direction Velocity Acceleration 0 + forward + V (slowing down) -constant Acceleration Example #2 V (m/s) a (m/s/s) + 10 -5 + 5 -5 0 -5 Δv from +10 to + 5 m/s requires a - 5 m/s/s acceleration! Motion Graphs x v a t t t Starting Point Direction Velocity Acceleration above backward -V (speeding up) -constant Acceleration Example #3 V (m/s) a (m/s/s) 0 -5 -5 -5 -10 -5 -15 -5 Δv from -5 to -10 m/s requires a -5 m/s/s acceleration! a negative acceleration does not mean the velocity is slowing down, it has to do with direction! Motion Graphs Starting Point Direction Velocity Acceleration above backward - V (slowing down) +constant Acceleration Example #4 V (m/s) a (m/s/s) - 10 +5 - 5 +5 0 +5 Δv from -10 to -5 m/s requires a +5 m/s/s acceleration! More Conventions… Speeding Up + v, + a Same sign - v, - a Slowing Down + v, - a Opposite sign - v, + a Motion Graphs x v a t t Speed is not acceleration As the ball rolls down this hill a) Its speed increases and its acceleration decreases. b) Its speed decreases and acceleration increases. c) Both increase d) Both remain constant e) Both decrease Graphing Acceleration v A C t D A: speeding up + velocity ↑ + acceleration B: const motion const. velocity 0 acceleration C: slowing down + velocity ↓ - acceleration D: speeding up - velocity ↑ - acceleration Rearranging Equation #1 vf - vi a= Slope of an V vs t graph a Δt = vf - vi vf = vi + a Δt Deriving the Acceleration Equations • modify x vs t graph t x Equation is: x = 2.5 m/s/s x time2 t2 So, for x vs t2 graph m = ½ a Therefore, Δx = ½ at2 + vit Deriving the Acceleration Equations Through a complex derivation – when you combine equations #1 and #2 you get: vf2 = vi2 + 2aΔx Constant Acceleration Equations Missing variable ΔX vf = vi + a Δt Vf ΔX = ½ at2 + vit Vf2 = vi 2 + 2aΔx a ΔX = ½ (v f+vi)t All the equations in one place constant velocity uniform acceleration x v v a t t x vt v f at v 0 x v0t at v v 2ax 1. A bicyclist has an acceleration of 2 m/s2. If he starts from rest, determine his velocity and position at t = 5s. 2. A car is traveling at a speed of 30 m/s when the brakes are suddenly applied, causing a constant deceleration of 4 m/s2. Determine the time required to stop the car and the distance traveled before stopping. 1.10 m/s, 25 m 2. 7.5 s, 112.5 m Early one morning Farmer Joiner tries his hand at milking a new breed of cow. He chooses one named Hackensack. (Yes he’s milking Hackensack, the new Jersey). However, Hackensack fails to realize how much pull Farmer Joiner has and reacts oddly. Every time Dr. J milks her, she has trouble with her moo. It sounds like M- m-m-ooo (a sort of udder stutter, you might say). Finally she can’t take it any longer. a) Starting from rest, she accelerates at 2 m/s2 for 4 seconds out the barn door. b) She then runs at a constant velocity for 5 minutes, c) until Dr. J succeeds in throwing a rope around her neck, after which she slows down to a stop at a rate of 1 m/s2. When the ordeal is over, Dr. J admits that it was a mooving experience indeed. Qualitatively sketch the following graphs for Hackensack’s trip. A. position vs. time graph B. velocity vs. time graph C. acceleration vs. time graph D. motion map. An automobile accelerates from rest at 2 m/s2 for 20 s. The speed is then held constant for 20 s. Then there is an acceleration of - 3 m/s2 until the automobile stops. a. the distance traveled between the 4th and 5th seconds. b. the total distance traveled. c. the total time traveled a. 9.00 m b. 1466.53 m c. 53.33 s Ella Vader is driving her flashy Porsche down the highway at 30 m/s. She sees a deer ahead and 2.0 s after she spots the deer she hits the brakes. It takes her 7.5 more seconds to stop. a) How far did Ella travel before stopping? b) What was Ella’s acceleration while she slowed down? Terms for Constant Acceleration Test Instantaneous Velocity: find from x vs t (tangent line, slope) definition (a = Δv / Δt) find from a v vs t (slope) For accelerated Motion: x vs t, v vs t, a vs t graphs motion map Graphs & Tracks Exercise Conceptual Questions – Acceleration & Velocity Constant Acceleration Equations vf = vi + at Δx = ½at2 + vit vf2 = vi2 + 2aΔx
{"url":"http://www.docstoc.com/docs/135394001/CONSTANT-ACCELERATION","timestamp":"2014-04-20T05:11:13Z","content_type":null,"content_length":"65607","record_id":"<urn:uuid:d4fbf7d5-f690-4915-b5cd-8b305475780d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Shrinkable maps and universal weak equivalences up vote 1 down vote favorite Recall that a morphism $f:X \to B$ is called shrinkable is there exists a section $s:B \to X$ together with a homotopy $$H:I \times X \to X$$ from $sf$ to $id_X$ over $B,$ i.e. for all $t$, the map $$H_t:X \to X$$ is a map in $Top/B$ from $f$ to $f$. Call a morphism $f:X \to B$ parashrinkable if for every map $g:T \to B$ from a paracompact Hausdorff space $T,$ the induced map $$X \times_{B} T \to T$$ is shrinkable. Call a morphism $f:X \to B$ a universal weak equivalence if for every map $g:T \to B,$ from any topological space, the induced map $$X \times_{B} T \to T$$ is a weak equivalence. For example: every trivial fibration in the standard model structure. One can prove that all parashrinkable maps are universal weak equivalences. Aside: I haven't actually seen the proof, nor do I know where to find it. I think one way would be, given $$g:T \to B,$$ composing it with a $CW$-approximation $q:T' \to T,$ i.e. a map $q:T' \to T$ which is weak equivalence, but for the proof to go through, one would need $q$ to be a universal weak equivalence (e.g. a trivial fibration). Maybe this is can be done. Question: If a morphism $f:X \to B$ satisfies: "For every map $g:T \to B$ from a paracompact locally-compact Hausdorff space $T,$ the induced map $$X \times_{B} T \to T$$ is shrinkable," can one infer that $f$ is a universal weak equivalence? Of course, the proof I suggested for the parashrinkable case will not carry through, but perhaps there is another one. Counter-examples are fine too of course. homotopy-theory gn.general-topology at.algebraic-topology 1 You don't need to assume $X\times _BT\to T$ is shrinkable; a weak equivalence will do. And you only need to assume it when $T$ is a disk $D^n$ for any $n\ge 0$. – Tom Goodwillie May 15 '11 at 1:59 Hi Tom, that's great to know, so this fixes my problem. Do you mind explaining to me how this works? – David Carchedi May 15 '11 at 2:46 add comment 1 Answer active oldest votes Suppose that $f:X\to B$ is such that for every $n$, for every map $D^n\to B$, the resulting map $X\times_BD^n\to D^n$ is a weak equivalence. Then $f$ is a weak equivalence. Proof: It suffices to show that for every point $b\in B$ the homotopy fiber of $f$ w.r.t. $b$ is such for every $n\ge 0$ every map of $S^{n-1}$ into it is homotopic to a constant map. The choice of a point $b$ and a map of $S^{n-1}$ into the homotopy fiber is equivalent to the choice of compatible maps $S^{n-1}\to X$ and $D^n\to B$. Given such a choice, pull back $f$ up vote 2 to get a map $X\times_BD^n\to D^n$. By assumption the latter map is a weak equivalence. On the other hand, the given map from the sphere to the homotopy fiber of $f$ factors through a down vote homotopy fiber of this map, so through a weakly contractible space. Of course, $f$ is in fact a universal weak equivalence because the map $X\times_BT$ obtained by pulling back $f$ along a map $T\to B$ satisfies the same hypothesis that $f$ does. add comment Not the answer you're looking for? Browse other questions tagged homotopy-theory gn.general-topology at.algebraic-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/65018/shrinkable-maps-and-universal-weak-equivalences","timestamp":"2014-04-21T12:54:09Z","content_type":null,"content_length":"54635","record_id":"<urn:uuid:aff45d46-e0a0-46cf-9d1b-cbf95ed37c23>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Litchfield, AZ Math Tutor Find a Litchfield, AZ Math Tutor ...The course covered how to give positive reinforcement, identifying the students learning style, and engaging the student by asking them how much they know and work from there.I first learned Algebra 1 through a software disk program that my dad got me in high school. It was not easy for me to le... 7 Subjects: including algebra 1, algebra 2, calculus, geometry ...This math is very unique. You use all the knowledge obtained from geometry, algebra 1/2, and even college algebra. The SAT math is all about logical analysis. 21 Subjects: including calculus, physics, statistics, Java ...If you need last minute help for a test or exam I will do my best to accommodate you, however please be aware that I may charge an increased rate depending on the circumstances. I think that tutoring can be a mutual rewarding experience, and I look forward to helping students excel!I've taken bi... 28 Subjects: including trigonometry, ACT Math, algebra 1, algebra 2 ...With an appreciation for numbers comes a desire to grasp and master mathematical concepts, leading to success in the classroom and beyond. With a bachelor's degree in engineering, I took numerous math and science courses in college, and subsequently acquired a mastery of mathematics. My precalc... 20 Subjects: including calculus, English, trigonometry, writing ...I service 424 students, taking charge of their assessment as well as their instruction. I work with dozens of young students every day. My job though requires that I work with those who struggle most. 30 Subjects: including algebra 1, linear algebra, statistics, reading Related Litchfield, AZ Tutors Litchfield, AZ Accounting Tutors Litchfield, AZ ACT Tutors Litchfield, AZ Algebra Tutors Litchfield, AZ Algebra 2 Tutors Litchfield, AZ Calculus Tutors Litchfield, AZ Geometry Tutors Litchfield, AZ Math Tutors Litchfield, AZ Prealgebra Tutors Litchfield, AZ Precalculus Tutors Litchfield, AZ SAT Tutors Litchfield, AZ SAT Math Tutors Litchfield, AZ Science Tutors Litchfield, AZ Statistics Tutors Litchfield, AZ Trigonometry Tutors Nearby Cities With Math Tutor Avondale Goodyear, AZ Math Tutors Avondale, AZ Math Tutors Cordes Lakes, AZ Math Tutors Eleven Mile Corner, AZ Math Tutors Eleven Mile, AZ Math Tutors Goodyear Math Tutors Haciendas Constancia, PR Math Tutors Haciendas De Borinquen Ii, PR Math Tutors Navajo Station, AZ Math Tutors Peeples Valley, AZ Math Tutors Queen Valley, AZ Math Tutors Rock Springs, AZ Math Tutors Spring Valley, AZ Math Tutors Strawberry, AZ Math Tutors Toltec, AZ Math Tutors
{"url":"http://www.purplemath.com/Litchfield_AZ_Math_tutors.php","timestamp":"2014-04-18T13:58:44Z","content_type":null,"content_length":"23986","record_id":"<urn:uuid:2334318a-5dfa-482f-8363-e72d72b83bc5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Properties of Natural Numbers | Counting Numbers | Even and Odd Natural numbers Properties of Natural Numbers The properties of natural numbers are as follow: (i) Natural numbers are also called counting numbers. (ii) The first and the smallest natural number is 1 (one). (iii) Every natural number (except 1) can be obtained by adding 1 to the previous natural number. (iv) For the natural number 1, there is no ‘previous’ natural number (Though 1 = 0 + 1, but 0 is not a natural number). (v) There is no last or greatest natural number since they are infinite. (vi) Natural numbers are denoted by 'ℕ' normally. (vii) We cannot complete the counting of all natural numbers. We express this fact by saying that there are infinitely many natural numbers. The counting numbers 1, 2, 3, 4, ..... are called naturals numbers. The set of natural numbers is denoted by 'ℕ'. Thus, ℕ = {1, 2, 3, 4, .....}. Even Natural Numbers (E): A system of naturals numbers, which are divisible by 2 or are multiples of 2, is called a set of even numbers. It is denoted by 'E'. Thus, E = {2, 4, 6, 8, 10, 12, .....} There are infinite even numbers. Odd Natural Numbers (O): A system of naturals numbers, which are not divisible by 2 or are not multiples of 2, is called a set of odd numbers. It is denoted by 'O'. Thus, O = {1, 3, 5, 7, 9, 11, .....} There are infinite odd numbers. Taking together the odd and even numbers, we get Natural Numbers. Numbers Page 6th Grade Page From Properties of Natural Numbers to HOME PAGE Didn't find what you were looking for? Or want to know more information about Math Only Math. Use this Google Search to find what you need. New! Comments Have your say about what you just read! Leave me a comment in the box below. Ask a Question or Answer a Question.
{"url":"http://www.math-only-math.com/Properties-of-Natural-Numbers.html","timestamp":"2014-04-17T12:57:19Z","content_type":null,"content_length":"21539","record_id":"<urn:uuid:d0c5cff1-09ef-41e7-8a56-4e8abfac595a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Finiteness properties of general topological spaces MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. It is known that all metric compact ANR have the homotopy type of finite CW complexes. Which spaces are homotopy equivalent or finitely dominated by CW complexes of finite up vote 2 down vote favorite gt.geometric-topology add comment It is known that all metric compact ANR have the homotopy type of finite CW complexes. Which spaces are homotopy equivalent or finitely dominated by CW complexes of finite type? Let $X$ be a space, which we may as well assume connected. If $X$ is an ANR, then Milnor gives that $X$ is homotopy equivalent to a countable CW-complex $X'$. The question of when $X'$ is homotopy equivalent to a complex of finite type is addressed in Theorem A of Wall, C. T. C. Finiteness conditions for CW-complexes. Ann. of Math. (2) 81 1965 56–69. up vote 1 down and discussed further in Wall, C. T. C. Finiteness conditions for CW complexes. II. Proc. Roy. Soc. Ser. A 295 1966 129–139. Note the necessary condition that $\pi_1(X')$ be finitely presented. I suspect there are more modern references building on Wall's work, but I couldn't find any. add comment Let $X$ be a space, which we may as well assume connected. If $X$ is an ANR, then Milnor gives that $X$ is homotopy equivalent to a countable CW-complex $X'$. The question of when $X'$ is homotopy equivalent to a complex of finite type is addressed in Theorem A of Wall, C. T. C. Finiteness conditions for CW-complexes. Ann. of Math. (2) 81 1965 56–69. Wall, C. T. C. Finiteness conditions for CW complexes. II. Proc. Roy. Soc. Ser. A 295 1966 129–139. Note the necessary condition that $\pi_1(X')$ be finitely presented. I suspect there are more modern references building on Wall's work, but I couldn't find any.
{"url":"http://mathoverflow.net/questions/109309/finiteness-properties-of-general-topological-spaces","timestamp":"2014-04-18T06:16:10Z","content_type":null,"content_length":"52585","record_id":"<urn:uuid:a7ec66b7-deeb-4174-a646-32747af827dd>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Haskell Ambiguous type error up vote 0 down vote favorite I have the following definitions {-# LANGUAGE MultiParamTypeClasses, FlexibleContexts #-} import qualified Data.Map as M class Graph g n e | g -> n e where empty :: g -- returns an empty graph type Matrix a = [[a]] data MxGraph a b = MxGraph { nodeMap :: M.Map a Int, edgeMatrix :: Matrix (Maybe b) } deriving Show instance (Ord n) => Graph (MxGraph n e) n e where empty = MxGraph M.empty [[]] When I try to call empty I get an ambiguous type error *Main> empty Ambiguous type variables `g0', `n0', `e0' in the constraint: ... Why do I get this error? How can I fix it? ... and what's your question? – melpomene Nov 24 '12 at 23:16 3 Show us your code that contains empty and gives you the type error. If you are typing empty on its own into ghci, you will have to give it a type annotation e.g. empty :: MxGraph Int Int. – dave4420 Nov 24 '12 at 23:19 I just say empty, edited my question – Jeremy Knees Nov 24 '12 at 23:25 add comment 1 Answer active oldest votes You are seeing this type error because Haskell is not provided with sufficient information to know the type of empty. Any attempt to evaluate an expression though requires the type. The type is not defined yet because the instance cannot be selected yet. That is, as the functional dependency says, the instance can only be selected if type parameter g is known. Simply, it is not known because you do not specify it in any way (such as with a type annotation). up vote 2 down vote The type-class system makes an open world assumption. This means that there could be many instances for the type class in question and hence the type system is conservative in selecting an instance (even if currently there is only one instance that makes sense to you, but there could be more some other day and the system doesn't want to change its mind just because some other instances get into scope). 3 Can you add a few paragraph breaks? That would make it much more readable. – Daniel Fischer Nov 24 '12 at 23:37 Added comments as suggested. – rlaemmel Nov 25 '12 at 16:46 add comment Not the answer you're looking for? Browse other questions tagged haskell or ask your own question.
{"url":"http://stackoverflow.com/questions/13546621/haskell-ambiguous-type-error","timestamp":"2014-04-23T23:49:50Z","content_type":null,"content_length":"68295","record_id":"<urn:uuid:ec6247a2-2107-43b8-8af8-5878ca438039>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: April 2006 [00601] [Date Index] [Thread Index] [Author Index] Re: Bug in calculating -3^49999.0 • To: mathgroup at smc.vnet.net • Subject: [mg66034] Re: [mg65989] Bug in calculating -3^49999.0 • From: Daniel Lichtblau <danl at wolfram.com> • Date: Thu, 27 Apr 2006 02:26:38 -0400 (EDT) • References: <200604260837.EAA02684@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com kowald at molgen.mpg.de wrote: > Hello everybody, > I'm using Mathematica 5.1 under Win2k and I found a bug when I try to compute > -3^49999. I get: > -3^49999 //N => -3.85 * 10^23855 okay > -3^49999. => -3.85 * 10^23855 okay > (-3)^49999. => -3.85 * 10^23855 + 4.56*10^23844 i wrong > (-3)^49999 => -3.85 * 10^23855 okay > Is this a known problem? > Is there a work around ? > Obviously this is part of a more complicated calculation and so I > cannot simply leave out the brackets. > Many thanks, > Axel It is known behavior. I would not regard it as a problem insofar as it is also expected behavior (that is, not a bug). Raising a number to a power in approximate arithmetic will be done more or less as follows. (1) Find the (approximate) log of the base. Call this logb. (2) Compute the exponential of logb*power. Given that you use N without requesting specific precision, these computations will be performed using machine arithmetic if possible. Otherwise they will sue bignum (software) arithmetic but at $MachinePrecision. That is what happens in this case due to the sizes of exponents required. We can do this "by hand" as below. In[18]:= InputForm[lb = Log[-3.]] Out[18]//InputForm= 1.0986122886681098 + 3.141592653589793*I In[20]:= InputForm[Exp[lb*49999]] -3.8513654349963041533518106689182`15.954589770191005*^23855 + The factor of 49999 has the effect of amplifying error in that exponent by about 5 orders of magnitude. As machine precision handles about 16 decimal places, we see that a result good to only 11 or so places is a quite plausible outcome. Notice (e.g. by comparison with N[3^49999,50]) that the real part of the above is only "correct" to 11 places. As the imaginary part is about 11 orders of magnitude smaller than the real part, it's contribution to the relative error is also in line with If you want higher precision results your options are to request higher precision (the hammer approach) or to try to rearrange the computations in such a way as to reduce effects of truncation and other error when doing approximate numerical evaluations. A beneficial effect of using higher precision is that significance arithmetic will be utilized, and this will let you know that some "digits" are not to be trusted. For example, below we learn that the imaginary part has NO significant digits, and is at least 19 orders of magnitude smaller than the real part. In[23]:= Exp[N[lb*49999,25]] Out[23]= -3.8513654349686329705 10 + 0. 10 I Daniel Lichtblau Wolfram Research • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2006/Apr/msg00601.html","timestamp":"2014-04-17T13:09:38Z","content_type":null,"content_length":"37283","record_id":"<urn:uuid:78a62421-dff9-4da8-98c2-fb0a7f55994d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Browsing Harbaugh, Bill by Subject "Probability weighting" Now showing items 1-2 of 2 • Harbaugh, William; Krause, Kate; Vesterlund, Lise (University of Oregon, Dept. of Economics, July 20, 2002) Harbaugh, William Krause, Kate Vesterlund, Lise 2003-08-13T19:35:06Z 2003-08-13T19:35:06Z 2002-07-20 http://hdl.handle.net/1794/84 The most distinctive prediction of prospect theory is the fourfold pattern (FFP) of risk attitudes. People are said to be (1) risk-seeking over low-probability gains, (2) risk-averse over low-probability losses, (3) risk-averse over high-probability gains, and (4) risk-seeking over high-probability losses. Using simple gambles over real payoffs, we conduct a direct test of this FFP prediction. We find that when pricing gambles subjects’ risk attitudes are consistent with the FFP. However, when they choose between the gamble and its expected value, their decisions are not distinguishable from random choice and are often the exact opposite of the prediction. These results hold both between and within subjects, and are robust even when we allow the subjects to simultaneously review and change their price and choice decisions. 0 bytes application/pdf en_US University of Oregon, Dept. of Economics University of Oregon Economics Department Working Papers; 2002-2 Probability weighting Expected utility Prospect theory Cumulative prospect theory Preference reversal Microeconomics Probabilities Decision making Game theory Economics, Mathematical Prospect Theory in Choice and Pricing Tasks Working Paper • Harbaugh, William; Krause, Kate; Vesterlund, Lise (November 5, 2001) Harbaugh, William Krause, Kate Vesterlund, Lise 2005-03-25T19:17:00Z 2005-03-25T19:17:00Z 2001-11-05 Experimental Economics 5(1): 53-84. http://hdl.handle.net/1794/694 45 p. In this paper we examine how risk attitudes change with age. We present participants from age 5 to 64 with choices between simple gambles and the expected value of the gambles. The gambles are over both gains and losses, and vary in the probability of the non-zero payoff. Surprisingly, we find that many participants are risk seeking when faced with high-probability prospects over gains and risk averse when faced with small-probability prospects. Over losses we find the exact opposite. Children’s choices are consistent with the underweighting of low-probability events and the overweighting of high-probability ones. This tendency diminishes with age, and on average adults appear to use the objective probability when evaluating risky prospects. This research was funded by grants from the National Science Foundation and the Preferences Network of the MacArthur Foundation. 307657 bytes application/pdf en_US Probability weighting Subjective expected utility Prospect theory Children Risk Risk Attitudes of Children and Adults : Choices over Small and Large Probability Gains and Losses Article Now showing items 1-2 of 2
{"url":"https://scholarsbank.uoregon.edu/xmlui/handle/1794/693/browse?value=Probability+weighting&type=subject","timestamp":"2014-04-18T18:36:21Z","content_type":null,"content_length":"23752","record_id":"<urn:uuid:f4019631-1539-4aeb-bb13-228612063f43>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> About Latent Class Qiao Hu posted on Monday, September 19, 2011 - 6:23 am I have 5 variables: V D A E C and centered the score of five variables, that is to say, the means of five variables is zero. Each variable correlate with the other four variables. I want to see the distribution of the five variables in the different latent group (class 2, class 3, class4 etc), then choose the most suitable class. First, I choose two classes: (1)I get such a warning, how to solve it? (2) Why I cannot find Chi-Square Test of Model Fit in the output? (3)The AIC BIC is negative, how to do with it? Akaike (AIC) -15602.828 Bayesian (BIC) -15452.572 Sample-Size Adjusted BIC -15535.180 (n* = (n + 2) / 24) This is my syntax. DATA: FILE='C:\la.dat'; listwise= 0n; VARIABLE: NAMES ARE V D A E C; CLASSES = c(2); ANALYSIS: TYPE = MIXTURE; V WITH D A E C; D WITH A E C; A WITH E C; E WITH C; Linda K. Muthen posted on Monday, September 19, 2011 - 11:16 am 1. Increase the random starts, for example, STARTS= 200 50; 2. Chi-square and related fit statistics are not available when means, variances, and covariances are not sufficient statistics for model estimation. 3. This is not a problem. Qiao Hu posted on Monday, September 19, 2011 - 11:37 am Dear Linda, This time I get another warning when I use START 200 50; How to do with it? Linda K. Muthen posted on Monday, September 19, 2011 - 12:37 pm Increase the starts to 500 125. You need to increase the starts until the best loglikelihood value is replicated. Qiao Hu posted on Tuesday, September 20, 2011 - 9:20 am Dear Linda, This time I increase the number of variables from 5 to 7. I get such a warning, how to do with it? CONDITION NUMBER IS -0.531D-21. PROBLEM INVOLVING PARAMETER 43. Qiao Hu posted on Tuesday, September 20, 2011 - 9:33 am Another question: How big STARTS I should use? Because I increase the starts to 2000 1000, it still cannot work. Linda K. Muthen posted on Tuesday, September 20, 2011 - 10:13 am Please send the relevant files and your license number to support@statmodel.com. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=13&page=8080","timestamp":"2014-04-19T22:08:08Z","content_type":null,"content_length":"25055","record_id":"<urn:uuid:586a5678-4a21-4c2c-889d-4a5743399957>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Galois theory From Wikipedia, the free encyclopedia In mathematics, more specifically in abstract algebra, Galois theory, named after Évariste Galois, provides a connection between field theory and group theory. Using Galois theory, certain problems in field theory can be reduced to group theory, which is in some sense simpler and better understood. Originally, Galois used permutation groups to describe how the various roots of a given polynomial equation are related to each other. The modern approach to Galois theory, developed by Richard Dedekind, Leopold Kronecker and Emil Artin, among others, involves studying automorphisms of field extensions. Further abstraction of Galois theory is achieved by the theory of Galois connections. Application to classical problems The birth of Galois theory was originally motivated by the following question, whose answer is known as the Abel–Ruffini theorem. Why is there no formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and application of radicals (square roots, cube roots, etc)? Galois theory not only provides a beautiful answer to this question, it also explains in detail why it is possible to solve equations of degree four or lower in the above manner, and why their solutions take the form that they do. Further, it gives a conceptually clear, and often practical, means of telling when some particular equation of higher degree can be solved in that manner. Galois theory also gives a clear insight into questions concerning problems in compass and straightedge construction. It gives an elegant characterisation of the ratios of lengths that can be constructed with this method. Using this, it becomes relatively easy to answer such classical problems of geometry as Which regular polygons are constructible polygons?^1 Why is it not possible to trisect every angle using a compass and straightedge?^1 Galois theory originated in the study of symmetric functions – the coefficients of a monic polynomial are (up to sign) the elementary symmetric polynomials in the roots. For instance, (x – a)(x – b) = x^2 – (a + b)x + ab, where 1, a + b and ab are the elementary polynomials of degree 0, 1 and 2 in two variables. This was first formalized by the 16th century French mathematician François Viète, in Viète's formulas, for the case of positive real roots. In the opinion of the 18th century British mathematician Charles Hutton,^2 the expression of coefficients of a polynomial in terms of the roots (not only for positive roots) was first understood by the 17th century French mathematician Albert Girard; Hutton writes: ...[Girard was] the first person who understood the general doctrine of the formation of the coefficients of the powers from the sum of the roots and their products. He was the first who discovered the rules for summing the powers of the roots of any equation. In this vein, the discriminant is a symmetric function in the roots that reflects properties of the roots – it is zero if and only if the polynomial has a multiple root, and for quadratic and cubic polynomials it is positive if and only if all roots are real and distinct, and negative if and only if there is a pair of distinct complex conjugate roots. See Discriminant:Nature of the roots for The cubic was first partly solved by the 15th/16th century Italian mathematician Scipione del Ferro, who did not however publish his results; this method only solved one of three classes, as the others involved taking square roots of negative numbers, and complex numbers were not known at the time. This solution was then rediscovered independently in 1535 by Niccolò Fontana Tartaglia, who shared it with Gerolamo Cardano, asking him to not publish it. Cardano then extended this to the other two cases, using square roots of negatives as intermediate steps; see details at Cardano's method. After the discovery of Ferro's work, he felt that Tartaglia's method was no longer secret, and thus he published his complete solution in his 1545 Ars Magna. His student Lodovico Ferrari solved the quartic polynomial; his solution was also included in Cardano’s Ars Magna. A further step was the 1770 paper Réflexions sur la résolution algébrique des équations by the French-Italian mathematician Joseph Louis Lagrange, in his method of Lagrange resolvents, where he analyzed Cardano and Ferrarri's solution of cubics and quartics by considering them in terms of permutations of the roots, which yielded an auxiliary polynomial of lower degree, providing a unified understanding of the solutions and laying the groundwork for group theory and Galois theory. Crucially, however, he did not consider composition of permutations. Lagrange's method did not extend to quintic equations or higher, because the resolvent had higher degree. The quintic was almost proven to have no general solutions by radicals by Paolo Ruffini in 1799, whose key insight was to use permutation groups, not just a single permutation. His solution contained a gap, which Cauchy considered minor, though this was not patched until the work of Norwegian mathematician Niels Henrik Abel, who published a proof in 1824, thus establishing the Abel–Ruffini While Ruffini and Abel established that the general quintic could not be solved, some particular quintics can be solved, such as (x − 1)^5=0, and the precise criterion by which a given quintic or higher polynomial could be determined to be solvable or not was given by Évariste Galois, who showed that whether a polynomial was solvable or not was equivalent to whether or not the permutation group of its roots – in modern terms, its Galois group – had a certain structure – in modern terms, whether or not it was a solvable group. This group was always solvable for polynomials of degree four or less, but not always so for polynomials of degree five and greater, which explains why there is no general solution in higher degree. Permutation group approach to Galois theory Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say A and B, that A^2 + 5B^3 = 7. The central idea of Galois theory is to consider those permutations (or rearrangements) of the roots having the property that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. An important proviso is that we restrict ourselves to algebraic equations whose coefficients are rational numbers. (One might instead specify a certain field in which the coefficients should lie but, for the simple examples below, we will restrict ourselves to the field of rational numbers.) These permutations together form a permutation group, also called the Galois group of the polynomial (over the rational numbers). To illustrate this point, consider the following examples: First example: a quadratic equation Consider the quadratic equation $x^2 - 4x + 1 = 0.\$ By using the quadratic formula, we find that the two roots are $A = 2 + \sqrt{3},$ $B = 2 - \sqrt{3}.$ Examples of algebraic equations satisfied by A and B include $A + B = 4,\$ $AB = 1.\$ Obviously, in either of these equations, if we exchange A and B, we obtain another true statement. For example, the equation A + B = 4 becomes simply B + A = 4. Furthermore, it is true, but far less obvious, that this holds for every possible algebraic equation with rational coefficients relating the A and B values above (in any such equation, swapping A and B yields another true equation). To prove this requires the theory of symmetric polynomials. (One might object that A and B are related by the algebraic equation $A - B - 2\sqrt{3} = 0$, which does not remain true when A and B are exchanged. However, this equation does not concern us, because it has the coefficient $-2\sqrt{3}$ which is not rational). We conclude that the Galois group of the polynomial x^2 − 4x + 1 consists of two permutations: the identity permutation which leaves A and B untouched, and the transposition permutation which exchanges A and B. It is a cyclic group of order two, and therefore isomorphic to Z/2Z. A similar discussion applies to any quadratic polynomial ax^2 + bx + c, where a, b and c are rational numbers. • If the polynomial has only one root, for example x^2 − 4x + 4 = (x−2)^2, then the Galois group is trivial; that is, it contains only the identity permutation. • If it has two distinct rational roots, for example x^2 − 3x + 2 = (x−2)(x−1), the Galois group is again trivial. • If it has two irrational roots (including the case where the roots are complex), then the Galois group contains two permutations, just as in the above example. Second example Consider the polynomial $x^4 - 10x^2 + 1,\$ which can also be written as $(x^2 - 5)^2 - 24.\$ We wish to describe the Galois group of this polynomial, again over the field of rational numbers. The polynomial has four roots: $A = \sqrt{2} + \sqrt{3}$ $B = \sqrt{2} - \sqrt{3}$ $C = -\sqrt{2} + \sqrt{3}$ $D = -\sqrt{2} - \sqrt{3}.$ There are 24 possible ways to permute these four roots, but not all of these permutations are members of the Galois group. The members of the Galois group must preserve any algebraic equation with rational coefficients involving A, B, C and D. Among these equations, we have: It follows that, if $\varphi$ is a permutation that belongs to the Galois group, we must have: $\varphi(B)=\frac{-1}{\varphi(A)}, \quad \varphi(C)=\frac{1}{\varphi(A)}, \quad \varphi(D)=-\varphi(A).$ This implies that the permutation is well defined by the image of A, that the Galois group has 4 elements, which are (A, B, C, D) → (A, B, C, D) (A, B, C, D) → (B, A, D, C) (A, B, C, D) → (C, D, A, B) (A, B, C, D) → (D, C, B, A), and the Galois group is isomorphic to the Klein four-group. Modern approach by field theory In the modern approach, one starts with a field extension L/K (read: L over K), and examines the group of field automorphisms of L/K (these are bijective ring homomorphisms α: L → L such that α(x) = x for all x in K). See the article on Galois groups for further explanation and examples. The connection between the two approaches is as follows. The coefficients of the polynomial in question should be chosen from the base field K. The top field L should be the field obtained by adjoining the roots of the polynomial in question to the base field. Any permutation of the roots which respects algebraic equations as described above gives rise to an automorphism of L/K, and vice In the first example above, we were studying the extension Q(√3)/Q, where Q is the field of rational numbers, and Q(√3) is the field obtained from Q by adjoining √3. In the second example, we were studying the extension Q(A,B,C,D)/Q. There are several advantages to the modern approach over the permutation group approach. • It permits a far simpler statement of the fundamental theorem of Galois theory. • The use of base fields other than Q is crucial in many areas of mathematics. For example, in algebraic number theory, one often does Galois theory using number fields, finite fields or local fields as the base field. • It allows one to more easily study infinite extensions. Again this is important in algebraic number theory, where for example one often discusses the absolute Galois group of Q, defined to be the Galois group of K/Q where K is an algebraic closure of Q. • It allows for consideration of inseparable extensions. This issue does not arise in the classical framework, since it was always implicitly assumed that arithmetic took place in characteristic zero, but nonzero characteristic arises frequently in number theory and in algebraic geometry. • It removes the rather artificial reliance on chasing roots of polynomials. That is, different polynomials may yield the same extension fields, and the modern approach recognizes the connection between these polynomials. Solvable groups and solution by radicals The notion of a solvable group in group theory allows one to determine whether a polynomial is solvable in radicals, depending on whether its Galois group has the property of solvability. In essence, each field extension L/K corresponds to a factor group in a composition series of the Galois group. If a factor group in the composition series is cyclic of order n, and if in the corresponding field extension L/K the field K already contains a primitive n-th root of unity, then it is a radical extension and the elements of L can then be expressed using the nth root of some element of K. If all the factor groups in its composition series are cyclic, the Galois group is called solvable, and all of the elements of the corresponding field can be found by repeatedly taking roots, products, and sums of elements from the base field (usually Q). One of the great triumphs of Galois Theory was the proof that for every n > 4, there exist polynomials of degree n which are not solvable by radicals—the Abel–Ruffini theorem. This is due to the fact that for n > 4 the symmetric group S[n] contains a simple, non-cyclic, normal subgroup, namely the alternating group A[n]. A non-solvable quintic example Van der Waerden^3 cites the polynomial $f(x) = x^5-x-1$. By the rational root theorem this has no rational zeros. Neither does it have linear factors modulo 2 or 3. The Galois group of $f(x)$ modulo 2 is cyclic of order 6, because $f(x)$ factors modulo 2 into $x^2+x+1$ and a cubic polynomial. $f(x)$ has no linear or quadratic factor modulo 3, and hence is irreducible modulo 3. Thus its Galois group modulo 3 contains an element of order 5. It is known^4 that a Galois group modulo a prime is isomorphic to a subgroup of the Galois group over the rationals. A permutation group on 5 objects with elements of orders 6 and 5 must be the symmetric group $S_5$, which is therefore the Galois group of $f(x)$. This is one of the simplest examples of a non-solvable quintic polynomial. According to Serge Lang, Emil Artin found this Inverse Galois problem All finite groups do occur as Galois groups. It is easy to construct field extensions with any given finite group as Galois group, as long as one does not also specify the ground field. For that, choose a field K and a finite group G. Cayley's theorem says that G is (up to isomorphism) a subgroup of the symmetric group S on the elements of G. Choose indeterminates {x[α]}, one for each element α of G, and adjoin them to K to get the field F = K({x[α]}). Contained within F is the field L of symmetric rational functions in the {x[α]}. The Galois group of F/L is S, by a basic result of Emil Artin. G acts on F by restriction of action of S. If the fixed field of this action is M, then, by the fundamental theorem of Galois theory, the Galois group of F/M is G. It is an open problem to prove the existence of a field extension of the rational field Q with a given finite group as Galois group. Hilbert played a part in solving the problem for all symmetric and alternating groups. Igor Shafarevich proved that every solvable finite group is the Galois group of some extension of Q. Various people have solved the inverse Galois problem for selected non-abelian simple groups. Existence of solutions has been shown for all but possibly one (Mathieu group M[23]) of the 26 sporadic simple groups. There is even a polynomial with integral coefficients whose Galois group is the Monster group. See also 1. ^ ^a ^b Ian Stewart (1989). Galois Theory. Chapman and Hall. ISBN 0-412-34550-1. 2. ^ van der Waerden, Modern Algebra (1949 English edn.), Vol. 1, Section 61, p.191 3. ^ V. V. Prasolov, Polynomials (2004), Theorem 5.4.5(a) 4. ^ Lang, Serge (1994), Algebraic Number Theory, Graduate Texts in Mathematics 110, Springer, p. 121, ISBN 9780387942254. External links Some on-line tutorials on Galois theory appear at: Online textbooks in French, German, Italian and English can be found at:
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Galois_theory","timestamp":"2014-04-21T08:43:41Z","content_type":null,"content_length":"113661","record_id":"<urn:uuid:4a3d7907-b7a4-4b94-8636-23a6efd71a66>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by BERNIE on Sunday, July 22, 2012 at 12:36pm. If some one could help me with this one I am stuck 256^-1/4 and thankyou • algebra 117b - Dustin, Sunday, July 22, 2012 at 12:40pm 1/4 or 0.25 • algebra 117b - Ms. Sue, Sunday, July 22, 2012 at 12:49pm • algebra 117b - Reiny, Sunday, July 22, 2012 at 2:04pm = 1/256^(1/4) = 1/4 I assumed there were brackets around the -1/4 Ms Sue entered the calculation just the way you typed, and the answer is correct according to your typing. Can you see how important it is to have proper use of brackets ? Related Questions algebra 117b - I please need help in solving a cube surface 6x^2+108x+486 algebra 117b - I am really stuck if you could me please with this problem y=kxto... algebra 117b - i need help with solving for this expression, I keep gett stuck ... algebra 117b - Are these factorable Beecause I tried and could figure it out can... Math 117B Help - subtract (xy-ab-6)-(xy-5ab-5) How do I do this Algebra 2 - Find the last digit in 39999. I can not figure this one out. Can ... algebra - can some one help me with 2 problems thanks. y=1/2x+1,y=-3x+8, and y=1... algebra - (2-x)/(2x+6)+ x/(x^2-9) Can some one please help solve this one ... algebra - I do not how to do this problem could some one help me please. x+1/x+2... Algebra - Hello. It's the beginning of the school year, and I've quite forgotten...
{"url":"http://www.jiskha.com/display.cgi?id=1342975003","timestamp":"2014-04-19T21:24:48Z","content_type":null,"content_length":"9589","record_id":"<urn:uuid:5af9a38e-3564-47a0-897c-58d4e2678cfc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Info and Power data on Floyd's stage 17 ride from Allen Lim 07-22-06, 12:22 PM #1 Info and Power data on Floyd's stage 17 ride from Allen Lim Interesting info on Floyd: In stage 17, he used 70+ bottles of water (both drinking and dumping over his head). His radio malfunctioned on the final climb (he thinks due to the water he poured over his head), so he didn't know his gaps going up the Joux Plane. He had lost a lot of water/weight in stage 16 so to counteract that (and keep him from sweating) they gave him ice cold bottles of water to pour over his head throughout stage 17. (This kept him from sweating as much - which Lim says is better since you can't replace the water in your blood, which is where sweat water comes from, as quickly as you lose it -- so he had more blood to transport oxygen in his legs) (Landis allowed publication of his power numbers) When he made his move on the Col des Saisies he averaged 544 watts in the first thirty seconds of his acceleration. This settled down to a 5-minute peak of 451 watts, which then continued for 10 minutes at an average of power of 431 watts. His 30 minute average was 401 watts. "Floyd averaged 280 watts for the entire ride, but it was 318 for the last two hours. That is while the bike is moving, so you have take into account that he has all those long descents," Lim said. "On the descents he spent 13.2 percent of his time or 43 minutes coasting. If you spend that much coasting but are as good a descender as he is, you are making up time on the descents as "However, if we don’t include the coasting time, he averaged 324 watts while pedalling for the stage and 364 watts over the last two hours. That gave him a total of 5,456 Kjoules of work, at an average cadence of 89 rpm. The nature of it is that everything he did today is within the realms of physiological capacity. It was the style with which he did it, the panache and the bravado and the courage [which stood out]." Despite the outcome of the ride, Lim said that the outputs weren’t actually the best he has seen from Landis. Of course, two and a half weeks of racing will tend to have that effect. thanks for posting. very interesting. 70+ bottles of water! good trick...it obviously helped. amazing numbers. The first ones really aren't that big. Heck I can do the 544 to get away, and not that far off the 450 for 5 minutes (unfortunaltely I'm also hauling 50 more pounds). However, the 364 for 2 hours and over 300 for the day is pretty incredible. i'm sure there are a number of people on this forum, faster than me, that can do 450 for 5 minutes. However I doubt anybody can approach the numbers over the longer haul. What these numbers tell you is that the lead riders could have stayed with him initially. But it would have been hard to match the effort he put out over the day, and they were baking on the belief he couldn't do it. And didn't want to crack themselves trying to match him. amazing numbers. The first ones really aren't that big. Heck I can do the 544 to get away, and not that far off the 450 for 5 minutes (unfortunaltely I'm also hauling 50 more pounds). However, the 364 for 2 hours and over 300 for the day is pretty incredible. i'm sure there are a number of people on this forum, faster than me, that can do 450 for 5 minutes. However I doubt anybody can approach the numbers over the longer haul. What these numbers tell you is that the lead riders could have stayed with him initially. But it would have been hard to match the effort he put out over the day, and they were baking on the belief he couldn't do it. And didn't want to crack themselves trying to match him. Really, you can do 450 for 5 minutes going up a category 1 hill? Really, you can do 450 for 5 minutes going up a category 1 hill? 450w is 450w.. climb doesnt matter 450w is 450w.. climb doesnt matter 450 for a second or two or three is the same, but sustaining it going up a hill is definitely diff. It's the sustained effort that doesn't last. 450 for a second or two or three is the same, but sustaining it going up a hill is definitely diff. It's the sustained effort that doesn't last. 450 watts is 450 watts hill or no hill. I just got my power meter and live on flat land, so I can't test what I have read by the experts, but they say you can actually put out more watts climbing than on a flat. Seems you use more muscles climbing than on flat land. I think you are asking about the amount of time Merlin says he can sustain 450 watts. He said he was close to 450 for 5 minutes. Based on his previous posts about wattage this seems to be 450 watts is 450 watts hill or no hill. I just got my power meter and live on flat land, so I can't test what I have read by the experts, but they say you can actually put out more watts climbing than on a flat. Seems you use more muscles climbing than on flat land. I think you are asking about the amount of time Merlin says he can sustain 450 watts. He said he was close to 450 for 5 minutes. Based on his previous posts about wattage this seems to be Right, but what I've read is it's watts-to-weight that is the true indication of what you're putting out in power, and going up a hill, your weight actually becomes more of a factor (and in a sense more) since it's always "tending" back downward and you're pushing it up hill. In other words, 450 watts with 100 lbs can be sustained longer than 450 watts with 102 lbs by the same person. I'm not sure, however, how much of a diff. in relative weight (if that makes sense) going up a cat. 1 climb makes. Right, but what I've read is it's watts-to-weight that is the true indication of what you're putting out in power, and going up a hill, your weight actually becomes more of a factor (and in a sense more) since it's always "tending" back downward and you're pushing it up hill. In other words, 450 watts with 100 lbs can be sustained longer than 450 watts with 102 lbs by the same person. I'm not sure, however, how much of a diff. in relative weight (if that makes sense) going up a cat. 1 climb makes. Watts to weight as you say will determine how fast you can go up a hill, but it has no bearing on how many watts you can generate. I am not sure what you mean by 450 watts with 100 pounds can be sustained longer than 450 watts with 102 pounds. The weight of a person will not have any bearing on how long they can produce 450 watts. The bigger rider may even have the advantage. I hate to use you as an example Merlin, but here it goes. Lets say Merlin weighs 200 pounds, and he produces 450 watts on a climb. He is racing Floyd who also puts out 450 watts, but weighs 150 pounds. They both may be able to produce 450 watts for the entire climb, but Floyd will beat him to the top because he weighs less not because it is easier to produce power. They indicated Robiie McEwen can generate 1800W in a burst. Any idea what Magnus can do? Watts to weight as you say will determine how fast you can go up a hill, but it has no bearing on how many watts you can generate. I am not sure what you mean by 450 watts with 100 pounds can be sustained longer than 450 watts with 102 pounds. The weight of a person will not have any bearing on how long they can produce 450 watts. The bigger rider may even have the advantage. I hate to use you as an example Merlin, but here it goes. Lets say Merlin weighs 200 pounds, and he produces 450 watts on a climb. He is racing Floyd who also puts out 450 watts, but weighs 150 pounds. They both may be able to produce 450 watts for the entire climb, but Floyd will beat him to the top because he weighs less not because it is easier to produce power. I never meant to imply that Landis wouldn't drop me like a rock in a matter of seconds. And I can't do 450for 5 minutes, but probably a shade under 400. (which at my size makes me a decent cat 4, nothing more) My point was that the intiial attack numbers aren't that high relatively speaking, and any of the contenders that had elected to could have matched the intial move. The impressive numbers were the sustained power. And I think the contenders didn't cover his intial move because they were betting LAndis couldn't sustain that, and they also knew they wouldlikely blow if they had to do that effort for that time. They indicated Robiie McEwen can generate 1800W in a burst. Any idea what Magnus can do? I read that Marty Nothstein was hitting 2200 the year he won the gold in the match sprint. big quads here... While the track sprinters have the edge in peak wattage, they are not riding 120 miles before starting their sprints. "You should already be aware that riding with people who steer with their elbows, stick food to the top tube of their frames and ride around in dick togs is not a great idea." -- Classic1 nice post OP, very interesting do·mes·tique (dms-tk) n. A member of a competitive bicycle-racing team whose role is to assist the team leader, as by setting the pace. C'Dale Six13 (Record 08), Olmo Antares (Record/Chorus 06) 07-22-06, 12:32 PM #2 more ape than man Join Date Nov 2003 0 Post(s) 0 Thread(s) 07-22-06, 03:15 PM #3 pan y agua Join Date Aug 2005 My Bikes Wilier trestina Zero 7; Merlin Extralight; Co-Motion Robusta; Schwinn Paramount; Motobecanne Phantom Cross; Cervelo P2 3 Post(s) 0 Thread(s) 07-22-06, 08:50 PM #4 07-22-06, 09:27 PM #5 Join Date Aug 2005 boise idaho My Bikes 2005 Specialized Tarmac Comp, Steyr Clubman, Motobecane Phantom Cross Uno, Cannondale Caad9, IRO bffgss 0 Post(s) 0 Thread(s) 07-23-06, 09:51 AM #6 07-23-06, 10:04 AM #7 Senior Member Join Date Jul 2004 Wilmington, NC My Bikes Serotta Nove 0 Post(s) 0 Thread(s) 07-23-06, 10:10 AM #8 07-23-06, 12:10 PM #9 Senior Member Join Date Jul 2004 Wilmington, NC My Bikes Serotta Nove 0 Post(s) 0 Thread(s) 07-23-06, 01:15 PM #10 Senior Member Join Date Aug 2003 arlington, VA 0 Post(s) 0 Thread(s) 07-23-06, 09:46 PM #11 pan y agua Join Date Aug 2005 My Bikes Wilier trestina Zero 7; Merlin Extralight; Co-Motion Robusta; Schwinn Paramount; Motobecanne Phantom Cross; Cervelo P2 3 Post(s) 0 Thread(s) 07-23-06, 11:44 PM #12 Elitist Jackass Join Date Oct 2003 My Bikes Cannondale 2.8, Specialized S-works E5 road, GT Talera 0 Post(s) 0 Thread(s) 07-25-06, 08:41 PM #13 Shut Up and Ride Join Date May 2005 PA (Worst roads in existence) My Bikes 05 Cannondale Six 13 (Record 2008 with DT rr 1.1 rims, WI H2 Hubs and CX-ray spokes), OLMO Antares (Micx of 06 Record and Chorus), 1988 Tunturri, 1980's Fuji, 1970's Crescent (Sweeden) 0 Post(s) 0 Thread(s)
{"url":"http://www.bikeforums.net/professional-cycling-fans/213041-info-power-data-floyd-s-stage-17-ride-allen-lim.html","timestamp":"2014-04-18T05:41:25Z","content_type":null,"content_length":"83967","record_id":"<urn:uuid:c05ac54e-aeac-48b6-a9f0-808182cae9e0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Scientific Papers - Vi ON LEGENDRE'S FUNCTION Pn(0), WHEN n IS GREAT AND 0 HAS ANY VALUE*. [Proceedings of the Royal Society, A, Vol. xcn. pp. 433—437, 1916.] As is well known, an approximate formula for Legendre's function Pn(0), when n is very large, was given by Laplace. The subject has been treated with great generality by Hobsonf, who has developed the complete series proceeding by descending powers of n, not only for Pn but also for the "associated functions." The generality aimed at by Hobson requires the use of advanced mathematical methods. I have thought that a simpler derivation, sufficient for practical purposes and more within the reach of physicists with a smaller mathematical equipment, may be useful. It had, indeed, been worked out independently. The series, of which Laplace's expression constitutes the first term, is arithmetically useful only when 116 is at least moderately large. On the other hand, when 6 is small, Pn tends to identify itself with the Bessel's function J"0(w0), as was first remarked by Mehler. A further development of this approximation is here proposed. Finally, a comparison of the results of the two methods of approximation with the numbers calculated by A. Lodge for n = 20J is exhibited. The differential equation satisfied by Legendre's function Pn is ~j/fa + °Ot 0 -JTJ + If we assume u = v (sin 6) ~ , and write m for n + |, we have + mzv 4 sin2 6' * [1917. It would be more correct to say Pn (oos 0), where cos0 lies between i 1.] + " On a Type of Spherical Harmonics of Unrestricted Degree, Order, and Argument," Phil, Trans. A, Vol. OLXXXVII. (1896). J " On the Acoustic Shadow of a Sphere," Phil. Trans. A, Vol. com. (1904); Scientific Papers, Vol. v. p. 163. by On, so that only Jp appears, and
{"url":"http://www.archive.org/stream/ScientificPapersVi/TXT/00000408.txt","timestamp":"2014-04-20T22:39:29Z","content_type":null,"content_length":"12339","record_id":"<urn:uuid:7d62d1ba-4c90-4924-8be8-93d5005285d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Stochastic resonance From Scholarpedia Broadly speaking, stochastic resonance is a mechanism by which a system embedded in a noisy environment acquires an enhanced sensitivity towards small external time-dependent forcings, when the noise intensity reaches some finite level. As such it highlights the possibility that noise, a universal phenomenon and yet one considered traditionally to constitute a nuisance, may actually play a constructive role in large classes of both natural and artificially designed systems. The concept of stochastic resonance was invented in 1981-82 in the rather exotic context of the evolution of the earth's climate. It has long been known that the climatic system possesses a very pronounced internal variability. A striking illustration is provided by the last glaciation which reached its peak some 18,000 years ago, leading to mean global temperatures of some degrees lower than the present ones and a total ice volume more than twice its present value. Going further back in the past it is realized that glaciations have covered, in an intermittent fashion, much of the Quaternary era. Statistical data analysis shows that the glacial-interglacial transitions that have marked the last \(10^6\) years display an average periodicity of \(10^5\) years, to which is superimposed a considerable, random looking variability (see Figure 1). This is intriguing, since the only known time scale in this range is that of the changes in time of the eccentricity of the earth's orbit around the sun, as a result of the perturbing action of the other bodies of the solar system. This perturbation modifies the total amount of solar energy received by the earth but the magnitude of this astronomical effect is exceedingly small, about \(0.1\%\ .\) The question therefore arises, whether one can identify in the earth-atmosphere-cryosphere system mechanisms capable of enhancing its sensitivity to such small external time-dependent forcings. The search of a response to this question led to the concept of stochastic resonance. Specifically, glaciation cycles are viewed as transitions between glacial and interglacial states that are somehow managing to capture the periodicity of the astronomical signal, even though they are actually made possible by the environmental noise rather than by the signal itself. Starting in the late 1980's the ideas underlying stochastic resonance were taken up, elaborated and applied in a wide range of problems in physical and life sciences. Classical setting In its classical setting stochastic resonance deals with bistable systems, a class of nonlinear dynamical systems that are encountered in a wide range of phenomena across different scientific fields. More specifically, one considers one-variable bistable dynamical systems subjected simultaneously to noise and to a weak periodic forcing: \[\tag{1} \frac{dx}{dt}=-\frac{\partial U}{\partial x}+F(t)+ \epsilon h(x)\cos (\omega_0t+\phi) \] Here \(x\) is the state variable (e.g., the global temperature or the global ice volume in the context of the Quaternary glaciations); \(U\) is the "potential" driving the internal dynamics, taken to possess two minima \(x_+\) and \(x_-\) associated to the two stable states, separated by a maximum corresponding to an intermediate unstable state \(x_0\ ;\) \(F(t)\) is a "random force" accounting for internal variability or environmental noise and modeled classically as a Gaussian white noise of zero mean and strength equal to \(q^2\ ;\) and \(\epsilon\ ,\) \(\omega_0\) and \(\phi\) are, respectively, the amplitude, frequency and phase of the periodic forcing. Actually, the forcing contribution can be cast in a form similar to the first term in the right hand side of (1) by introducing a generalized time-dependent potential \[\tag{2} W(x,t)=U(x)-\epsilon g(x) \cos (\omega_0 t+\phi) \] with \(dg(x)/dx=h(x)\ .\) According to the theory of stochastic processes the stochastic differential equation for the random process \(x(t)\ ,\) eq. (1), is equivalent to a Fokker-Planck equation for the probability distribution function \(P(x,t)\) of values of \(x\ .\) In the absence of periodic forcing this latter equation defines a particular type of Markov process known as diffusion process: The variable \(x \) realizes, for most of the time, small scale excursions around \(x_+\) or \(x_-\ ,\) which are interrupted every now and then by noise-driven abrupt transitions from \(x_+\) to \(x_-\) or vice versa across the unstable state \(x_0\ ,\) which constitutes a barrier of some sort. The kinetics of these transitions are determined by two quantities: The noise strength \(q^2\) and the potential barrier \(\Delta U_{\pm}\ ,\) defined by \[\tag{3} \Delta U_{\pm}=U(x_0)-U(x_{\pm}) \] In the limit where \(q^2\) is much smaller than \(\Delta U_{\pm}\) the mean value of the transition time is given by the celebrated Kramers formula \[\tag{4} \tau_{\pm}^{-1}=r_{\pm}=\frac{1}{2\pi} {(-U''(x_0)U''(x_{\pm}))}^{1/2} \ \exp(- \frac{\Delta U_{\pm}}{q^2/2}) \] where the double prime designates the second derivative. The transitions themselves occur in an incoherent fashion, as their dispersion around the above mean value is comparable to the mean itself. When the periodic forcing is switched on \(U\) is replaced by the generalized potential \(W\) (eq. (2)). The corresponding barrier \(\Delta W_{\pm}\) is now modulated in time leading periodically to situations where states \(x_{\pm}\) are found at the bottom of wells that are, successively, less shallow and more shallow than those in the forcing-free system. One is thus led to expect that the transitions will be facilitated during a part of this cycle, provided the periodicity of the forcing matches somehow the Kramers time in eq. (4). As it turns out this intuitive idea is fully justified in the asymptotic limit of small \(q^2\) in which the Fokker-Planck equation can be reduced, using an adiabatic approximation, to a closed equation for the probability \(p_{\pm}\) to be in the attraction basin of state \(x_+\) or \(x_-\ :\) \[\tag{5} \frac{dp_+(t)}{dt}=r_-(t)\ p_-(t)-r_+(t)\ p_+(t) \] with \(p_++p_-=1\) and \(r_{\pm}\) given by an expression similar to (4) in which \(U\) is replaced by the generalized potential \(W\ .\) This equation can be solved straightforwardly. In most of the quantitative studies of stochastic resonance the result is further expanded to the first non-trivial order in the forcing amplitude \(\epsilon\ .\) A popular minimal model capturing the essence of the results is to set \(h(x)=1\) (and hence \(g(x)=x\)) and to consider a symmetric quartic potential \(U(x)=-\lambda x^2/2+x^4/4\) (\(\lambda >0\)), corresponding to \(x_{\pm}=\pm \lambda ^{1/2}\) and \(x_0=0\ .\) This leads to the following expression for the periodic component \(\delta p(t)\) of the response, \[\tag{6} \delta p(t)=A \cos(\omega_0 t+\phi+\psi) \] Here the amplitude \(A\) and phase shift \(\psi\) are given by \[\tag{7} A=\epsilon\frac{\lambda}{q^2}\ \frac{r(q^2)}{{(r^2(q^2)+\omega_0^2/4)}^{1/2}} \ \ \ \ \psi=-\arctan (\frac{\omega_0}{2r}) \] where \(r(q^2)=r_+=r_-=(\sqrt{2} \pi)^{-1}\lambda\exp{(-\lambda^2/(2q^2))}\) for the symmetric potential model. The essential point is now that • (a) the transitions across the barrier have been synchronized to follow, in the mean, the periodicity of the external forcing; • (b), the response is negligible unless the period of the forcing comes close to the (noise intensity-dependent!) Kramers time; and • (c), for given \(\omega_0\) and \(\epsilon\ ,\) \(A\) goes through a sharp maximum for an intermediate (finite) value of \(q^2\) (see Figure 2), thereby enhancing considerably the response to the (weak) periodic signal. This latter property is the principal signature of stochastic resonance and should be clearly differentiated from the mechanisms underlying classical resonance. More refined studies based on Floquet theory or on a spectral decomposition of the full Fokker-Planck equation confirm fully the validity of these conclusions. Further indicators of stochastic resonance In addition to the periodic response (eq. (6)), stochastic resonance can also be characterized by a number of useful indicators related in one way or the other to quantities easily amenable to experimental visualizations. Despite the periodicity of the response at the probability level (eq. (6)), the process itself (eq. (1)) contains a marked random component. The signal-to-noise ratio (SNR) provides a measure of the relative importance of the noisy and the systematic parts of the response. The basic quantity involved in SNR is the power spectrum \(G_{xx}(\omega)\) of the variable \(x\ ,\) essentially the Fourier transform of its time autocorrelation function. In the presence of a weak periodic input \(G_{xx}(\omega)\) can be decomposed into a sum of contributions due to the noisy background, \(G_{xx}^{(0)}(\ omega)\) and to the signal itself, the latter being proportional to a sum of delta peaks centered at frequencies \(\pm \omega_0\ .\) The SNR is then defined as the ratio of the total power \(G_{xx}(\ omega)\) on a narrow band \(\Delta \omega\) surrounding \(\omega_0\) and \(G_{xx}^{(0)}(\omega)\) evaluated at the input frequency \(\omega_0\ ,\) in the limit of \(\Delta \omega\) tending to zero (practically of \(\Delta \omega\) limited to the frequency bins around \(\omega_0\)). Much like the behavior in Figure 2, the SNR goes through a sharp maximum at some finite value of \(q^2\ .\) Residence time distribution Within the framework of the adiabatic approximation (eq. (5)) and in the absence of the periodic signal, the time \(\theta\ ,\) spent by the system in the attraction basins of \(x_{\pm}\) is a random quantity whose probability distribution decays exponentially with \(\theta\ .\) The presence of the periodic signal induces a fine structure in the form of a sequence of Gaussian-like peaks, while the envelope of the overall distribution still decreases exponentially. Stochastic resonance is here manifested by the enhancement of the first peak (at half of the signal period), reflecting the phase synchronization of the switchings between \(x_+\) and \(x_-\) with the periodic signal. Information theoretic measures Inasmuch as stochastic resonance has to do with the quality of signal processing and, in particular, with the enhancement of information transmission through a system, it may be expected that information theoretic ideas should provide yet another natural characterization. A first quantity of this kind is mutual information, defined as the difference between the information entropies of an output time series per se and an output time series constrained by a given input time series. Of interest is also the Fisher information, defined as the amount of information carried by an observable (e.g., the output of a device) about a parameter (e.g., the strength of an input signal) and related closely to statistical estimation theory. Both quantities have been analyzed in systems in which noise allows an otherwise subthreshold (and thus undetectable) signal to overcome a prescribed threshold, and shown to exhibit (much like the SNR) an extremum at a well-defined finite value of the noise strength. Stochastic resonance beyond the classical setting The concept of stochastic resonance can be carried through to systems subjected to both noise and an external periodic forcing, and possessing coexisting stable states in the form of periodic or chaotic attractors. Theoretical analyses of these more involved situations draw on the existence, for such systems, of generalized potentials, not necessarily analytic in the state variables, possessing local minima on the corresponding attractors. Stochastic resonance in deterministic chaotic systems Dynamical systems in the regime of deterministic chaos evolve under certain conditions through a sequence of intermittent jumps between two preferred regions of phase space, without the intervention of a background noise. Such systems, which give rise to multimodal probability distributions, display an enhanced sensitivity to external periodic forcings through a stochastic resonance-like Slowly varying parameters In many systems, the dynamics in the absence of both noise and forcing is controlled by a number of parameters \(\lambda\) describing the constraints acting from the external world. Ordinarily these parameters are assumed to remain constant, but there are situations in which this constitutes an oversimplification (gradual switching on of a device, man-biosphere-climate interactions, etc.). In the absence of external periodic forcing the simultaneous action of noise and of a slow variation of \(\lambda\) in the form of a ramp, \(\lambda=\lambda_0+\mu t\) \((0<\mu <<1)\ ,\) may lead to the freezing of the system on a preferred state by practically quenching the transitions across the barrier. The interaction between stochastic resonance and the action of the ramp provides a new method for the control of the transition rates by allowing the system to perform, transiently, a certain number of transitions (depending on the forcing frequency and the noise strength) prior to quenching. Complex signals, aperiodic stochastic resonance Stochastic resonance-like enhancements of the response of a noisy system have also been established when the signal possesses a complex spectrum as is the case in many real situations (multiperiodic signals, aperiodic signals with a finite bandwidth around a preferred frequency). In each case, the crux is the existence of an optimum of a suitably defined measure of the response, attained for some intermediate (finite) value of the noise strength. Non-dynamical stochastic resonance The concept of stochastic resonance can also be extended to situations where the system itself does not derive from a deterministic dynamics. For instance, the sole feature to be retained can be that a threshold (or a series of thresholds) is (are) somehow generated by a mechanism that need not be specified in its details. The result is then, again, that in the presence of noise of moderate strength the detection of a weak signal can be optimized. Spatial couplings Spatially extended systems of coupled bistable elements are widely spread in nature and technology, from neurophysiology to computer science. Under the presence of noise and an external periodic forcing the stochastic resonance that would be observed in the limit of totally independent elements is further enhanced by the spatial coupling, the enhancement being at a maximum for a certain well-defined finite coupling strength. Analytic studies show that under the assumption of isotropy and translational invariance the response of each individual unit is in the form of eq. (7) where the Kramers rate is now scaled by a factor related to the coupling strength. Quantum stochastic resonance Quantum mechanics allows a system to tunnel through a barrier separating two states without going over it. Such contributions enhance the classical stochastic resonance. They eventually dominate the thermally activated transitions below some crossover temperature, which in certain systems can be as large as \(1000^oK\ .\) Typically, the Quantum Stochastic Resonance phenomenon requires some amount of asymmetry. For symmetric systems it can occur nevertheless but then requires sufficiently strong quantum friction. Experimental aspects, simulations and applications Stochastic resonance has been observed in a wide variety of experiments involving electronic circuits, chemical reactions, semiconductor devices, nonlinear optical systems, magnetic systems and superconducting quantum interference devices (SQUID). Of special interest are the neurophysiological experiments on stochastic resonance, three popular examples of which are the mechanoreceptor cells of crayfish, the sensory hair cells of cricket and human visual perception. Numerical solutions are an important tool in studies of stochastic resonance, since even in the classical setting of eq. (1) and the associated Fokker-Planck equation there is no full analytic solution available. Direct simulation is another useful tool. Of special interest are analog simulations based on electronic circuits modeling the nonlinearities involved in bistable systems, which lie in fact at the frontier between simulation and experiment. Their advantage is that the parameters can be easily tuned over a wide range of values and the response can be followed Stochastic resonance is a generic phenomenon. It has to do with the fact that adding noise to certain types of nonlinear systems possessing several simultaneously stable states may improve their ability to process information. As such, it is at the origin of intense interdisciplinary research at the crossroads of nonlinear dynamics, statistical physics, information and communication theories, data analysis, life and medical sciences. It opens tantalizing perspectives, from the development of new families of detectors to brain research. From the fundamental point of view it is still a largely open field of research. Its microscopic foundations have been hardly addressed, its quantum counterpart needs to be further elucidated, and its relevance in global bifurcations and in complex transition phenomena in spatially extended systems remains to be explored. Original papers R. Benzi, A. Sutera and A. Vulpiani, The mechanism of stochastic resonance, J. Phys. A14, L453-L457 (1981). R. Benzi, G. Parisi, A. Sutera and A. Vulpiani, Stochastic resonance in climate change, Tellus, 34, 10-16 (1982). C. Nicolis, Solar variability and stochastic effects on climate, Sol. Phys. 74, 473-478 (1981). C. Nicolis, Stochastic aspects of climatic transitions-response to a periodic forcing, Tellus 34, 1-9 (1982). Early applications in physics and biology S. Fauve and F. Heslot, Stochastic resonance in a bistable system, Phys. Lett. 97A, 5-7 (1983). P. Jung and P. Hanggi, Amplification of small signals via Stochastic Resonance, Phys. Rev. A 44, 8032-8042 (1991). B. McNamara and K. Wiesenfeld, Theory of Stochastic Resonance, Phys. Rev. A 39, 4854-4869 (1989). Indicators of stochastic resonance Signal-to-noise ratio B. McNamara, K. Wiesenfeld and R. Roy, Observation of Stochastic Resonance in a Ring Laser, Phys. Rev. Lett. 60, 2626-2629 (1988). Residence time distribution L. Gammaitoni, F. Marchesoni, E. Menichella-Saetta and S. Santucci, Stochastic Resonance in Bistable Systems, Phys. Rev. Lett. 62, 349-352 (1989). Information theoretic measures P. Greenwood, L. Ward and W. Wefelmeyer, Statistical analysis of stochastic resonance in a simple setting, Phys. Rev. E60, 4687-4695 (1999). T. Munakata, A. Sato and T. Hada, Stochastic Resonance in a Simple Threshold System from a Static Mutual Information Point of View, J. Phys. Soc. Japan 74, 2094-2098 (2005). A. Anischenko, V. Astakhov, A. Neiman, T. Vadivasova and L. Schimansky-Geier, Nonlinear dynamics of chaotic and stochastic systems, Springer, Berlin (2002). A. Bulsara, P. Hänggi, F. Marchesoni, F. Moss and M. Shlesinger, eds, Stochastic Resonance in Physics and Biology, J. Stat. Phys. 70, 1-512 (1993). M. Dykman and P. Mc Clintock, Stochastic Resonance, Science Progress 82, 113-134 (1999). L. Gammaitoni, P. Hänggi, P. Jung and F. Marchesoni, Stochastic Resonance, Rev. Mod. Phys. 70, 223-287 (1998). P. Hänggi, Stochastic resonance in biology - How noise can enhance detection of weak signals and help improve biological information processing, ChemPhysChem 3, 285-290 (2002). M. D. McDonnell and D. Abbott, What Is Stochastic Resonance? Definitions, Misconceptions, Debates, and Its Relevance to Biology, PLoS Computational Biology 5:e1000348 (2009). M. D. McDonnell and L. M. Ward, The benefits of noise in neural systems: bridging theory and experiment, Nature Reviews Neuroscience 12, 415-426 (2011). F. Moss, L. Ward and W. Sannita, Stochastic resonance and sensory information processing: a tutorial and review of application, Clinical Neurophysiology 115, 267-281 (2004). C. Nicolis, Long-Term Climatic Transitions and Stochastic Resonance, J. Stat. Phys. 70, 3-14 (1993). Th. Wellens, Y. Shatokhin and A. Buchleitner, Stochastic Resonance, Rep. Progr. Phys. 67, 45-105 (2004). Recent developments D. Alcor, V. Croquette, L. Jullien and A. Lemarchand, Molecular sorting by stochastic resonance, Proc. Nat. Acad. Sci. U.S.A. 101, 8276-8280 (2004). R. Alley et al, Abrupt climatic change, Science 299, 2005-2010 (2003). A. Ganopolski and S. Rahmtorf, Abrupt glacial climatic changes due to stochastic resonance, Phys. Rev. Letters 88, 038501 (2002). J. Harry, J. Niemi, A. Priplata and J. Collins, Balancing Act, IEEE Spectrum 42, 36-41 (2005). T. Mori and S. Kai, Noise-induced entrainment and stochastic resonance in human brain waves, Phy. Rev. Letters 88, 218101 (2002). C. Rao, D. Wolf and A. Arkin, Control, exploitation and tolerance of intracellular noise, Nature 420, 231-237 (2002). Internal references See also Floquet Theory, Perturbation Methods, Neuronal Synchronization, Resonance, Mechanoreceptors and Stochastic Resonance, Neuronal Noise, Self-organization of Brain Function, Bistability, Dynamical Systems, Chaos, Fokker-Planck Equation, Signal to Noise Ratio, Stochastic Dynamical Systems, Observability and Controllability, Suprathreshold Stochastic Resonance.
{"url":"http://www.scholarpedia.org/article/Stochastic_resonance","timestamp":"2014-04-17T22:05:22Z","content_type":null,"content_length":"59134","record_id":"<urn:uuid:8f03db45-4380-4326-8b03-dfb3cf015d0e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Aston Algebra 2 Tutor Find an Aston Algebra 2 Tutor ...I use exciting and engaging activities to promote learning and instill retention of the material. I have taught at Julia R. Mastermen Demonstration school which was just ranked the #1 public school in Pennsylvania. 32 Subjects: including algebra 2, reading, English, writing ...If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair! Learning new disciplines keeps me very aware of the struggles all students face. 14 Subjects: including algebra 2, physics, calculus, ASVAB ...I have tutored privately most of that time as well. I know that everyone learns in a different way and I try to use real world objects, models and examples to help students understand abstract concepts with which they may be struggling. I also try to explain concepts in a variety of ways becaus... 28 Subjects: including algebra 2, calculus, geometry, ASVAB ...I work with students to develop strong conceptual understanding and high math fluency through creative math games. Having worked with a diverse population of students, I have strong culturally competent teaching practices that are adaptive to diverse student learning needs. I have a robust knowledge of various math curricula and resources that will get your child to love math in no 9 Subjects: including algebra 2, geometry, ESL/ESOL, algebra 1 ...The basic skills learned in Prealgebra start the pathway that leads students through high school and into college. Precalculus is that last step in the Algebraic sequence of math taken at the high school level. It takes all the skills learned in Algebra 1&2 to a much deeper level, while at the same time advances several concepts begun in Geometry. 30 Subjects: including algebra 2, chemistry, writing, calculus Nearby Cities With algebra 2 Tutor Brookhaven, PA algebra 2 Tutors Chester Heights algebra 2 Tutors Chester Township, PA algebra 2 Tutors Chichester, PA algebra 2 Tutors Eddystone, PA algebra 2 Tutors Garnet Valley, PA algebra 2 Tutors Glen Riddle Lima algebra 2 Tutors Lenni algebra 2 Tutors Logan Township, NJ algebra 2 Tutors Marcus Hook algebra 2 Tutors Media, PA algebra 2 Tutors Rose Valley, PA algebra 2 Tutors Trainer, PA algebra 2 Tutors Upland, PA algebra 2 Tutors Village Green, PA algebra 2 Tutors
{"url":"http://www.purplemath.com/Aston_algebra_2_tutors.php","timestamp":"2014-04-18T00:28:26Z","content_type":null,"content_length":"23911","record_id":"<urn:uuid:44f5f111-9da5-48ce-836f-84fec6f70df9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Colloquium Talk: October 20 October 4th, 2011 Thursday, October 20, 2011 Dr. Jonathan Farley, UMaine Dept. of Computer Science. “The Most Embarrassing Inequality of My Life” Matchings in the Permutation Lattice 3:30 pm – 4:30 pm, 421 Neville Hall. “Can you do it?” In the spring of 1997, Anders Björner, a visitor at Berkeley’s Mathematical Sciences Research Institute, sent me a handwritten note in response to a question I had asked him. He wanted to know if I could prove, combinatorially, for an n-element poset of height r, that h[k]≥h[n-1-k] when k < (n-1)/2. I had been hunting this inequality for perhaps the previous four or five years. I believed that any fact about ordered sets, except artificially-rigged statements, must be provable by order-theoretic means. To my embarrassment, however, I could not deduce this inequality combinatorially. Nor could I concede defeat. Perhaps I should explain. Let S[n] be the symmetric group on n letters with the weak Bruhat order, which means the following: Write each σ in S[n] in the one-line notation σ = σ[1]…σ[n]. We call (σ[i],σ[j]) an inversion if i <j but σ[i]>σ[j]. The relation σ≤τ holds if and only if every inversion of σ is an inversion of τ; it turns out this makes S[n] into a lattice ― in fact, a very special kind of lattice, one which is “bounded” in the sense of McKenzie. We say a number i is a descent of σ if σ[i] > σ[i+1]; ascents are defined similarly. For fixed n, let J[k] be the set of all permutations with exactly k descents, and let M[k] be the set of all permutations with exactly k ascents. The covering relation in S[n] with the weak order is given by: σ is a lower cover of τ if you can obtain τ from σ by taking exactly one ascent and reversing the corresponding two letters, turning them into a descent. Label an n-element poset P with the numbers 1 through n so that 123…n is a linear extension, that is, label P with the numbers 1, 2, 3, up to n so that, whenever an element p is below an element q in the poset, the label of p (a natural number) is less than the label of q. This is called a natural labeling. Let h[j] be the number of linear extensions of P ― that is, the permutations such that, if p is less than q in the poset, then the label for p appears to the left of the label for q in the permutation―with j descents. For instance, if P is the four-element zig-zag or fence, one gets (1,3,1) for the h-vector; if P is a five-element antichain, one gets the famous eulerian numbers (1,26,66,26,1). I could answer Björner’s question if, for n≥3 and 2 ≤ k< n/2-1, I could find an explicit bijection G from J[k] to M[k] such that for all σ in J[k], σ≤G(σ). Edelman and Reiner found an argument for k= 1 and general n in response to a question of mine. After I gave a talk on this topic at the University of Maine, while flying back to Austria I had an idea. Posted in Upcoming events
{"url":"http://umaine.edu/mathematics/2011/10/04/colloquium-talk-october-20/","timestamp":"2014-04-19T17:27:47Z","content_type":null,"content_length":"16600","record_id":"<urn:uuid:1206171c-54ba-438e-b6c2-523eab44dcf9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
The Underground -- International STEM Project Expand your student's international education with an exploration of the London Underground in this 21st Century Math Project. Through building skills of estimation, measurement, and use of scale, students will conclude their learning with an estimation of London's Circle Line. This is a hands-on STEM Project for hands-on learners and provides a nice departure from the grind of the daily Name: The Underground Suggested Grade Level: 7-12 (Algebra & Geometry skills) Math Concepts: Measurement, Scale, Proportion, Distance and Estimation Interdisciplinary Connections: Travel, Global Studies Teaching Duration: 3-5 Days (can be modified) Cost: $5 for a 14 Page PDF (1 project, 1 assignment and answer key) The Product: Students develop a strategy to approximate the length of a train line on England’s famous Tube and use their approximation to find the distance between the tube stops of tourist Whenever our math department gets disaggregated data back on standardized tests there always one consistency that we can count on – students are low in measurement. Usually they are really, really bad. It’s as if they have no concept of an inch or never touched a protractor. When the word scale is thrown in the mix – it’s every math teacher for themselves. Anarchy. My idea behind this project was to make a creative task that would be both mathematically rigorous, but interesting. In this project, students will have to measure, use a scale, calculate proportions to estimate distances… all in a seemingly sensible way. This packs in about as many problem spots into one project as humanly possible. Buckle your chin strap and dive right in. Haha. Gotcha. Proportion and measurement are elementary skills! I learned something interesting in developing this project. The Great Wall of China is in fact spread apart in many different pieces. Miles apart. I always thought it was one long wall. Uh… hey what am I talking about? Too much cough syrup? When I originally conceived this idea I planned to approximate the length of the Great Wall. When I discovered this would be nearly impossible I went for something a little more achievable. With all the math packed in, I couldn’t bear to complicate the project by estimating multiple pieces of the Great Wall. Truth be told, the necessary level of these separate math skills (measurement, scale, proportion, distance and estimation) vary great. In the Common Core, some of these skills are listed in 5^th grade, others in 8^th grade. However as a high school teacher, these are areas of significant skills deficiency. Students can cross multiply and divide, but they have difficulty recognizing a proportion problem and have trouble setting it up. Certainly, the higher order thinking skills necessary in this project raise the level of difficulty. Calculus? I could get used to that... Depending on your state tests, this could be a useful prep assignment. If you are working with remedial students, it’s a more concrete assignment to help work on past skills, while pushing forward their thinking skills. Usually when I teach high school geometry, I like to slip this in after I want to review Perimeter. Perimeter is an elementary skills, but non-linear perimeter? We’re talking about Pre-Calculus and Calculus. EXTENSION: This can get a little tricky, but an idea that I have to extend this is that you can have students find a picture from the internet of a non-geometric shape. After determining a scale, they could do the same activity. It's tricky, but maybe you could pull it off. No comments:
{"url":"http://www.21stcenturymathprojects.com/2012/12/the-underground-international-stem.html","timestamp":"2014-04-18T05:43:13Z","content_type":null,"content_length":"89716","record_id":"<urn:uuid:03386446-a626-41ef-b059-83b49b19e064>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
David Eppstein - Publications • Testing bipartiteness of geometric intersection graphs. D. Eppstein. 15th ACM-SIAM Symp. Discrete Algorithms, New Orleans, 2004, pp. 853-861. ACM Trans. Algorithms 5(2):15, 2009. We consider problems of partitioning sets of geometric objects into two subsets, such that no two objects within the same subset intersect each other. Typically, such problems can be solved in quadratic time by constructing the intersection graph and then applying a graph bipartiteness testing algorithm; we achieve subquadratic times for general objects, and O(n log n) times for balls in R^d or simple polygons in the plane, by using geometric data structures, separator based divide and conquer, and plane sweep techniques, respectively. We also contrast the complexity of bipartiteness testing with that of connectivity testing, and provide evidence that for some classes of object, connectivity is strictly harder due to a computational equivalence with Euclidean minimum spanning trees. (BibTeX -- Citations -- SODA talk slides) • All maximal independent sets and dynamic dominance for sparse graphs. D. Eppstein. 16th ACM-SIAM Symp. Discrete Algorithms, Vancouver, 2005, pp. 451-459. ACM Trans. Algorithms 5(4):A38, 2009. We show how to apply reverse search to list all maximal independent sets in bounded-degree graphs in constant time per set, in graphs from minor closed families in linear time per set, and in sparse graphs in subquadratic time per set. The latter two results rely on new data structures for maintaining a dynamic vertex set in a graph and quickly testing whether the set dominates all other vertices. (SODA05 talk slides -- BibTeX) • Squarepants in a tree: sum of subtree clustering and hyperbolic pants decomposition. D. Eppstein. 18th ACM-SIAM Symp. Discrete Algorithms, New Orleans, 2007, pp. 29-38. ACM Trans. Algorithms 5(3): Article 29, 2009. We find efficient constant factor approximation algorithms for hierarchically clustering of a point set in any metric space, minimizing the sum of minimimum spanning tree lengths within each cluster, and in the hyperbolic or Euclidean planes, minimizing the sum of cluster perimeters. Our algorithms for the hyperbolic and Euclidean planes can also be used to provide a pants decomposition with approximately minimum total length. • Manhattan orbifolds. D. Eppstein. Topology and its Applications 157(2): 494-507, 2009. We investigate a class of metrics for 2-manifolds in which, except for a discrete set of singular points, the metric is locally isometric to an L[1] metric, and show that with certain additional conditions such metrics are injective. We use this construction to find the tight span of squaregraphs and related graphs, and we find an injective metric that approximates the distances in the hyperbolic plane analogously to the way the rectilinear metrics approximate the Euclidean distance. • Edges and switches, tunnels and bridges. D. Eppstein, M. van Kreveld, E. Mumford, and B. Speckmann. 23rd European Workshop on Computational Geometry (EWCG'07), Graz, 2007. 10th Worksh. Algorithms and Data Structures, Halifax, Nova Scotia, 2007. Lecture Notes in Comp. Sci. 4619, 2007, pp. 77-88, © Springer-Verlag. Tech. Rep. UU-CS-2007-042, Utrecht Univ., Dept. of Information and Computing Sciences, 2007. Comp. Geom. Theory & Applications 42(8): 790-802, 2009 (special issue for EWCG'07). We show how to solve several versions of the problem of casing graph drawings: that is, given a drawing, choosing to draw one edge as upper and one lower at each crossing in order to improve the drawing's readability. • On verifying and engineering the well-gradedness of a union-closed family. D. Eppstein, J.-C. Falmagne, and H. Uzun. 38th Meeting of the European Mathematical Psychology Group, Luxembourg, 2007. J. Mathematical Psychology 53(1):34-39, 2009. We describe tests for whether the union-closure of a set family is well-graded, and algorithms for finding a minimal well-graded union-closed superfamily of a given set family. • The Complexity of Bendless Three-Dimensional Orthogonal Graph Drawing. D. Eppstein. Proc. 16th Int. Symp. Graph Drawing, Heraklion, Crete, 2008. Lecture Notes in Computer Science 5417, 2009, pp. 78-89. J. Graph Algorithms and Applications 17 (1): 35-55, 2013. Defines a class of orthogonal graph drawings formed by a point set in three dimensions for which axis-parallel line contains zero or two vertices, with edges connecting pairs of points on each nonempty axis-parallel line. Shows that the existence of such a drawing can be defined topologically, in terms of certain two-dimensional surface embeddings of the same graph. Based on this equivalence, describes algorithms, graph-theoretic properties, and hardness results for graphs of this type. (Slides from talk at U. Arizona, February 2008 -- Slides from GD08) ) • Approximate Topological Matching of Quadrilateral Meshes. D. Eppstein, M. T. Goodrich, E. Kim, and R. Tamstorf. Proc. IEEE Int. Conf. Shape Modeling and Applications (SMI 2008), Stony Brook, New York, pp. 83-92. The Visual Computer 25(8): 771-783, 2009. We formalize problems of finding large approximately-matching regions of two related but not completely isomorphic quadrilateral meshes, show that these problems are NP-complete, and describe a natural greedy heuristic that is guaranteed to find good matches when the mismatching parts of the meshes are small. • Succinct greedy geometric routing using hyperbolic geometry. D. Eppstein and M. T. Goodrich. Proc. 16th Int. Symp. Graph Drawing, Heraklion, Crete, 2008 (under the different title "Succinct greedy graph drawing in the hyperbolic plane"), Lecture Notes in Computer Science 5417, 2009, pp. 14-25. IEEE Transactions on Computing 60 (11): 1571-1580, 2011. Greedy drawing is an idea for encoding network routing tables into the geometric coordinates of an embedding of the network, but most previous work in this area has ignored the space complexity of these encoded tables. We refine a method of R. Kleinberg for embedding arbitrary graphs into the hyperbolic plane, which uses linearly many bits to represent each vertex, and show that only logarithmic bits per point are needed. • Isometric diamond subgraphs. D. Eppstein. Proc. 16th Int. Symp. Graph Drawing, Heraklion, Crete, 2008. Lecture Notes in Computer Science 5417, 2009, pp. 384-389. We describe polynomial time algorithms for determining whether an undirected graph may be embedded in a distance-preserving way into the hexagonal tiling of the plane, the diamond structure in three dimensions, or analogous structures in higher dimensions. The graphs that may be embedded in this way form an interesting subclass of the partial cubes. • Self-Overlapping Curves Revisited. D. Eppstein and E. Mumford. 20th ACM-SIAM Symp. Discrete Algorithms, New York, 2009, pp. 160-169. We consider problems of determining when a curve in the plane is the projection of a 3d surface with no vertical tangents. Several problems of this type are NP-complete, but can be solved in polynomial time if a casing of the curve is also given. • Linear-time algorithms for geometric graphs with sublinearly many crossings. D. Eppstein, M. T. Goodrich, and D. Strash. 20th ACM-SIAM Symp. Discrete Algorithms, New York, 2009, pp. 150-159. SIAM J. Computing 39(8): 3814-3829, 2010. If a connected graph corresponds to a set of points and line segments in the plane, in such a way that the number of crossing pairs of line segments is sublinear in the size of the graph by an iterated-log factor, then we can find the arrangement of the segments in linear time. It was previously known how to find the arrangement in linear time when the number of crossings is superlinear by an iterated-log factor, so the only remaining open case is when the number of crossings is close to the size of the graph. • Finding large clique minors is hard. D. Eppstein. J. Graph Algorithms and Applications 13(2):197-204, 2009. Proves that it's NP-complete to compute the Hadwiger number of a graph. • Area-universal rectangular layouts. D. Eppstein, E. Mumford, B. Speckmann, and K. Verbeek. 25th Eur. Worksh. Comp. Geom., Brussels, Belgium, 2009. 25th ACM Symp. Comp. Geom., Aarhus, Denmark, 2009, pp. 267-276. A partition of a rectangle into smaller rectangles is "area-universal" if any vector of areas for the smaller rectangles can be realized by a combinatorially equivalent partition. These partitions may be applied, for instance, to cartograms, stylized maps in which the shapes of countries have been distorted so that their areas represent numeric data about the countries. We characterize area-universal layouts, describe algorithms for finding them, and discuss related problems. The algorithms for constructing area-universal layouts are based on the distributive lattice structure of the set of all layouts of a given dual graph. Merged with "Orientation-constrained rectangular layouts" to form the journal version, "Area-universal and constrained rectangular layouts". • Animating a Continuous Family of Two-Site Distance Function Voronoi Diagrams (and a Proof of a Complexity Bound on the Number of Non-Empty Regions). M. Dickerson and D. Eppstein. 25th ACM Symp. Comp. Geom., Aarhus, Denmark, 2009, video and multimedia track, pp. 92-93. We investigate distance from a pair of sites defined as the sum of the distances to each site minus a parameter times the distance between the two sites. A given set of n sites defines n(n-1)/2 pairs and n(n-1)/2 distances in this way, from which we can determine a Voronoi diagram. As we show, for a wide range of parameters, the diagram has relatively few regions because the pairs that have nonempty Voronoi regions must be Delaunay edges. • The Fibonacci dimension of a graph. S. Cabello, D. Eppstein, and S. Klavžar. IMFM Preprint 1084, Institute of Mathematics, Physics and Mechanics, Univ. of of Ljubljana, 2009. Electronic J. Combinatorics 18(1), Paper P55, 2011. We investigate isometric embeddings of other graphs into Fibonacci cubes, graphs formed from the families of fixed-length bitstrings with no two consecutive ones. • The h-index of a graph and its application to dynamic subgraph statistics. D. Eppstein and E. S. Spiro. Algorithms and Data Structures Symposium (WADS), Banff, Canada. Lecture Notes in Comp. Sci. 5664, 2009, pp. 278-289. J. Graph Algorithms and Applications 16(2):543-567, 2012. We define the h-index of a graph to be the maximum h such that the graph has h vertices each of which has degree at least h. We show that the h-index, and a partition of the graph into high and low degree vertices, may be maintained in constant time per update. Based on this technique, we show how to maintain the number of triangles in a dynamic graph in time O(h) per update; this problem is motivated by Markov Chain Monte Caro simulation of the Exponential Random Graph Model used for simulation of social networks. We also prove bounds on the h-index for scale-free graphs and investigate the behavior of the h-index on a corpus of real social networks. • On the approximability of geometric and geographic generalization and the min-max bin covering problem. W. Du, D. Eppstein, M. T. Goodrich, G. Lueker. Algorithms and Data Structures Symposium (WADS), Banff, Canada. Lecture Notes in Comp. Sci. 5664, 2009, pp. 242-253. We investigate several simplified models for k-anonymization in databases, show them to be hard to solve exactly, and provide approximation algorithms for them. The min-max bin covering problem is closely related to one of our models. An input to this problem consists of a collection of items with sizes and a threshold size. The items must be grouped into bins such that the total size within each bin is at least the threshold, while keeping the maximum bin size as small as possible. • Orientation-constrained rectangular layouts. D. Eppstein and E. Mumford. Algorithms and Data Structures Symposium (WADS), Banff, Canada. Lecture Notes in Comp. Sci. 5664, 2009, pp. 266-277. We show how to find a stylized map in which regions have been replaced by rectangles, preserving adjacencies between regions, with constraints on the orientations of adjacencies between regions. For an arbitrary dual graph representing a set of adjacencies, and an arbitrary set of orientation constraints, we can determine whether there exists a rectangular map satisfying those constraints in polynomial time. The algorithm is based on a representation of the set of all layouts for a given dual graph as a distributive lattice, and on Birkhoff's representation theorem for distributive lattices. Merged with "Area-universal rectangular layouts" to form the journal version, "Area-universal and constrained rectangular layouts". • Optimal embedding into star metrics. D. Eppstein and K. Wortman. Algorithms and Data Structures Symposium (WADS), Banff, Canada (best paper award). Lecture Notes in Comp. Sci. 5664, 2009, pp. 290-301. We provide an O(n^3 log^2n) algorithm for finding a non-distance-decreasing mapping from a given metric into a star metric with as small a dilation as possible. The main idea is to reduce the problem to one of parametric shortest paths in an auxiliary graph. Specifically, we transform the problem into the parametric negative cycle detection problem: given a graph in which the edge weights are linear functions of a parameter λ, find the minimum value of λ for which the graph contains no negative cycles. We find a new strongly polynomial time algorithm for this problem, and use it to solve the star metric embedding problem. • Combinatorics and geometry of finite and infinite squaregraphs. H.-J. Bandelt, V. Chepoi, and D. Eppstein. SIAM J. Discrete Math. 24(4): 1399-1440, 2010. Characterizes squaregraphs as duals of triangle-free hyperbolic line arrangements, provides a forbidden subgraph characterization of them, describes an algorithm for finding minimum subsets of vertices that generate the whole graph by medians, and shows that they may be isometrically embedded into Cartesian products of five (but not, in general, fewer than five) trees. • Graph-theoretic solutions to computational geometry problems. D. Eppstein. Invited talk at the 35th International Workshop on Graph-Theoretic Concepts in Computer Science (WG 2009), Montpellier, France, 2009. Lecture Notes in Comp. Sci. 5911, 2009, pp. 1-16. We survey problems in computational geometry that may be solved by constructing an auxiliary graph from the problem and solving a graph-theoretic problem on the auxiliary graph. The problems considered include the art gallery problem, partitioning into rectangles, minimum diameter clustering, bend minimization in cartogram construction, mesh stripification, optimal angular resolution, and metric embedding. • Optimal angular resolution for face-symmetric drawings. D. Eppstein and K. Wortman. J. Graph Algorithms and Applications 15(4):551-564, 2011. We consider drawings of planar partial cubes in which all interior faces are centrally symmetric convex polygons, as in my previous paper Algorithms for Drawing Media. Among all drawings of this type, we show how to find the one with optimal angular resolution. The solution involves a transformation from the problem into the parametric negative cycle detection problem: given a graph in which the edge weights are linear functions of a parameter λ, find the minimum value of λ for which the graph contains no negative cycles. • Curvature-aware fundamental cycles. P. Diaz-Gutierrez, D. Eppstein, and M. Gopi. 17th Pacific Conf. Computer Graphics and Applications, Jeju, Korea, 2009. Computer Graphics Forum 28(7):2015-2024, 2009. Considers heuristic modifications to the tree-cotree decomposition of my earlier paper Dynamic generators of topologically embedded graphs, to make the set of fundamental cycles found as part of the decomposition follow the contours of a given geometric model. • Going off-road: transversal complexity in road networks. D. Eppstein, M. T. Goodrich, and L. Trott. Proc. 17th ACM SIGSPATIAL Int. Conf. Advances in Geographic Information Systems, Seattle, 2009, pp. 23-32. Shows both theoretically and experimentally that the number of times a random line crosses a road network is asymptotically upper bounded by the square root of the number of road segments. • Paired approximation problems and incompatible inapproximabilities. D. Eppstein. 21st ACM-SIAM Symp. Discrete Algorithms, Austin, Texas, 2010, pp. 1076-1086. Considers situations in which two hard approximation problems are presented by means of a single input. In many cases it is possible to guarantee that one or the other problem can be approximated to within a better approximation ratio than is possible for approximating it as a single problem. For instance, it is possible to find either a (1+ε)-approximation to a 1-2 TSP defined from a graph or a n^ε-approximation to the maximum independent set of the same graph, despite lower bounds showing nonexistence of approximation schemes for 1-2 TSP and nonexistence of approximations better than n^1 − ε for independent set. However, for some other pairs of problems, such as hitting set and set cover, we show that solving the paired problem approximately is no easier than solving either problem independently. • Optimally fast incremental Manhattan plane embedding and planar tight span construction. D. Eppstein. Journal of Computational Geometry 2(1):144-182, 2011. Shows that, when the tight span of a finite metric space is homeomorphic to a subset of the plane, it has the geometry of a Manhattan orbifold and can be constructed in time linear in the size of the input distance matrix. As a consequence, it can be tested in the same time whether a metric space is isometric to a subset of the L[1] plane. • Hyperconvexity and metric embedding. D. Eppstein. Invited talk at Fifth William Rowan Hamilton Geometry and Topology Workshop, Dublin, Ireland, 2009. Invited talk at IPAM Workshop on Combinatorial Geometry, UCLA, 2009. Surveys hyperconvex metric spaces, tight spans, and my work on Manhattan orbifolds, tight span construction, and optimal embedding into star metrics. • Growth and decay in life-like cellular automata. D. Eppstein. Game of Life Cellular Automata (Andrew Adamatzky, ed.), Springer-Verlag, 2010, pp. 71-98. We classify semi-totalistic cellular automaton rules according to whether patterns can escape any finite bounding box and whether patterns can die out completely, with the aim of finding rules with interesting behavior similar to Conway's Game of Life. We survey a number of such rules. • Steinitz theorems for orthogonal polyhedra. D. Eppstein and E. Mumford. 26th Eur. Worksh. Comp. Geom., Dortmund, Germany, 2010. 26th ACM Symp. Comp. Geom., Snowbird, Utah, 2010, pp. 429-438. We provide a graph-theoretic characterization of three classes of nonconvex polyhedra with axis-parallel sides, analogous to Steinitz's theorem characterizing the graphs of convex polyhedra.
{"url":"http://www.ics.uci.edu/~eppstein/pubs/2009.html","timestamp":"2014-04-17T18:25:10Z","content_type":null,"content_length":"27712","record_id":"<urn:uuid:2bdaaea1-04dd-4fc5-97a5-2fe7d2262bc5>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Typing First-Class Continuations in - Information and Computation , 1992 "... We present a new approach to proving type soundness for Hindley/Milner-style polymorphic type systems. The keys to our approach are (1) an adaptation of subject reduction theorems from combinatory logic to programming languages, and (2) the use of rewriting techniques for the specification of the la ..." Cited by 538 (21 self) Add to MetaCart We present a new approach to proving type soundness for Hindley/Milner-style polymorphic type systems. The keys to our approach are (1) an adaptation of subject reduction theorems from combinatory logic to programming languages, and (2) the use of rewriting techniques for the specification of the language semantics. The approach easily extends from polymorphic functional languages to imperative languages that provide references, exceptions, continuations, and similar features. We illustrate the technique with a type soundness theorem for the core of Standard ML, which includes the first type soundness proof for polymorphic exceptions and continuations. 1 Type Soundness Static type systems for programming languages attempt to prevent the occurrence of type errors during execution. A definition of type error depends on a specific language and type system, but always includes the use of a function on arguments for which it is not defined, and the attempted application of a non-function. ... - FUNCTIONAL PROGRAMMING, CONCURRENCY, SIMULATION AND AUTOMATED REASONING, VOLUME 693 OF LNCS , 1992 "... Standard ML is one of a number of new programming languages developed in the 1980s that are seen as suitable vehicles for serious systems and applications programming. It offers an excellent ratio of expressiveness to language complexity, and provides competitive efficiency. Because of its type an ..." Cited by 198 (4 self) Add to MetaCart Standard ML is one of a number of new programming languages developed in the 1980s that are seen as suitable vehicles for serious systems and applications programming. It offers an excellent ratio of expressiveness to language complexity, and provides competitive efficiency. Because of its type and module system, Standard ML manages to combine safety, security, and robustness with much of the flexibility of dynamically typed languages like Lisp. It is also has the most well-developed scientific foundation of any major language. Here I review the strengths and weaknesses of Standard ML and describe some of what we have learned through the design, implementation, and use of the language. , 1992 "... This paper investigates the transformation of v -terms into continuation-passing style (CPS). We show that by appropriate j-expansion of Fischer and Plotkin's two-pass equational specification of the CPS transform, we can obtain a static and context-free separation of the result terms into "esse ..." Cited by 81 (7 self) Add to MetaCart This paper investigates the transformation of v -terms into continuation-passing style (CPS). We show that by appropriate j-expansion of Fischer and Plotkin's two-pass equational specification of the CPS transform, we can obtain a static and context-free separation of the result terms into "essential" and "administrative" constructs. Interpreting the former as syntax builders and the latter as directly executable code, we obtain a simple and efficient one-pass transformation algorithm, easily extended to conditional expressions, recursive definitions, and similar constructs. This new transformation algorithm leads to a simpler proof of Plotkin's simulation and indifference results. Further we show how CPS-based control operators similar to but more general than Scheme's call/cc can be naturally accommodated by the new transformation algorithm. To demonstrate the expressive power of these operators, we use them to present an equivalent but even more concise formulation of , 1993 "... Machine" [BB90], except that there are no "cooling" and "heating" transitions (the process sets of this semantics can be thought of as perpetually "hot" solutions). The concurrent evaluation relation extends "7\Gamma!" to finite sets of terms (i.e., processes) and adds additional rules for process c ..." Cited by 33 (0 self) Add to MetaCart Machine" [BB90], except that there are no "cooling" and "heating" transitions (the process sets of this semantics can be thought of as perpetually "hot" solutions). The concurrent evaluation relation extends "7\Gamma!" to finite sets of terms (i.e., processes) and adds additional rules for process creation, channel creation, and communication. We assume a set of process identifiers, and define the set of processes and process sets as: ß 2 ProcId process IDs p = hß; ei 2 Proc = (ProcId \Theta Exp) processes P 2 Fin(Proc) process sets We often write a process as hß; E[e]i, where the evaluation context serves the role of the program counter, marking the current state of evaluation. Definition4. A process set P is well-formed if for all hß; ei 2 P the following hold: -- FV(e) = ; (e is closed), and -- there is no e 0 6= e, such that hß; e 0 i 2 P. It is occasionally useful to view well-formed process sets as finite maps from ProcId to Exp. If P is a finite set of process , 1990 "... We describe the design, implementation and use of a mechanism for handling asynchronous signals, such as user interrupts, in the New Jersey implementation of Standard ML. Providing this kind of mechanism is a necessary requirement for the development of real-world application programs. Our mechanism ..." Cited by 31 (1 self) Add to MetaCart We describe the design, implementation and use of a mechanism for handling asynchronous signals, such as user interrupts, in the New Jersey implementation of Standard ML. Providing this kind of mechanism is a necessary requirement for the development of real-world application programs. Our mechanism uses first-class continuations to represent the execution state at the time at which a signal occurs. It has been used to support pre-emptive scheduling in concurrency packages and for forcing break-points in debuggers, as well as for handling user interrupts in the SML/NJ interactive environment. 1 Introduction Programs normally receive communication from the outside world via input operations. This method of communication is inherently synchronous: there is no way for the outside world to force the program to accept communication. But sometimes it is necessary to communicate asynchronously; for example, if the user wants to interrupt execution, or if the operating system needs to inform a... - Theoretical Computer Science: Exploring New Frontiers of Theoretical Informatics, volume 1872 of Lecture Notes in Computer Science , 2000 "... . Partial continuations are control operators in functional programming such that a function-like object is abstracted from a part of the rest of computation, rather than the whole rest of computation. Several dierent formulations of partial continuations have been proposed by Felleisen, Danvy&F ..." Cited by 12 (4 self) Add to MetaCart . Partial continuations are control operators in functional programming such that a function-like object is abstracted from a part of the rest of computation, rather than the whole rest of computation. Several dierent formulations of partial continuations have been proposed by Felleisen, Danvy&Filinski, Hieb et al, and others, but as far as we know, no one ever studied logic for partial continuations, nor proposed a typed calculus of partial continuations which corresponds to a logical system through the Curry-Howard isomorphism. This paper gives a simple type-theoretic formulation of a form of partial continuations (which we call delimited continuations), and study its properties. Our calculus does reect the intended operational semantics, and enjoys nice properties such as subject reduction and conuence. By restricting the type of delimiters to be atomic, we obtain the normal form property. We also show a few examples. 1 Introduction The mechanism of rst-class cont... , 2000 "... In this paper, we take an abstract view of search by describing search procedures via particular kinds of proofs in type theory. We rely on the proofs-as-programs interpretation to extract programs from our proofs. Using these techniques we explore, in depth, a large family of search problems by par ..." Cited by 8 (2 self) Add to MetaCart In this paper, we take an abstract view of search by describing search procedures via particular kinds of proofs in type theory. We rely on the proofs-as-programs interpretation to extract programs from our proofs. Using these techniques we explore, in depth, a large family of search problems by parameterizing the speci cation of the problem. A constructive proof is presented which has as its computational content a correct search procedure for these problems. We show how a classical extension to an otherwise constructive system can be used to describe a typical use of the nonlocal control operator call/cc. Using the classical typing of nonlocal control we extend our purely constructive proof to incorporate a sophisticated backtracking technique known as ‘con ict-directed backjumping’ (CBJ). A variant of this proof is formalized in Nuprl yielding a correct-by-construction implementation of CBJ. The extracted program has been translated into Scheme and serves as the basis for an implementation of a new solution to the Hamiltonian circuit problem. This paper demonstrates a nontrivial application of the proofs-as-programs paradigm by applying the technique to the derivation of a sophisticated search algorithm; also, it shows the generality of the resulting implementation by demonstrating its application in a new problem , 2007 "... We consider the natural combinations of algebraic computational effects such as side-effects, exceptions, interactive input/output, and nondeterminism with continuations. Continuations are not an algebraic effect, but previously developed combinations of algebraic effects given by sum and tensor ext ..." Cited by 8 (3 self) Add to MetaCart We consider the natural combinations of algebraic computational effects such as side-effects, exceptions, interactive input/output, and nondeterminism with continuations. Continuations are not an algebraic effect, but previously developed combinations of algebraic effects given by sum and tensor extend, with effort, to include commonly used combinations of the various algebraic effects with continuations. Continuations also give rise to a third sort of combination, that given by applying the continuations monad transformer to an algebraic effect. We investigate the extent to which sum and tensor extend from algebraic effects to arbitrary monads, and the extent to which Felleisen et al.’s C operator extends from continuations to its combination with algebraic effects. To do all this, we use Dubuc’s characterisation of strong monads in terms of enriched large Lawvere theories. , 2004 "... We investigate continuations in the context of idealized call-by-value programming languages. On the semantic side, we analyze the categorical structures that arise from continuation models of call-by-value languages. On the syntactic side, we study the call-by-value continuation-passing transformat ..." Cited by 5 (0 self) Add to MetaCart We investigate continuations in the context of idealized call-by-value programming languages. On the semantic side, we analyze the categorical structures that arise from continuation models of call-by-value languages. On the syntactic side, we study the call-by-value continuation-passing transformation as a translation between equational theories. Among the novelties are an unusually simple axiomatization of control operators and a strengthened completeness result with a proof based on a delaying transform.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=279339","timestamp":"2014-04-20T17:21:24Z","content_type":null,"content_length":"37839","record_id":"<urn:uuid:4c2aa8f4-4417-4893-bd5c-cf5ef9604b2b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Making Dilutions Many of you appear to panic when you must dilute something, yet the mathematics involve nothing worse than the simplest algebra. One reason is simply that when you are busy with a laboratory procedure you are distracted and it is difficult to think in the abstract. That problem can be overcome by practicing in advance of the need. Even with practice, though, you may find dilution problems confusing unless you very clearly define your objectives. We will give you a useful formula for making dilutions, one that you may have seen before. The formula is worse than useless, though, if you don't use it properly. Notes on using micropipettors • To obtain the best accuracy with variable volume pipettors pre-rinse each new disposable tip • To avoid error due to hysteresis when setting volume on a variable volume pipettor be consistent in the direction in which you change volume (either always increase to the desired volume or always decrease to the desired volume) • When conducting a dilution using a micropipettor make sure that the tip can reach the bottom of the test tube; for example, our 1000 µl pipettors with blue tips cannot reach the bottom of a 13 x 100 mm culture tube; use an Eppendorf sample tube instead • It is very awkward to have one person hold a tube while the other pipets from it; when students work in pairs it is better simply to take turns pipetting Establish a frame of reference For the sake of simplicity, lets say we are talking about sucrose solutions. Suppose you have a starting solution of sucrose (in water) with volume V1 and concentration C1. What is the total amount of sucrose in your solution? Answer: C1 V1. Example. Volume = 0.2 liter; concentration is 50 grams/liter. C1 V1 = 50 grams/liter 0.2 liter= 10 grams sucrose. Now suppose that you dilute that solution with water the whole thing to some larger, predetermined volume (V2). What amount of sucrose is present in the new, diluted solution? If you said 10 grams, you get the gold star. But wait a minute C1 V1 = 10 grams, and the new solution has a different volume, V2. The same amount of sucrose is present in the new solution as was in the original solution, so the following relationship must hold: C1*V1 = C2*V2, where C2 = concentration of the new solution. Example. Dilute the previous sucrose solution to 2 liters. What is the concentration of the new solution? We must solve for C2, of course. C1 V1 = C2 V2 = 10 grams. We know that V2 = 2 liters, so now we have C2 (2 liters) = 10 grams Solve for C2 to obtain 5 grams/liter. Determining what you already knows and putting the informatin into the equation C1 V1 = C2 V2 establishes the relationship that you need in order to solve dilution problems. Determine the objective What do you want to do? Or, more realistically, what does the instructor want you to do? Two types of diution problems are quite common in biology and biochemistry labs. • Dilute a known volume of known concentration to a desired final concentration • Dilute a known concentration to a desired final concentration AND volume The second type of problem really throws people off! Let's start with the first one, though. You know V1, C1, and C2 is predetermined. It remains, then, to solve for V2, namely the final volume to which to dilute the solution. This one is easy, since you keep the amount of solute the same and only have to change one factor. Now for the second type problem. You know C1, namely the concentration of the starting solution. You have predetermined both V2 and C2, namely the final volume and concentration that you desire. There is one undetermined variable left, namely V1. V1 is the volume of original solution that you will dilute to the desired final volume and concentration. C1 V1 = C2 V2 V1 = (C2 V2)/C1 or, V1 = (final amount of solute)/C1 Example. You have a sucrose solution of 47 grams/liter. You want to prepare 100 milliliters (0.1 liter) of sucrose solution of concentration 25 grams/liter. Since you know the starting concentration of sucrose and you know both the final concentration and volume of solution that you want, all you need to find out is what volume of starting solution (V1) to use. V1 = (C2 V2)/C1, that is, V1 = (25 grams/liter 0.1 liter) ÷ 47 grams/liter = 0.053 liter = 53 ml Notice that the calculation comes out to 0.05319149L using full precision, but I rounded off the required volume to the nearest ml. There is a limit to the precision with which we can prepare a solution, and also a limit to the precision that we really need. You would use a 100 ml graduated cylinder to determine final volume. You would be able to read the markings to the nearest 1 ml, as a It is critical that you report units for concentrations, volumes, and amounts, and when you make calculations for dilutions you must not mix up the units. For example, it doesn't work to write, V1 (160 milliters) C1(160 milligrams/liter = V2 (unknown) C2(desired-3 grams/liter) However, because concentration represents a proportional relationship, you can select from a variety of units. For example, 1 milligram/milliliter is the same as 1 gram/liter, 1 microgram/microliter, or 1 nanogram/nanoliter. It is also the same as 1000 milligrams/liter, but why would we write it that way? Select units that simplify your expressions.
{"url":"http://www.ruf.rice.edu/~bioslabs/methods/solutions/dilutions.html","timestamp":"2014-04-19T22:06:20Z","content_type":null,"content_length":"17775","record_id":"<urn:uuid:f62b0e68-ff19-40df-80ed-fa689c4a42be>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Empirical Characteristic Function The empirical characteristic function (ecf) of a random sample {, , ...} from a statistical distribution is defined by In this representation, each random variable can be envisioned as a particle orbiting the unit circle in the complex plane. The ecf is the expected orbit or mean of the random variable orbits. For large , the ecf converges to the distribution characteristic function. The graphic shows the orbit of a standardized stable distribution with parameters and in blue. The orbit of the ecf of 500 random variables with the same parameters is shown in red and the position, at , of each random variable on the unit circle is shown as a blue dot. The red dot is the mean of these positions. Each time you change the or slider a new random sample is generated. When or , the distribution will be symmetric about zero and the characteristic function will be confined to the real line, the axis in this Demonstration. The characteristic function of a statistical distribution is the Fourier transform of the derivative of its distribution function. A stable distribution is used in the example. Stable distributions lack for most cases a distribution function with an explicit formula. The standardized stable characteristic function, however, is straightforward: At , , as must the empirical characteristic function. The ecf converges with the characteristic function most quickly where is close to zero. As the sample size grows, the convergence becomes closer further from . Empirical characteristic functions can be used for parameter estimation in cases where the characteristic function of the statistical distribution is known. They can be used alone in the same way one would use a known characteristic function. For instance, the characteristic function of a sum of random variables is equal to the product of the characteristic functions of each independent random variable. Or a symmetrized characteristic function can be created from the product of the characteristic function and its conjugate. Examples and code for use of empirical CharacterCode functions can be found at
{"url":"http://demonstrations.wolfram.com/EmpiricalCharacteristicFunction/","timestamp":"2014-04-19T04:25:57Z","content_type":null,"content_length":"46555","record_id":"<urn:uuid:a38e2b5c-2070-4fd2-886d-b8532691ecb8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Frequency Weighting Equations Note: Due to Internet Explorer's brain-dead handling of CSS, the equations below may not look right in Internet Explorer (or IE-based browsers). Sorry, nothing I can do about it on my end. You can try another browser, or complain to Microsoft. Frequency Weighting Equations: This was the response to a question about A-, B- and C-weighted filters on alt.sci.physics.acoustics (you can see the original posting here). These equations are also described in ANSI Standards S1.4-1983 and S1.42-2001. Thanks to Neil Glenister: The s-domain transfer function for C-weighting is: Adding an extra real-axis pole to the C-weighting transfer function gives us B-weighting: Adding two real-axis poles to the C-weighting transfer function gives us A-weighting: where π=3.14159...etc and s is the complex variable. If you are only interested in the steady-state response then the weightings may be expressed in terms of frequency alone: These filters show a loss at 1kHz of 2.0dB ,0.17dB , 0.06dB for A , B and C weightings respectively and , since it is usual to normalise the response of each filter to 1kHz , this loss must be added to the to the modulus . In other words the responses may be expressed (in dB's) as follows: C = 0.06 + 20*log(Rc(f)) B = 0.17 + 20*log(Rb(f)) A = 2.0 + 20*log(Ra(f)) I don't have the transfer function for D-weighting easily to hand , but I can tell you the position of the poles and zeroes (from IEC 537) Poles Zeroes -282.7 + j0 -519.8 + j876.2 -1160 + j0 -519.8 - j876.2 -1712 + j2628 0 + j0 -1712 - j2628 A graph of the relative response of the functions is shown below: Updated April 20, 1998 A set of Matlab scripts (“Octave”) filter scripts are available at Matlab Central. The script defines A-weighting and C-weighting scripts in terms of their poles and zeroes as defined in IEC/CD 1672 (and ANSI S1.42-2001). The normalization constants are integrated into the zeroes. The implementation appears to meet the ANSI standard; the (time domain) impulse response is spot-on with the calculated IR given in ANSI S1.42-2001. The phase response appears to wrap differently than the response given in the ANSI standard, but I suspect that is related to the model inputs rather than any weakness in the model. As long as you use these filters to calculate magnitudes, the phase anomalies should not pose any problems. Note that if you're running these scripts under GNU Octave, you'll need to make the following changes to adsgn.m and cdsgn.m: [B,A] = bilinear(NUMs,DENs,Fs); should be changed to: [B,A] = bilinear(NUMs,DENs,1/Fs); in both files to account for Octave's syntax. September 6, 2004
{"url":"http://www.cross-spectrum.com/audio/weighting.html","timestamp":"2014-04-18T10:34:25Z","content_type":null,"content_length":"7577","record_id":"<urn:uuid:0bf1f325-4893-4b76-9736-07d7814e7908>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
finding equidistant point November 26th 2007, 09:49 AM #1 Oct 2007 finding equidistant point i need to find the poing on the x-axis that is equidistant from (10,4) and (6,2). is there a specific formula for this? The set of points in the plane equidistant from two points is the perpendicular bisector of the line segment determined by the points. Where does that line intersect the x-axis? November 26th 2007, 10:13 AM #2
{"url":"http://mathhelpforum.com/pre-calculus/23539-finding-equidistant-point.html","timestamp":"2014-04-18T15:43:35Z","content_type":null,"content_length":"32472","record_id":"<urn:uuid:75442337-cc3b-4842-a247-93a5d348c7aa>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Spherical geometry From Encyclopedia of Mathematics An area of mathematics concerned with geometric figures on a sphere, in the same way as planimetry is concerned with geometric figures in a plane. Every plane that intersects a sphere gives a certain circle as section; if the intersecting plane passes through the centre Figure: s086680a Figure: s086680b Figure: s086680c Figure: s086680d Figure: s086680e Figure: s086680f Figure: s086680g Figure: s086680h The great circles of a sphere are its geodesics (cf. Geodesic line), and for this reason their role in spherical geometry is the same as the role of straight lines in planimetry. However, whereas any segment of a straight line is the shortest curve between its ends, an arc of a great circle on a sphere is only the shortest curve when it is shorter than the complementary arc. Spherical geometry differs from planimetry in many other senses; for example, there are no parallel geodesic lines: two great circles always intersect, and, moreover, they intersect in two points. The length of a segment When two great circles intersect on a sphere, four spherical digons, or lunes, are formed (Fig. c). A lune is defined by specifying its angle. The area of a lune is determined by the formula Three great circles that do not intersect in one pair of diametrically-opposite points form eight spherical triangles on the sphere (Fig. d); if the elements (angles and sides) of one of these is known, it is easy to determine the elements of all the others. It is therefore usual to consider only triangles whose sides and angles are less than Triangles that can be matched up by a movement around the sphere are said to be directly congruent. Such triangles have equal elements and the same orientation. Triangles that have equal elements and a different orientation are called oppositely symmetric; the triangles In every spherical (Euler) triangle, each side is less than the sum of, and more than the difference between, the other two; the sum of all the sides is always less than Spherical trigonometry. The position of each point on a sphere is completely defined by the specification of two numbers; these two numbers (coordinates) can be defined in the following way (Fig. g). A great circle The length [1] N.N. Stepanov, "Spherical trigonometry" , Leningrad-Moscow (1948) (In Russian) [2] P.S. Alexandroff [P.S. Aleksandrov] (ed.) et al. (ed.) , Enzyklopaedie der Elementarmathematik , 4. Geometrie , Deutsch. Verlag Wissenschaft. (1969) (Translated from Russian) As mentioned above, a spherical triangle with sides A pole of a great circle is a point of the sphere perpendicular to the plane cutting out that great circle; i.e. if the great circle is regarded as the equator, the two poles are the North and South [a1] M. Berger, "Geometry" , II , Springer (1987) [a2] D. Hilbert, S.E. Cohn-Vossen, "Geometry and the imagination" , Chelsea (1952) (Translated from German) [a3] B.A. [B.A. Rozenfel'd] Rosenfel'd, "A history of non-euclidean geometry" , Springer (1988) (Translated from Russian) [a4] J.L. Coolidge, "A treatise on the circle and the sphere" , Clarendon Press (1916) [a5] H.S.M. Coxeter, "Introduction to geometry" , Wiley (1961) pp. 11; 258 How to Cite This Entry: Spherical geometry. V.I. Bityutskov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Spherical_geometry&oldid=15193 This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Spherical_geometry","timestamp":"2014-04-18T03:20:59Z","content_type":null,"content_length":"28641","record_id":"<urn:uuid:ba9a24e9-0b7e-4435-b643-343ba79dc457>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Greenbrae Algebra 2 Tutor Find a Greenbrae Algebra 2 Tutor ...I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply understand math and take great joy in communicating this to reluctant and struggling students, as well as to able students who want to maximize their achieveme... 20 Subjects: including algebra 2, calculus, geometry, biology ...It was a very interesting subject and I'll be more than happy to help students with any difficulty they have. I took many pharmacy-related courses and have solid and extensive knowledge in pharmacology. I have also successfully passed the pharmacy technician certification exam with top scores and obtained the certificate as evidence of certification. 22 Subjects: including algebra 2, calculus, statistics, geometry ...In addition, he has a lifelong passion for mathematics and, in addition to tutoring all grade levels in math, has volunteered for 6 years in the local public schools in San Rafael (including mathematics instruction and Odyssey of the Mind coach). Dr. G. has a daughter who is currently in high school. He enjoys music, hiking and geocaching.Dr. 13 Subjects: including algebra 2, calculus, statistics, physics ...I was the coordinator of the program for two years, which means I was in charge of, among other things, developing new ides and updating our old ones. It's an activity I thoroughly enjoyed. My experience with elementary-age students is heavier on math than language arts, but I have successfully tutored students in the latter as well. 32 Subjects: including algebra 2, chemistry, Spanish, calculus ...I strive to help my students understand chemistry concepts on an intuitive level so they can tackle a wide range of problems. I score in the top of my class in writing, and my work has been in several online publications. I have also performed my poetry and spoken at many university events. 32 Subjects: including algebra 2, chemistry, English, reading
{"url":"http://www.purplemath.com/greenbrae_ca_algebra_2_tutors.php","timestamp":"2014-04-21T02:06:51Z","content_type":null,"content_length":"24208","record_id":"<urn:uuid:55e9bdba-182c-4d52-ba22-86050cc9250a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Transparent Soap Re: Transparent Soap Because I am not using humanities silly science and mathematics. You are correct, they could not find their own butt with both hands. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://mathisfunforum.com/viewtopic.php?pid=292537","timestamp":"2014-04-18T16:19:31Z","content_type":null,"content_length":"12652","record_id":"<urn:uuid:e731d225-c22a-4d61-a64c-9ebe6ef67442>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00186-ip-10-147-4-33.ec2.internal.warc.gz"}
The FINTSCHED function calculates the interest portion of the payments on a series of fixed-rate installment loans that are paid off over a specified number of time periods. For each time period, you specify the amount of the loans incurred during that time period and a single interest rate that will apply to those loans over their lifetime. FINTSCHED calculates the result for a given time period as the sum of the interest due on each loan that is incurred or outstanding in that period. The result returned by the FINTSCHED function is dimensioned by the union of all the dimensions of loans, rates, n, and the dimension used as the time-dimension argument. FINTSCHED(loans, rates, n, [time-dimension] [STATUS]) A numeric expression that contains the initial amounts of the loans. When loans does not have a time dimension, or when loans is dimensioned by more than one time dimension, the time-dimension argument is required. A numeric expression that contains the interest rates charged for loans. When rates is a dimensioned variable, it can be dimensioned by any dimension, including a different time dimension. When rates is dimensioned by a time dimension, you specify the interest rate in each time period that will apply to the loans incurred in that period. The interest rate for the time period in which a loan is incurred applies throughout the lifetime of that loan. The rates are expressed as decimal values; for example, a 5 percent rate is expressed as.05. A numeric expression that specifies the number of payments required to pay off the loans in the series. The n expression can be a dimensioned variable, but it cannot be dimensioned by the time dimension argument. One payment is made in each time period of the time dimension by which loans is dimensioned or in each time period of the dimension specified in the time-dimension argument. For example, one payment is made each month when loans is dimensioned by MONTH. The name of the dimension along which the interest payments are calculated. When the time dimension has a type of DAY, WEEK, MONTH, QUARTER, or YEAR, the time-dimension argument is optional, unless loans has more than one time dimension. Specifies that FINTSCHED should use the current status list (that is, only the dimension values currently in status in their current status order) when computing the interest portion of the payments. By default FINTSCHED uses the default status list. When loans has a value other than NA and the corresponding value of rates is NA, an error occurs. FINTSCHED is affected by the NASKIP option. When NASKIP is set to YES (the default), and a loan value is NA for the affected time period, the result returned by FINTSCHED depends on whether the corresponding interest rate has a value of NA or a value other than NA. Table 7-11, "Effect of NASKIP When Loan or Rate Values are NA for a Time Period" illustrates how NASKIP affects the results when a loan or rate value is NA for a given time period. Table 7-11 Effect of NASKIP When Loan or Rate Values are NA for a Time Period │ Loan Value │ Rate Value │ Result When NASKIP = YES │ Result When NASKIP = NO │ │ Non-NA │ NA │ Error │ Error │ │ NA │ Non-NA │ Interest values │ NA for the affected time periods │ │ │ │ │ │ │ │ │ (NA loan value is treated as zero) │ │ │ NA │ NA │ NA for affected time periods │ NA for the affected time periods │ As an example, suppose a loan expression and a corresponding interest expression both have NA values for 1997 but both have values other than NA for succeeding years. When the number of payments is 3, FINTSCHED returns NA for 1997, 1998, and 1999. For 2000, FINTSCHED returns the interest portion of the payment due for loans incurred in 1998, 1999, and 2000. FINTSCHED Ignores the Status of the Time Dimension The FINTSCHED calculation begins with the first time dimension value, regardless of how the status of that dimension may be limited. For example, suppose loans is dimensioned by year, and the values of year range from Yr95 to Yr99. The calculation always begins with Yr95, even when you limit the status of year so that it does not include Yr95. However, when loans is not dimensioned by the time dimension, the FINTSCHED calculation begins with the first value in the current status of the time dimension. For example, suppose loans is not dimensioned by year, but year is specified as time-dimension. When the status of year is limited to Yr97 to Yr99, the calculation begins with Yr97 instead of Yr95. Example 7-98 Calculating Interest The following statements create two variables called loans and rates. DEFINE loans DECIMAL <year> DEFINE rates DECIMAL <year> Suppose you assign the following values to the variables loans and rates. YEAR LOANS RATES -------------- ---------- ---------- Yr95 100.00 0.05 Yr96 200.00 0.06 Yr97 300.00 0.07 Yr98 0.00 0.00 Yr99 0.00 0.00 For each year, loans contains the initial value of the fixed-rate loan incurred during that year. For each year, the value of rates is the interest rate that will be charged for any loans incurred in that year; for those loans, this same rate is charged each year until the loans are paid off. The following statement specifies that each loan is to be paid off in three payments, calculates the interest portion of the payments on the loans, REPORT W 20 HEADING 'Payment' FINTSCHED(loans, rates, 3, year) and produces the following report. YEAR Payment -------------- -------------------- Yr95 5.00 Yr96 15.41 Yr97 30.98 Yr98 18.70 Yr99 7.48 The interest payment for 1995 is interest on the loan of $100 incurred in 1995, at 5 percent. The interest payment for 1996 is the sum of the interest on the remaining principal of the 1995 loan, at 5 percent, plus interest on the loan of $200 incurred in 1996, at 6 percent. The 1997 interest payment is the sum of the interest on the remaining principal of the 1995 loan, at 5 percent; interest on the remaining principal of the 1996 loan, at 6 percent; and interest on the loan of $300 incurred in 1997, at 7 percent. Since the 1995 loan is paid off in 1997, the payment for 1998 represents interest on the remaining principal of the 1996 and 1997 loans. In 1999, the interest payment is on the remaining principal of the 1997 loan.
{"url":"http://docs.oracle.com/cd/B28359_01/olap.111/b28126/dml_functions_1085.htm","timestamp":"2014-04-16T17:31:45Z","content_type":null,"content_length":"18672","record_id":"<urn:uuid:e929f939-f71b-4081-92d0-2035cebe73ea>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help May 24th 2013, 07:17 AM #1 May 2009 I'm trying to find the limit of this function as x tends to 0 but I'm completely unsure of how to proceed. I can't seem to find any identities that would allow me remove the denominator. Any idea about how to go about it? $\frac{3 + cos{x} - 4e^{2e}}{1 + sin{3x} - cos{3x} + x}$ Re: limit As this is an indeterminate form at the limit (0/0) you are probably expected to use L'hopital's rule. An alternative is to try small angle approximations. Last edited by zzephod; May 24th 2013 at 07:52 AM. May 24th 2013, 07:21 AM #2 Apr 2012
{"url":"http://mathhelpforum.com/calculus/219281-limit.html","timestamp":"2014-04-20T23:31:42Z","content_type":null,"content_length":"33987","record_id":"<urn:uuid:21d9d66a-ad15-4000-9a8a-15aa30883485>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponentiable objects in a category, valued in a larger, containing category up vote 7 down vote favorite Recall that when dealing with topological spaces one usually likes dealing with a subcategory of $Top$ which is convenient, one facet of which is that it is cartesian closed. However to get to a similar point with smooth manifolds one needs to consider things like diffeological spaces. Not that there is anything wrong with that. But we have a partial solution if we are just looking for exponentiable objects, and willing to consider infinite-dimensional smooth manifolds (usually Frechet manifolds). More formally, an object $A$ of category $C$ with binary products is exponentiable if the functor $-\times A\colon C\to C$ has a right adjoint. The classification of which topological spaces are exponentiable is well known, and cartesian closed categories are defined by the fact that every object is exponentiable. But in the category of (Hausdorff, finite-dimensional) smooth manifolds the only exponentiable objects are the compact manifolds of dimension at most zero. But we can still sensibly talk about smooth mapping spaces between a general compact manifold and an arbitrary manifold, where the mapping space is an object in the category of Frechet manifolds $Frech$, in which the category $Diff$ of finite-dimensional smooth manifolds sits as a full subcategory. There are clear analogies with, say, finite CW-complexes, where the 'internal' hom is a topological space of a rather more infinite nature. Similarly, we can consider the mapping presheaf $X \mapsto C(X\times A,Y)$ on C. What I would like to know is if there is a name for this sort of phenomenon, that we have a category $C$, and full embedding $C\hookrightarrow D$ and $D$-valued mapping objects for certain objects of $C$: these are objects of $C$ which are exponentiable as objects of $D$. It seems to fall into some gap between cartesian closedness and enrichment, but I don't have a way of making that precise. ct.category-theory smooth-manifolds cartesian-closed I don't think there's a name for this, but this property can probably be rephrased as the existence of some Kan extensions along the inclusion. – Fernando Muro Aug 15 '12 at 15:37 add comment 1 Answer active oldest votes Fernando is right that it has something to do with Kan extensions. However, it is not about Kan extensions along an inclusion, but more about extension of an inclusion. I thought about similar issues a few years ago (and recently --- yesterday) , but in a slightly different context --- after the excellent answer by Todd Trimble to my question Completion of a category, I wondered if there was a general 2-categorical setting that could explain such constructions (I was mainly interested in carrying to a 2-categorical setting the highly related concept of Day convolution). Now I try to slowly reproduce some of these ideas. I shall introduce the concept of a Yoneda triangle (perhaps I should call it the right Yoneda triangle, because there are obvious dual concepts). Let $\mathbb{W}$ be a 2-category. A Yoneda triangle in $\mathbb{W}$ consists of 1-morphisms $y \colon A \rightarrow \overline{A}$, $f \colon A \rightarrow B$, $g \colon B \rightarrow \ overline{A}$ together with a 2-morphism $\eta \colon y \rightarrow g \circ f$ which exhibits $g$ as a pointwise left Kan extension of $y$ along $f$ and exhibits $f$ as an absolute left Kan lifting of $y$ along $g$. (BTW: these data are exactly what led Mark Weber to strengthen the definition of a Yoneda structure introduced by Street and Walters). The idea of a Yoneda triangle is that, we have a morphism $y \colon A \rightarrow \overline{A}$ which plays the role of a "defective identity" and for a given morphism $f \colon A \rightarrow B$ we try to characterise its right adjoint up to the "defective identity" $y$. Example: [Yoneda triangles in $\mathbf{Cat}$] If we take $\mathbb{W}$ to be the 2-category $\mathbf{Cat}$ of locally small categories, functors and natural transformations, then the condition that $G$ is a pointwise left Kan extension of $Y$ along $F$ reduces to: $$G(-) = \int^{A\in\mathbb{A}} \hom(F(A), -) \times Y(A)$$ (where the coend has to be interpreted as the colimit of $Y$ weighted by $\hom(F(=), -)$ in case the category is not tensored over $\mathbf{Set}$). And the condition that $F$ is an absolute left Kan lifting of $Y$ along $G$ reduces to: $$\hom(Y(-), G (=)) \approx \hom(F(-), =)$$ Particularly, if $Y$ is dense, than $G$ is canonically a pointwise Kan extension --- from density we have: $$G(-) \approx \int^{A\in\mathbb{A}} \hom(Y(A), G(-)) \times Y(A)$$ and using the formula for an absolute lifting: $$G(-) \approx \int^{A\in\mathbb{A}} \hom(F(A), -) \times Y(A)$$ Example: [Adjunction as Yoneda triangle] It is folklore that an adjunction $f \dashv g$ in a 2-category $\mathbb{W}$ may be equally characterised in the following way: $f$ is an absolute left lifting of the identity along $g$. In such a case $g$ is automatically a pointwise left extension of the identity along $f$ and $\mathit{id}, f, g$ together with the unit of the adjunction form a Yoneda triangle. Example: [Yoneda triangle as a relative adjunction] There is an old concept of so called "relative adjunction", which is defined in the same way as the Yoneda triangle, but without the requirement that $g$ is a left Kan extension. Note however, that in such a case $g$ need not be uniquely determined by $f$. Let me move to the more specific example that you asked about. Example: [Yoneda triangle along Yoneda embedding] Let $F \colon \mathbb{A} \rightarrow \mathbb{B}$ be a functor between locally small categories (or more generally, a locally small functor). There is also an inclusion $y_\mathbb{A} \colon \mathbb{A} \rightarrow \mathbf{Set}^{\mathbb{A}^{op}}$. One may easily verify that these data may be always extended to the Yoneda triangle with $G(-) = \hom(F(=), -) \colon \mathbb{B} \rightarrow \mathbf{Set}^{\mathbb{A}^{op}}$ --- which reassembles the fact that every functor always has a "distributional" right adjoint. The same is true for internal categories and for categories enriched in a complete and cocomplete symmetric monoidal closed category, and generally (almost by definition) for any 2-category equipped with a Yoneda structure. The essence of the above example is that because the Yoneda functor $y_\mathbb{B} \colon \mathbb{B}\rightarrow \mathbf{Set}^{\mathbb{B}^{op}}$ is a full and faithful embedding, functors $F\ colon\mathbb{A} \rightarrow \mathbb{B}$ may be thought as of distributors $$y_\mathbb{B} \circ F = \hom(=, F(-))$$ Every distributor arisen in this way has a right adjoint distributor $\hom(F (=), -)$ in the bicategory of distributors. The distributor $\hom(F(=), -)$ has actually the type $\mathbb{B} \rightarrow \mathbf{Set}^{\mathbb{A}^{op}}$, which is the only think that may prevent $F$ of having the ordinary (functorial) right adjoint $G \colon \mathbb{B} \rightarrow \mathbb{A}$ --- just recall, that we say that $F$ has a right adjoint, if there exists $G$ such up vote that: $$y_\mathbb{A} \circ G \approx \hom(F(=), -)$$ which means: $$\hom(=, G(-)) \approx \hom(F(=), -)$$ 2 down vote Unfortunately, as a non-mathematician I will not help you with your other examples involving highly mathematical and completely non-understandable terms like a topological space or a manifold, so perhaps you have to calculate the other examples yourself :-) However, I will give you another example that actually led me to the above considerations. One may similarly define the concept of a Yoneda bi-triangle and a Yoneda monoidal bi-triangle. Example: [2-powers from Yoneda triangle] The motivating example is to start with a 2-functor $J \colon \mathbb{W} \rightarrow \mathbb{D}$ equipping a 2-category $\mathbb{W}$ with proarrows, and an extension $Y \colon \mathbb{W} \rightarrow \overline{\mathbb{W}}$ embedding "small objects" into "locally small" (or large) objects in $\overline{\mathbb{W}}$. Then to extend these data to the Yoneda triangle, we have to find a functor $P \colon \mathbb{D} \rightarrow \overline{\mathbb{W}}$ representing a proarrow $A \nrightarrow B$ as a morphism $A \rightarrow P(B)$ in $\overline{\mathbb{W}}$, and a natural transformation $\eta \colon Y \rightarrow P\circ J$ playing the role of a familly of Yoneda morphisms $\eta_A \colon A \rightarrow P(A)$. The archetypical situation is when we take $\mathbb{W} = \mathbf{cat}$, $\overline{\mathbb{W}} = \mathbf{Cat}$, $\mathbb{D} = \mathbf{Dist}$, where $\mathbf{cat}$ is the 2-category of small categories, $\mathbf{Cat}$ is the 2-category of locally small categories, and $\mathbf{Dist}$ is the bicategory of distributors between small categories. Then $J \colon \mathbf{cat} \ rightarrow \mathbf{Dist}$, $Y \colon \mathbf{cat} \rightarrow \mathbf{Cat}$ are the usual embeddings, $P \colon \mathbf{Dist} \rightarrow \mathbf{Cat}$ is the covarinat 2-power pseudofunctor $\mathbf{Set}^{(-)^{op}}$ defined on distributors via left Kan extensions, and $\eta_\mathbb{A} \colon \mathbb{A} \rightarrow \mathbf{Set}^{\mathbb{A}^{op}}$ is the Yoneda embedding of a small category $\mathbb{A}$. We know that there are isomorphisms of categories: $$\hom_{\mathbf{Dist}}(\mathbb{A}, \mathbb{B}) \approx \hom_{\mathbf{Cat}}(\mathbb{A}, \mathbf{Set}^{\mathbb{B}^{op}})$$ where $\mathbb{A}$ and $\mathbb{B}$ are small. Therefore, to show that $P$ is a (bi)pointwise left Kan extension it suffices to show that $Y$ is 2-dense. However, $Y$ is obviously 2-dense, because the the terminal category is a 2-dense subcategory of $\mathbf{Cat}$ and $Y$ is fully faithful. The point is that in most situations $\mathbb{D}$ is a monoidal (bi)category, where the monoidal structure is inherited from the closed structure on $\mathbb{W}$. Moreover, functors and the natural transformation constituting the Yoneda triangle are (lax)monoidal. This means that monoids in $\mathbb{W}$ are mapped to the (pro)monoids in $\mathbb{D}$ which are mapped to monoids in $\overline{\mathbb{W}}$. If I am not mistaken this observation leads to an abstract characterisation of the concept of the Day convolution (and in a similar manner one may try to define a Dedekind-MacNeille completion of an object). In our archetypical situation, a category $\mathbb{A} \times \mathbb{B}$ is mapped by $P$ to $\mathbf{Set}^{\mathbb{A}^{op} \times \mathbb{B}^{op}}$ and the missing morphisms making the unit of the triangle lax monoidal: $$\mathbf{Set}^{\mathbb{A}^{op}} \times \mathbf{Set}^{\mathbb{B}^{op}} \rightarrow \mathbf{Set}^{\mathbb{A}^{op} \times \mathbb{B}^{op}}$$ is given by the convolution of the distributional identity $\mathbb{A} \times \mathbb{B} \nrightarrow \mathbb{A} \times \mathbb{B}$: $$\langle F, G \rangle \mapsto \int^{A \in \mathbb{A}, B \in \mathbb{B}} F (A) \times G(B) \times \hom(-, A) \times \hom(=, B) = F(-) \times G(=)$$ Now, a promonoidal category $M \colon \mathbb{A} \times \mathbb{A} \nrightarrow \mathbb{A}$ is mapped by $P$ to: $$H \ mapsto \int^{\langle A, B \rangle \in \mathbb{A}\times \mathbb{A}} H(A, B) \times M(-, A, B)$$ and by composing it with the above map: $$\langle F, G \rangle \mapsto \int^{\langle A, B \ rangle \in \mathbb{A}\times \mathbb{A}} F(A) \times G(B) \times M(-, A, B)$$ we obtain the well-known formula for convolution. One may also go in the other direction --- starting from the composition $P \circ J$ satisfying monoidal-like laws and try to find a right or left resolution in the category of (right/left) modules over monoid on $P \circ J$. If I am not mistaken, the left resolution (the Eilenberg-Moore object) of $P \circ J$ in our archetypical situation consists of the category of cocomplete categories and cocontinous functors and the right resolution (the Kleisli object) consists of the bicategory of distributors (i.e. the category of free cocomplete categories and cocontinous (BTW: this in some sense relates the concept of a proarrow equipment with the concept of a Yoneda structure.) (BTW: perhaps the concept of a 2-topos should be defined as a Yoneda monoidal bi-triangle induced by the embedding of a 2-category of small objects into a category of bigger objects relatively to a category of "relations" in $\mathbb{W}$, which, for some purposes may be defined as the 2-category of discrete fibred spans, and for another purposes may be defined as the 2-category of codiscrete cofibred cospans). This will take some digesting! Thanks for thinking about this. – David Roberts Apr 18 '13 at 5:33 +1. I only gave this a cursory read and can't comment on anything past where the word "topos" appears, but the beginning makes a lot of sense to me and the fact that distributors came up in the middle convinces me that this is the right thing. I'm not sure, but I suspect there's a simpler way to do this, relying on distributors and Kan extensions without mention of Yoneda triangles. BTW, I think Michal should identify at least in part as a mathematician. I certainly count this as math, and good math at that! – David White Apr 18 '13 at 14:16 1 @David Roberts, let me know when you find any interesting examples! @David White, I have expanded the example of a 2-topos (now it is called 2-power) --- do hope it is more readable now. – Michal R. Przybylek Apr 19 '13 at 22:00 add comment Not the answer you're looking for? Browse other questions tagged ct.category-theory smooth-manifolds cartesian-closed or ask your own question.
{"url":"http://mathoverflow.net/questions/104742/exponentiable-objects-in-a-category-valued-in-a-larger-containing-category/127848","timestamp":"2014-04-21T03:01:46Z","content_type":null,"content_length":"67202","record_id":"<urn:uuid:02deede8-f41c-4a5f-97a3-9ceb42ff33d6>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
Dockweiler, CA Algebra 2 Tutor Find a Dockweiler, CA Algebra 2 Tutor ...I believe that building a good foundation is the key to success in any subject. I have come up with many different tricks for helping students remember key ideas in math and the biggest compliment I received was when one client told me she was showing her class my "wedding cake" trick (one that ... 11 Subjects: including algebra 2, physics, geometry, algebra 1 ...As a lifelong learner with a distinct passion for teaching and a great interest in helping students succeed, I believe that I will be an effective tutor for you or your child. I recently moved to Los Angeles from my hometown of Ann Arbor, Michigan, and I am currently pursuing a master’s degree a... 28 Subjects: including algebra 2, reading, English, writing ...However some concepts in algebra 1 can be challenging as well. But challenging doesn't mean impossible. With hard work and some patience from the student and tutor, algebra 1 can be mastered. 19 Subjects: including algebra 2, English, Spanish, chemistry ...My first language is Spanish, and my basic academic education was in Spanish as well. During the 1980s, I worked for LAUSD as Bilingual assistant until the Bilingual Education was over. I also have been a tutor for over 15 years in various subjects, including Spanish. 8 Subjects: including algebra 2, Spanish, geometry, chemistry ...In addition to providing support with the aforementioned tasks, I've also enjoyed tremendous success helping students draft razor-sharp personal statements for their college and graduate school applications. Please let me know if I can be of assistance.The California Basic Educational Skills Tes... 27 Subjects: including algebra 2, reading, English, physics Related Dockweiler, CA Tutors Dockweiler, CA Accounting Tutors Dockweiler, CA ACT Tutors Dockweiler, CA Algebra Tutors Dockweiler, CA Algebra 2 Tutors Dockweiler, CA Calculus Tutors Dockweiler, CA Geometry Tutors Dockweiler, CA Math Tutors Dockweiler, CA Prealgebra Tutors Dockweiler, CA Precalculus Tutors Dockweiler, CA SAT Tutors Dockweiler, CA SAT Math Tutors Dockweiler, CA Science Tutors Dockweiler, CA Statistics Tutors Dockweiler, CA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Cimarron, CA algebra 2 Tutors Dowtown Carrier Annex, CA algebra 2 Tutors Farmer Market, CA algebra 2 Tutors Foy, CA algebra 2 Tutors Green, CA algebra 2 Tutors Lafayette Square, LA algebra 2 Tutors Miracle Mile, CA algebra 2 Tutors Oakwood, CA algebra 2 Tutors Pico Heights, CA algebra 2 Tutors Rimpau, CA algebra 2 Tutors Sanford, CA algebra 2 Tutors Vermont, CA algebra 2 Tutors Westvern, CA algebra 2 Tutors Wilcox, CA algebra 2 Tutors Wilshire Park, LA algebra 2 Tutors
{"url":"http://www.purplemath.com/Dockweiler_CA_Algebra_2_tutors.php","timestamp":"2014-04-19T06:57:00Z","content_type":null,"content_length":"24212","record_id":"<urn:uuid:ff46eac8-5ac9-4470-a79d-304dd6b3f401>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Adaptive Selection Combining Receiver over Time Varying Frequency Selective Fading Channel in Class-A Noise ISRN Signal Processing Volume 2013 (2013), Article ID 894542, 6 pages Research Article Adaptive Selection Combining Receiver over Time Varying Frequency Selective Fading Channel in Class-A Noise German University in Cairo, New Cairo City 11835, Egypt Received 27 February 2013; Accepted 21 April 2013 Academic Editors: S. Kalitzin and W. Liu Copyright © 2013 Ahmed El-Sayed El-Mahdy. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. An adaptive selection combining (SC) scheme is proposed for time varying mobile communication channel in Class-A impulsive noise. The receiver adaptively selects a diversity branch out of the available branches and discards the others. This is performed by computing the maximum likelihood (ML) metric of each diversity branch and selects the branch with the maximum metric. The proposed adaptive SC scheme dynamically adjusts the threshold value according to the time variations of the channel. Equalization and data detection are performed after combining using maximum likelihood sequence estimation implemented by Viterbi algorithm (MLSE-VA). The minimum survivor technique is employed to reduce the complexity of the receiver. 1. Introduction In wireless communication networks, fading phenomenon imposes serious limitations upon the system performance. Diversity techniques as means of achieving high capacity communication systems and combating fading effects have been the subject of interest for many years. The traditional diversity combining techniques include maximal ratio combining (MRC), equal gain combining (EGC), and selection combining (SC). MRC coherently combines all diversity branches after weighing each branch with the respective gain of the branch. EGC coherently combines all diversity branches after weighing each branch with equal gain. In SC only one diversity branch is used for data reception. The usual way of selecting this branch is to choose the branch with the largest instantaneous SNR. Most literature in diversity is mainly limited to the conventional assumption of AWGN. AWGN realistically represents the thermal noise at the receiver but ignores the impulsive nature of atmospheric noise, electromagnetic interference, or man-made noise. Automatic ignition noise and power transmission lines are examples of impulsive noise sources encountered mainly in metropolitan areas [1]. One of the noise models that combines the Gaussian noise with a non-Gaussian impulsive noise is Class-A impulsive noise proposed by Middleton. Despite the practical and theoretical importance of the problem, only few results on diversity combining for Class-A noise are available in the literature [1–6]. In [1], the performance of a multirelay network with amplify-and-forward relaying over a flat Rayleigh fading channel in impulsive noise is considered. In [2], the performance of maximum ratio combining (MRC), equal gain combining (EGC), selection combining (SC), and postdetection combining under Class-A impulsive noise is analyzed. In [3], the bit error rate of diversity combining schemes for a single user communication system operating over flat Rayleigh fading channel subject to impulsive alpha-stable noise is derived. In [4], the authors study the asymptotic behavior of the bit error probability and the symbol error probability of quadratic diversity combining schemes such as MRC, differential EGC, and noncoherent combining in correlated Rician fading and non-Gaussian noise. In [5], the performance of postdetection combining over Rayleigh fading channel with impulsive noise is obtained and compared with the performance of MRC. In [6], optimum and suboptimum diversity combining schemes for coherent and differential M-ary phase shift keying impaired by Class-A impulsive noise over Rician fading channel are proposed. From the previous discussion we observe that SC schemes are developed for slow flat fading channel. However, in practice, most wireless channels of the communication systems, such as mobile radio, are time varying frequency selective fading channels and it is shown that diversity can also lead to significant performance improvements for frequency selective fading channels [7]. Moreover, as mentioned previously, most studies in this area consider the interfering noise as Gaussian. However, in many cases, the transmission is additionally disturbed by man-made noise which is impulsive noise. In this paper, an adaptive SC receiver is proposed for time varying frequency selective fading channel in presence of Class-A impulsive noise. The selection of the branch is performed according to the time variations of the channel. Therefore, the proposed adaptive SC receiver is more suitable for mobile channels. Channel estimation is performed by sign algorithm which is more stable than LMS algorithm in presence of strong impulsiveness of the noise. The rest of the paper is organized as follows. In Section 2, the Class-A impulsive noise is presented. In Section 3, the proposed adaptive SC receiver is introduced. Section 4 provides the numerical results and the conclusions are given in Section 5. 2. System Model In this section, the model of the frequency selective channel and the class-A impulsive noise is described. 2.1. Channel Model The channel is characterized by branches, each of which is time varying and has the same fading characteristics but is statistically independent of one another. For the th branch, , the sampled received signal is given by where is the channel memory length (the ISI length), is the number of symbols, are independent and identically distributed (i.i.d) complex valued zero mean white class-A impulsive noise samples of the th channel, is the sampled transmitted sequence with alphabet size and autocorrelation , and is the discrete time varying parameters of the th channel. The channel time varying parameters are usually modeled as Gaussian random process. However, a more precise description of the time variations of the channel coefficients can be provided for the multipath channels, which have small number of reflectors. For example, for constant vehicle velocity, the mobile radio channel is almost periodically varying when the multipath delays change linearly with time due to the carrier modulation inherent in the transmitted signal [8]. Its time varying parameters can be expressed as a combination of exponentials whose frequency depends on the carrier frequency and the vehicle speed. We consider the channels, whose time varying parameters can be approximated by a linear combination of a finite number of basis sequences [8], [9, page 383]: where are nonrandom expansion coefficients. For mobile radio channels, these basis sequences are expressed as , where are some known frequencies [8]. 2.2. Class-A Impulsive Noise Model Class-A impulsive noise model of Middleton is a generalized model of the Gaussian noise combined with a non-Gaussian impulsive noise. In this model, a frequency component of the impulsive noise is constrained within the bandwidth of the receiver. The class-A impulsive noise for complex channel has a probability density function (pdf), , given by [10] where the parameter is called the impulsive index: it is the product of the received average number of impulses per unit time and the duration of an impulse. This parameter defines the impulsiveness of the noise. For small , the noise becomes more impulsive; that is, exhibits large impulsive “tails” and for larger , the statistical characteristics of the class-A impulsive noise approach those of Gaussian noise. The variances are related to the physical parameters and are given by where the parameter defines the mean variance of the class-A impulsive noise. The model of the white class-A noise combines the presence of an additive man-made noise component with variance and a white Gaussian noise component with variance . The parameter in (4) is the ratio of the mean power of the Gaussian noise component to the non-Gaussian impulsive noise component. The white Gaussian noise component is presented in the class-A noise model to describe the influence of thermal noise which is naturally present in the real physical receiver. Note that consists of an infinite weighted sum of zero mean Gaussian densities with decreasing weights and increasing variances. An approximation to the model in (3) can be obtained by limiting the sum to the first three terms only which are found to be sufficient to give excellent approximation to the noise probability density functions [10]. 3. The Proposed Adaptive SC Receiver 3.1. Selection of Diversity Branch In this subsection, the method of selection of the best diversity branch is described. By substitution of (2) into (1), can be expressed as Let us define the following vectors: Let the parameters be assembled into the unknown vector : and also where the superscript denotes matrix transposition. Note that the vector collects the unknown channel parameters from all paths and it is called the channel parameters vector (CPV). Using the previous definitions, we can rewrite (5) in the following representation: Let and denote -symbols of the noisy received signal and the transmitted signal, respectively. Since the observation noise is assumed to be class-A impulsive noise, then the probability density function (pdf) of the received signal vector , conditioned on the vectors and , can be written as For equiprobable messages, the ML metric of the received signal from the th diversity branch can be written as Evaluating has a great difficulty because it requires evaluation of an infinite sum, which is not possible. A simplification can be performed under the condition that the impulsive index is sufficiently small. In this case, the infinite sum in (1) can be approximated by the maximum value of its first three terms [10]. According to this approximation the pdf of the class-A impulsive noise becomes Using this approximated pdf, can be written as The selection of the best diversity branch is performed by evaluating the ML metric for and selecting the branch that has the maximum and turning off the rest of branches. The selected diversity branch is most probably ML diversity branch at that time. This branch is regarded as the most likely close to the signal samples at that time. Note that, since the channel is time varying, the values of are changed every , and consequently, the best branch is updated every corresponding to channel fading level. Therefore, the receiver selects the optimum ML branch dynamically every time according to the variation of the channel. The selection of the branch is optimum in the sense of maximizing the log-likelihood function. Let denote the estimation of the CPV at time , then, in order to evaluate , the CPV at time , , and the data vector must be known. Therefore, at the start-up, a preamble sequence is transmitted and used to obtain and . After the start-up phase, the data sequence corresponding to the survivor with the maximum metric is used to update these values. The structure of the proposed adaptive SC receiver is discussed in detail in Section 3.3. 3.2. Data Detection and CPV Estimation After selecting the strongest branch, data detection is performed using MLSE-VA. The trellis structure for this problem has states, and each state , for the th signaling interval, corresponds to one of the possible previous symbols so that . For each state , there are transitions emerging from it and going to different states . Each transition corresponds to one of the possible choices for the symbol . In our problem, the trellis branch metric that is associated with each transition from state to state in the Viterbi trellis is defined as and the trellis path metric is given by The sign algorithm is used to estimate and uses the sequence estimated from VA. The decision delay inherent in the VA, that is necessary to obtain reliable data estimates, causes performance degradation in the adaptive channel estimation algorithm. To overcome this problem and to reduce the complexity of the algorithm, we use minimum survivor processing technique [8], in which one channel coefficients estimate is performed per time step for all states in the trellis instead of one estimate per survivor path. This estimate is performed using the data sequence associated with survivor path which has the lowest metric among all survivors. This procedure reduces the complexity of the algorithm. The realization of this procedure requires only the comparison of all survivor metrics at every time step. The sequence with the lowest metric is used to estimate the CPV at the new time step and then update the branch metric given by (15). It is important to observe that the true CPV is time invariant, so the task of the adaptive estimation algorithm is to converge to the CPV as opposed to tracking them as in the case of time varying coefficients. The estimation of CPV can be performed using the least mean square (LMS) algorithm. Occasionally, the LMS algorithm becomes unstable when the noise impulsiveness becomes stronger. This is because the LMS is based on a squared error function which is sensitive to strong impulsive samples [10]. A more robust alternative to the LMS algorithm, when the noise becomes more impulsive, is the sign algorithm (SA). The iterations of this algorithm are given as where is the estimation error conjugate at time step . This algorithm is based on clipping the error signal to its sign and it is also called least mean absolute deviation algorithm. The initialization of both algorithms is obtained by setting . 3.3. Structure of the Adaptive SC Receiver The structure of the proposed adaptive SC is shown in Figure 1. At the start-up phase, the transmitter sends a preamble sequence to the receiver. The receiver uses this sequence to calculate the initial values of and for all diversity branches. The receiver selects the branch which has maximum . After selecting the strongest diversity branch, sequence detection is performed using MLSE-VA with trellis branch metric given by (14) to identify the survivor path. The data sequence corresponding to the survivor with the lowest metric is used to update the CPV. Then, the data sequence corresponding to the survivor with the lowest metric and the updated CPV is used to update for the next time period. This procedure is repeated until all the received data are processed. 4. Numerical Results In this section, the performance of the proposed adaptive SC receiver in impulsive noise environment over frequency selective channel is evaluated. The parameters of simulation are as follows. The number of symbols is 100000. The number of diversity branches is . A class-A impulsive noise is generated with and added to the signal at the input of the receiver. The impulsive index of the noise is varied. First, the convergence properties of the SA under severe impulsiveness of the noise are evaluated in terms of the normalized mean square error of estimation (NMSE). The parameters of the generated impulsive noise are and . The results are shown in Figure 2 which is obtained by performing 10 independent runs of the algorithms. The result for is also included for comparison. The number of the processed symbols is 100000 and the step size parameter is set to for the sign algorithm. The initial value is and the SNR is 15dB. The results show that the sign algorithm converges to a steady state value of the NMSE. It is also shown that the steady state value of the NMSE for is greater than that one in case of . This is because as decreases, the impulsiveness of the noise increases, causing increase in the NMSE. It is noted that the value of the NMSE depends on the SNR which is illustrated in Figure 3. Both figures (Figures 2 and 3) show that the SA is suitable for estimation of the channel over impulsive noise and converges to a steady state value even for severe impulsiveness of the noise (). To illustrate the effect of the impulsive noise on the performance of the SC receiver, we plot Figures 4 and 5. Figure 4 is plotted assuming that the channel is known while Figure 5 is plotted when the sign algorithm is used to estimate the channel. The performance of the receiver is measured in terms of bit error rate (BER). In these figures, the BER is plotted versus SNR for , , and . The results show that at low SNR, the noise dominates the performance of the receiver and the BER is high for all values of . When SNR increases, the performance of the SC receiver degrades as the value of decreases. This is because as the value of the impulsive index becomes smaller, the noise impulsiveness becomes stronger, thus causing larger performance degradation. Finally, the comparison between the performance of the SC receiver with known channel and estimated channel is shown in Figure 6 for and 0.001. The figure shows that there is a gap between the performance in case of known channel and estimated channel. This gap is due to the channel estimation error which affects the performance of the SC receiver. 5. Conclusion An adaptive SC receiver has been proposed for time varying mobile communication channel contaminated with Class-A impulsive noise. The receiver adaptively selects one diversity branch out of the available branches and discards the others. This is performed by computing the maximum likelihood (ML) metric of each diversity branch and selects the branch which has maximum value. The proposed SC receiver dynamically selects the branch according to the time variations of the channel. Since the noise is impulsive, channel estimation is performed by sign algorithm which is more stable than LMS algorithm in presence of strong impulsiveness of the noise. The results show that the sign algorithm is adequate in estimation of the channel parameters in strong impulsiveness of the noise. The results also show that as the value of the impulsive index increases, the performance of the SC receiver is enhanced. This is because as the value of the impulsive index becomes larger, the noise impulsiveness becomes weaker, thus causing enhancement in the receiver performance. 1. S. Al-Dharrab and M. Uysal, “Cooperative diversity in the presence of impulsive noise,” IEEE Transactions on Wireless Communications, vol. 8, no. 9, pp. 4730–4739, 2009. View at Publisher · View at Google Scholar · View at Scopus 2. C. Tepedelenlioǧlu and P. Gao, “On diversity reception over fading channels with impulsive noise,” IEEE Transactions on Vehicular Technology, vol. 54, no. 6, pp. 2037–2047, 2005. View at Publisher · View at Google Scholar · View at Scopus 3. A. Rajan and C. Tepedelenlioglu, “Diversity combining over Rayleigh fading channels with symmetric alpha-stable noise,” IEEE Transactions on Wireless Communications, vol. 9, no. 9, pp. 2968–2976, 2010. View at Publisher · View at Google Scholar · View at Scopus 4. A. Nezampour, A. Nasri, R. Schober, and Y. Ma, “Asymptotic BEP and SEP of quadratic diversity combining receivers in correlated ricean fading, non-gaussian noise, and interference,” IEEE Transactions on Communications, vol. 57, no. 4, pp. 1039–1049, 2009. View at Publisher · View at Google Scholar · View at Scopus 5. C. Tepedelenlioǧlu and P. Gao, “Performance of diversity reception over fading channels with impulsive noise,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), vol. 4, pp. 389–392, May 2004. View at Publisher · View at Google Scholar · View at Scopus 6. R. Schober, Y. Ma, L. Lampe, and P. T. Mathiopoulos, “Diversity combining for coherent and differential M-PSK in fading and class-A impulsive noise,” IEEE Transactions on Wireless Communications, vol. 4, no. 4, pp. 1425–1432, 2005. View at Publisher · View at Google Scholar · View at Scopus 7. T. Hehn, R. Schober, and W. H. Gerstacker, “Optimized delay diversity for frequency-selective fading channels,” IEEE Transactions on Wireless Communications, vol. 4, no. 5, pp. 2289–2298, 2005. View at Publisher · View at Google Scholar · View at Scopus 8. A. E. El-Mahdy, “Adaptive channel estimation and equalization for rapidly mobile communication channels,” IEEE Transactions on Communications, vol. 52, no. 7, pp. 1126–1135, 2004. View at Publisher · View at Google Scholar · View at Scopus 9. M. Jeruchim, P. Balaban, and K. Shanmugan, Simulation of Communication System, Plenum Press, New York, NY, USA, 1992. 10. A. E. El-Mahdy, “Adaptive signal detection over fast frequency-selective fading channels under class-A impulsive noise,” IEEE Transactions on Communications, vol. 53, no. 7, pp. 1110–1113, 2005. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/isrn.signal.processing/2013/894542/","timestamp":"2014-04-19T06:10:05Z","content_type":null,"content_length":"205067","record_id":"<urn:uuid:0353a8a9-3ab0-4e42-9564-a73e8e68b33c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding positions at the same distance from two given points The moment I left the office I realised it. Excuse me for my sudden loss of intelligence. // Myx Last edited by Myx; February 8th 2011 at 07:58 AM. Reason: Felt very stupid once I got some fresh air I see you've found the answer to your problem. Here's my answer, if you find it helpful: Sounds like a circle to me. You're looking for the set of all points equidistant from two points in 3D space, right? [EDIT]: See Plato's post below for a clarification of this phraseology. That's going to be a circle. If you have two points, $\mathbf{r}_{1}$ and [MATH\mathbf{r}_{2},[/tex] with the given distance being $a>\sqrt{(\mathbf{r}_{1}-\mathbf{r}_{2})\cdot(\mathbf{r}_{1}-\mathbf{r}_{2})},$ then let $\mathbf{n}=\mathbf{r}_{2}-\mathbf{r}_{1},$ with length $n=\|\mathbf{n}\|.$ By the Pythagorean theorem, the circle in question is going to be the intersection of the sphere centered at $\mathbf{n}/2$ of radius $\sqrt{a^{2}-n^{2}/4}$, with the plane $\mathbf{n}\cdot\left(\mathbf{x}-\mathbf{n}/2 \right)=0.$ That is, the circle is the set of all points $\mathbf{x}$ such that $(\mathbf{x}-\mathbf{n}/2)\cdot(\mathbf{x}-\mathbf{n}/2)=a^{2}-n^{2}/4,$ and $\mathbf{n}\cdot\left(\mathbf{x}-\mathbf Last edited by Ackbeet; February 8th 2011 at 09:19 AM. Reason: Refer to Plato's post. "The set of all points equidistant from two points in 3D space" is a plane that is perpendicular to the line segment determined by the two points at its midpoint. Sorry. My phraseology was unfortunate. The OP originally stated that he wanted all the points that are the same, single, distance from two points. That is, you have two points A and B, and you want the locus of all points that are a distance a away from A, and a distance a away from B, where a is greater than the distance from A to B. Thank you for your answer, that is indeed pretty much the solution I came up with. The moment I took a breath of fresh air I felt very silly as Pythagora came to mind and solved the problem.
{"url":"http://mathhelpforum.com/pre-calculus/170545-finding-positions-same-distance-two-given-points.html","timestamp":"2014-04-17T05:14:11Z","content_type":null,"content_length":"46633","record_id":"<urn:uuid:15fc3b8d-7b41-44b7-a8c9-bf2434209369>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Pentagon' printed from http://nrich.maths.org/ We received good solutions from several people. Let's start by looking at the solution sent in by Tom of Wolgarston High School. Firstly, let's consider the triangular problem. If instead of coordinates we use vectors to describe the triangle, then we can write ${\bf p}_1$, ${\bf p}_2$ and ${\bf p}_3$ to describe the corners of the triangle (which we don't know yet) and ${\bf m}_1$, ${\bf m}_2$ and ${\bf m}_3$ to describe the midpoints of the lines. If we can find a way of expressing each vector ${\bf p}_1$, ${\bf p}_2$ and ${\bf p}_3$ using just the vectors of the midpoints, then we can locate the corners. By the standard vector laws, the midpoint between any two points descbribed by vectors is the average of those vectors. So we get the following equations: 1. ${\bf m}_1 = \frac{1}{2}({\bf p}_1 + {\bf p}_2)$ 2. ${\bf m}_2 = \frac{1}{2}({\bf p}_2 + {\bf p}_3)$ 3. ${\bf m}_3 = \frac{1}{2}({\bf p}_3 + {\bf p}_1)$ Now we solve the simultaneous equations to find expressions for the corners. For ${\bf p}_1$ add equations 1 and 2 and then subtract equation 3. This gives the following: ${\bf p}_1 = {\bf m}_1 + {\bf m}_2 - {\bf m}_3 $ and similarly for ${\bf p}_2$ and ${\bf p}_3$ we get: ${\bf p}_2 = {\bf m}_2 + {\bf m}_3 - {\bf m}_1 $ ${\bf p}_3 = {\bf m}_3 + {\bf m}_1 - {\bf m}_2 $ Thus we have expressed all the triangle's vertices as expressions of their midpoints. See if you can continue from here using the same method for pentagons. After looking at pentagons Tom then went on to look at the quadrilaterals and he found that he could no longer solve the simultaneous equations in the same way. Ben Kenny noticed that if an arrangement of midpoints produceded a quadrilateral then this quadrilateral was not necessarily unique. For example: Both quadrilaterals have the same midpoints but different vertices. Do you notice anything special about when we can find a quadrilateral? Try connecting the midpoints. Can you say anything interesting about the inner quadrilateral? Try using Tom's method and see what properties you might be able to gain by looking at the simultaneous equations.
{"url":"http://nrich.maths.org/306/solution?nomenu=1","timestamp":"2014-04-18T08:21:14Z","content_type":null,"content_length":"5701","record_id":"<urn:uuid:e2f5cd7b-0b1b-49af-8481-eefd29ebd6af>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Ideas 10th edition by Miller | 9780321168085 | Chegg.com Details about this item Mathematical Ideas: 1. The Art of Problem Solving. Solving Problems by Inductive Reasoning. An Application of Inductive Reasoning: Number Patterns. Strategies for Problem Solving. Calculating, Estimating, and Reading Graphs. Extension: Using Writing to Learn About Mathematics. Collaborative Investigation: Discovering Mathematics in Pascal's Triangle. Chapter 1 Test. 2. The Basic Concepts of Set Theory. Symbols and Terminology. Venn Diagrams and Subsets. Set Operations and Cartesian Products. Cardinal Numbers and Surveys. Infinite Sets and Their Cardinalities. Collaborative Investigation: A Survey of Your Class. Chapter 2 Test. 3. Introduction to Logic. Statements and Quantifiers. Truth Tables and Equivalent Statements. The Conditional and Circuits. More on the Conditional. Extension: Logic Puzzles. Analyzing Arguments with Euler Diagrams. Analyzing Arguments with Truth Tables. Collaborative Investigation: Logic Puzzles. Chapter 3 Test. 4. Numeration and Mathematical Systems. Historical Numeration Systems. Arithmetic in the Hindu-Arabic System. Conversion Between Number Bases. Finite Mathematical Systems. Groups. Collaborative Investigation: A Perpetual Calendar Algorithm. Chapter 4 Test. 5. Number Theory. Prime and Composite Numbers. Selected Topics from Number Theory. Greatest Common Factor and Least Common Multiple. Modular Systems. The Fibonacci Sequence and the Golden Ratio. Extension: Magic Squares. Collaborative Investigation: Investigating an Interesting Property of Number Squares. Chapter 5 Test. 6. The Real Numbers and Their Representations. Real Numbers, Order, and Absolute Value. Operations, Properties, and Applications of Real Numbers. Rational Numbers and Decimal Representation. Irrational Numbers and Decimal Representation. Applications of Decimals and Percents. Extension: Complex Numbers. Collaborative Investigation: Budgeting to Buy a Car. Chapter 6 Test. 7. The Basic Concepts of Algebra. Linear Equations. Applications of Linear Equations. Ratio, Proportion, and Variation. Linear Inequalities. Properties of Exponents and Scientific Notation. Polynomials and Factoring. Quadratic Equations and Applications. Collaborative Investigation: How Does Your Walking Rate Compare to That of Olympic Race-walkers? Chapter 7 Test. 8. Graphs, Functions, and Systems of Equations and Inequalities. The Rectangular Coordinate System and Circles. Lines and Their Slopes. Equations of Lines and Linear Models. An Introduction to Functions: Linear Functions, Applications, and Models. Quadratic Functions, Applications, and Models. Exponential and Logarithmic Functions, Applications, and Models. Systems of Equations and Applications. Extension: Using Matrix Row Operations to Solve Systems. Linear Inequalities, Systems, and Linear Programming. Collaborative Investigation: Living with AIDS. Chapter 8 Test. 9. Geometry. Points, Lines, Planes, and Angles. Curves, Polygons, and Circles. Perimeter, Area, and Circumference. The Geometry of Triangles: Congruence, Similarity, and the Pythagorean Theorem. Space Figures, Volume, and Surface Area. Transformational Geometry. Non-Euclidean Geometry, Topology, and Networks. Chaos and Fractal Geometry. Collaborative Investigation: Generalizing the Angle Sum Concept. Chapter 9 Test. 10. Applications of Trigonometry. Angles and Their Measures. Trigonometric Functions of Angles. Trigonometric Identities. Right Triangles and Function Values. Applications of Right Triangles. The Laws of Sines and Cosines; Area Formulas. The Unit Circle and Graphs. Collaborative Investigation: Making a Point about Trigonometric Function Values. Chapter 10 Test. 11. Counting Methods. Counting by Systematic Listing. Using the Fundamental Counting Principle. Using Permutations and Combinations. Using Pascal's Triangle and the Binomial Theorem. Counting Problems Involving “Not” and “Or”. Collaborative Investigation: Approximating Factorials Using Stirling's Formula. Chapter 11 Test. 12. Probability. Basic Concepts. Events Involving “Not” and “Or”. Events Involving “And”. Binomial Probability. Expected Value. Estimating Probabilities by Simulation. Collaborative Investigation: Finding Empirical Values of p. Chapter 12 Test. 13. Statistics. Frequency Distributions and Graphs. Measures of Central Tendency. Measures of Dispersion. Measures of Position. The Normal Distribution. Extension: How to Lie with Statistics. Regression and Correlation. Collaborative Investigation: Combining Sets of Data. Chapter 13 Test. 14. Consumer Mathematics. The Time Value of Money. Consumer Credit. Truth in Lending. Purchasing a House. Buying a House. Investing. Collaborative Investigation. Chapter 14 Test. Appendix: The Metric System A-1. Back to top
{"url":"http://www.chegg.com/textbooks/mathematical-ideas-10th-edition-9780321168085-0321168089?ii=10&trackid=32a08c6e&omre_ir=0&omre_sp=","timestamp":"2014-04-21T11:49:35Z","content_type":null,"content_length":"25272","record_id":"<urn:uuid:7b75dddf-4efa-4583-b9aa-a8caa06dca1b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Ballistic Theory and the Sagnac Experiment On Mon, 20 Mar 2006 16:37:19 +0100, "Paul B. Andersen" <paul.b.andersen@xxxxxxxxxxxxxxxx> wrote: >Henri Wilson wrote: >> On Fri, 17 Mar 2006 14:50:00 -0000, "George Dishman" >> <george@xxxxxxxxxxxxxxxxx> >> wrote: >>>>I have pointed out your mistake. >>>>Walking around a carousel in opposite directions will soon tell you that >Right about what? >Let's have a closer look at your carousel. >You are obviously very confused about the role of >the centrifugal 'force', so let us be concrete and >calculate the forces. >I would advice you not to make a fool of yourself >and claim that my analysis is wrong. >Think before writing, and don't repeat stupidities >"It is an imaginary force, 'centrifugal', in the rotating frame. > Its magnitude is the same as the centripetal force in > the non-rotating frame." It is not surprising that the definition of centrifugal force is rather vague. Each author seems to have his own opinion. What is YOUR definition of centrifugal force, Paul? I think your 'coriolis force' is my centrifugal force, and my 'coriolis force' is something you appear to know nothing about. Maybe you live to close to the N pole. Twirl an object around by hand, on a string. In the non-rotating frame, both the object and your hand rotate around the barycentre. A centripetal force is required to accelerate both your hand and the object towards the barycentre. These two CENTRIPETAL forces are opposite in direction and show up as a tension in the string = mv^2/r or MV^2/R. Centrifugal force does not stricty exist in this frame, although in most cases R is so small that even the best scientists and engineers (including myself) tend to call the object's 'pull' on the string a 'centrifugal force'. This is also convetient in the case of a balanced wheel where the barycentre is also the centre of rotation. In the rotating frame, The object does not move but the same tension remains in the string. How can this happen? It is due to the imaginary 'centrifugal forces', of course. Coriolis force goes like this. In the above example, if the string is shortened, the object's rotation rate will increase due to conservation of momentum. In the rotating frame however, shortening the string is no big deal and should do nothing except bring the object closer to the centre. However in practice, an imaginary force pushes the object sideways during this process. That is the imaginary CORIOLIS force. It explains why heavy air moving towards the earth's poles forms a couple with light air moving towards the equator and we get the familiar rotations around low and high pressure systems. I hope the students of Norway will benefit from your 'lesson'. >" 'centrifugal' often confused with 'centripetal'.... > and they have the same values anyway." >(In the example below the centripetal force is > ten million times greater than the centrifugal force.) >To be a valid analogy, you will have to run at a vastly >higher speed than the peripheral velocity of the carousel. >A 1 m radius I-FOG can detect the rotation of the Earth, >that's a peripheral speed of 2.3*10^-13 c. >But let us be generous, let the peripheral speed of >the carousel be as high as a 10^-4 part of your speed. >Let the radius of the carousel be r = 10m. >Let the speed with which you run be c = 10 m/s. >Let the peripheral velocity of the carousel be v = 0.001 m/s. >Let your mass be m = 100kg. >The centripetal forces the carousel exerts on >your feet are now: >Running with the rotation: > The centripetal acceleration is (c+v)^2/r > The force is Ff = m*(c+v)^2/r = 1000.20001 N >Running in opposite direction: > The centripetal acceleration is (c-v)^2/r > The force is Fb = m*(c-v)^2/r = 999.80001 N >Note that these are the actual centripetal forces >acting on your feet. These forces could be measured >and are obviously independent of which frame >you use to calculate them in. >So what about the 'centrifugal' force? >Let us calculate the forces in the rotating frame. >The centripetal acceleration is c^2/r and >the component of the centripetal force causing >it is thus Fca = m*c^2/r = 1000 N >The centrifugal force is m*v^2/r = 0.00001N, >same in both directions. That's not how you calculate centrifugal force.. It has the same magnitude as the centripetal foreces. >Since the centrifugal >force is acting outwards, there must be a component >of the centripetal force counteracting this. >Fcf = = 0.00001N >So far, the two components of the centripetal >force are equal for both directions and amounts >to 1000.00001N. >So what's wrong? Why are the forces equal? >Because there is a third component of >the centripetal force, namely the one counteracting >the Coriolis pseudo force. >This force is 2m*(w X c) where w is the angular >velocity vector (spin vector) and c is the velocity >(vector) of the object. In our case the absolute >value of this force is m*v*c/r, and its direction >is radially outwards when you are running with >the rotation and radially inwards when you are >running in the opposite direction. >So the third component of the centripetal force >counteracting the Coriolis force is: >With the rotation: Fcof = m*v*c/r = 0.2 N >Opposite direction: Fcob = - m*v*c/r = - 0.2 N >So the centripetal forces will be: >With the rotation: Ff = Fca + Fcf + Fcof = 1000.20001 >Opposite direction: Fb = Fca + Fcf + Fcob = 999.80001 If c=v then Fb = zero. >The centripetal forces are the forces exerted on your >feet by the carousel. They are obviously the same >whether you calculate them in the stationary or in >the rotating frame. >Note also that: >m*(c+v)^2/r = m*c^2/r + 2*m*c*v/r + m*v^2/r >Of course the centripetal forces are different. >But not much. >Let us assume that there is a friction slowing >you down, and that the slowing during one revolution >is proportional to the centripetal force, we can write: >Slowing when going with the rotation: >delta_vf = k*1000.20001 N >Slowing when going in the opposite direction: >delta_vb = k*999.80001 N >Now we know that to explain the Sagnac in an >I-FOG, the difference between the two speeds >must be in the order of 2v. >thus, in our analogy: >delta_vf - delta_vb = k*0.4 N = 2v = 0.002 m/s >k = 0.005 m/Ns >delta_vf = 5.001 m/s >delta_vb = 4.999 m/s >For the difference to be big enough, you would >have to slow down to half the speed. I don't like your method or your maths. You should burn all the books in Norway, they are obviously wrong. >In an I-FOG with a much smaller v/c ratio >it would be even worse. >The very idea is idiotic beyond belief. The fact is, I have proved my point. There are two separate effects. 1. The beam's 'energy centre' is thrown slightly off the 'fibre centre', thus increasing path length by different amounts for the two beams. 2) Each beam will experience a different amount of 'wall drag' due to slightly different amounts of centrifugal force pushing them towards the outside of the >>>>>>I have shown that to be wrong. >>>>>You have shown nothing whatsoever. Post the results >>>>>of your experiment. If you do manage to prove it wrong >>>>>then you have shown ballistic theory to be wrong. >>>>The light beams DO NOT experience the same 'centrifugal' forces in both >>>>directions. Right or wrong, George? >The centrifugal forces do not depend on velocity >(neither speed nor direction), and it is tiny. >It is however correct that the centripetal forces >are very slightly different. That proves my point and I win the argument. I never claimed that htis was the major cause of the sagnac effect. The main reason it occurs is basically due to the fact that photons have AXES that don't like turning corners. The interfere strangely with photons whose axes point in different directions. >To explain the Sagnac, the light would have to slow >down to a fraction of it original speed in an I-FOG. >It doesn't. >Face it, Henri. >The Sagnac falsifies the ballistic theory. >No way out. I suppose it proves LET too, eh Paul?...because that's what YOU use to explain the effect.
{"url":"http://www.archivum.info/sci.astro/2006-03/00019/Re-Ballistic-Theory-and-the-Sagnac-Experiment.html","timestamp":"2014-04-18T13:08:44Z","content_type":null,"content_length":"21939","record_id":"<urn:uuid:df8dba35-ad33-4733-a817-4dc3477e0fe3>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: September 1996 [00053] [Date Index] [Thread Index] [Author Index] How to plot a date file? Some questions, some solutions. • To: mathgroup at smc.vnet.net • Subject: [mg4848] How to plot a date file? Some questions, some solutions. • From: lbliao at alumnae.caltech.edu (lbliao) • Date: Thu, 26 Sep 1996 22:42:12 -0400 • Organization: California Institute of Technology, Alumni Association • Sender: owner-wri-mathgroup at wolfram.com The following is a file that I want to plot as either R versus FQ or DG versus FQ. Note it has three columns of numbers preceeded by two letter identifiers R_, DG, FQ, where R_ is a two letter identifier. _ denotes white space. R 999.1E-3,DG 000.0E+0,FQ 1.000E+3 R 999.1E-3,DG 000.0E+0,FQ 1.023E+3 R 999.1E-3,DG 000.0E+0,FQ 1.047E+3 R 999.1E-3,DG 000.0E+0,FQ 1.072E+3 R 999.1E-3,DG 000.0E+0,FQ 1.096E+3 R 999.1E-3,DG 000.0E+0,FQ 1.122E+3 R 999.1E-3,DG 000.0E+0,FQ 1.148E+3 R 999.1E-3,DG 000.0E+0,FQ 1.175E+3 R 999.1E-3,DG 000.0E+0,FQ 1.202E+3 R 999.1E-3,DG 000.0E+0,FQ 1.230E+3 R 998.9E-3,DG 000.0E+0,FQ 7.244E+3 R 998.9E-3,DG 000.0E+0,FQ 7.413E+3 R 999.0E-3,DG 000.0E+0,FQ 7.585E+3 R 998.9E-3,DG 000.0E+0,FQ 7.762E+3 R 998.9E-3,DG 000.0E+0,FQ 7.943E+3 R 998.9E-3,DG 000.0E+0,FQ 8.128E+3 R 999.0E-3,DG 000.0E+0,FQ 8.317E+3 R 999.0E-3,DG 000.0E+0,FQ 8.511E+3 R 999.0E-3,DG 000.0E+0,FQ 8.709E+3 R 999.0E-3,DG 000.0E+0,FQ 8.912E+3 R 999.0E-3,DG 000.0E+0,FQ 9.120E+3 R 999.0E-3,DG 000.0E+0,FQ 9.332E+3 R 998.8E-3,DG-000.0E+0,FQ 9.999E+3 R 998.8E-3,DG-000.0E+0,FQ 10.00E+3 The format is consistent. There is no space between a two letter identifier and the number. R followed by a space, then an optional sign, ie - for minus or space for plus. The number part is 4 digit and one decimal sign. The Exponent is one digit with a mandatory sign, then a comma separator, and next two letter identifier DG, followed by no space and so on. The first problem is to separate the list into three columns and ignoring This is done by the following ReadList command. a=ReadList["c:\\file", Word, WordSeparators -> {"R", ",DG", ",FQ"}, For RecordList -> True, see page 182, For WordSeparators -> {}, see p 496, This has one PROBLEM. Although all the three number fields are separated, they are not numbers any more, but words. The second problem is to get any two columns from the three columns so that they can be plotted using command. I do not see how to directly get two columns other than transposing getting the rows, and then transposing back again, ie Can some kind soul help? Pls reply via email also. Thanks a lot! ==== [MESSAGE SEPARATOR] ==== Prev by Date: Problems running Mathematica 2.2.3 under Windows95 Previous by thread: Re: How to plot a date file? Some questions, some solutions. Next by thread: Problems running Mathematica 2.2.3 under Windows95
{"url":"http://forums.wolfram.com/mathgroup/archive/1996/Sep/msg00053.html","timestamp":"2014-04-21T14:57:47Z","content_type":null,"content_length":"36621","record_id":"<urn:uuid:f52ae81c-8ea6-4709-bf2c-c1a47b6b8e08>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Weighted Stats FAQ Hello, pokefans! At this point you've no doubt noticed that this month's usage stats look a bit different. That's because, instead of just counting the number of times each Pokemon appears on a team, our stats are now weighted by a system that takes into account a player's ranking. The point of this thread is to explain our system and to answer any questions any of you might have. I am not interested in your opinions on whether or not this weighted stat system is a good thing. Any posts along those lines will be deleted. tl;dr--if you're a "good" player (meaning you're not a newbie, and you're not using a troll team), our new weighting system will not noticeably affect how your teams are counted. Beyond a fairly low cutoff, having a higher rating will not lead to your teams being "counted more." The purpose of this weighting system is to lower how much bad, and especially deliberately bad teams affect our stats. What are usage stats? Why are they important? Let's start simple: the usage statistics we publish each month reflects the Pokemon that are used on our Pokemon Showdown simulator server. Each rated battle is logged, and these logs contain full team data (not just the names of the Pokemon but also their EV spreads, movesets, etc.) as well as a log of the battle itself. Each month, we distill these logs into (what we hope is) useful information: moveset statistics, metagame analyses, and, most importantly, usage statistics, which report how often a given Pokemon is used throughout the month. Why is this information important? Well, I hope the moveset and metagame information is useful for players trying to get into a new metagame or trying to improve on their teams (for instance, if your team is having trouble with Ferrothorn, it might be useful for you to go into the moveset data and see that--statistically--the best counters for Ferrothorn are Volcarona, Heatran and Torkoal). Usage statistics are even more important, as we use them to determine our tiers. What does OU mean? One of the grandest and most controversial questions we have in this community is what our tier list represents. Is an OU Pokemon more powerful than one in a lower tier? Is Gastrodon a "better" Pokemon than Victini or Zapdos? My personal answer to this question is no: our tiers do not do a good job of ranking a Pokemon's "power." So if a UU Pokemon isn't inherently better than one in RU, what's the point of tiers? Again, this is a controversial subject, but my answer is that OU means what it stands for, that these Pokemon are simply "overused," and that the primary function of tiers is as threat lists. To elaborate, I'm going to point you folks to the original defining of our current OU-UU cutoff: in short, a Pokemon is OU if, in playing 20 battles, there's at least a 50% chance of you encountering that Pokemon at least once. This is an acknowledgement of the fact that there are 649 Pokemon out there--if you're designing a team of six Pokemon, it's unlikely that you're going to be able to make sure that your team has a way of dealing with each and every Pokemon out there. But if you're making an OU team, you probably will never have to worry if your team gets completely wrecked by Leavanny, since it doesn't even appear on one team in a thousand. What the OU/UU cutoff literally says is: "if said Pokemon is UU or below, you still have a good shot of going 20-0 even if your team is super weak to that Pokemon." So keep that philosophy in mind for the rest of this article. The old system and why it needed to be fixed So tiers are threatlists, and we determine tiers through usage statistics. The way we did this in the past was simple: we simply counted up all the Pokemon that appeared on an OU team in a given month, then, after three months, we combined usage statistics, weighting the most recent month 5/6ths, the month before 1/8th and the month before that 1/24th, and if the combined percentage was greater than 3.406367107%, we declared that Pokemon OU (repeat the process for UU to get RU, RU to get NU, and NU to get the unofficial PU tier). I should note at this point that not all teams were counted under the old system. If a battle lasted less than six turns, it was thrown out. The idea here was to make it harder for people to "spam the stats" by forfeiting immediately and then looking for a new battle. It also means that if you selected the wrong tier by mistake or made a mistake in your teambuilding, and you decide not to go through with the battle (for obvious reasons), then your team doesn't get counted towards the usage stats. All that was well and good in the PO days, when you needed to download and install a program, then select a server that wasn't the default, in order to play on our ladders. Pokemon Showdown, however, has a much lower barrier to entry than Pokemon Online, and Smogon is also the default server. This means that we see tons more activity than we used to (all in all a good thing), but it also means we see a ton more inexperienced and "casual" players. And under the old system, the battles of someone using Ash's team from the anime are weighted the same as the battles of our most skilled players. Is this a bad thing? I remind you of the philosophy behind our tiers: they're threatlists. It doesn't matter if Pikachu is everywhere in the metagame, you still don't need to know how to counter it if the only people using it think you can take out a Rhydon by "aiming for its horn." I wish I were exaggerating, but you can actually look this up in the moveset statistics: The average "weight" of a player using Pikachu in OU is 0.239 (I'll get into what that means in the a few sections) and its top teammates are, in order Charizard, Blastoise, Venusaur, Snorlax and Of course, Pikachu isn't about to go OU, but Charizard lives fairly close to the cutoff, and there are plenty of other Pokemon whose usages are boosted disproportionately by inexperienced users who aren't really a threat to any decent player. There's a separate, more important issue that led us to abandon the old system: spammers who deliberately try to manipulate the tiers. While the old system already tried to make "spamming the tiers" difficult by discarding early forfeits, the bottom line is that we could do nothing about a player who had a team of six Pokemon, all with useless items and awful movesets, who lost his battles without forfeiting. What we want in a stat system So if the old system is broken, we're left with the issue of what we do to replace it. Thinking on it for a while, I came up with the following criteria for any new stat system I could come up 1. The system should result in tier lists that correspond to the principle that they function, first and foremost, as threat lists. 2. It should be difficult to game. 3. The contribution from bad players and trolls should be minimized. This is the first component of (2). 4. Decent players and good player should contribute strongly to the stats. 5. The difference in weighting between a decent player and a good player should be relatively small--we're not looking for "1337" stats. This is also tied to (2), as there should be no advantage in trying to game the ladder, at least where tiering is concerned. 6. The system should make some sort of logical sense beyond the empirical "because it works." I dwelled on these criteria for a good long while, and after a lot of careful thought, I came up with the following principle: A pokemon should be OU if it appears a sufficient number of times on teams used by players who are better than average. You might think that this concept of "better than average" would be difficult to define, but it's really not--Pokemon Showdown rates players based on a system called Glicko2. In contrast to PO's Elo system, your Glicko2 rating is actually two numbers: a rating (R) and a deviation (RD). The idea is that it's impossible to know a player's "true" rating, but we can guess that that rating falls within a probability distribution, in this case, a normal distribution of width RD and center R. Below are two sample distributions: one for a player of rating 1600±100 and one whose rating is 1575±50. For those unfamiliar with probability distributions, the idea is that the probability of a player's "true rating" being in the infinitesimal interval r<rating<r+dr is f(r)dr. So looking at these two graphs, the question is who's the better player (assuming a player is "better" than another if his or her "true rating" is greater)? Using the Glicko2 system, it's impossible to know for certain, but what we can determine is the likelihood that one player is better than another player. Which brings us back to the idea of the average player. Not to get too technical, but with our rating system, each player starts off with a rating of 1500 and a deviation of 350. "But wait, Antar," I hear you say. "My starting rating was 1000, not 1500.. Alas, you're confusing your rating with your ACRE, which is a conservative rating estimate. CREs basically say, "I don't know for certain how good a player is, but I'm pretty sure he or she is better than this. Specifically, there's about a 92% chance that your true rating is better than your ACRE. Okay. Glad I got that out of the way. When I'm talking ratings here, I'm talking Glicko2, not ACRE. Okay, so each player starts off with a rating of 1500. As they battle, their rating goes up when they win, down when they lose. I'm not going to get into the specifics. The bottom line is, at the end of the day, the distribution of the ratings of all the players should pretty much be centered around 1500. And thus, we define the average player to have a true rating of 1500. Our new weighting system So with our stated premise that we're looking only at teams used by players who are better than average, and with our definition of average, the obvious solution is to throw out teams by players whose rating is less than or equal to 1500. While this is certainly simple enough, it has the problem that, in the Glicko2 system, we don't actually know a player's true rating. We only know the probability distribution corresponding to their rating. But all is not lost, because it's relatively simple to figure out what the likelihood is that a player's true rating is greater than 1500: it's just the integral of their rating distribution from 1500 to infinity, which works out to be It is this probability that we will be using to weight our stats To get a feel for how this weighting system works, here are some sample calculations: Here are some sample calculations: □ A brand new player, just starting out, whose rating is 1500±350 will be weighted 0.5. □ I have a mediocre OU team i sometimes play on PS under an alt. It currently has a provisional rating of 1576±105. Its weighting is 0.77. □ The person who I just demolished with that team (you have to be pretty bad...) has a provisional rating of 1394±139. Weighting is 0.223 □ My good (not great) OU team has a rating of 1946±177. Weighting is 0.994 □ The person at the top of the ladder right now has a rating of 2120±55. Their weighting is 1.0, for all intents and purposes. So this system has the properties I was looking for: bad player only counted at about 20% or less, while mediocre teams counts almost 80% and good teams--no matter how good--are pretty much fully Put another way, here's a graph showing what percentages of players make up what percentage of the stats (this is for OU, for the first half of January) To summarize: □ 1% of teams make up 1.76% of stats (ratio 1.76:1) □ 5% of teams make up 8.79% of stats (ratio: 1.76:1) □ 10% of teams make up 17.58% of stats (ratio: 1.76:1) □ 20% of teams make up 35.16% of stats (ratio: 1.76:1) □ 50% of teams make up 80.80% of stats (ratio: 1.61:1) □ 77.53% of teams make up 99% of stats (ratio: 1.27:1) So it doesn't matter if you're in the top 1% or the top 20--your contibution to the stats is basically the same. But then, if you're in the bottom quarter of the tier (meaning you're either really bad or a troll), your teams contribute basically not at all to the usage stats. As it should be. Frequently Asked Questions 1. Why not just count winning teams? Wouldn't that accomplish the same thing? Well, yes and no. First off, I'm not asking what the likelihood is of a player defeating another one, but rather that the likelihood that the player is better. The distinction is dense and mathematical, and if you're curious, the probability that you will win a given match against a random opponent is given by your GXE. But beyond that, there's a bigger problem: PS will try to match you with someone with a rating close to yours. Meaning if your rating is 2000, it's much less likely that you'll be paired with someone of rating 1000, than if your rating is 1200. So, at the end of the day, race-to-the-bottom players will get paired with trolls, so half of their teams will be counted, while half of the teams at the top are counted as well, and we're little better off than when we started. 2. Which rating are you using? Before the battle? After the battle? At the end of the month? Provisional or actual? With an eye towards making the system harder to game, we're using provisional ratings, logged at the end of the battle. 3. What happens if there's a ladder error and the rating doesn't get recorded? Originally, we treated you and your opponent as completely unknown quantities, so your teams were each weighted 0.5, the same as a new player just joining the ladder or a hypothetical player with RD of infinity. But that method ignored a key data point: the outcome of the battle, itself. A player with exactly one win and no losses will have a rating of 1662±290, and a player with exactly one loss and no wins will have a rating of 1338±290. So it's these ratings we use if no other is available (they work out to a weighting of 0.71 for the winner and 0.29 for the loser). This is far from ideal, but the ladder error rate is pretty low, and Zarel and others are working on lowering it further. 4. Could this same principle be applied to make some sort of "1337" stats? Yes, yes it could! I have experimented with looking at statistics resulting from considering the probability that a player's rating is greater than 1850 (one standard deviation above average) and above 2200 (two standard deviations above average). Interestingly, they don't look much different than the regular stats, which is why I didn't include them in January's stats thread. But due to popular demand, I did decide to "1850" stats in February's stats and probably will continue to provide them in future months. 5. Will metagame and moveset stats be weighted? Absolutely. The idea behind these additional statistics is to provide players with references for set- and team-building. It makes sense to minimize the impact of bad and malicious players on those statistics. Note that "checks and counters" rankings, which are based on the events of a given battle, are weighted by the lower weight of the two players. 6. Will this stop Molk? For those with short memories, a few tier updates ago, Molk managed to get Metang into RU (up from PU) by making a decent team built around Metang. I understand the team did decently well. This was back in the PO days, so we don't know what his ranking would've been, but it seems likely it would've been a bit above 1500. Thus, this rating system would not have prevented Molk from getting Metang into RU. The difference here between what Molk was doing and what our recent "tier troll" did is what Molk's team was actually viable. So if you were playing RU at that time, and you happened to have no way of dealing with Metang, you would've legitimately been in trouble. Last edited: Apr 3, 2014 DougJustDoug likes this. Nov 15, 2012 Quick clarification question: Does this mean that you're changing how the three-month stats are generated as well? Or are you keeping that process the same and just using the weighted stats? No, that part will stay the same, just using the weighted stats instead of unweighted. Aug 18, 2012 Will this bring about more suspect testing(this month), specifically between OU and Ubers? As far as I know, no. These stats should have no effect on suspect tests. Ok, this thread has been posted two months ago, but after this time, wondering about how will the 1850 April stats looks like, I came here. I felt that the numerous bad ranked player end to count more than those who have really 1850, even if less than in non weighted stats. I noted that : -this is called "1850 stats" when what is used is : I feel that I wasn't the only one to think "1850 stats -> weighted with the likelihood to be over 1850", so though I know that this name come from "1500 + 1 deviation" I think it doesn't fit. Imo, calling it "1500 stats" would be better. -these stats are often seen as reflecting what good players use but : This let me toughtful. If we keep in mind that this is "1500 stats", we understand from where it come, but the fact is that it decreases a lot the quality with all new alts spamming craps before getting down. After these two point, I would suggest, bar renaming "1850 stats" into "1500 stats", to give use " 'real' 1850 stats" (real is maybe not the good word, but you understood) with the likelyhood of the player being over 1850, to see what is really played on high ladder. I don't know how much work it would take to be fair, not too much I hope, but it would definitely be very interesting. After having realized that these stats take in account most players, I returned on this : If we want threatlists, wouldn't it be more accurate to see what serious players play ? Bottom of the ladder is mainly made of people spamming crap for fun, and if they don't surprise you with extreme gimmick (in sets, new pokes should be played around) and haxx, you're nearly sure to beat them. So what these people play really doesn't matter, you don't need to be aware of 1% of Charizard. So taking these games in account made the usage bad threatlists. If you want to play a little bit seriously, you'll never face these people spamming crap for fun, and you'll face, be you very good or average, not the same mons that these spammed on bottom. Will you prepare in the same way while teambuilding using OU stats or weighted stats (which represent what you're more likely to face) ? Surely no. I'll take a few random examples from March stats : Ammonguss is at 1,3% in standard stats, at 2,2% in weigthed stats, Landorus at 9% in standard, 12% in weighted, Keldeo at 10,4% vs 14,4%, or Latias at 7,7% vs 10%. A last quick point is about the UU/OU cutoff, which is directly related. So wouldn't be the stats weighted with the likelyhood to be over 1500 better for that ? And when you see that the more you'll win, the higher you'lll be ranked, this statement seems to really misfit what the cutoff from standards stats represents. [I know that this part start to tell about something else than rating, but I feel it's related, though a new thread would probably be better if we start to debate about that] Thanks for reading. tl;dr : "1850 stats" should be renamed "1500 stats", would be interesting to replace standards stats for the UU/OU cutoff, and stats weighted with the likelyhood being over 1850 would be very I've done detailed numerical analysis of which-players-with-what-ratings-contribute-how-much-to-the-stats, and trust me--bad (and even average) players contribute not even a significant fraction of a percent to the 1850 stats. The weighting function is basically a Heaviside. And I can't figure out what the hell you're trying to say in the second part. Sorry for the time to answer, and if my first post was unclear (I'm not especially good in english so I may not have noticed that some things wasn't because of that, too) Basically, the first part is to say that '1850 stats' is a name that induce in a false thinking about them, and that the fact that a new alt at his first try we'll have half of his fight counted doesn't help at all to trust them. The second says that, if stats are threatlists (and thus used for defining OU), '1850 stats' would be better at this job since they don't count most of "crap spamming for fun" in bottom of ladder, while reflecting what average player do. For the numerical analysis, I guess that if you say that this is true, but without seeing what would be the stats with the likelyhood of being over 1850 (or 1600/1700...) it is hard to trust that it would be very very similar to these with te likelyhood being over 1500. After I don't know how much work it is to produce them, but for sure it would be very interesting. May 20, 2006 How does this change how we should interpret the monthly statistics? Does "RAW" refer to the unweighted usage and "REAL" for the weighted usage? Or am I missing something here? Raw is unweighted. Real is unweighted only counting Pokemon that acutally appear in battle. Sorry this might be the wrong place to ask, but with gen 6 here, will we be using the weighted stats or just pure usage stats for the sorting of gen 6 tiers. If we start with the weighted stats it might be more forward-thinking. I've enjoyed following the logic of these posts; also I hope I don't get in any trouble for reviving or bumping an old thread, I think what I'm asking is relevant. Also how soon should we have october's gen 6 usage statistics (I'm new to smogon [not PO however] so I don't know how to find out these things yet) Yes, we will continue to use weighted stats for the 6th-gen tiers. Usage stats get published on the 1st or 2nd of every month (depending on how long it takes them to process). Will the weighted stats be used to sort UU from OU, or just the regular usage stats? I think that's what I meant to say before but I didn't say it with any clarity xD But thanks very much for your time and help. Yes. We will use the same system for determining tiers. Keep in mind that UU will not be established for quite some time, since first we need to sort out OU vs. Ubers. Once OU has "settled down" we will establish a UU list using the weighted usage stats from the most recent three months, the same way as we've done before. Note that it'll probably be about a year and a half (if not longer) before we get the full tier list (as in RU, NU, hypothetically PU). If you're itching for non-"standard" matches in the meantime, though, keep in mind that Doubles is an official tier now, and there's always Little Cup. Piexplode likes this.
{"url":"http://www.smogon.com/forums/threads/weighted-stats-faq.3478570/","timestamp":"2014-04-16T19:46:50Z","content_type":null,"content_length":"91878","record_id":"<urn:uuid:ad9f17f9-8174-44e0-bd66-0159a27ab7c3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Re^2: Evolving a faster filter? (combinatorics) in reply to Re: Evolving a faster filter? in thread Evolving a faster filter? > record their performance in previous runs and you sort them accordingly. OK some theoretical thoughts of how to sort them: be c[i] the average cost to run filter[i] on 1 object be r[i] the average ratio of remaining/checked objects after applying filter[i] be o the number of objects, c the costs so far so after applying a filter i o*=r[i] and c+=o*c[i] so with an chosen ordering 0,1,2 we get c = o*c[0]+ o*r[0]*c[1] + o*r[0]*r[1]*c[2] + ... after restructuring we get c = o * ( c[0] + r[0] * ( c[1] + r[1] * ( ... ) ) ) I don't know if there is a direct solution for this optimization problem (I could ask some colleagues from uni, I'm pretty sure they know) but this equation is a good base for a "branch-and-bound"¹ - graph search algorithm: the minimal cost-factor after the first filter is > c[0] + r[0] * c[min] with c[min] being the minimal cost after excluding filter[0]. These are pretty good branch and bound criteria, which should lead very fast to an optimal solution w/o trying each faculty(@filters) permutation like tye did here. That means even if your recorded performance for each filter is fuzzy and unstable you can always adapt and recalculate on the fly, even if you have more complicated models with ranges of values for c and r. ¹) for examples of B&B applications see Possible pairings for first knockout round of UEFA champions league or Puzzle Time
{"url":"http://www.perlmonks.org/index.pl?node_id=1011682","timestamp":"2014-04-18T07:17:18Z","content_type":null,"content_length":"19874","record_id":"<urn:uuid:886a0d22-befe-4150-a321-b7d71b3a7b93>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
coordinate geometry help ( equations of a line) March 7th 2011, 08:27 AM #1 Mar 2011 I've been stuck on this one assignment for months on my online class and need major help. here are some example problems it would be really awesome if someone could help me PLEASE and thank "indicate the equation given in standard form" 1) A (2,2) B (-2,-2) C(1,-1) 7)Indicate the equation of the given line in standard form. The line with slope and containing the midpoint of the segment whose endpoints are (2, -3) and (-6, 5) Last edited by mr fantastic; March 7th 2011 at 10:22 AM. Reason: Excess questions deleted. We don't know what "you know"; so hard to help... The above 2 problems make no sense... 1) why 3 points? 7) slope = ? Do you know what "standard form" means? Did you do any work, like using google? : nevermind the first thing in quotes i was just trying to copy the explanation. I really don't know how to do any of this and yeah I know that standard form is ax+by=c or something like that You should understand that this is not a homework service nor is it a tutorial service. Please either post some of your own work on this problem or explain what you do not understand about the March 7th 2011, 08:41 AM #2 MHF Contributor Dec 2007 Ottawa, Canada March 7th 2011, 08:44 AM #3 Mar 2011 March 7th 2011, 08:50 AM #4
{"url":"http://mathhelpforum.com/pre-calculus/173745-coordinate-geometry-help-equations-line.html","timestamp":"2014-04-20T06:52:34Z","content_type":null,"content_length":"40063","record_id":"<urn:uuid:4484976d-096e-4658-830c-f15cd2a36018>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Artificial Intelligence Knowledge Representation RC Chakraborty 03/03 to 08/3, 2007, Lecture 15 to 22 (8 hrs) Slides 1 to 79 myreaders , http://myreaders.wordpress.com/ , rcchak@gmail.com (Revised – Feb. 02, 2008) Artificial Intelligence Knowledge Representation Issues, Predicate Logic, Rules (Lectures 15, 16, 17, 18, 19, 20, 21, 22 8 hours) 1. Knowledge Representation 03-26 Introduction – KR model, typology, relationship, framework, mapping, forward & backward representation, system requirements; KR schemes - relational, inheritable, inferential, declarative, procedural; KR issues - attributes, relationship, granularity. 2. KR Using Predicate Logic 27-47 Logic representation, Propositional logic - statements, variables, symbols, connective, truth value, contingencies, tautologies, contradictions, antecedent, consequent, argument; Predicate logic - expressions, quantifiers, formula; Representing “IsA” and “Instance” relationships, computable functions and predicates; Resolution. 3. KR Using Rules 48-77 Types of Rules - declarative, procedural, meta rules; Procedural verses declarative knowledge & language; Logic programming – characteristics, Statement, language, syntax & terminology, simple & structured data objects, Program Components - clause, predicate, sentence, subject; Programming paradigms - models of computation, imperative model, functional model, logic model; Forward & backward reasoning - chaining, conflict resolution; Control knowledge. 4. References 78-79 Knowledge Representation Issues, Predicate Logic, Rules How do we represent what we know ? • Knowledge is a general term. An answer to the question, "how to represent knowledge", requires an analysis to distinguish between knowledge “how” and knowledge “that”. ■ knowing "how to do something". e.g. "how to drive a car" is Procedural knowledge. ■ knowing "that something is true or false". e.g. "that is the speed limit for a car on a motorway" is Declarative • knowledge and Representation are distinct entities that play a central but distinguishable roles in intelligent system. ■ Knowledge is a description of the world. It determines a system's competence by what it knows. ■ Representation is the way knowledge is encoded. It defines the performance of a system in doing something. • Different types of knowledge require different kinds of representation. The Knowledge Representation models/mechanisms are often based on: ◊ Logic ◊ Rules ◊ Frames ◊ Semantic Net • Different types of knowledge require different kinds of reasoning. KR -Introduction 1. Introduction Knowledge is a general term. Knowledge is a progression that starts with data which is of limited utility. By organizing or analyzing the data, we understand what the data means, and this becomes information. The interpretation or evaluation of information yield knowledge. An understanding of the principles embodied within the knowledge is wisdom. • Knowledge Progression Organizing Interpretation Understanding Data Information Knowledge Wisdom Analyzing Evaluation Principles Fig 1 Knowledge Progression ■ Data is viewed as collection of : Example : It is raining. disconnected facts. ■ Information emerges when : Example : The temperature dropped 15 relationships among facts are degrees and then it started raining. established and understood; Provides answers to "who", "what", "where", and "when". ■ Knowledge emerges when : Example : If the humidity is very high relationships among patterns and the temperature drops substantially, are identified and understood; then atmospheres is unlikely to hold the Provides answers as "how" . moisture, so it rains. ■ Wisdom is the pinnacle of : Example : Encompasses understanding understanding, uncovers the of all the interactions that happen principles of relationships that between raining, evaporation, air describe patterns. currents, temperature gradients, Provides answers as "why" . changes, and raining. KR -Introduction • Knowledge Model (Bellinger 1980) The model tells, that as the degree of “connectedness” and “understanding” increase, we progress from data through information and knowledge to Degree of Degree of Fig. Knowledge Model The model represents transitions and understanding. the transitions are from data, to information, to knowledge, and finally to the understanding support the transitions from one stage to the next The distinctions between data, information, knowledge, and wisdom are not very discrete. They are more like shades of gray, rather than black and white (Shedroff, 2001). data and information deal with the past; they are based on the gathering of facts and adding context. knowledge deals with the present that enable us to perform. wisdom deals with the future, acquire vision for what will be, rather than for what is or was. KR -Introduction • Knowledge Type Knowledge is categorized into two major types: Tacit and Explicit. term “Tacit” corresponds to informal or implicit type of knowledge, term “Explicit” corresponds to formal type of knowledge. Tacit knowledge Explicit knowledge ◊ Exists within a human being; ◊ Exists outside a human being; it is embodied. it is embedded. ◊ Difficult to articulate formally. ◊ Can be articulated formally. ◊ Difficult to share/communicate. ◊ Can be shared, copied, processed and ◊ Hard to steal or copy. ◊ Easy to steal or copy ◊ Drawn from experience, action, ◊ Drawn from artifact of some type as subjective insight. principle, procedure, process, concepts. [The next slide explains more about tacit and explicit knowledge.] KR -Introduction ■ Knowledge typology map The map shows that, Tacit knowledge comes from experience, action, subjective insight and Explicit knowledge comes from principle, procedure, process, concepts, via transcribed content or artifact of some Experience Principles Procedure Tacit Explicit Knowledge Process Subjective Knowledge Concept Fig. Knowledge Typology Map ◊ Facts : are data or instance that are specific and unique. ◊ Concepts : are class of items, words, or ideas that are known by a common name and share common features. ◊ Processes : are flow of events or activities that describe how things work rather than how to do things. ◊ Procedures : are series of step-by-step actions and decisions that result in the achievement of a task. ◊ Principles : are guidelines, rules, and parameters that govern; principles allow to make predictions and draw implications; principles are the basic building blocks of theoretical models (theories). These artifacts are used in the knowledge creation process to create two types of knowledge: declarative and procedural explained below. KR -Introduction • Knowledge Type Cognitive psychologists sort knowledge into Declarative and Procedural category and some researchers added Strategic as a third category. Procedural knowledge Declarative knowledge ◊ examples : procedures, rules, ◊ example : concepts, objects, facts, strategies, agendas, models. propositions, assertions, semantic nets, logic and descriptive models. ◊ focuses on tasks that must be ◊ refers to representations of objects performed to reach a particular and events; knowledge about facts objective or goal. and relationships; ◊ Knowledge about "how to do ◊ Knowledge about "that something something"; e.g., to determine if is true or false". e.g., A car has Peter or Robert is older, first find four tyres; Peter is older than their ages. Robert; Note : About procedural knowledge, there is some disparity in views. − One view is, that it is close to Tacit knowledge; it manifests itself in the doing of some-thing yet cannot be expressed in words; e.g., we read faces and moods. − Another view is, that it is close to declarative knowledge; the difference is that a task or method is described instead of facts or things. All declarative knowledge are explicit knowledge; it is knowledge that can be and has been articulated. The strategic knowledge is thought as a subset of declarative knowledge. KR -Introduction • Relationship among knowledge type The relationship among explicit, implicit, tacit, declarative and procedural knowledge are illustrated below. No Can not be Has been Implicit articulated articulated Yes No Explicit Tacit Facts and Motor Skill things (Manual) Describing Declarative Procedural Doing Tasks and Mental Skill Fig. Relationship among types of knowledge The Figure shows, declarative knowledge is tied to "describing" and procedural knowledge is tied to "doing." − The arrows connecting explicit with declarative and tacit with procedural, indicate the strong relationships exist among them. − The arrow connecting declarative and procedural indicates that we often develop procedural knowledge as a result of starting with declarative knowledge. i.e., we often "know about" before we "know how". Therefore, we may view, − all procedural knowledge as tacit, and − all declarative knowledge as explicit. KR -framework 1.1 Framework of Knowledge Representation (Poole 1998) Computer requires a well-defined problem description to process and also provide well-defined acceptable solution. To collect fragments of knowledge we need : first to formulate description in our spoken language and then represent it in formal language so that computer can understand. The computer can then use an algorithm to compute an answer. This process is illustrated below. Problem Solution Represent Interpret Informal Representation Output Fig. Knowledge Representation Framework The steps are − The informal formalism of the problem takes place first. − It is then represented formally and the computer produces an output. − This output can then be represented in a informally described solution that user understands or checks for consistency. Note : The Problem solving requires − formal knowledge representation, and − conversion of informal (implicit) knowledge to formal (explicit) knowledge. KR - framework • Knowledge and Representation Problem solving requires large amount of knowledge and some mechanism for manipulating that knowledge. The Knowledge and the Representation are distinct entities, play a central but distinguishable roles in intelligent system. − Knowledge is a description of the world; it determines a system's competence by what it knows. − Representation is the way knowledge is encoded; it defines the system's performance in doing something. In simple words, we : − need to know about things we want to represent , and − need some means by which things we can manipulate. ◊ know things to ‡ Objects - facts about objects in the domain. ‡ Events - actions that occur in the domain. ‡ Performance - knowledge about how to do things ‡ Meta-knowledge - knowledge about what we know ◊ need means to ‡ Requires some formalism - to what we represent ; Thus, knowledge representation can be considered at two levels : (a) knowledge level at which facts are described, and (b) symbol level at which the representations of the objects, defined in terms of symbols, can be manipulated in the programs. Note : A good representation enables fast and accurate access to knowledge and understanding of the content. KR - framework • Mapping between Facts and Representation Knowledge is a collection of “facts” from some domain. We need a representation of facts that can be manipulated by a program. Normal English is insufficient, too hard currently for a computer program to draw inferences in natural languages. Thus some symbolic representation is necessary. Therefore, we must be able to map "facts to symbols" and "symbols to facts" using forward and backward representation mapping. Example : Consider an English sentence English English understanding generation Facts Representations ◊ Spot is a dog A fact represented in English sentence ◊ dog (Spot) Using forward mapping function the above fact is represented in logic ◊ ∀ x : dog(x) → hastail (x) A logical representation of the fact that "all dogs have tails" Now using deductive mechanism we can generate a new representation of object : ◊ hastail (Spot) A new object representation ◊ Spot has a tail Using backward mapping function to [it is new knowledge] generate English sentence KR - framework ■ Forward and Backward representation The forward and backward representations are elaborated below : Desired real Initial reasoning Final Facts Facts Forward Backward representation representation mapping mapping Internal English Representation Operated by Representation ‡ The doted line on top indicates the abstract reasoning process that a program is intended to model. ‡ The solid lines on bottom indicates the concrete reasoning process that the program performs. KR - framework • KR System Requirements A good knowledge representation enables fast and accurate access to knowledge and understanding of the content. A knowledge representation system should have following properties. ◊ Representational The ability to represent all kinds of knowledge that Adequacy are needed in that domain. ◊ Inferential Adequacy The ability to manipulate the representational structures to derive new structure corresponding to new knowledge inferred from old . ◊ Inferential Efficiency The ability to incorporate additional information into the knowledge structure that can be used to focus the attention of the inference mechanisms in the most promising direction. ◊ Acquisitional The ability to acquire new knowledge using automatic Efficiency methods wherever possible rather than reliance on human intervention. Note : To date no single system can optimizes all of the above properties. KR - schemes 1.2 knowledge Representation schemes There are four types of Knowledge representation - Relational, Inheritable, Inferential, and Declarative/Procedural. ◊ Relational Knowledge : − provides a framework to compare two objects based on equivalent − any instance in which two different objects are compared is a relational type of knowledge. ◊ Inheritable Knowledge − is obtained from associated objects. − it prescribes a structure in which new objects are created which may inherit all or a subset of attributes from existing objects. ◊ Inferential Knowledge − is inferred from objects through relations among objects. − e.g., a word alone is a simple syntax, but with the help of other words in phrase the reader may infer more from a word; this inference within linguistic is called semantics. ◊ Declarative Knowledge − a statement in which knowledge is specified, but the use to which that knowledge is to be put is not given. − e.g. laws, people's name; these are facts which can stand alone, not dependent on other knowledge; Procedural Knowledge − a representation in which the control information, to use the knowledge, is embedded in the knowledge itself. − e.g. computer programs, directions, and recipes; these indicate specific use or implementation; These KR schemes are detailed below in next few slides KR - schemes • Relational knowledge : associates elements of one domain with another. Used to associate elements of one domain with the elements of another domain or set of design constrains. − Relational knowledge is made up of objects consisting of attributes and their corresponding associated values. − The results of this knowledge type is a mapping of elements among different domains. The table below shows a simple way to store facts. − The facts about a set of objects are put systematically in columns. − This representation provides little opportunity for inference. Table - Simple Relational Knowledge Player Height Weight Bats - Throws Aaron 6-0 180 Right - Right Mays 5-10 170 Right - Right Ruth 6-2 215 Left - Left Williams 6-3 205 Left - Right ‡ Given the facts it is not possible to answer simple question such as : " Who is the heaviest player ? ". ‡ But if a procedure for finding heaviest player is provided, then these facts will enable that procedure to compute an answer. KR - schemes • Inheritable knowledge : elements inherit attributes from their parents. The knowledge is embodied in the design hierarchies found in the functional, physical and process domains. Within the hierarchy, elements inherit attributes from their parents, but in many cases, not all attributes of the parent elements be prescribed to the child elements. − The basic KR needs to be augmented with inference mechanism, and − Inheritance is a powerful form of inference, but not adequate. The KR in hierarchical structure, shown below, is called “semantic network” or a collection of “frames” or “slot-and-filler structure". It shows property inheritance and way for insertion of additional knowledge. − Property inheritance : Objects/elements of specific classes inherit attributes and values from more general classes. − Classes are organized in a generalized hierarchy. Baseball knowledge Person Right − isa : show class inclusion handed − instance : show class membership isa Adult height isa height 6.1 EQUAL Baseball handed Player batting-average isa isa batting-average batting-average 0.106 Pitcher Fielder 0.262 instance instance team Three Finger team Pee-Wee- Brooklyn- Cubs Brown Reese Dodger Fig. Inheritable knowledge representation (KR) ‡ the directed arrows represent attributes (isa, instance, and team) originating at the object being described and terminating at the object or its value. ‡ the box nodes represents objects and values of the attributes. [Continuing in the next slide] KR - schemes [Continuing from previous slide – example] ◊ Viewing a node as a frame isa : Adult-Male Bates : EQUAL handed Height : 6.1 Batting-average : 0.252 ◊ Algorithm : Property Inheritance Retrieve a value V for an attribute A of an instance object O. Steps to follow: 1. Find object O in the knowledge base. 2. If there is a value for the attribute A then report that value. 3. Else, see if there is a value for the attribute instance; If not, then fail. 4 Else, move to the node corresponding to that value and look for a value for the attribute A; If one is found, report it. 5. Else, do until there is no value for the “isa” attribute or until an answer is found : (a) Get the value of the “isa” attribute and move to that node. (b) See if there is a value for the attribute A; If yes, report it. This algorithm is simple, ‡ It does describe the basic mechanism of inheritance. ‡ It does not say what to do if there is more than one value of the instance or “isa” attribute. This can be applied to the example of knowledge base illustrated to derive answers to the following queries : − team (Pee-Wee-Reese) = Brooklyn–Dodger − batting–average(Three-Finger-Brown) = 0.106 − height (Pee-Wee-Reese) = 6.1 − bats(Three Finger Brown) = right [For explanation - refer book on AI by Elaine Rich & Kevin Knight, page 112] KR - schemes • Inferential knowledge : generates new information . Generates new information from the given information. This new information does not require further data gathering form source, but does require analysis of the given information to generate new knowledge. − Given a set of relations and values, one may infer other values or relations. − In addition to algebraic relations, a predicate logic (mathematical deduction) is used to infer from a set of attributes. − Inference through predicate logic uses a set of logical operations to relate individual data. The symbols used for the logic operations are : " → " (implication), " ¬ " (not), " V " (or), " Λ " (and), " ∀ " (for all), " ∃ " (there exists). Examples of predicate logic statements : 1. Wonder is a name of a dog : dog (wonder) 2. All dogs belong to the class of animals : ∀ x : dog (x) → animal(x) 3. All animals either live on land or in water : ∀ x : animal(x) → live (x, land) V live (x, water) We can infer from these three statements that : " Wonder lives either on land or on water." As more information is made available about these objects and their relations, more knowledge can be inferred. KR - schemes • Declarative/Procedural knowledge The difference between Declarative/Procedural knowledge is not very clear. Declarative knowledge : Here, the knowledge is based on declarative facts about axioms and − axioms are assumed to be true unless a counter example is found to invalidate them. − domains represent the physical world and the perceived functionality. − axiom and domains thus simply exists and serve as declarative statements that can stand alone. Procedural knowledge: Here the knowledge is a mapping process between domains that specifies “what to do when” and the representation is of “how to make it” rather than “what it is”. The procedural knowledge : − may have inferential efficiency, but no inferential adequacy and acquisitional efficiency. − are represented as small programs that know how to do specific things, how to proceed. Example : a parser in a natural language has the knowledge that a noun phrase may contain articles, adjectives and nouns. It thus accordingly call routines that know how to process articles, adjectives and nouns. KR - issues 1.3 Issues in Knowledge Representation The fundamental goal of Knowledge Representation is to facilitate inferencing (conclusions) from knowledge. The issues that arise while using KR techniques are many. Some of these are explained below. ◊ Important Attributes : Any attribute of objects so basic that they occur in almost every problem domain ? ◊ Relationship among attributes: Any important relationship that exists among object attributes ? ◊ Choosing Granularity : At what level of detail should the knowledge be represented ? ◊ Set of objects : How sets of objects be represented ? ◊ Finding Right structure : Given a large amount of knowledge stored, how can relevant parts be accessed ? Note : These issues are briefly explained, referring previous example, Fig. Inheritable KR. For detail readers may refer book on AI by Elaine Rich & Kevin Knight- page 115 – 126. KR - issues • Important Attributes : Ref. Example- Fig. Inheritable KR There are two attributes "instance" and "isa", that are of general significance. These attributes are important because they support property • Relationship among attributes : Ref. Example- Fig. Inheritable KR The attributes we use to describe objects are themselves entities that we represent. The relationship between the attributes of an object, independent of specific knowledge they encode, may hold properties like: Inverses , existence in an isa hierarchy , techniques for reasoning about values and single valued attributes. ◊ Inverses : This is about consistency check, while a value is added to one attribute. The entities are related to each other in many different ways. The figure shows attributes (isa, instance, and team), each with a directed arrow, originating at the object being described and terminating either at the object or its value. There are two ways of realizing this: ‡ first, represent both relationships in a single representation; e.g., a logical representation, team(Pee-Wee-Reese, Brooklyn–Dodgers), that can be interpreted as a statement about Pee-Wee-Reese or Brooklyn–Dodger. ‡ second, use attributes that focus on a single entity but use them in pairs, one the inverse of the other; for e.g., one, team = Brooklyn–Dodgers , and the other, team = Pee-Wee-Reese, . . . . This second approach is followed in semantic net and frame-based systems, accompanied by a knowledge acquisition tool that guarantees the consistency of inverse slot by checking, each time a value is added to one attribute then the corresponding value is added to the inverse. KR - issues ◊ Existence in an isa hierarchy : This is about generalization-specialization, like, classes of objects and specialized subsets of those classes, there are attributes and specialization of attributes. Example, the attribute height is a specialization of general attribute physical-size which is, in turn, a specialization of physical-attribute. These generalization-specialization relationships are important for attributes because they support inheritance. ◊ Techniques for reasoning about values : This is about reasoning values of attributes not given explicitly. Several kinds of information are used in reasoning, like, height : must be in a unit of length, age : of person can not be greater than the age of person's parents. The values are often specified when a knowledge base is created. ◊ Single valued attributes : This is about a specific attribute that is guaranteed to take a unique value. Example, a baseball player can at time have only a single height and be a member of only one team. KR systems take different approaches to provide support for single valued attributes. KR - issues • Choosing Granularity Regardless of the KR formalism, it is necessary to know : − At what level should the knowledge be represented and what are the primitives ?." − Should there be a small number or should there be a large number of low-level primitives or High-level facts. − High-level facts may not be adequate for inference while Low-level primitives may require a lot of storage. Example of Granularity : − Suppose we are interested in following facts: John spotted Sue. − This could be represented as Spotted (agent(John), object (Sue)) − Such a representation would make it easy to answer questions such are : Who spotted Sue ? − Suppose we want to know : Did John see Sue ? − Given only one fact, we cannot discover that answer. − We can add other facts, such as Spotted (x , y) → saw (x , y) − We can now infer the answer to the question. KR - issues • Set of objects There are certain properties of objects that are true as member of a set but not as individual; Example : Consider the assertion made in the sentences : "there are more sheep than people in Australia", and "English speakers can be found all over the world." To describe these facts, the only way is to attach assertion to the sets representing people, sheep, and English. The reason to represent sets of objects is : If a property is true for all or most elements of a set, then it is more efficient to associate it once with the set rather than to associate it explicitly with every elements of the set . This is done, − in logical representation through the use of universal quantifier, and − in hierarchical structure where node represent sets and inheritance propagate set level assertion down to individual. However in doing so, for example: assert large (elephant), remember to make clear distinction between, − whether we are asserting some property of the set itself, means, the set of elephants is large, or − asserting some property that holds for individual elements of the set , means, any thing that is an elephant is large. There are three ways in which sets may be represented by. (a) Name, as in the example – Fig. Inheritable KR, the node - Baseball-Player and the predicates as Ball and Batter in logical representation. (b) Extensional definition is to list the numbers, and (c) Intensional definition is to provide a rule, that returns true or false depending on whether the object is in the set or not. [Readers may refer book on AI by Elaine Rich & Kevin Knight- page 122 - 123] KR - issues • Finding Right structure This is about access to right structure for describing a particular situation. This requires, selecting an initial structure and then revising the choice. While doing so, it is necessary to solve following problems : − how to perform an initial selection of the most appropriate structure. − how to fill in appropriate details from the current situations. − how to find a better structure if the one chosen initially turns out not to be appropriate. − what to do if none of the available structures is appropriate. − when to create and remember a new structure. There is no good, general purpose method for solving all these problems. Some knowledge representation techniques solve some of them. [Readers may refer book on AI by Elaine Rich & Kevin Knight- page 124 - 126] KR – using logic 2. KR Using Predicate Logic In the previous section much has been illustrated about knowledge and KR related issues. This section, illustrates how knowledge may be represented as “symbol structures” that characterize bits of knowledge about objects, concepts, facts, rules, strategies; examples : “red” represents colour red; “car1” represents my car ; "red(car1)" represents fact that my car is red. Assumptions about KR : − Intelligent Behavior can be achieved by manipulation of symbol structures. − KR languages are designed to facilitate operations over symbol structures, have precise syntax and semantics; Syntax tells which expression is legal ?, e.g., red1(car1), red1 car1, car1(red1), red1(car1 & car2) ?; and Semantic tells what an expression means ? e.g., property “dark red” applies to my car. − Make Inferences, draw new conclusions from existing facts. To satisfy these assumptions about KR, we need formal notation that allow automated inference and problem solving. One popular choice is use of logic. KR – using Logic • Logic Logic is concerned with the truth of statements about the world. Generally each statement is either TRUE or FALSE. Logic includes : Syntax , Semantics and Inference Procedure. ◊ Syntax : Specifies the symbols in the language about how they can be combined to form sentences. The facts about the world are represented as sentences in logic. ◊ Semantic : Specifies how to assign a truth value to a sentence based on its meaning in the world. It Specifies what facts a sentence refers to. A fact is a claim about the world, and it may be TRUE or FALSE. ◊ Inference Procedure : Specifies methods for computing new sentences from an existing Note : Facts are claims about the world that are True or False. Representation is an expression (sentence), stands for the objects and relations. Sentences can be encoded in a computer program. KR – using Logic • Logic as a KR Language Logic is a language for reasoning, a collection of rules used while doing logical reasoning. Logic is studied as KR languages in artificial intelligence. ◊ Logic is a formal system in which the formulas or sentences have true or false values. ◊ The problem of designing a KR language is a tradeoff between that which is : (a) Expressive enough to represent important objects and relations in a problem domain. (b) Efficient enough in reasoning and answering questions about implicit information in a reasonable amount of time. ◊ Logics are of different types : Propositional logic, Predicate logic, Temporal logic, Modal logic, Description logic etc; They represent things and allow more or less efficient inference. ◊ Propositional logic and Predicate logic are fundamental to all logic. Propositional Logic is the study of statements and their connectivity. Predicate Logic is the study of individuals and their properties. KR – Logic 2.1 Logic Representation The Facts are claims about the world that are True or False. Logic can be used to represent simple facts. To build a Logic-based representation : ◊ User defines a set of primitive symbols and the associated semantics. ◊ Logic defines ways of putting symbols together so that user can define legal sentences in the language that represent TRUE facts. ◊ Logic defines ways of inferring new sentences from existing ones. ◊ Sentences - either TRUE or false but not both are called propositions. ◊ A declarative sentence expresses a statement with a proposition as content; example: the declarative "snow is white" expresses that snow is white; further, "snow is white" expresses that snow is white is TRUE. In this section, first Propositional Logic (PL) is briefly explained and then the Predicate logic is illustrated in detail. KR - Propositional Logic • Propositional Logic (PL) A proposition is a statement, which in English would be a declarative sentence. Every proposition is either TRUE or FALSE. Examples: (a) The sky is blue., (b) Snow is cold. , (c) 12 * 12=144 ‡ propositions are “sentences” , either true or false but not both. ‡ a sentence is smallest unit in propositional logic. ‡ if proposition is true, then truth value is "true" . if proposition is false, then truth value is "false" . Example : Sentence Truth value Proposition (Y/N) "Grass is green" "true" Yes "2 + 5 = 5" "false" Yes "Close the door" - No "Is it hot out side ?" - No "x > 2" where is variable - No (since x is not defined) "x = x" - No (don't know what is "x" and "="; "3 = 3" or "air is equal to air" or "Water is equal to water" has no meaning) − Propositional logic is fundamental to all logic. − Propositional logic is also called Propositional calculus, Sentential calculus, or Boolean algebra. − Propositional logic tells the ways of joining and/or modifying entire propositions, statements or sentences to form more complicated propositions, statements or sentences, as well as the logical relationships and properties that are derived from the methods of combining or altering statements. KR - Propositional Logic ■ Statement, variables and symbols These and few more related terms, such as, connective, truth value, contingencies, tautologies, contradictions, antecedent, consequent and argument are explained below. ◊ Statement Simple statements (sentences), TRUE or FALSE, that does not contain any other statement as a part, are basic propositions; lower-case letters, p, q, r, are symbols for simple statements. Large, compound or complex statement are constructed from basic propositions by combining them with connectives. ◊ Connective or Operator The connectives join simple statements into compounds, and joins compounds into larger compounds. Table below indicates, five basic connectives and their symbols : − listed in decreasing order of operation priority; − operations with higher priority is solved first. Example of a formula : ((((a Λ ¬b) V c → d) ↔ ¬ (a V c )) Connectives and Symbols in decreasing order of operation priority Connective Symbols Read as assertion P "p is true" negation ¬p ~ ! NOT "p is false" conjunction p∧q · && & AND "both p and q are true" disjunction P v q || | OR "either p is true, or q is true, or both " implication p→q ⊃ ⇒ if ..then "if p is true, then q is true" " p implies q " equivalence ↔ ≡ ⇔ if and only if "p and q are either both true or both false" Note : The propositions and connectives are the basic elements of propositional logic. KR - Propositional Logic ◊ Truth value The truth value of a statement is its TRUTH or FALSITY , Example : p is either TRUE or FALSE, ~p is either TRUE or FALSE, pvq is either TRUE or FALSE, and so on. use " T " or " 1 " to mean TRUE. use " F " or " 0 " to mean FALSE Truth table defining the basic connectives : p q ¬p ¬q p ∧ q p v q p→q p ↔ q q→p T T F F T T T T T T F F T F T F F T F T T F F T T F F F F T T F F T T T KR - Propositional Logic ◊ Tautologies A proposition that is always true is called a tautology. e.g., (P v ¬P) is always true regardless of the truth value of the proposition P. ◊ Contradictions A proposition that is always false is called a contradiction. e.g., (P ∧ ¬P) is always false regardless of the truth value of the proposition P. ◊ Contingencies A proposition is called a contingency, if that proposition is neither a tautology nor a contradiction e.g., (P v Q) is a contingency. ◊ Antecedent, Consequent In the conditional statements, p → q , the 1st statement or "if - clause" (here p) is called antecedent , 2nd statement or "then - clause" (here q) is called consequent. KR - Propositional Logic ◊ Argument Any argument can be expressed as a compound statement. Take all the premises, conjoin them, and make that conjunction the antecedent of a conditional and make the conclusion the consequent. This implication statement is called the corresponding conditional of the argument. Note : − Every argument has a corresponding conditional, and every implication statement has a corresponding argument. − Because the corresponding conditional of an argument is a statement, it is therefore either a tautology, or a contradiction, or a contingency. ‡ An argument is valid "if and only if" its corresponding conditional is a tautology. ‡ Two statements are consistent "if and only if" their conjunction is not a contradiction. ‡ Two statements are logically equivalent "if and only if" their truth table columns are identical; "if and only if" the statement of their equivalence using " ≡ " is a tautology. Note : The truth tables are adequate to test validity, tautology, contradiction, contingency, consistency, and equivalence. KR - Predicate Logic • Predicate logic The propositional logic, is not powerful enough for all types of assertions; Example : The assertion "x > 1", where x is a variable, is not a proposition because it is neither true nor false unless value of x is defined. For x > 1 to be a proposition , − either we substitute a specific number for x ; − or change it to something like "There is a number x for which x > 1 holds"; − or "For every number x, x > 1 holds". Consider example : “All men are mortal. Socrates is a man. Then Socrates is mortal” , These cannot be expressed in propositional logic as a finite and logically valid argument (formula). We need languages : that allow us to describe properties (predicates) of objects, or a relationship among objects represented by the variables . Predicate logic satisfies the requirements of a language. − Predicate logic is powerful enough for expression and reasoning. − Predicate logic is built upon the ideas of propositional logic. KR - Predicate Logic ■ Predicate : Every complete sentence contains two parts: a subject and a predicate. The subject is what (or whom) the sentence is about. The predicate tells something about the subject; Example : A sentence "Judy {runs}". The subject is Judy and the predicate is runs . Predicate, always includes verb, tells something about the subject. Predicate is a verb phrase template that describes a property of objects, or a relation among objects represented by the variables. “The car Tom is driving is blue" ; "The sky is blue" ; "The cover of this book is blue" Predicate is “is blue" , describes property. Predicates are given names; Let ‘B’ is name for predicate "is_blue". Sentence is represented as "B(x)" , read as "x is blue"; “x” represents an arbitrary Object . KR - Predicate Logic ■ Predicate logic expressions : The propositional operators combine predicates, like If ( p(....) && ( !q(....) || r (....) ) ) Examples of logic operators : disjunction (OR) and conjunction (AND). Consider the expression with the respective logic symbols || and && x < y || ( y < z && z < x) Which is true || ( true && true) ; Applying truth table, found True Assignment for < are 3, 2, 1 for x, y, z and then the value can be FALSE or TRUE 3 < 2 || ( 2 < 1 && 1 < 3) It is False KR - Predicate Logic ■ Predicate Logic Quantifiers As said before, x>1 is not proposition and why ? Also said, that for x > 1 to be a proposition what is required ? Generally, a predicate with variables (is called atomic formula) can be made a proposition by applying one of the following two operations to each of its variables : 1. Assign a value to the variable; e.g., x > 1, if 3 is assigned to x becomes 3 > 1 , and it then becomes a true statement, hence a 2. Quantify the variable using a quantifier on formulas of predicate logic (called wff ), such as x > 1 or P(x), by using Quantifiers on variables. Apply Quantifiers on Variables ‡ Variable x * x > 5 is not a proposition, its tru th depends upon the value of variable x * to reason such statements, x need to be declared ‡ Declaration x : a * x:a declares variable x * x:a read as “x is an element of set a” ‡ Statement p is a statement about x * Q x:a • p is quantification of statement declaration of variable x as element of set a * Quantifiers are two types : universal quantifiers , denoted by symbol and existential quantifiers , denoted by symbol Note : The next few slide tells more on these two Quantifiers. KR - Predicate Logic ■ Universe of Discourse The universe of discourse, also called domain of discourse or universe. This indicates : − a set of entities that the quantifiers deal. − entities can be set of real numbers, set of integers, set of all cars on a parking lot, the set of all students in a classroom etc. − universe is thus the domain of the (individual) variables. − propositions in the predicate logic are statements on objects of a The universe is often left implicit in practice, but it should be obvious from the context. − About natural numbers forAll x, y (x < y or x = y or x > y), there is no need to be more precise and say forAll x, y in N, because N is implicit, being the universe of discourse. − About a property that holds for natural numbers but not for real numbers, it is necessary to qualify what the allowable values of x and y are. KR - Predicate Logic ■ Apply Universal quantifier " For All " Universal Quantification allows us to make a statement about a collection of objects. ‡ Universal quantification: x:a•p * read “ for all x in a , p holds ” * a is universe of discourse * x is a member of the domain of discourse. * p is a statement about x ‡ In propositional form it is written as : x P(x) * read “ for all x, P(x) holds ” “ for each x, P(x) holds ” or “ for every x, P(x) holds ” * where P(x) is predicate, x means all the objects x in the universe P(x) is true for every object x in the universe ‡ Example : English language to Propositional form * "All cars have wheels" x : car • x has wheel * x P(x) where P (x) is predicate tells : ‘x has wheels’ x is variable for object ‘cars’ that populate universe of discourse KR - Predicate Logic ■ Apply Existential quantifier " There Exists " Existential Quantification allows us to state that an object does exist without naming it. ‡ Existential quantification: x:a•p * read “ there exists an x such that p holds ” * a is universe of discourse * x is a member of the domain of discourse. * p is a statement about x ‡ In propositional form it is written as : x P(x) * read “ there exists an x such that P(x) ” or “ there exists at least one x such that P(x) ” * Where P(x) is predicate x means at least one object x in the universe P(x) is true for least one object x in the universe ‡ Example : English language to Propositional form * “ Someone loves you ” x : Someone • x loves you * x P(x) where P(x) is predicate tells : ‘ x loves you ’ x is variable for object ‘ someone ’ that populate universe of discourse KR - Predicate Logic ■ Formula : In mathematical logic, a formula is a type of abstract object, a token of which is a symbol or string of symbols which may be interpreted as any meaningful unit in a formal language. ‡ Terms : Defined recursively as variables, or constants, or functions like f(t1, . . . , tn), where f is an n-ary function symbol, and t1, . . . , tn are terms. Applying predicates to terms produce atomic formulas. ‡ Atomic formulas : An atomic formula (or simply atom) is a formula with no deeper propositional structure, i.e., a formula that contains no logical connectives or a formula that has no strict sub-formulas. − Atoms are thus the simplest well-formed formulas of the logic. − Compound formulas are formed by combining the atomic formulas using the logical connectives. − Well-formed formula ("wiff") is a symbol or string of symbols (a formula) generated by the formal grammar of a formal language. An atomic formula is one of the form: − t1 = t2, where t1 and t2 are terms, or − R(t1, . . . , tn), where R is an n-ary relation symbol, and t1, . . . , tn are terms. − ¬ a is a formula when a is a formula. − (a ∧ b) and (a v b) are formula when a and b are formula ‡ Compound formula : example ((((a ∧ b ) ∧ c) ∨ ((¬ a ∧ b) ∧ c)) ∨ ((a ∧ ¬ b) ∧ c)) KR – logic relation 2.2 Representing “ IsA ” and “ Instance ” Relationships Logic statements, containing subject, predicate, and object, were explained. Also stated, two important attributes "instance" and "isa", in a hierarchical structure (ref. fig. Inheritable KR). These two attributes support property inheritance and play important role in knowledge representation. The ways, attributes "instance" and "isa", are logically expressed are : ■ Example : A simple sentence like "Joe is a musician" ◊ Here "is a" (called IsA) is a way of expressing what logically is called a class-instance relationship between the subjects represented by the terms "Joe" and "musician". ◊ "Joe" is an instance of the class of things called "musician". "Joe" plays the role of instance, "musician" plays the role of class in that sentence. ◊ Note : In such a sentence, while for a human there is no confusion, but for computers each relationship have to be defined explicitly. This is specified as: [Joe] IsA [Musician] i.e., [Instance] IsA [Class] KR – functions & predicates 2.3 Computable Functions and Predicates The objective is to define class of functions C computable in terms of F. This is expressed as C { F } is explained below using two examples : (1) "evaluate factorial n" and (2) "expression for triangular functions". ■ Example (1) : A conditional expression to define factorial n ie n! ◊ Expression “ if p1 then e1 else if p2 then e2 ... else if pn then en” . ie. (p1 → e1, p2 → e2, . . . . . . pn → en ) Here p1, p2, . . . . pn are propositional expressions taking the values T or F for true and false respectively. ◊ The value of ( p1 → e1, p2 → e2, . . . . . .pn → en ) is the value of the e corresponding to the first p that has value T. ◊ The expressions defining n! , n= 5, recursively are : n! = n x (n-1)! for n ≥ 1 5! = 1 x 2 x 3 x 4 x 5 = 120 0! = 1 The above definition incorporates an instance that the product of no numbers ie 0! = 1 , then only, the recursive relation (n + 1)! = n! x (n+1) works for n = 0 . ◊ Now use conditional expressions n! = ( n = 0 → 1, n ≠ 0 → n . (n – 1 ) ! ) to define functions recursively. ◊ Example: Evaluate 2! according to above definition. 2! = ( 2 = 0 → 1, 2 ≠ 0 → 2 . ( 2 – 1 )! ) = 2 x 1! = 2 x ( 1 = 0 → 1, 1 ≠ 0 → 1 . ( 1 – 1 )! ) = 2 x 1 x 0! = 2 x 1 x ( 0 = 0 → 1, 0 ≠ 0 → 0 . ( 0 – 1 )! ) = 2x 1x 1 = 2 KR – functions & predicates ■ Example (2) : A conditional expression for triangular functions ◊ The graph of a well known triangular function is shown below -1,0 1,0 the conditional expressions for triangular functions are x = (x < 0 → -x , x ≥ 0 → x) ◊ the triangular function of the above graph is represented by the conditional expression is tri (x) = (x ≤ -1 → 0, x ≤ 0 → -x, x ≤ 1 → x, x > 1 → 0) KR - Predicate Logic – resolution 2.4 Resolution Resolution is a procedure used in proving that arguments which are expressible in predicate logic are correct. Resolution is a procedure that produces proofs by refutation or Resolution lead to refute a theorem-proving technique for sentences in propositional logic and first-order logic. − Resolution is a rule of inference. − Resolution is a computerized theorem prover. − Resolution is so far only defined for Propositional Logic. The strategy is that the Resolution techniques of Propositional logic be adopted in Predicate Logic. KR Using Rules 3. KR Using Rules Knowledge representations using predicate logic have been illustrated. The other most popular approach to Knowledge representation is to use production rules, sometimes called IF-THEN rules. The remaining two other types of KR are semantic net and frames. The production rules are simple but powerful forms of knowledge representation providing the flexibility of combining declarative and procedural representation for using them in a unified form. Examples of production rules : − IF condition THEN action − IF premise THEN conclusion − IF proposition p1 and proposition p2 are true THEN proposition p3 is true The advantages of production rules : − they are modular, − each rule define a small and independent piece of knowledge. − new rules may be added and old ones deleted − rules are usually independently of other rules. The production rules as knowledge representation mechanism are used in the design of many "Rule-based systems" also called "Production systems" . KR Using Rules • Types of rules Three major types of rules used in the Rule-based production systems. ■ Knowledge Declarative Rules : These rules state all the facts and relationships about a problem. e.g., IF inflation rate declines THEN the price of gold goes down. These rules are a part of the knowledge base. ■ Inference Procedural Rules These rules advise on how to solve a problem, while certain facts are e.g., IF the data needed is not in the system THEN request it from the user. These rules are part of the inference engine. ■ Meta rules These are rules for making rules. Meta-rules reason about which rules should be considered for firing. e.g., IF the rules which do not mention the current goal in their premise, AND there are rules which do mention the current goal in their premise, THEN the former rule should be used in preference to the latter. − Meta-rules direct reasoning rather than actually performing − Meta-rules specify which rules should be considered and in which order they should be invoked. KR – procedural & declarative 3.1 Procedural versus Declarative Knowledge These two types of knowledge were defined in earlier slides. ■ Procedural Knowledge : knowing 'how to do' Includes : Rules, strategies, agendas, procedures, models. These explains what to do in order to reach a certain conclusion. e.g., Rule: To determine if Peter or Robert is older, first find their ages. It is knowledge about how to do something. It manifests itself in the doing of something, e.g., manual or mental skills cannot reduce to words. It is held by individuals in a way which does not allow it to be communicated directly to other individuals. Accepts a description of the steps of a task or procedure. It Looks similar to declarative knowledge, except that tasks or methods are being described instead of facts or things. ■ Declarative Knowledge : knowing 'what', knowing 'that' Includes : Concepts, objects, facts, propositions, assertions, models. It is knowledge about facts and relationships, that − can be expressed in simple and clear statements, − can be added and modified without difficulty. e.g., A car has four tyres; Peter is older than Robert. Declarative knowledge and explicit knowledge are articulated knowledge and may be treated as synonyms for most practical purposes. Declarative knowledge is represented in a format that can be manipulated, decomposed and analyzed independent of its content. KR – procedural & declarative ■ Comparison : Comparison between Procedural and Declarative Knowledge : Procedural Knowledge Declarative Knowledge • Hard to debug • Easy to validate • Black box • White box • Obscure • Explicit • Process oriented • Data - oriented • Extension may effect stability • Extension is easy • Fast , direct execution • Slow (requires interpretation) • Simple data type can be used • May require high level data type • Representations in the form of • Representations in the form of sets of rules, organized into production system, the entire set routines and subroutines. of rules for executing the task. KR – procedural & declarative ■ Comparison : Comparison between Procedural and Declarative Language : Procedural Language Declarative Language • Basic, C++, Cobol, etc. • SQL • Most work is done by interpreter of • Most work done by Data Engine the languages within the DBMS • For one task many lines of code • For one task one SQL statement • Programmer must be skilled in • Programmer must be skilled in translating the objective into lines clearly stating the objective as a of procedural code SQL statement • Requires minimum of management • Relies on SQL-enabled DBMS to around the actual data hold the data and execute the SQL statement . • Programmer understands and has • Programmer has no interaction access to each step of the code with the execution of the SQL • Data exposed to programmer • Programmer receives data at end during execution of the code as an entire set • More susceptible to failure due to • More resistant to changes in the changes in the data structure data structure • Traditionally faster, but that is • Originally slower, but now setting changing speed records • Code of procedure tightly linked to • Same SQL statements will work front end with most front ends Code loosely linked to front end. • Code tightly integrated with • Code loosely linked to structure of structure of the data store data; DBMS handles structural • Programmer works with a pointer • Programmer not concerned with or cursor positioning • Knowledge of coding tricks applies • Knowledge of SQL tricks applies to only to one language any language using SQL KR – Logic Programming 3.2 Logic Programming Logic programming offers a formalism for specifying a computation in terms of logical relations between entities. − logic program is a collection of logic statements. − programmer describes all relevant logical relationships between the various entities. − computation determines whether or not, a particular conclusion follows from those logical statements. • characteristics of Logic program Logic program is characterized by set of relations and inferences. − the program consists of a set of axioms and a goal statement. − the Rules of inference determine whether the axioms are sufficient to ensure the truth of the goal statement. − the execution of a logic program corresponds to the construction of a proof of the goal statement from the axioms. − the Programmer specify basic logical relationships, does not specify the manner in which inference rules are applied. Thus Logic + Control = Algorithms • Examples of Logic Statements − Statement A grand-parent is a parent of a parent. − Statement expressed in more closely related logic terms as A person is a grand-parent if she/he has a child and that child is a parent. − Statement expressed in first order logic as (for all) x: grand-parent(x) ← (there exist) y, z : parent(x, y) & parent(y, z) KR – Logic Programming • Logic programming Language A programming language includes : − the syntax − the semantics of programs and − the computational model. There are many ways of organizing computations. The most familiar paradigm is procedural. The program specifies a computation by saying "how" it is to be performed. FORTRAN, C, and object-oriented languages fall under this general approach. Another paradigm is declarative. The program specifies a computation by giving the properties of a correct answer. Prolog and logic data language (LDL) are examples of declarative languages, emphasize the logical properties of a computation. Prolog and LDL are called logic programming languages. PROLOG is the most popular Logic programming system. KR – Logic Programming • Syntax and terminology (relevant to Prolog programs) In any language, the formation of components (expressions, statements, etc.), is guided by syntactic rules. The components are divided into two parts: (A) data components and (B) program components. (A) Data components : Data components are collection of data objects that follow hierarchy. Data object of any kind is also called Data Objects a term. A term is a constant, a variable (terms) or a compound term. Simple data object is not Simple Structured decomposable; e.g. atoms, numbers, constants, variables. The syntax Constants Variables distinguishes the data objects, hence no need for declaring them. Atoms Numbers Structured data object are made of several components; e.g. general, special structure. All these data components were mentioned in the earlier slides, are now explained in detail below. KR – Logic Programming (a) Data objects : The data objects of any kind is called a term. ◊ Term : examples ‡ Constants: denote elements such as integers, floating point, atoms. ‡ Variables: denote a single but unspecified element; symbols for variables begin with an uppercase letter or an underscore. ‡ Compound terms: comprise a functor and sequence of one or more compound terms called arguments. ► Functor : is characterized by its name, which is an atom, and its arity or number of arguments. ƒ/n = ƒ( t1 , t2, . . . tn ) where ƒ is name of the functor and is of arity n ti 's are the arguments ƒ/n denotes functor ƒ of arity n Functors with the same name but different arities are distinct. ‡ Ground and non-ground: Terms are ground if they contain no variables; otherwise they are non-ground. Goals are atoms or compound terms, and are generally non-ground. KR – Logic Programming (b) Simple data objects : Atoms, Numbers, Variables ◊ Atoms ‡ a lower-case letter, possibly followed by other letters (either case), digits, and underscore character. e.g. a greaterThan two_B_or_not_2_b ‡ a string of special characters such as: + - * / \ = ^ < > : . ~ @ # $ & e.g. <> ##&& ::= ‡ a string of any characters enclosed within single quotes. e.g. 'ABC' '1234' 'a<>b' ‡ following are also atoms ! ; [] {} ◊ Numbers ‡ applications involving heavy numerical calculations are rarely written in Prolog. ‡ integer representation: e.g. 0 -16 33 +100 ‡ real numbers written in standard or scientific notation, e.g. 0.5 -3.1416 6.23e+23 11.0e-3 -2.6e-2 ◊ Variables ‡ begins by a capital letter, possibly followed by other letters (either case), digits, and underscore character. e.g. X25 List Noun_Phrase KR – Logic Programming (c) Structured data objects : General Structures , Special Structures ◊ General Structures ‡ a structured term is syntactically formed by a functor and a list of ‡ functor is an atom. ‡ list of arguments appears between parentheses. ‡ arguments are separated by a comma. ‡ each argument is a term (i.e., any Prolog data object). ‡ the number of arguments of a structured term is called its arity. ‡ e.g. greaterThan(9, 6) f(a, g(b, c), h(d)) plus(2, 3, 5) Note : a structure in Prolog is a mechanism for combining terms together, like integers 2, 3, 5 are combined with the functor plus. ◊ Special Structures ‡ In Prolog an ordered collection of terms is called a list . ‡ Lists are structured terms and Prolog offers a convenient notation to represent them: * Empty list is denoted by the atom [ ]. * Non-empty list carries element(s) between square brackets, separating elements by comma. e.g. [bach, bee] [apples, oranges, grapes] [] KR – Logic Programming (B) Program Components A Prolog program is a collection of predicates or rules. A predicate establishes a relationships between objects. (a) Clause, Predicate, Sentence, Subject ‡ Clause is a collection of grammatically-related words . ‡ Predicate is composed of one or more clauses. ‡ Clauses are the building blocks of sentences; every sentence contains one or more clauses. ‡ A Complete Sentence has two parts: subject and predicate. o subject is what (or whom) the sentence is about. o predicate tells something about the subject. ‡ Example 1 : "cows eat grass". It is a clause, because it contains the subject "cows" and the predicate "eat grass." ‡ Example 2 : "cows eating grass are visible from highway" This is a complete clause. The subject "cows eating grass" and the predicate "are visible from the highway" makes complete thought. KR – Logic Programming (b) Predicates & Clause Syntactically a predicate is composed of one or more clauses. ‡ The general form of clauses is : <left-hand-side> :- <right-hand-side>. where LHS is a single goal called "goal" and RHS is composed of one or more goals, separated by commas, called "sub-goals" of the goal on left-hand side. ‡ The structure of a clause in logic program head body pred ( functor(var1, var2)) :- pred(var1) , pred(var2) literal literal ‡ Example : grand_parent (X, Y) :- parent(X, Z), parent(Z, Y). parent (X, Y) :- mother(X, Y). parent (X, Y) :- father(X, Y). ‡ Interpretation: * a clause specifies the conditional truth of the goal on the LHS; i.e., goal on LHS is assumed to be true if the sub-goals on RHS are all true. A predicate is true if at least one of its clauses is true. * An individual "Y" is the grand-parent of "X" if a parent of that same "X" is "Z" and "Y" is the parent of that "Z". (Y is parent of Z) (Z is parent of X) Y Z X (Y is grand parent of X) * An individual "Y" is a parent of "X" if "Y" is the mother of "X" (Y is parent of X) Y X (Y is mother of X) * An individual "Y" is a parent of "X" if "Y" is the father of "X". (Y is parent of X) Y X (Y is father of X) KR – Logic Programming (c) Unit Clause - a special Case Unlike the previous example of conditional truth, one often encounters unconditional relationships that hold. ‡ In Prolog the clauses that are unconditionally true are called unit clause or fact ‡ Example : Unconditionally relationships say 'Y' is the father of 'X' is unconditionally true. This relationship as a Prolog clause is : father(X, Y) :- true. Interpreted as relationship of father between Y and X is always true; or simply stated as Y is father of X ‡ Goal true is built-in in Prolog and always holds. ‡ Prolog offers a simpler syntax to express unit clause or fact father(X, Y) ie the :- true part is simply omitted. KR – Logic Programming (d) Queries In Prolog the queries are statements called directive. A special case of directives, are called queries. ‡ Syntactically, directives are clauses with an empty left-hand side. Example : ? - grandparent(X, W). This query is interpreted as : Who is a grandparent of X ? By issuing queries, Prolog tries to establish the validity of specific ‡ The result of executing a query is either success or failure Success, means the goals specified in the query holds according to the facts and rules of the program. Failure, means the goals specified in the query does not hold according to the facts and rules of the program. KR – Logic - models of computation • Programming paradigms : Models of Computation A complete description of a programming language includes the computational model, syntax, semantics, and pragmatic considerations that shape the language. Models of Computation : A computational model is a collection of values and operations, while computation is the application of a sequence of operations to a value to yield another value. There are three basic computational models : (a) Imperative, (b) Functional, and (c) Logic. In addition to these, there are two programming paradigms (concurrent and object-oriented programming). While, they are not models of computation, they rank in importance with computational models. KR – Logic - models of computation (a) Imperative Model : The Imperative model of computation, consists of a state and an operation of assignment which is used to modify the state. Programs consist of sequences of commands. The computations are changes in the state. Example 1 : Linear function A linear function y = 2x + 3 can be written as Y := 2 ∗ X + 3 The implementation determines the value of X in the state and then create a new state, which differs from the old state. The value of Y in the new state is the value that 2 ∗ X + 3 had in the old state. Old State: X = 3, Y = -2, Y := 2 ∗ X + 3 New State: X = 3, Y = 9, The imperative model is closest to the hardware model on which programs are executed, that makes it most efficient model in terms of execution time. KR – Logic - models of computation (b) Functional model : The Functional model of computation, consists of a set of values, functions, and the operation of functions. The functions may be named and may be composed with other functions. They can take other functions as arguments and return results. The programs consist of definitions of functions. The computations are application of functions to ‡ Example 1 : Linear function A linear function y = 2x + 3 can be defined as : f (x) = 2 ∗ x + 3 ‡ Example 2 : Determine a value for Circumference. Assigned a value to Radius, that determines a value for Circumference = 2 × pi × radius where pi = 3.14 Generalize Circumference with the variable "radius" ie Circumference(radius) = 2 × pi × radius , where pi = 3.14 Functional models are developed over many years. The notations and methods form the base upon which problem solving methodologies rest. KR – Logic - models of computation (c) Logic model : The logic model of computation is based on relations and logical inference. Programs consist of definitions of relations. Computations are inferences (is a proof). ‡ Example 1 : Linear function A linear function y = 2x + 3 can be represented as : f (X , Y) if Y is 2 ∗ X + 3. ‡ Example 2: Determine a value for Circumference. The earlier circumference computation can be represented as: Circle (R , C) if Pi = 3.14 and C = 2 ∗ pi ∗ R. The function is represented as a relation between radious R and circumference C. ‡ Example 3: Determine the mortality of Socrates. The program is to determine the mortality of Socrates. The fact given that Socrates is human. The rule is that all humans are mortal, that is for all X, if X is human then X is mortal. To determine the mortality of Socrates, make the assumption that there are no mortals, that is ¬ mortal (Y) [logic model continued in the next slide] KR – Logic - models of computation [logic model continued in the previous slide] ‡ The fact and rule are: human (Socrates) mortal (X) if human (X) ‡ To determine the mortality of Socrates, make the assumption that there are no mortals i.e. ¬ mortal (Y) ‡ Computation (proof) that Socrates is mortal : 1. human(Socrates) Fact 2. mortal(X) if human(X) Rule 3 ¬mortal(Y) assumption 4.(a) X=Y from 2 & 3 by unification 4.(b) ¬human(Y) and modus tollens 5. Y = Socrates from 1 and 4 by unification 6. Contradiction 5, 4b, and 1 ‡ Explanation : * The 1st line is the statement "Socrates is a man." * The 2nd line is a phrase "all human are mortal" into the equivalent "for all X, if X is a man then X is mortal". * The 3rd line is added to the set to determine the mortality of Socrates. * The 4th line is the deduction from lines 2 and 3. It is justified by the inference rule modus tollens which states that if the conclusion of a rule is known to be false, then so is the hypothesis. * Variables X and Y are unified because they have same value. * By unification, Lines 5, 4b, and 1 produce contradictions and identify Socrates as mortal. * Note that, resolution is the an inference rule which looks for a contradiction and it is facilitated by unification which determines if there is a substitution which makes two terms the same. Logic model formalizes the reasoning process. It is related to relational data bases and expert systems. KR – forward-backward reasoning 3.3 Forward versus Backward Reasoning Rule-Based system architecture consists a set of rules, a set of facts, and an inference engine. The need is to find what new facts can be derived. Given a set of rules, there are essentially two ways to generate new knowledge: one, forward chaining and the other, backward chaining. ■ Forward chaining : also called data driven. It starts with the facts, and sees what rules apply. ■ Backward chaining : also called goal driven. It starts with something to find out, and looks for rules that will help in answering it. KR – forward-backward reasoning ■ Example 1 : Rule R1 : IF hot AND smoky THEN fire Rule R2 : IF alarm_beeps THEN smoky Rule R3 : IF fire THEN switch_on_sprinklers Fact F1 : alarm_beeps [Given] Fact F2 : hot [Given] ■ Example 2 : Rule R1 : IF hot AND smoky THEN ADD fire Rule R2 : IF alarm_beeps THEN ADD smoky Rule R3 : IF fire THEN ADD switch_on_sprinklers Fact F1 : alarm_beeps [Given] Fact F2 : hot [Given] KR – forward-backward reasoning ■ Example 3 : A typical Forward Chaining Rule R1 : IF hot AND smoky THEN ADD fire Rule R2 : IF alarm_beeps THEN ADD smoky Rule R3 : If fire THEN ADD switch_on_sprinklers Fact F1 : alarm_beeps [Given] Fact F2 : hot [Given] Fact F4 : smoky [from F1 by R2] Fact F2 : fire [from F2, F4 by R1] Fact F6 : switch_on_sprinklers [from F4 by R3] ■ Example 4 : A typical Backward Chaining Rule R1 : IF hot AND smoky THEN fire Rule R2 : IF alarm_beeps THEN smoky Rule R3 : If _re THEN switch_on_sprinklers Fact F1 : hot [Given] Fact F2 : alarm_beeps [Given] Goal : Should I switch sprinklers on? KR – forward chaining • Forward chaining The Forward chaining system, properties , algorithms, and conflict resolution strategy are illustrated. ■ Forward chaining system Working Inference Memory Engine facts rules ‡ facts are held in a working memory ‡ condition-action rules represent actions to be taken when specified facts occur in working memory. ‡ typically, actions involve adding or deleting facts from the working ■ Properties of Forward Chaining ‡ all rules which can fire do fire. ‡ can be inefficient - lead to spurious rules firing, unfocused problem ‡ set of rules that can fire known as conflict set. ‡ decision about which rule to fire is conflict resolution. KR – forward chaining ■ Forward chaining algorithm - I ‡ Collect the rule whose condition matches a fact in WM. ‡ Do actions indicated by the rule. (add facts to WM or delete facts from WM) Until problem is solved or no condition match Apply on the Example 2 extended (adding 2 more rules and 1 fact) Rule R1 : IF hot AND smoky THEN ADD fire Rule R2 : IF alarm_beeps THEN ADD smoky Rule R3 : If fire THEN ADD switch_on_sprinklers Rule R4 : IF dry THEN ADD switch_on_humidifier Rule R5 : IF sprinklers_on THEN DELETE dry Fact F1 : alarm_beeps [Given] Fact F2 : hot [Given] Fact F2 : Dry [Given] Now, two rules can fire (R2 and R4) ‡ R4 fires, humidifier is on (then, as before) ‡ R2 fires, humidifier is off A conflict ! ■ Forward chaining algorithm - II ‡ Collect the rules whose conditions match facts in WM. ‡ If more than one rule matches ◊ Use conflict resolution strategy to eliminate all but one ‡ Do actions indicated by the rules (add facts to WM or delete facts from WM) Until problem is solved or no condition match KR – forward chaining ■ Conflict Resolution Strategy Conflict set is the set of rules that have their conditions satisfied by working memory elements. Conflict resolution normally selects a single rule to fire. The popular conflict resolution mechanisms are Refractory, Recency, Specificity. ◊ Refractory ‡ a rule should not be allowed to fire more than once on the same data. ‡ discard executed rules from the conflict set. ‡ prevents undesired loops. ◊ Recency ‡ rank instantiations in terms of the recency of the elements in the premise of the rule. ‡ rules which use more recent data are preferred. ‡ working memory elements are time-tagged indicating at what cycle each fact was added to working memory. ◊ Specificity ‡ rules which have a greater number of conditions and are therefore more difficult to satisfy, are preferred to more general rules with fewer ‡ more specific rules are ‘better’ because they take more of the data into account. KR – forward chaining ■ Alternative to Conflict Resolution – Use Meta Knowledge Instead of conflict resolution strategies, sometimes we want to use knowledge in deciding which rules to fire. Meta-rules reason about which rules should be considered for firing. They direct reasoning rather than actually performing reasoning. ‡ Meta-knowledge : knowledge about knowledge to guide search. ‡ Example of meta-knowledge IF conflict set contains any rule (c , a) such that a = "animal is mammal'' THEN fire (c , a) ‡ This example says meta-knowledge encodes knowledge about how to guide search for solution. ‡ Meta-knowledge, explicitly coded in the form of rules with "object level" knowledge. KR – backward chaining • Backward chaining Backward chaining system and the algorithm are illustrated. ■ Backward chaining system ‡ Backward chaining means reasoning from goals back to facts. The idea is to focus on the search. ‡ Rules and facts are processed using backward chaining interpreter. ‡ Checks hypothesis, e.g. "should I switch the sprinklers on?" ■ Backward chaining algorithm ‡ Prove goal G : If G is in the initial facts , it is proven. Otherwise, find a rule which can be used to conclude G, and try to prove each of that rule's conditions. Smoky hot Encoding of rules Rule R1 : IF hot AND smoky THEN fire Rule R2 : IF alarm_beeps THEN smoky Rule R3 : If fire THEN switch_on_sprinklers Fact F1 : hot [Given] Fact F2 : alarm_beeps [Given] Goal : Should I switch sprinklers on? KR – backward chaining • Forward vs Backward Chaining ‡ Depends on problem, and on properties of rule set. ‡ If there is clear hypotheses, then backward chaining is likely to be better; e.g., Diagnostic problems or classification problems, Medical expert systems ‡ Forward chaining may be better if there is less clear hypothesis and want to see what can be concluded from current situation; e.g., Synthesis systems - design / configuration. KR – control knowledge 3.4 Control Knowledge An algorithm consists of : logic component, that specifies the knowledge to be used in solving problems, and control component, that determines the problem-solving strategies by means of which that knowledge is used. Thus Algorithm = Logic + Control . The logic component determines the meaning of the algorithm whereas the control component only affects its efficiency. An algorithm may be formulated in different ways, producing same behavior. One formulation, may have a clear statement in logic component but employ a sophisticated problem solving strategy in the control component. The other formulation, may have a complicated logic component but employ a simple problem-solving strategy. The efficiency of an algorithm can often be improved by improving the control component without changing the logic of the algorithm and therefore without changing the meaning of the algorithm. The trend in databases is towards the separation of logic and control. The programming languages today do not distinguish between them. The programmer specifies both logic and control in a single language. The execution mechanism exercises only the most rudimentary problem-solving Computer programs will be more often correct, more easily improved, and more readily adapted to new problems when programming languages separate logic and control, and when execution mechanisms provide more powerful problem-solving facilities of the kind provided by intelligent theorem-proving systems. 4. References 1. Elaine Rich and Kevin Knight, Carnegie Mellon University, “Artificial Intelligence, 2006 2. Stuart Russell and Peter Norvig, University of California, Artificial Intelligence: A Modern Approach, http://aima.cs.berkeley.edu/, http://www.cs.berkeley.edu/~russell/intro.html 3. Frans Coenen, University of Liverpool, Artificial Intelligence, 2CS24, 4. John McCarthy, Stanford University, what is artificial intelligence? 5. Randall Davis, Howard Shrobe, Peter Szolovits, What is a Knowledge Representation? 6. Conversion of data to knowledge, 7. Knowledge Management—Emerging Perspectives, 8. Knowledge Management , http://www.nwlink.com/~donclark/knowledge/knowledge.html 9. Nickols, F. W. (2000), The knowledge in knowledge management, 10. Paul Brna, Prolog Programming A First Course, 11. Mike Sharples, David Hogg, Chris Hutchison, Steve Torrance, David Young, A practical Introduction to Artificial Intelligence, http://www.informatics.susx.ac.uk/books/computers- 12. Alison Cawsey, Databases and Artificial Intelligence 3 Artificial Intelligence Segment, 13. Milos Hauskrecht , CS2740 Knowledge Representation (ISSP 3712), 14. Tru Hoang Cao , Knowledge Representation – chapter 4, 15. Agnar Aamodt, A Knowledge-Intensive, Integrated Approach to Problem Solving and Sustained Learning, http://www.idi.ntnu.no/grupper/su/publ/phd/aamodt-thesis.pdf 16. Ronald J. Brachman and Hector J. Levesque, Knowledge Representation and Reasoning, 17. Stuart C. Shapiro, Knowledge Representation, CSE 4/563 18. Robert M. Keller, Predicate Logic 19. Kevin C. Klement, Propositional Logic, 20. Aljoscha Burchardt, Stephan Walter, . . . , Computational Semantics 21. Open To Europe project, FUNDAMENTALS OF PROPOSITIONAL LOGIC 22. J Lawry, Propositional Logic Review, 23. Al Lehnen, An Elementary Introduction to Logic and Set Theory, 24. Jim Woodcock and Jim Davies, Using Z, http://www.uta.edu/cse/levine/fall99/cse5324/z/ 25. C. R. Dyer, Logic , CS 540 Lecture Notes, 26. Peter Suber, Symbolic Logic, , http://www.earlham.edu/~peters/courses/log/loghome.htm 27. John Mccarthy, A Basis For A Mathematical Theory Of Computation, 28. Shunichi Toida, CS381 Introduction to Discrete Structures, 29. Leopoldo Bertossi, Knowledge representation , http://www.scs.carleton.ca/~bertossi/KR/ 30. Anthony A. Aaby, Introduction to Programming Languages, 31. Carl Alphonce, CS312 Functional and Logic Programming, 32. Anthony A. Aaby, Introduction to Programming Languages, Note : This list is not exhaustive. The quote, paraphrase or summaries, information, ideas, text, data, tables, figures or any other material which originally appeared in someone else’s work, I sincerely acknowledge them.
{"url":"http://www.docstoc.com/docs/32524915/Artificial-Intelligence-Knowledge-Representation","timestamp":"2014-04-19T19:38:57Z","content_type":null,"content_length":"172708","record_id":"<urn:uuid:68dffd98-e302-42c9-a877-f475bbbc32a4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Cannot find my own mistake (derivation) November 21st 2010, 10:22 AM #1 Cannot find my own mistake (derivation) So I have this: $x^{y} = y^{x}$ which, in case $x$ and $y$ are positive, may be graphed like: $x^{y} = y^{x} \Longrightarrow e^{y \cdot ln(x)} = e^{x \cdot ln(y)} \Longrightarrow y \cdot ln(x) = x \cdot ln(y) \Longrightarrow ln(x^{y}) = ln(y^{x})$ It seems plotting $x^{y} = y^{x}$ and $ln(x^{y}) = ln(y^{x})$ should be essentially the same. However, my real problem is when I try to get $\frac{dy}{dx}$ out of those equations. *Unfortunately first pictures showing my thorough work had to be removed because of some reader-disruptive flaws.* Essence of derivation of equation #1: $\frac{d}{dx}(x^{y} = y^{x})$ $y \cdot x^{y-1} \cdot \frac{dx}{dx} + x^{y} \cdot ln(x) \cdot \frac{dy}{dx} = x \cdot y^{x-1} \cdot \frac{dy}{dx} + y^{x} \cdot ln(y) \cdot \frac{dx}{dx}$ $\frac{dy}{dx} = \frac{y^{x} \cdot ln(y) - y \cdot x^{y-1}}{x^{y} \cdot ln(x) - x \cdot y^{x-1}}$ Essence of derivation of equation #2: $\frac{d}{dx}(ln(x^{y}) = ln(y^{x}))$ $\frac{y}{x} \cdot \frac{dx}{dx} + ln(x) \cdot \frac{dy}{dx} = \frac{x}{y} \cdot \frac{dy}{dx} + ln(y) \cdot \frac{dx}{dx}$ $\frac{dy}{dx} = \frac{ln(y) - \frac{y}{x}}{ln(x) - \frac{x}{y}}$ Why is there a big difference? Last edited by Pranas; November 22nd 2010 at 10:09 AM. So I was looking for the derivative of this: What is that very first line? This is false big time: the left hand is $yx^{y-1}$ , whereas the right one is $y^x\ln y$ ... But then I tried to put everything inside logarithm ln() , which, in case numbers are positive, I thought wouldn't change the answer. However it changed big time: Can you explain, what is wrong with my thoughts? tonio, could you please be at least a little more thorough? $x^{y} = y^{x}$ is what i have. $\frac{dy}{dx} = ?$ is what I need. As long as I understand you saying $y \cdot x^{y-1} = y^{x} \cdot ln(y)$ that doesn't give me much. Pranas, could you please be a little less sloppy? You did not say anything about having $x^y=y^x$ . It doesn't appear anywhere in your message. It only appears the first equality $\frac{d}{dx}(x^y)=\frac{d}{x}(y^x)$ , which gives you what I wrote you. And don't bother in writing back asking for my help unless you first apologize. Pranas, could you please be a little less sloppy? You did not say anything about having $x^y=y^x$ . It doesn't appear anywhere in your message. It only appears the first equality $\frac{d}{dx}(x^y)=\frac{d}{x}(y^x)$ , which gives you what I wrote you. And don't bother in writing back asking for my help unless you first apologize. Yes, I apologize. I simply couldn't figure your post out at first, although it is very correct Seems I should have written $\frac{d}{dx}(x^{y} = y^{x})$ ? However $x^{y} = y^{x} \Longrightarrow e^{y \cdot ln(x)} = e^{x \cdot ln(y)} \Longrightarrow y \cdot ln(x) = x \cdot ln(y) \Longrightarrow ln(x^{y}) = ln(y^{x})$ so it seems like the disagreement I get $\frac{d}{dx}(x^{y} = y^{x}) eq \frac{d}{dx}(ln(x^{y}) = ln(y^{x}))$ prevails although it (I believe) really means the same. Yes, I apologize. I simply couldn't figure your post out at first, although it is very correct Seems I should have written $\frac{d}{dx}(x^{y} = y^{x})$ ? However $x^{y} = y^{x} \Longrightarrow e^{y \cdot ln(x)} = e^{x \cdot ln(y)} \Longrightarrow y \cdot ln(x) = x \cdot ln(y) \Longrightarrow ln(x^{y}) = ln(y^{x})$ so it seems like the disagreement I get $\frac{d}{dx}(x^{y} = y^{x}) eq \frac{d}{dx}(ln(x^{y}) = ln(y^{x}))$ prevails although it (I believe) really means the same. Well, that last statement is certainly true- $\frac{d(x^y= y^x)}{dx}e \frac{d ln(x^y)= ln(y^x)}{dx}$ and no one has said they are equal. The problem goes back to your initial post: "I was looking for the derivative of this $\frac{d x^y}{dx}= \frac{d y^x}{dx}$ which is very ambiguous: were you asking for the derivative of each side of that equation or were you asking how to arrive at that equation? Now you say "Seems I should have written $\frac{d}{dx}(x^{y} = y^{x})$ If we let $z= x^y$, then $ln(z)= y ln(x)$ and so $\frac{1}{z}\frac{dz}{dx}= \frac{y}{x}+ ln(x)\frac{dy}{dx}$. That is, $\frac{dz}{dx}= \frac{d x^y}{dx}= \left(\frac{y}{x}+ ln(x)\frac{dy}{dx}\right)z$ $\frac{dx^y}{dx}= yx^{y-1}+ y^x ln(x)\frac{dy}{dx}$. Of course, that depends upon dy/dx. We cannot differentiate some function of x without knowing precisely what that function is- what y is as a function of x. (Note that if y does NOT depend upon x, if y is a constant, then dy/dx= 0 and this just becomes the standard $\frac{dx^y}{dx}= yx^{y-1}$ from Calculus I.) Last edited by HallsofIvy; November 23rd 2010 at 04:32 AM. Indeed there were some obvious flaws in my "questionnaire", I am trying to correct most of it as the conversation evolves. For the sake of simplicity, let's say we're interested only in positive values of $x$ and $y$ (I've added a possible graphing below in this post). Please be aware that I did not add $ln()$ just like that, I tried to rationally generate it as showed in this thread: $x^{y} = y^{x} \Longrightarrow e^{y \cdot ln(x)} = e^{x \cdot ln(y)} \Longrightarrow y \cdot ln(x) = x \cdot ln(y) \Longrightarrow ln(x^{y}) = ln(y^{x})$ Maybe I am wrong, but at this moment I do not see how were the relations between $x$ and $y$ effected by that, therefore I assume I haven't changed the possible plot nor the value of $\frac{dy} I did not write what I really meant to at that point. Sorry. ...Now you say "Seems I should have written $\frac{d}{dx}(x^{y} = y^{x})$ If we let $z= x^y$, then [tex]ln(z)= y ln(x) and so $\frac{1}{z}\frac{dz}{dx}= \frac{y}{x}+ ln(x)\frac{dy}{dx}$. That is, $\frac{dz}{dx}= \frac{d x^y}{dx}= \left(\frac{y}{x}+ ln(x)\frac{dy}{dx}\right)z$ $\frac{dx^y}{dx}= yx^{y-1}+ y^x ln(x)\frac{dy}{dx}$... Well, yes. I pretty much applied parallel method to yours on both $x^{y}$ and $y^{x}$. Then expressed $\frac{dy}{dx}$ (that would be my equation #1 in updated first message). As far as my mental calculation goes, answer seems to be identical as the one you're approaching in this part of the post. ...Of course, that depends upon dy/dx. We cannot differentiate some function of x without knowing precisely what that function is- what y is as a function of x. (Note that if y does NOT depend upon x, if y is a constant, then dy/dx= 0 and this just becomes the standard $\frac{dx^y}{dx}= yx^{y-1}$ from Calculus I.) Indeed you're correct once more. That is a reasonable variation in a broad sense, although might be a little vulgar mapped on a plane, because I would imagine it as of having only a few points. What I imagined as a relation between positive $x$ and $y$ in the function $x^{y} = y^{x}$ is like this: In my language that is defined to be a simple function, only "unexpressed", because of not being represented by $y =$*operations involving only variable $x$ and constants* P.S. We've had some confusion here, so I can politely remind, that what's still unclear for me is inequality $\frac{d(x^{y}= y^{x})}{dx}e \frac{d(ln(x^{y})= ln(y^{x}))}{dx}$. Also I tried to do my best in editing the first post. Last edited by Pranas; November 22nd 2010 at 10:09 AM. November 21st 2010, 03:23 PM #2 Oct 2009 November 22nd 2010, 04:49 AM #3 November 22nd 2010, 06:04 AM #4 Oct 2009 November 22nd 2010, 06:35 AM #5 November 22nd 2010, 06:50 AM #6 MHF Contributor Apr 2005 November 22nd 2010, 09:04 AM #7
{"url":"http://mathhelpforum.com/calculus/163968-cannot-find-my-own-mistake-derivation.html","timestamp":"2014-04-16T06:37:04Z","content_type":null,"content_length":"72487","record_id":"<urn:uuid:eb00798b-f4d6-487b-8756-a9dea895cc4a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
GA-Based Fuzzy Sliding Mode Controller for Nonlinear Systems Mathematical Problems in Engineering Volume 2008 (2008), Article ID 325859, 16 pages Research Article GA-Based Fuzzy Sliding Mode Controller for Nonlinear Systems ^1Department of Civil Engineering, National Central University, Chung-li 32011, Taiwan ^2Department of Logistics Management, College of Management, Shu-Te University, Kaohsiung 82445, Taiwan Received 20 February 2008; Revised 4 June 2008; Accepted 8 August 2008 Academic Editor: Paulo Gonçalves Copyright © 2008 P. C. Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Generally, the greatest difficulty encountered when designing a fuzzy sliding mode controller (FSMC) or an adaptive fuzzy sliding mode controller (AFSMC) capable of rapidly and efficiently controlling complex and nonlinear systems is how to select the most appropriate initial values for the parameter vector. In this paper, we describe a method of stability analysis for a GA-based reference adaptive fuzzy sliding model controller capable of handling these types of problems for a nonlinear system. First, we approximate and describe an uncertain and nonlinear plant for the tracking of a reference trajectory via a fuzzy model incorporating fuzzy logic control rules. Next, the initial values of the consequent parameter vector are decided via a genetic algorithm. After this, an adaptive fuzzy sliding model controller, designed to simultaneously stabilize and control the system, is derived. The stability of the nonlinear system is ensured by the derivation of the stability criterion based upon Lyapunov's direct method. Finally, an example, a numerical simulation, is provided to demonstrate the control methodology. 1. Introduction Over the past few years, fuzzy control (FC) can be designed without needing an exact mathematical model of the system to be controlled, and can efficiently control complex continuous unmodeled or partially modeled processes [1, 2]. There have been significant research efforts devoted to the analysis and control designs for fuzzy systems (see [3, 4] and the references therein). The main motivation for this development has been applied to practical nonlinear systems and engineering problems (see [5–7] and the references therein). Undoubtedly, Lyapunov’s theory is one of the most common approaches for dealing with the stability analysis of systems. However, to overcome the conservatism that arises from the use of Lyapunov’s methods, it has been necessary to develop a number of more effective methods, for example, fuzzy Lyapunov functions [8, 9]. There are also many important issues that have advanced results for T-S fuzzy control systems, such as time delays [10–13], performance [3–15], robustness [16, 17], neural networks (NNs), and genetic algorithms (GAs) [18–21]. Furthermore, much work has been published on the design of fuzzy sliding mode controllers (FSMCs) [22, 23]. An FSMC is composed of an FC and a sliding mode controller (SMC) [24–26]. An FSMC is a powerful and robust control strategy for the treatment of modeling uncertainties and external disturbances. Although control performance is good, one still has to decide on the parameters. This is one of the most important issues in their design. In the so-called adaptive FSMC (AFSMC), [27–29], an adaptive algorithm is utilized to find the best high-performance parameters for the FSMC [30, 31]. In recent years, adaptive fuzzy control system designs have attracted a good deal of attention as a promising way to approach nonlinear control problems [30, 31]. For adaptive fuzzy control, one initially constructs a fuzzy model to describe the dynamic characteristics of the controlled system; then, an FSMC is designed based on the fuzzy model to achieve the control objectives. After this, adaptive laws are designed (with Lyapunov’s synthesis approach) for tuning the adjustable parameters of the fuzzy models, and analyzing the stability of the overall system. Deciding on the fuzzy rules and the initial parameter vector values for the AFSMC is very important. A genetic algorithm [32–34] is usually used as an optimization technique in the self-learning or training strategy for deciding on the fuzzy control rules and the initial values of the parameter vector. This GA-based AFSMC should improve the immediate response, the stability, and the robustness of the control system. Another common problem encountered when switching the control input of the FSMC system is the so-called “chattering” phenomenon. Chattering is eliminated by smoothing the control discontinuity inside a thin boundary layer, which essentially acts as a low-pass filter structure for the local dynamics [25]. The boundary-layer function is introduced into these updated laws to cover parameter and modeling errors, and to guarantee that the state errors converge within a specified error bound. In this study, we focus on the design of robust tracking control for a class of nonlinear uncertain system involving plant uncertainties and external disturbances. First, the nonlinear system for the tracking of a reference trajectory for the plant [35] is described via fuzzy models with fuzzy rules. A genetic algorithm is used to find the initial values of the parameter vector. Then the designed adaptive control laws of the reference adaptive fuzzy sliding mode controller (RAFSMC) are updated. This GA-based RAFSMC would improve the immediate response, the stability, and the robustness of the control system. Finally, both the tracking error and the modeling error approach zero. 2. Reference Modeling of a Nonlinear Dynamic System The plant is a single-input/single-out th-order system with : where is the state vector of the system; is the control signal; , are smooth nonlinear functions; denotes the external disturbance which is unknown but usually bounded. The states are assumed to be available. For example, a single robot can be represented in the form of (2.1), with and being measurable. Differentiating the output with respect to time for times (till the control input appears), one obtains the input/output form of (2.1): The system is said to have a relative degree , if is bounded away from zero. Assumption 2.1. g(x) is bounded away from zero over a compact set , If the control goal is for the plant output to track a reference trajectory , the reference control input can be defined by the following reference model: where are chosen such that the polynomial is Hurwitz, and here denotes the complex Laplace variable. If , are known, and assumption 2.1 is satisfied, the control law can defined by Substituting (2.5) into (2.1), the linearized system becomes If we define as the tracking error, then the reference control input (2.4) results in the following error equation: It is clear that e will approach zero if are chosen, such that the polynomial is Hurwitz. 3. Development of a GA-Based FSMC In general, people describe the decision-making process using linguistic statements, such as “IF something happens, THEN do a certain action.” For example, let us look at a rule: “IF the temperature is high, THEN the power of the heater is low.” In this statement both “high” and “low” are linguistic terms. Although this kind of linguistic rule is not precise, humans can use them to make correct decisions. To utilize such fuzzy information in a scientific way, mathematical representation of the fuzzy information is needed. Fuzzy set theory and approximate reasoning are two ways that such linguistic information can be dealt with mathematically. A review of the literature provides the theoretical foundation for the developed fuzzy logic controller. The configuration of the fuzzy logic controller is shown in Figure 1. The basic concepts for fuzzy sets and fuzzy logic are briefly described below. (1) Fuzzy set, fuzzifier, and membership function. Let denote the universe of discourse. A fuzzy set in is characterized by a membership function , with representing the grade of membership of in fuzzy set . For example, the Gaussian-shaped membership function is represented as , where is the center and denotes the spread of the membership function. (2) Fuzzy rule base and fuzzy inference engine. Each rule in the fuzzy rule base can be expressed as (3) Deffuzzifier. The defuzzifier maps a fuzzy set in to a crisp point . There are several defuzzification methods described in the literature. The most popular is the weighted average defuzzification method defined as . The FSMC is composed of a sliding mode controller and an FLC. This makes it a powerful and robust control strategy for the treatment of modeling uncertainties and external disturbances. The sliding mode plant combined with the FLC is shown in Figure 2. Genetic algorithms (GAs) are parallel, global search techniques derived from the concepts of evolutionary theory and natural genetics. They emulate biological evolution by means of genetic operations such as reproduction, crossover, and mutation. GAs are usually used as optimization techniques and it has been shown that they also perform well with multimodal functions (i.e., functions which have multiple local optima). Genetic algorithms work with a set of artificial elements (binary strings, e.g., ) called a population. An individual (string) is referred to as a chromosome, and a single bit in the string is called a gene. A new population (called offspring) is generated by the application of genetic operators to the chromosomes in the old population (called parents). Each iteration of the genetic operation is referred to as a generation. A fitness function, specifically the function to be maximized, is used to evaluate the fitness of an individual. The offspring may have better fitness than their parents. Consequently, the value of the fitness function increases from generation to generation. In most genetic algorithms, mutation is a random-work mechanism to avoid the problem of being trapped in a local optimum. Theoretically, a global optimal solution can be found. Offspring are generated from the parents until the size of the new population is equal to that of the old population. This evolutionary procedure continues until the fitness reaches the desired specifications. However, in a specific application, the fitness specification might be used to stop the evolutionary process. In most applications, the optimal fitness value is totally unknown. In this case, the evolutionary process is interrupted either by stabilization of the fitness value (the variation is below a specific value) or by reaching the maximum number of generations. Knowledge acquisition is the most important task in the fuzzy sliding mode controller design. The initial values of the entries in the consequent parameter vector are decided by the self-organizing of FSMC system which developed based on GA. The configuration of this system is shown in Figure 3. The learning procedure for the GA-based FSMC is summarized as follows. (1) The fuzzy rule base of FSMC (with fixed premise parts and random consequence parts) is constructed. For example, FSMC for system (2.1): where is an unknown linguistic label for the control ; is the adjustable parameter, which have to be encoded as binary strings for genetic operations. (2) Encode each parameter, (), to a -bit binary code, , where and denote the encoding operator which encodes the real values to the corresponding binary codes and synthesizes the chromosome of the th (3) Establish the population for generation , , where is the population size, and every individual corresponds to a binary-code parameter of an FSMC candidate. (4) Evaluate the fitness value of each individual. The fitness function F is defined as , where denotes the iteration instance; is the sampling period; is the rounding off operator; and are positive weights; is a very small positive constant used to avoid the numerical error of dividing by zero. (5) Based on the fitness value of the individual, keep the best and apply the genetic operators. Assuming that the population size is 12, pick the top ten-fitted individuals in to apply as genetic operators, that is, reproduction, crossover, mutation (assuming the mutation rate is 0.03125), and keep the top two fitted individuals to generate a new population , as the offspring of . (6) Decode each binary code to its real value and use this to calculate the control , then apply to the system (2.1). (7) Set ; go to Step 2, and repeat the aforementioned procedure until or , where and H denote an acceptable specific fitness value and the top generation number, respectively, as specified by the In general, there are at least four methods for the construction of a fuzzy rule base: (1) from expert knowledge or operator experience; (2) modeling an operator’s control action; (3) modeling a process; (4) generating fuzzy rules by training, self-organizing, and self-learning algorithms. In Figure 3, GA is used as the learning and training mechanism. The use of the GA means that the second, third, and fourth approaches also provide an efficient way to obtain a fuzzy rule base. Although there are several methods that can provide excellent results in this kind of modeling [36–38], we are convinced that GAs are the most advantageous way to extract an optimal, or at least suboptimal fuzzy rule base for the initial values of the consequent parameter vector of the FSMC or AFSMC. 4. GA-Based RAFSMC for Nonlinear Systems A schematic representation of the GA_RAFSMC system is shown in Figure 4. If , are known, we can design the FLC (4.1) to approximate : where is the sum of the fuzzy rules, , that is, indicate the adjustable consequent parameters of the FLC, and is the vector of fuzzy basis function [23] which is defined as where and with represent the degree of membership. The in can be chosen by Since here , the sum of input variables, is only one, we know that where with represent the degree of membership. The in can be chosen by . From the approximation property of the fuzzy system, an uncertain and nonlinear plant can be well approximated and described via a fuzzy model with FLC rules to achieve the control object [14, 39, 40 Assumption 4.1. For , there exists an adjustable parameter vector such that the fuzzy system can approximate a continuous function with accuracy over the set , that is, , such that Let denote the estimate of at time . Now, we can define the estimated control output by and decide on the initial values of the consequent parameter vector based on the genetic algorithm. First, define the parameter error vector at time by , and then According to assumption 4.1, we can define the modeling error where . We can say that Now, by substituting (4.9) into (2.5), we obtain the error dynamic equation: We now define the augmented error as where in (4.11), and in (4.10) are chosen such that is strictly positive real (SPR) transfer function, and and are coprime. Now, and can be related by where is the Laplace transform of the function, and denotes the complex Laplace transform variable. If we define as the states of (4.10), then (4.10) can be realized as where According to the Kalman-Yakubovich lemma, when is SPR, there exist symmetric and positive definite matrices and such that Next, we investigate the asymptotic stability of the origin using Lyapunov’s function candidates. First, define a Lyapunov candidate function as where is a positive constant representing the learning If , the derivate of along the trajectories of the system should be negative definite for all nonlinearities that satisfy a given sector condition (Lyapunov’s stability): As mentioned above , and we can infer that , and In general, chattering must be eliminated for the controller to perform properly. This can be achieved by smoothing out control discontinuity in a thin boundary layer neighboring the switching surface. To amend the modeling error and the chattering phenomenon, we propose a modified adaptive law (4.22) with which to tune the adjustable consequent parameters of the RAFSMC: The thin boundary layer function is defined as where is the thickness of the boundary layer. If we substitute (4.22) into (4.21), then (4.21) becomes When , then If is positive and small enough, then and , such that where . It is real that if and , and hence . Thus will gradually converge to zero as all the . Based on the above inference and Lyapunov’s stability theory, will gradually converge inside the bounded zone . The tracking error and the modeling error will then both approach zero. Theorem 4.2. Consider a nonlinear uncertain system that satisfies the assumptions . Suppose that the unknown control input can be approximated by as in (4.6). Now, is given by (4.15), and is a symmetric positive definite weighting matrix. 5. Numerical Simulation In this section, the proposed GA-based RAFSMC is demonstrated with an example of the control methodology. Consider the problem of balancing an inverted pendulum on a cart as shown in Figure 5. The dynamic equations of motion of the pendulum are given below [27]: where denotes the angle (in radian) of the pendulum from the vertical; and is the angular vector. Thus the gravity constant , where is the mass of the pendulum, is the mass of the cart, is the length of (input force), is the force applied to the cart (in Newtons), and . The parameters chosen for the pendulum in this simulation are =0.1kg, =1kg, and =0.5m. The control objective in this example is to balance the inverted pendulum in the approximate range . The GA-based RAFSMC designed based on the procedure discussed above will have the following steps. Step 1. Specify the response of the control system by defining a suitable sliding surface Step 2. Construct the fuzzy rule base (3.2) and the fuzzy models (4.6) based on the genetic algorithm. After carrying out the abovementioned genetic-based learning procedure, the number of individual strings is 10, the size of population is 12, the crossover rate is 0.8333, the mutation rate is 0.03125, and the maximum number of the generations is 15. Now, the initial values of the consequent parameter vector for the GA-based RAFSMC can be chosen as follows: Step 3. Apply the controller as given by (4.6) to control the nonlinear system (2.1). Now, let , , and adjust by the adaptive law as given by (4.22). Therefore, based on Theorem 4.2, the proposed GA-based RAFSMC can asymptotically stabilize the inverted pendulum. The simulation results are illustrated in Figures 6–9. The initial conditions are , and . Figures 6–9 show that the inverted pendulum system (compare with Yoo and Ham [27]) is rapidly, asymptotically stable because the system trajectory starts from any nonzero initial state, to rapidly and asymptotically approach the origin. 6. Conclusion The stability analysis of a GA-based reference adaptive fuzzy sliding model controller for a nonlinear system is discussed. First, we track the reference trajectory for an uncertain and nonlinear plant. We make sure that it is well approximated and described via the fuzzy model involving FLC rules. Then we decide on the initial values of the consequent parameter vector via a GA. Next, an adaptive fuzzy sliding model controller is proposed to simultaneously stabilize and control the system. A stability criterion is also derived from Lyapunov’s direct method to ensure stability of the nonlinear system. Finally, we discuss an example and provide a numerical simulation. From this example, we see that the stability of the inverted pendulum system is ensured because the trajectories from nonzero initial states approach to zero by proposed controller design, and the results demonstrate that with this control methodology we can rapidly and efficiently control a complex and nonlinear system. The authors would like to thank the National Science Council of the Republic of China, Taiwan for financial support of this research under Contract no. NSC 96-2628-E-366-004-MY2. The authors are also most grateful for the kind assistance of Professor Balthazar, Editor of special issue, and the constructive suggestions from anonymous reviewers all of which has led to the making of several corrections and suggestions that have greatly aided us in the presentation of this paper. 1. G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice Hall, Upper Saddle River, NJ, USA, 1995. View at Zentralblatt MATH · View at MathSciNet 2. W.-J. Wang and H.-R. Lin, “Fuzzy control design for the trajectory tracking on uncertain nonlinear systems,” IEEE Transactions on Fuzzy Systems, vol. 7, no. 1, pp. 53–62, 1999. View at Publisher · View at Google Scholar 3. X.-J. Ma, Z.-Q. Sun, and Y.-Y. He, “Analysis and design of fuzzy controller and fuzzy observer,” IEEE Transactions on Fuzzy Systems, vol. 6, no. 1, pp. 41–51, 1998. View at Publisher · View at Google Scholar 4. T. Takagi and M. Sugeno, “Fuzzy identification of systems and its applications to modeling and control,” IEEE Transactions on Systems, Man and Cybernetics, vol. 15, no. 1, pp. 116–132, 1985. View at Zentralblatt MATH 5. F.-H. Hsiao, C. W. Chen, Y.-H. Wu, and W.-L. Chiang, “Fuzzy controllers for nonlinear interconnected TMD systems with external force,” Journal of the Chinese Institute of Engineers, vol. 28, no. 1, pp. 175–181, 2005. 6. T.-Y. Hsieh, M. H. L. Wang, C. W. Chen, et al., “A new viewpoint of s-curve regression model and its application to construction management,” International Journal on Artificial Intelligence Tools, vol. 15, no. 2, pp. 131–142, 2006. View at Publisher · View at Google Scholar 7. C.-H. Tsai, C. W. Chen, W.-L. Chiang, and M.-L. Lin, “Application of geographic information system to the allocation of disaster shelters via fuzzy models,” Engineering Computations, vol. 25, no. 1, pp. 86–100, 2008. View at Publisher · View at Google Scholar 8. C. W. Chen, W.-L. Chiang, C.-H. Tsai, C.-Y. Chen, and M. H. L. Wang, “Fuzzy Lyapunov method for stability conditions of nonlinear systems,” International Journal on Artificial Intelligence Tools, vol. 15, no. 2, pp. 163–171, 2006. View at Publisher · View at Google Scholar 9. K. Tanaka, T. Hori, and H. O. Wang, “A multiple Lyapunov function approach to stabilization of fuzzy control systems,” IEEE Transactions on Fuzzy Systems, vol. 11, no. 4, pp. 582–589, 2003. View at Publisher · View at Google Scholar 10. B. Chen, X. Liu, and S. Tong, “New delay-dependent stabilization conditions of T-S fuzzy systems with constant delay,” Fuzzy Sets and Systems, vol. 158, no. 20, pp. 2209–2224, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 11. C. W. Chen, C.-L. Lin, C.-H. Tsai, C.-Y. Chen, and K. Yeh, “A novel delay-dependent criterion for time-delay T-S fuzzy systems using fuzzy Lyapunov method,” International Journal on Artificial Intelligence Tools, vol. 16, no. 3, pp. 545–552, 2007. View at Publisher · View at Google Scholar 12. F.-H. Hsiao, J.-D. Hwang, C. W. Chen, and Z.-R. Tsai, “Robust stabilization of nonlinear multiple time-delay large-scale systems via decentralized fuzzy control,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 1, pp. 152–163, 2005. View at Publisher · View at Google Scholar 13. K. Yeh, C.-Y. Chen, and C. W. Chen, “Robustness design of time-delay fuzzy systems using fuzzy Lyapunov method,” Applied Mathematics and Computation, vol. 205, no. 2, pp. 568–577, 2008. View at Publisher · View at Google Scholar 14. C. W. Chen, K. Yeh, W.-L. Chiang, C.-Y. Chen, and D.-J. Wu, “Modeling, ${\text{H}}^{\infty }$ control and stability analysis for structural systems using Takagi-Sugeno fuzzy model,” Journal of Vibration and Control, vol. 13, no. 11, pp. 1519–1534, 2007. View at Publisher · View at Google Scholar · View at MathSciNet 15. G. Feng, C.-L. Chen, D. Sun, and Y. Zhu, “${\text{H}}^{\infty }$ controller synthesis of fuzzy dynamic systems based on piecewise Lyapunov functions and bilinear matrix inequalities,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 1, pp. 94–103, 2005. View at Publisher · View at Google Scholar 16. F.-H. Hsiao, C. W. Chen, Y.-W. Liang, S.-D. Xu, and W.-L. Chiang, “T-S fuzzy controllers for nonlinear interconnected systems with multiple time delays,” IEEE Transactions on Circuits and Systems I, vol. 52, no. 9, pp. 1883–1893, 2005. View at Publisher · View at Google Scholar 17. S. Xu and J. Lam, “Robust ${\text{H}}^{\infty }$ control for uncertain discrete-time-delay fuzzy systems via output feedback controllers,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 1, pp. 82–93, 2005. View at Publisher · View at Google Scholar 18. C. W. Chen, “Modeling and control for nonlinear structural systems via a NN-based approach,” Expert Systems with Applications. In press. View at Publisher · View at Google Scholar 19. C.-Y. Chen, J. R.-C. Hsu, and C. W. Chen, “Fuzzy logic derivation of neural network models with time delays in subsystems,” International Journal on Artificial Intelligence Tools, vol. 14, no. 6, pp. 967–974, 2005. View at Publisher · View at Google Scholar 20. P. C. Chen, C. W. Chen, and W. L. Chiang, “GA-based modified adaptive fuzzy sliding mode controller for nonlinear systems,” Expert Systems with Applications. In press. View at Publisher · View at Google Scholar 21. S. Limanond and J. Si, “Neural-network-based control design: an LMI approach,” IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1422–1429, 1998. View at Publisher · View at Google Scholar 22. R. Palm, “Robust control by fuzzy sliding mode,” Automatica, vol. 30, no. 9, pp. 1429–1437, 1994. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 23. L. X. Wang, A Course in Fuzzy Systems and Control, Prentice Hall, Englewood Cliffs, NJ, USA, 1997. View at Zentralblatt MATH 24. V. I. Utkin, Sliding Modes and Their Application in Variable Structure Systems, MIR Publishers, Moscow, Russia, 1978. View at Zentralblatt MATH 25. J. J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, NJ, USA, 1991. View at Zentralblatt MATH 26. Y. Xia and Y. Jia, “Robust sliding-mode control for uncertain time-delay systems: an LMI approach,” IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 1086–1092, 2003. View at Publisher · View at Google Scholar · View at MathSciNet 27. B. Yoo and W. Ham, “Adaptive fuzzy sliding mode control of nonlinear system,” IEEE Transactions on Fuzzy Systems, vol. 6, no. 2, pp. 315–321, 1998. View at Publisher · View at Google Scholar 28. S. Tong and H.-X. Li, “Fuzzy adaptive sliding-mode control for MIMO nonlinear systems,” IEEE Transactions on Fuzzy Systems, vol. 11, no. 3, pp. 354–360, 2003. View at Publisher · View at Google 29. S. Labiod, M. S. Boucherit, and T. M. Guerra, “Adaptive fuzzy control of a class of MIMO nonlinear systems,” Fuzzy Sets and Systems, vol. 151, no. 1, pp. 59–77, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 30. L. X. Wang, Adaptive Fuzzy Systems and Control: Design and Stability Analysis, Prentice Hall, Englewood Cliffs, NJ, USA, 1994. 31. G. Feng, S. G. Cao, and N. W. Rees, “Stable adaptive control for fuzzy dynamic systems,” Fuzzy Sets and Systems, vol. 131, no. 2, pp. 217–224, 2002. View at Publisher · View at Google Scholar · View at MathSciNet 32. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989. View at Zentralblatt MATH 33. S. C. Lin, Stable self-learning optimal fuzzy control system design and application, Ph.D. dissertation, Department of Electrical Engineering, National Taiwan University, Chung-li, Taiwan, 1997. 34. P. C. Chen, Genetic algorithm for control of structure system, M.S. thesis, Department of Civil Engineering, Chung Yuan University, Taipei, Taiwan, 1998. 35. C. C. Liu and F. C. Chen, “Adaptive control of nonlinear continuous-time systems using neural networks—general relative degree and MIMO cases,” International Journal of Control, vol. 58, no. 2, pp. 317–335, 1993. View at Publisher · View at Google Scholar · View at MathSciNet 36. J.-S. R. Jang, C.-T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice-Hall, Upper Saddle River, NJ, USA, 1997. 37. J.-S. R. Jang, “ANFIS: adaptive-network-based fuzzy inference system,” IEEE Transactions on Systems, Man and Cybernetics, vol. 23, no. 3, pp. 665–685, 1993. View at Publisher · View at Google 38. F. J. de Souza, M. M. R. Vellasco, and M. A. C. Pacheco, “Hierarchical neuro-fuzzy quadtree models,” Fuzzy Sets and Systems, vol. 130, no. 2, pp. 189–205, 2002. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 39. C. W. Chen, W. L. Chiang, and F. H. Hsiao, “Stability analysis of T-S fuzzy models for nonlinear multiple time-delay interconnected systems,” Mathematics and Computers in Simulation, vol. 66, no. 6, pp. 523–537, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 40. C. W. Chen, “Stability conditions of fuzzy systems and its application to structural and mechanical systems,” Advances in Engineering Software, vol. 37, no. 9, pp. 624–629, 2006. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/mpe/2008/325859/","timestamp":"2014-04-17T06:07:11Z","content_type":null,"content_length":"444692","record_id":"<urn:uuid:25ac8b81-cc27-46ec-928d-07a8350ba1fd>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/alexpr787/medals/1","timestamp":"2014-04-16T13:05:53Z","content_type":null,"content_length":"100157","record_id":"<urn:uuid:3bca6b8f-d4a7-4817-aa9e-b15f18bc9eb1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Cross-sectional area of a helix October 13th 2011, 03:53 PM Cross-sectional area of a helix I'm trying to find a general formula for the cross-sectional area of a 3D circular constant helix if it is sliced parallel to its axis. The shape formed should be an ellipse, but I need the input parameters to be in terms of typical helical parameters (axial pitch, helix diameter, etc.). Picture a spring sitting flat on a table. The end of it was sliced so it can sit flat, but what is the area of the flat? My colleague found the formula for how much longer the helix will be than it is tall, and then used this ratio to scale the cross section of the circle. Is this a valid approach? I appreciate any ideas that you have. I have a minor in mathematics and feel super dumb for not being able to figure this out. (Worried) Thank you very much for your time! Have a great day.
{"url":"http://mathhelpforum.com/geometry/190317-cross-sectional-area-helix-print.html","timestamp":"2014-04-18T20:56:08Z","content_type":null,"content_length":"3829","record_id":"<urn:uuid:96a7cf7c-9cfa-49b9-b183-7ebb5304c60f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis of Variance An important technique for analyzing the effect of categorical factors on a response is to perform an Analysis of Variance. An ANOVA decomposes the variability in the response variable amongst the different factors. Depending upon the type of analysis, it may be important to determine: (a) which factors have a significant effect on the response, and/or (b) how much of the variability in the response variable is attributable to each factor. STATGRAPHICS Centurion provides several procedures for performing an analysis of variance: 1. One-Way ANOVA - used when there is only a single categorical factor. This is equivalent to comparing multiple groups of data. 2. Multifactor ANOVA - used when there is more than one categorical factor, arranged in a crossed pattern. When factors are crossed, the levels of one factor appear at more than one level of the other factors. 3. Variance Components Analysis - used when there are multiple factors, arranged in a hierarchical manner. In such a design, each factor is nested in the factor above it. 4. General Linear Models - used whenever there are both crossed and nested factors, when some factors are fixed and some are random, and when both categorical and quantitative factors are One-Way ANOVA A one-way analysis of variance is used when the data are divided into groups according to only one factor. The questions of interest are usually: (a) Is there a significant difference between the groups?, and (b) If so, which groups are significantly different from which others? Statistical tests are provided to compare group means, group medians, and group standard deviations. When comparing means, multiple range tests are used, the most popular of which is Tukey's HSD procedure. For equal size samples, significant group differences can be determined by examining the means plot and identifying those intervals that do not overlap. Multifactor ANOVA When more than one factor is present and the factors are crossed, a multifactor ANOVA is appropriate. Both main effects and interactions between the factors may be estimated. The output includes an ANOVA table and a new graphical ANOVA from the latest edition of Statistics for Experimenters by Box, Hunter and Hunter (Wiley, 2005). In a graphical ANOVA, the points are scaled so that any levels that differ by more than exhibited in the distribution of the residuals are significantly different. Variance Components Analysis A Variance Components Analysis is most commonly used to determine the level at which variability is being introduced into a product. A typical experiment might select several batches, several samples from each batch, and then run replicates tests on each sample. The goal is to determine the relative percentages of the overall process variability that is being introduced at each level. General Linear Model The General Linear Models procedure is used whenever the above procedures are not appropriate. It can be used for models with both crossed and nested factors, models in which one or more of the variables is random rather than fixed, and when quantitative factors are to be combined with categorical ones. Designs that can be analyzed with the GLM procedure include partially nested designs, repeated measures experiments, split plots, and many others. For example, pages 536-540 of the book Design and Analysis of Experiments (sixth edition) by Douglas Montgomery (Wiley, 2005) contains an example of an experimental design with both crossed and nested factors. For that data, the GLM procedure produces several important tables, including estimates of the variance components for the random factors. Analysis of Variance for Assembly Time Source Sum of Squares Df Mean Square F-Ratio P-Value Model 243.7 23 10.59 4.54 0.0002 Residual 56.0 24 2.333 Total (Corr.) 299.7 47 Type III Sums of Squares Source Sum of Squares Df Mean Square F-Ratio P-Value Layout 4.083 1 4.083 0.34 0.5807 Operator(Layout) 71.92 6 11.99 2.18 0.1174 Fixture 82.79 2 41.4 7.55 0.0076 Layout*Fixture 19.04 2 9.521 1.74 0.2178 Fixture*Operator(Layout) 65.83 12 5.486 2.35 0.0360 Residual 56.0 24 2.333 Total (corrected) 299.7 47 Expected Mean Squares Source EMS Layout (6)+2.0(5)+6.0(2)+Q1 Operator(Layout) (6)+2.0(5)+6.0(2) Fixture (6)+2.0(5)+Q2 Layout*Fixture (6)+2.0(5)+Q3 Fixture*Operator(Layout) (6)+2.0(5) Residual (6) Variance Components Source Estimate Operator(Layout) 1.083 Fixture*Operator(Layout) 1.576 Residual 2.333
{"url":"http://www.statgraphics.com/analysis_of_variance.htm","timestamp":"2014-04-18T15:38:51Z","content_type":null,"content_length":"52254","record_id":"<urn:uuid:ecbf0048-1d53-48e4-aa7f-a49131608195>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Seahurst Statistics Tutor ...I am uniquely qualified to tutor geometry, with a PhD in Aeronautical and Astronautical Engineering from the University of Washington and more than 40 years of project experience in science and engineering. The coursework for my Ph.D. included an extensive amount of mathematics, including calcul... 21 Subjects: including statistics, chemistry, physics, English ...I also volunteered with the Pullman, WA YMCA after school tutoring for over a year while earning my degree at WSU. During this time I also volunteered with the YMCA Special Olympics program and am very comfortable working with special needs children.I am qualified to tutor Study Skills due to my... 25 Subjects: including statistics, chemistry, physics, geometry ...If your looking for a fun, creative, and EFFECTIVE way to improve your math skills- contact me for a tutoring session and you won't be disappointed. To give you an example of my creative methods of teaching - I once taught math in an inner city New York 2nd grade class room. I took a class of 15 students that didn't know how to multiply. 17 Subjects: including statistics, calculus, geometry, algebra 2 ...I became a tutorial instructor, because I am excited to see people learn. I was a tutor, through college. I always studied for depth, with the intent to teach what I learned. 62 Subjects: including statistics, English, chemistry, physics ...Have extensive IT industry experience and have been actively tutoring for 2 years. I excel in helping people learn to compute fast without or without calculators, and prepare for standardized tests. Handle all levels of math through undergraduate levels. 43 Subjects: including statistics, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Seahurst_statistics_tutors.php","timestamp":"2014-04-18T00:56:40Z","content_type":null,"content_length":"23839","record_id":"<urn:uuid:96fb052b-220a-4425-aa98-a1afdb21abae>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00651-ip-10-147-4-33.ec2.internal.warc.gz"}