content (string, lengths 86 to 994k)
meta (string, lengths 288 to 619)
Johnston, RI Math Tutor Find a Johnston, RI Math Tutor ...I hope that if you are someone who is having difficulty in either Algebra 1 or Geometry, you will consider me. I am dependable and can be easily reached. I have found in my own experience, both through attending classes and listening to students, that many math instructors, although very knowled... 2 Subjects: including algebra 1, geometry ...I have participated in numerous after-school tutoring programs and was trained by Princeton Review to tutor for the SATs. I hold a Rhode Island teacher certification in Elementary Education and Middle Level Mathematics. I have an undergraduate degree in Elementary Education with a Math Content Major. I have taught grades 6-8 for the past ten years for Providence Public Schools. 5 Subjects: including algebra 1, prealgebra, statistics, SAT math ...The format of the SAT Writing section is extremely specific. Graders of the SAT Writing section look specifically for the presence of several items on a rubric. Though there are many types of acceptable writing styles in the workforce, there is a specific type of writing that yields high scores on the SAT Writing section. 27 Subjects: including SAT math, algebra 1, algebra 2, reading ...I am a certified Rhode Island elementary school teacher. I have a Bachelor's degree in Elementary Education and a middle school endorsement in social studies. 15 Subjects: including algebra 1, geometry, reading, writing Hi! My name is Dan, and I love helping students to improve in Math and Science. I attended U.C. 27 Subjects: including logic, grammar, ACT Math, GED
{"url":"http://www.purplemath.com/Johnston_RI_Math_tutors.php","timestamp":"2014-04-18T23:54:29Z","content_type":null,"content_length":"23526","record_id":"<urn:uuid:d9e07458-fcdc-46ac-91a8-6ecaa3386c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 160 - Finite Mathematics Calendar of lecture material and exams. Study guides for exams can be found here. Check your grades throughout the semester and keep track of your progress in the course. You will need to use your NetID and password to access this. The course syllabus. There is also a shortened version [PDF] containing the highlights from the syllabus. Per-section listing of homework assignments. Group projects that require you to extend your understanding beyond the basic level of the textbook. There are worksheets available to help with some of the projects. Choose the Projects link and then scroll down to the appropriate section. In this series of assignments you will use a word processor to create a document containing formulas. Instructions and demonstration videos on how to use the Equation Editor are available. Find links to some of the software that is available to use as a Richland student. You must be a current student to view this page as some of the software is licensed only for our students. Keep track of your scores. Some instructions on using the TI-82 or TI-83 calculator. A tutorial on Linear Programming written for an ICTCM short course. This is a handout on decision theory. Only expected value decision theory is covered in the text, so this is to supplement that. This is a handout on how to perform Gauss-Jordan elimination using pivoting. Pivoting is a technique that we will also use for the simplex method. This is a document that explains the skills and proficiencies that you need coming into this course to be successful. If you don't have these skills, then please talk to the instructor about the best course of action or additional resources to help bring yourself up to speed. Information including office hours, education, interests, etc.
{"url":"https://people.richland.edu/james/summer10/m160/","timestamp":"2014-04-24T09:32:39Z","content_type":null,"content_length":"4387","record_id":"<urn:uuid:5d422a12-614d-4b0f-907c-07d0807e3354>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Poway Statistics Tutor Find a Poway Statistics Tutor ...I began tutoring math in high school, volunteering to assist an Algebra 1 class for 4 hours per week. Today I have hundreds of hours of experience, with the majority in Algebra and Statistics, and I would be comfortable well into college math. During the learning process, small knowledge gaps from past courses tend to reappear as roadblocks down the line. 13 Subjects: including statistics, calculus, geometry, algebra 1 ...As part of my work I have taught programming to college students, faculty, staff, and more junior programmers. I have been a professional programmer for over twenty years, using Linux and Unix computers. I have also been a system administrator at times, and have taught use of Unix computers at levels ranging from beginner to expert. 22 Subjects: including statistics, English, grammar, Java ...I recently retook Calculus 1-3 and received an A, and reviewed the subject for the GRE. I have run study groups and have tutored other math subjects. My major is currently in math, and I'm working to become a math teacher. My strength is breaking down what seem like big concepts and relating them to material students have seen before. 13 Subjects: including statistics, chemistry, calculus, geometry ...I have extensive knowledge of the command line, Linux applications, and server maintenance. I can teach shell scripting, job scheduling, and system maintenance as well. I was first exposed to MATLAB programming in my Masters in Complexity Science, where course work and research projects required it. 26 Subjects: including statistics, physics, calculus, algebra 1 ...I am well versed in English, Math, and Science and pride myself on explaining concepts in a clear, logical manner so that you can better understand the material. If I was able to overcome any difficulties grasping subject material, I will make sure that you will too! I am nothing if not persistent, challenging, and understanding. 43 Subjects: including statistics, chemistry, reading, calculus
{"url":"http://www.purplemath.com/Poway_statistics_tutors.php","timestamp":"2014-04-21T04:40:55Z","content_type":null,"content_length":"24056","record_id":"<urn:uuid:d9ecb806-8140-45df-b54e-ba14e3a8f3b7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
IFS Attractors Recent changes • 11th October 2002: added page for a 4-element tile associated with the 8th unit cubic Pisot number with complex conjugates. • 15th October 2002: added gallery of 3-element tiles associated with the 4th unit cubic Pisot number with complex conjugates. • 14th May 2005: corrected sign error on page for 8th unit cubic Pisot number with complex conjugates, and added 4 tiles with similar constructions Index, Glossary, etc What is an IFS • Algorithms • Cartesian Linear (Affine) IFSs • Piecewise Linear IFSs • Other Functions • 1st and higher order IFSs Design of the XIFS Program • Notation • Post-processing • Scaling • Rendering schemes Techniques for designing IFSs • composition techniques I (composition of IFSs) • composition techniques II, including metafigure technique • trans technique (in preparation) • grouped element technique • element removal (voiding) and addition (infill) techniques • additive 1st order technique • subtractive 1st order technique Rep-Tiles: A rep-tile is a plane figure which tiles the plane and can be divided into several smaller copies of itself. From this definition it can be seen that the simply connected attractor of any IFS with a uniform measure and a similarity dimension of 2 is a rep-tile, and all rep-tiles are attractors of IFSs. There are infinitely many rep-tiles, some of which are shown on this page. Other Tiling IFSs: There are several classes of tilings of the plane by attractors of IFSs which are not tilings by single rep-tiles. These include tilings by self-affine (but not self-similar) figures; tilings by a mixture of grouped element fractals derived from a single base figure; and tilings by figures in which not all copies are the same size. Non-Tiling Polymers • dimers • cyclomers • astromers • higher order Sierpinski triangles • Raisa fractals • Koch Curve and related figures Non-Self-Similar IFSs • polyamids • Falconer-Lammering triangles First Order IFSs • subtractive 1st order fractals Non-linear IFSs Other Topics © 2000, 2001, 2002 Stewart R. Hinsley
{"url":"http://www.meden.demon.co.uk/Fractals/fractals.html","timestamp":"2014-04-18T15:41:25Z","content_type":null,"content_length":"5385","record_id":"<urn:uuid:a212ed97-37cc-442d-a4a3-b2f1dab493ed>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Cantor set Consider a line segment of unit length. Remove its middle third. Now remove the middle thirds from the remaining two segments. Now remove the middle thirds from the remaining four segments. Now remove the middle thirds from the remaining eight segments. Now remove…well you get the idea. If you could continue this construction through infinitely many steps, what would you have left? What remains after infinitely many steps is a remarkable subset of the real numbers called the Cantor set, or “Cantor’s Dust.” At first glance one may reasonably wonder if there is anything left. After all, the lengths of the intervals we removed all add up to 1, exactly the length of the segment we started with: \[\begin{eqnarray*} \,\frac{1}{3} + \frac{2}{9} + \frac{4}{27} + \frac{8}{81} + \ldots & = & \frac{1}{3} \sum_{n=0}^\infty \frac{2^n}{3^n} \\ & & \\ & = & \frac{1}{3} \left( \frac{1}{1-\frac{2}{3}} \right) \\ & & \\ & = & \frac{1}{3} \times 3 \\ & & \\ & = & 1 \end{eqnarray*}\] Yet, remarkably, we can show that there are just as many “points” remaining as there were before we began! This startling fact is only one of the many surprising properties exhibited by the Cantor set. Before we begin to expose these properties, it is important to be quite precise about this construction. Let us agree that the segments we remove at each stage of the construction are open intervals. That is, in the first step we remove all of the points between \(\frac{1}{3}\) and \(\frac{2}{3}\), but leave the end points, and similarly for each successive stage. A little reflection will convince you that these endpoints we leave behind never get removed, since at each stage we are only removing parts that lie strictly between the endpoints left behind at the previous stage. Thus we see that our Cantor set cannot be empty, since it contains the points 0, 1, \(\frac{1}{3}\), \(\frac{2}{3}\), \(\frac{1}{9}\), \(\frac{2}{9}\), \(\frac{7}{9}\), \(\frac{8}{9}\), \(\frac{1}{27}\), and so on. But in fact there is much more that remains. To see this, recall that we may choose any number base to represent real numbers. That is, there is nothing necessary or even special about our common use of base ten; we can just as easily represent our numbers using base two, or base three, or any other base: \[ \begin{array}{rcll} \displaystyle\frac{1}{3} & = & 0.3333\ldots & \,\,\,\mbox{(base 10)} \\ & & & \\ \displaystyle\frac{1}{3} & = & 0.0222\ldots & \,\,\,\mbox{(base 3)} \\ & & & \\ \displaystyle\frac{1}{3} & = & 0.0101\ldots & \,\,\,\mbox{(base 2)} \end{array} \] When a number is written in base two it is said to be in binary notation, and when it is written in base three it is said to be in ternary notation. Let's focus on the ternary representations of the decimals between 0 and 1. Since, in base three, \(\frac{1}{3}\) is equal to 0.1, and \(\frac{2}{3}\) is equal to 0.2, we see that in the first stage of the construction (when we removed the middle third of the unit interval) we actually removed all of the real numbers whose ternary decimal representations have a 1 in the first decimal place, except for 0.1 itself. (Also, 0.1 is the same as 0.0222… in base three, so if we choose this representation we are removing all the ternary decimals with 1 in the first decimal place.) In the same way, the second stage of the construction removes all those ternary decimals that have a 1 in the second decimal place. The third stage removes those with a 1 in the third decimal place, and so on. (Convince yourself that this is so. 
Begin by noticing that \(\frac{1}{9}\) is equal to 0.01 and \(\frac{2}{9}\) is equal to 0.02 in base three.) Thus, after everything has been removed, the numbers that are left—that is, the numbers making up the Cantor set—are precisely those whose ternary decimal representations consist entirely of 0’s and 2’s. What numbers does this include, besides the ones already noted above? How many are there? Lots. Consider 1/4. This is not one of the endpoints (those all have powers of three in the denominator), but it is not hard to show that 1/4 is in the Cantor set: in ternary notation its decimal expansion is 0.0202…. Since it consists entirely of 0’s and 2’s it was never removed during the construction of the Cantor set, so it’s still there—somewhere. Asking how many numbers are left, as you can easily see, is to ask how many numbers can be represented in ternary notation with no 1 in any decimal place. But this must be as many as there are real numbers in the unit interval—for consider: we may represent all the real numbers between 0 and 1 in binary, and this is just every possible decimal with a 1 or a 0 in each decimal place. And there can be no more and no less of these than there are ternary decimals with a 0 or a 2 in each decimal place. They correspond in an obvious way. The conclusion is inescapable: once we remove all those intervals, the number of points remaining is no less than the number we started with. Let us examine that “correspondence” more closely. The idea is evident: for every number whose ternary decimal expansion consists entirely of 0’s and 2’s, match it with the corresponding number whose binary decimal expansion has 0’s in the same place, and 1’s wherever the ternary number had 2’s. Thus, 1/4 in ternary gets matched with 1/3 in binary: \[ \begin{array}{rcll} \displaystyle\frac{1}{4} & = & 0.020202\ldots & \,\,\,\mbox{(ternary)} \\ & & & \\ & \Updownarrow & \\ & & & \\ \displaystyle\frac{1}{3} & = & 0.010101\ldots & \,\,\,\mbox{(binary)} \end{array} \] This is evidently a function that is surjective. Moreover, it is continuous. (Elements that are “close” in the domain are mapped to elements that are “close” in the range.) We can extend this to a function, called the Cantor function, from the entire unit interval onto itself, by simply agreeing to let its value on the missing intervals be the constant values which equal the values of the original function on the endpoints of those intervals. For example, the Cantor function will map each point in the first middle-third interval (1/3, 2/3) to 1/2, the value of the original function on the points 1/3 and 2/3. (Recall that 1/3 has ternary representation 0.0222... and 2/3 has ternary representation 0.2, which map to 0.0111... and 0.1 respectively, and these both represent the number 1/2 in binary.) Here is the graph: The flat parts are the images of all of the “middle thirds,” and these are all connected by the images of the Cantor set itself. This construction has been called the “Devil's Staircase” since it has infinitely many “steps.” A few more analytical tidbits: Since each interval removed was open, and there were only countably many of them, their union is also open. Thus, the Cantor set (which is the complement of this union) is closed. That is, it contains all of its accumulation points. Moreover, every point of the Cantor set is an accumulation point, since within any neighborhood of a number whose ternary expansion consists entirely of 0’s and 2’s one may find other such numbers. 
Consequently, the Cantor set is a perfect set in the topologist’s sense. Finally, since any open neighborhood of any point of the Cantor set contains an open set which is disjoint from the Cantor set, we have that the Cantor set is nowhere dense. Altogether a remarkable set. Oh, it's also a fractal… Let us again visualize the construction: Although the Cantor set itself is to be thought of as the “final row” in this picture, the picture considered altogether is very suggestive. Notice that at each stage the picture is “doubled” into two copies which precisely resemble the whole, but which at each stage become two-thirds smaller. Together these properties—self-similarity at every scale over a uniform reduction of scale—qualify the Cantor set as a fractal with Hausdorff dimension given by: \[\frac{\log 2}{\log 3} = 0.630929753\ldots \] The Cantor set is an instructively simple example of a fractal, demonstrating that our geometrical intuitions about space (even such simple spaces as the unit interval) may draw us, by way of the mathematical imagination, into revelations of deep and even startling structure.
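The ternary-digit characterization and the digit-swapping map behind the Cantor function are easy to experiment with. Here is a minimal Python sketch (my own code, not part of the original article; it works with exact rationals and inspects only the first n ternary digits):

```python
from fractions import Fraction

def ternary_digits(x, n):
    """First n digits of the ternary expansion of x in [0, 1)."""
    x = Fraction(x)
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)      # next ternary digit (0, 1, or 2)
        digits.append(d)
        x -= d
    return digits

def in_cantor_set(x, n=40):
    """True when the expansion avoids the digit 1; a single 1 followed
    by nothing but zeros is allowed, since e.g. 0.1 = 0.0222..."""
    digits = ternary_digits(x, n)
    if 1 not in digits:
        return True
    i = digits.index(1)
    return all(d == 0 for d in digits[i + 1:])

def cantor_function(x, n=40):
    """Swap ternary 2s for binary 1s; stop at a digit 1, which marks a
    removed middle third, where the function is constant."""
    y, scale = Fraction(0), Fraction(1, 2)
    for d in ternary_digits(x, n):
        if d == 1:
            return y + scale
        y += scale * (d // 2)
        scale /= 2
    return y

print(in_cantor_set(Fraction(1, 4)))            # True: 1/4 = 0.0202... in ternary
print(float(cantor_function(Fraction(1, 4))))   # 0.333...: 1/4 maps to 1/3, as above
print(float(cantor_function(Fraction(1, 2))))   # 0.5: constant on the interval (1/3, 2/3)
```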
{"url":"http://platonicrealms.com/encyclopedia/Cantor-set","timestamp":"2014-04-16T18:56:25Z","content_type":null,"content_length":"71186","record_id":"<urn:uuid:0a583c4d-fdb0-474d-8783-1acbe929e359>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Array Formula and Charts

I am using Excel 2003. I have Spreadsheet#1 which has Names and Dates in columns as below.

A      B
Name | Date

I have Spreadsheet#2 which contains Date, Sales and Results for each Name as below, and is exported from a database in this format.

A      B       C         D      E       F
Name                     Name
Date | Sales | Results | Date | Sales | Results

In Spreadsheet#2, I have tried writing an array formula in the Results column to enter the value from the Sales column into the Results column if the Name in cell A1 equals the Name in column A of Spreadsheet#1 and the Date equals the Date in Spreadsheet#1. When the array formula returns a false result (name and date do not match), the formula returns zero as the answer. Is there a way I can have a blank result for false? The reason I need a blank is because I am graphing the data in Spreadsheet#2 and I don't want all the false results (the zeros) appearing on my chart. I have looked at several sites online and have not been able to find a solution. Any help would be appreciated.

Can you post the formula you have in the Results column? It might be as simple as adding an IF statement to the beginning of the formula. Something like =IF(array formula=0,"",array formula). This does double the calculation load, though, which may impact performance. There could be some simpler options, though.

Rastaman - appreciate your response. This is my formula. I even tried using NA() as the result for false rather than the double quotations, but then all that was returned in every cell was #N/A. If I could get the #N/A only when both conditions are not met, I'd be happy, because Excel does not chart #N/A as text or zero. Here is my formula that is in Spreadsheet#2's Results column, where B2 is the Name, A778 is the Date, and B778 is Sales. This was entered using Ctrl Shift Enter.

I don't quite understand your data, but here's what I was suggesting. Basically a repeat of your formula (I rewrote without the IF): first check if the result is zero; if so, set to blank, or you could replace the "" with NA(). Regarding your formula, is it giving you the correct result except for your need to show blank instead of zero? It doesn't look like it's summing anything, it's just multiplying the value in cell B778 times the number of times the name/date test is true. Don't you want B778 to instead be the range of the Results column? I am also assuming that Sheet#1 column A is Names, and Sheet#1 column I is dates.

Thanks for the suggestion, but unfortunately it still does not work. It makes the Results column appear blank when it should contain the value from the Sales column, and also on the chart it appears as a value of 0 because the formula is still there. Will try to explain more. The formula in the Results column of Spreadsheet#2 is checking for two conditions, the Date and the Name in Spreadsheet#1 to equal the Date and Name in Spreadsheet#2; if they are both the same (true), then enter the Sales value from Spreadsheet#2 into the Results column for that date, and if the Name and/or Date do not equal the Name and Date from Spreadsheet#1, leave the cell blank or put #N/A. The problem is, using the double quotations leaves zeros in the Results column whether you use them as the result for true or false. If I use the NA(), the formula doesn't return the Sales value into the Results column. 
Does my example below look anything like your data? I've tried to recreate it based on your description. In this example you enter the name into cell B2 of Sheet#2, and the values in the Results column update to show the Sales if the name and date match Sheet#1. If there is no match, the cell in Results column C is blank.

Sheet#2:
     A          B      C
2    name       jane
3    date       sales  results
4    1/1/2010   12     12
5    1/2/2010   10
6    1/3/2010   14     14

Spreadsheet Formulas (formula arrays: produce the enclosing { } by entering the formula with CTRL+SHIFT+ENTER):

C4: {=IF(SUM(($B$2='[Spreadsheet#1.xls]Sheet#1'!$A$7:$A$1640)*(A4='[Spreadsheet#1.xls]Sheet#1'!$I$7:$I$1640)*B4)=0,"",SUM(($B$2='[Spreadsheet#1.xls]Sheet#1'!$A$7:$A$1640)*(A4='[Spreadsheet#1.xls]Sheet#1'!$I$7:$I$1640)*B4))}

C5 and C6 are the same formula with A5/B5 and A6/B6 in place of A4/B4.

Sheet#1:
     A          ...    I
6    Name              Date
7    joe               1/1/2010
8    tom               1/1/2010
9    jane              1/1/2010
10   fred              1/1/2010
11   joe               1/2/2010
12   tom               1/2/2010
13   fred              1/2/2010
14   joe               1/3/2010
15   jane              1/3/2010
16   fred              1/3/2010

Rick/Rastaman - Thank you! It does appear to be working, which is great! I guess the only problem still is the chart in Sheet#2. It will put markers on the chart with a value of zero for all the blanks, and the only way to get around this is to use NA() instead of "". I couldn't get this to work before, but will try again using your formula format. Hope it works!

Rastaman - I modified the formula you gave me to have NA() instead of "" and everything works great!! THANK YOU!!!

Rastaman, sorry to bug you again, but I found a glitch... using your example, if the sales value is zero, this formula won't copy over the zero into the results column. Any idea on how to add that factor into the formula?

Hi PrincessD. No worries, I think the formula below will work. I removed the '*B4' in the IF test; this isn't necessary and was causing the test to think there was no match for the name/date. I included the NA() in this version. Entered with control-shift-enter, copied down over the range.
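The final formula itself did not survive in the page as archived, but based on that description (drop the *B4 factor from the IF test and keep NA() for the no-match case) it would presumably be:

{=IF(SUM(($B$2='[Spreadsheet#1.xls]Sheet#1'!$A$7:$A$1640)*(A4='[Spreadsheet#1.xls]Sheet#1'!$I$7:$I$1640))=0,NA(),SUM(($B$2='[Spreadsheet#1.xls]Sheet#1'!$A$7:$A$1640)*(A4='[Spreadsheet#1.xls]Sheet#1'!$I$7:$I$1640)*B4))}

This way a zero in the Sales column no longer trips the test: the IF only checks whether the name/date pair matches at all, and the multiplication by B4 happens afterwards, so a genuine sales value of 0 is carried over while non-matching rows become #N/A and are skipped by the chart.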
{"url":"http://www.mrexcel.com/forum/excel-questions/512631-array-formula-charts.html","timestamp":"2014-04-21T10:07:52Z","content_type":null,"content_length":"97356","record_id":"<urn:uuid:c9160e08-6c4c-408e-b272-aa6c77471040>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with one question please!!
{"url":"http://openstudy.com/updates/51a67176e4b08f9e59e1f7df","timestamp":"2014-04-19T10:24:41Z","content_type":null,"content_length":"51017","record_id":"<urn:uuid:25e74c35-c16d-4e17-af8c-fbaf0215c948>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
My secret project is now ready to be seen. It's Codeville, a version control system. One thing no longer nagging at my brain, a million left to go. Space travel is dangerous. Please take careful consideration before boarding any space vehicles. The game is called Straights and Queers (this potentially uncomfortable metaphor is by far the easiest way of understanding the rules, so please bear with me). It's played on the following board -

       __
    __/  \__
 __/  \__/  \__
/  \__/  \__/  \
\__/  \__/  \__/
/  \__/  \__/  \
\__/  \__/  \__/
/  \__/  \__/  \
\__/  \__/  \__/

There are two players, the straights and the queers. Each player has two pieces, one male and one female. On each turn, both players move any pieces they have on the board and place any pieces which aren't on the board, which happens at the beginning of the game or when a piece is captured. Both players write down what their moves are without seeing the opponent's moves, then reveal what the moves are. Pieces move to any adjacent hexagon. They cannot move to a hexagon which another piece is currently on, even if that piece belongs to the same side. Pieces can be placed on any empty hexagon, but may not be placed onto a hexagon currently containing a piece, even one belonging to the same side. A player may not move both of their pieces onto the same hexagon. When pieces move, they must move to a different spot; they cannot remain in the same place. If two pieces wind up on the same spot, then a capture happens. If the two pieces are of the same gender, then the queer captures; otherwise the straight captures. Additionally, in the rare case where a piece winds up in a corner with all three adjacent spots occupied and hence no legal turn on the next move, it is captured. Captured pieces are returned to the side they belong to, to be placed on the next move. That's all the rules. I think this game is made interesting by the simultaneous play despite the extraordinarily small board. Perhaps I went too far on board smallness, rendering the game brute forceable. However, there are classic games involving some chance and secret information which have extremely simple positions, such as Yahtzee and Texas hold'em poker. Texas hold'em turns out to be completely out of brute force range, but Yahtzee is eminently brute forceable in the solo case optimizing average score, and on paper looks just barely solveable for the two-player case trying to optimize chances of winning. I'm curious to see if the entrants into the simultaneous play competition generally have very limited board positions. I'd also like to know if anyone has actually set about brute forcing Yahtzee. If anyone knows of such efforts please tell me. The CodeCon 2003 program is now announced, and registration is open. CodeCon 2003 will be February 22-24, noon-6pm at Club NV in San Francisco, California. All presentations will be given by one of the active developers, and accompanied by a functional demo. Lance Fortnow links to some interesting commentary on P vs. NP. I found out today the standard terminology for my favorite conjecture - Conjecture: The circuit complexity of the k-sum problem is at least n ** k. This directly implies that the 4SUM problem is quadratic. I believe that the 3SUM problem is also quadratic (as does everybody), but that the reasons for that are much deeper and more complex. It also implies P != NP, so we can rest assured that no one has proven it yet. 
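To make the capture rule above concrete, here it is as a few lines of Python (my own encoding: a piece is a (side, gender) pair, and the movement rules guarantee that two pieces colliding on a hexagon always belong to opposite sides):

```python
def resolve_collision(piece_a, piece_b):
    """Return the piece that captures when two pieces land on one hexagon.
    Same gender: the queer piece captures; opposite genders: the straight does."""
    winning_side = "queer" if piece_a[1] == piece_b[1] else "straight"
    return piece_a if piece_a[0] == winning_side else piece_b

print(resolve_collision(("queer", "male"), ("straight", "male")))
# -> ('queer', 'male'): same gender, so the queer piece captures
```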
Given the obviousness of this conjecture I'm sure circuit complexity people have already spent considerable time on it and just haven't gotten anywhere, although curiously googling for '3sum' and 'circuit complexity' doesn't turn up anyone else musing on the subject. Here is a good list of open problems to spend time on. I haven't posted much lately because I've been spending my blogging time on my new secret project, which will be unveiled when I get a proof of concept working. XML people don't seem to realize that when I say I'm on the de facto standards committee I'm not kidding. In the case of an auction, any price below the second price wouldn't be Pareto efficient, because the seller could sell the item to someone else at a higher price and both the seller and the new buyer would be happier. Pareto efficiency isn't unique in this case, because the high bidder purchasing at any price between the first and second bids is Pareto efficient. (Yes it's possible to make more money selling on eBay by setting your minimum bid increment to a humongous value. At some point I'll get around to writing about eBay at length.) The reason to go with the second price instead of some amount more (which is how eBay does things, irritatingly enough) is to make it less gameable; however, some gameability remains. Specifically, the seller could inflate the minimum price to anywhere between the first and second price and be better off. This is impractical under many circumstances, but is very important with only two (or one!) bidder. Note that gameability is only a problem in situations where one is selecting between different Pareto efficient solutions, so there's no weakness to it as a technique here. In the case of stable marriage, there is a straightforward algorithm for finding all stable solutions based on Pareto efficiency. For each person A, if A's first place choice is B, then for every C which appears below A on B's list of preferences, scratch B off C's list and C off B's list (this is justified because if B were paired with C then we could change the pairings to have B paired with A, and both A and B would be happier). Repeat until you can't simplify any more. At this point, participants will be in cycles, in which B is first on A's list, C is first on B's list, D is first on C's list, etc, until we get to A being first on somebody's list. Because of the gender difference, these cycles will always be of even length, so for each cycle we have to decide whether the males get their first choice or the females get their first choice. Note that this is a choice between different Pareto efficient configurations; the appropriateness of Pareto efficiency as a criterion is not at issue. In practice, even on random data, this technique does such a good job of pairing up people that there's hardly any arbitrariness in final pairings. In the medical example a study was done and found that only a minuscule number of students would be assigned elsewhere under students' choice, so the algorithm was switched to that for good PR. A bit off-topic, I'd like to point out that it's utterly stupid that 'the match' was so controversial for so long. I at one point solved stable marriage on my own because it's an interesting problem, coming up with the above algorithm, and after a little bit of testing realized how little difference male versus female choice makes. That was only a few days' worth of work. Let this be a lesson to everyone that if there's a big controversy which a simple study can shed a lot of light on, do the damn study. 
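Here's a sketch of that reduction in Python (my own code; it assumes complete, mutually consistent preference lists, i.e. x appears on y's list exactly when y appears on x's):

```python
def reduce_preferences(prefs):
    """prefs maps each person to a preference list, most preferred first.
    Repeatedly applies the rule above: if B is A's current first choice,
    scratch B and everyone B ranks below A off each other's lists."""
    prefs = {p: list(lst) for p, lst in prefs.items()}
    changed = True
    while changed:
        changed = False
        for a in prefs:
            if not prefs[a]:
                continue
            b = prefs[a][0]                # A's first choice
            rank = prefs[b].index(a)
            below = prefs[b][rank + 1:]    # everyone B likes less than A
            for c in below:
                prefs[c].remove(b)         # scratch B off C's list
            del prefs[b][rank + 1:]        # and C off B's list
            if below:
                changed = True
    return prefs

people = {"A": ["X", "Y"], "B": ["X", "Y"],
          "X": ["A", "B"], "Y": ["A", "B"]}
print(reduce_preferences(people))
# -> {'A': ['X'], 'B': ['Y'], 'X': ['A'], 'Y': ['B']}: a unique stable pairing
```

When the lists don't collapse down to single entries, the surviving first choices form the cycles described above, and each cycle is resolved by choosing which side gets its first choice.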
The stable roommates problem doesn't always have a Pareto efficient solution, for example there might be a loser who's put last on everyone else's list of priorities, but the above technique can be used to make an algorithm which works very well. Note that stable marriage is just a special case of stable roommates where all the males rank all females above all males and all females rank all males above all females. A much simpler and more justifiable approach is to try to reach Pareto efficiency, which is a situation in which no two individuals can make an exchange and both be happier. This cleverly avoids making subjective comparisons of different peoples's worth, and also yields a straightforward algorithm for maximization. The success of capitalism can be viewed as a testament to the robustness of greedy algorithms. robocoder: I suggest setting up CVS, a mailing list, syncmail, and a todo list. Picking a bug tracking tool to begin with is like starting the construction of a bridge by digging a mass grave for everyone who will die in the building process. There's a very difficult puzzle in the latest Scientific American, it goes as follows: Three of the nine members of the president's cabinet are leaks. If the president gives a tip to some subset of his advisors and all three interlopers are in that subset, then that tip will get leaked to the press. The president has decided to determine who the leaks are by selectively giving tips to advisors and seeing which ones leak. How can this be done by giving each tip to three or four people, having no more than two leaked tips, and using no more than 25 tips total? There's a solution on the web site, although the one I figured out is very different. Here's my solution. First, note that if a tip to four people leaks, the leaks can be found using only three more tips, giving each of them to three of the four. Arrange eight of the nine advisors on the vertices of a 2x2x2 cube. Test each of the 12 subsets which are coplanar, plus the 2 subsets which are all the same color if they're colored like a checkerboard. If any of those leak, we're done. If not, we've determined that the odd one out must be one of the leaks. Next, arrange the eight we previously had in a cube formation into a line, and test all 8 configurations of the form {the odd one out, x, x+1, x+3} (modulo 8). If none of those hit, then the two other leaks must be of the form x and x + 4, which there are four possibilities for. Three of them can be tried directly and if none leak then the last one can be inferred. 
Keep up with the latest Advogato features by reading the Advogato status blog. If you're a C programmer with some spare time, take a look at the mod_virgule project page and help us with one of the tasks on the ToDo list!
{"url":"http://www.advogato.org/person/Bram/diary.html?start=56","timestamp":"2014-04-20T18:34:17Z","content_type":null,"content_length":"20376","record_id":"<urn:uuid:c7fd5fff-a473-4f00-99df-f8d80868ac4e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Derivation for surface area of revolution Replies: 7 Last Post: May 25, 2012 8:41 PM

Derivation for surface area of revolution Posted: May 22, 2012 5:23 PM

Deriving the integral formula for the surface area of a curve revolved around an axis often raises the following question (and suppose y=f(x) is revolved around the x-axis): "Why do we multiply 2*pi*f(x) by the arc length differential ds instead of the differential dx?" This is a perfectly reasonable question since for volumes of revolution one can multiply a cross section area A(x) by dx, and then integrate A(x)*dx to get the volume. No need for worrying about the curvature that ds captures. The plain old dx differential does just fine. Almost every modern calculus book out there bypasses this issue, which I find rather shameless. I've had a look at Anton, Thomas, Stewart, and a host of other standard texts and they say nothing about it. Any explanations I've seen amount to "well, if you use dx instead of ds then you get the wrong answer; calculating surface area is different than volume." Who could dispute that standard answer? Using dx instead of ds just doesn't recover the proper formulas for surface areas of simple surfaces. Fine. But doesn't anyone find that "proof is in the pudding" explanation dissatisfying? Isn't there a better explanation than this? Why does adding up (integrating) A(x)*dx work for volumes but adding up 2*pi*f(x)*dx doesn't work for surface areas? Is there a more intuitive explanation?
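For reference, the two computations being contrasted are: surface area S = integral of 2*pi*f(x) ds = integral of 2*pi*f(x)*sqrt(1 + f'(x)^2) dx, versus the tempting but wrong integral of 2*pi*f(x) dx; for volume, V = integral of A(x) dx = integral of pi*f(x)^2 dx does work. One way to frame the difference: on a slab of width dx, replacing the slice of the solid with a flat cylinder changes the volume by a higher-order amount, one that vanishes relative to A(x)*dx as dx goes to 0, but replacing the slanted band of surface with a vertical cylindrical band changes the area by the full factor sqrt(1 + f'(x)^2), an error whose relative size does not shrink no matter how fine the partition.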
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2385389&messageID=7826944","timestamp":"2014-04-17T01:10:44Z","content_type":null,"content_length":"25610","record_id":"<urn:uuid:5160ca76-8c8e-40a6-bbeb-64e40583caec>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Techniques for Engineers and Scientists Author(s): Larry C Andrews; Ronald L Phillips Published: 2003 DOI: 10.1117/3.467443 eISBN: 9780819478290 | Print ISBN10: 0819445061 As technology continues to move ahead, modern engineers and scientists are frequently faced with difficult mathematical problems that require an ever greater understanding of advanced concepts. Designed as a self-study text for practicing engineers and scientists, as well as a useful reference, the book takes the reader from ordinary differential equations to more sophisticated mathematics: Fourier analysis, vector and tensor analysis, complex variables, partial differential equations, and random processes. The emphasis is on the use of mathematical tools and techniques. The general exposition and choice of topics appeals to a wide audience of applied practitioners. Modern engineers and scientists are frequently faced with difficult mathematical problems to solve. As technology continues to move ahead, some of these problems will require a greater understanding of advanced mathematical concepts than ever before. Unfortunately, the mathematical training in many engineering and science undergraduate university programs ends with an introductory course in differential equations. Even in those engineering and science curriculums that require some mathematics beyond differential equations, the required advanced mathematics courses often do not make a clear connection between abstract mathematical concepts and practical engineering applications. This mathematics book is designed as a self-study text for practicing engineers and scientists, and as a useful reference source to complement more comprehensive publications. In particular, the text might serve as a supplemental text for certain undergraduate or graduate mathematics courses designed primarily for engineers and/or scientists. It takes the reader from ordinary differential equations to more sophisticated mathematics: Fourier analysis, vector and tensor analysis, complex variables, partial differential equations, and random processes. The assumed formal training of the reader is at the undergraduate or beginning graduate level with possible extended experience on the job. We present the exposition in a way that is intended to bridge the gap between the formal education of the practitioner and his/her experience. The emphasis in this text is on the use of mathematical tools and techniques. In that regard it should be useful to those who have little or no experience in the subjects, but should also provide a useful review for readers with some background in the various topics. Some special features of the text that may be of interest to readers include the following: • Historical comments appear in a box at the beginning of many chapters to identify some of the major contributors to the subject. • The most important equations in each section are enclosed in a box to help the reader identify key results. • Boxes are also used to enclose important lists of identities and sometimes to summarize special results. • Numbered examples are given in every chapter, each of which appears between horizontal lines. • Exercise sets are included at the end of each chapter. Most of the problems in these exercise sets have answers provided. • Remark boxes are occasionally introduced to provide some additional comments about a given point. 
• At the end of each chapter is a "Suggested Reading" section which contains a brief list of textbooks that generally provide a deeper treatment of the mathematical concepts. • A more comprehensive numbered set of references is also provided at the end of the text, to which the reader is directed throughout the text, e.g., (see [10]). • We have included a Symbols and Notation page for easy reference to some of the acronyms and special symbols as well as a list of Special Function notation (at the end of Chapter 2). The text is composed of 15 chapters, each of which is presented independently of other chapters as much as possible. Thus, the particular ordering of the chapters is not necessarily crucial to the user with few exceptions. We begin Chapter 1 with a review of ordinary differential equations, concentrating on second-order linear equations. Equations of this type arise in simple mechanical oscillating systems and in the analysis of electric circuits. Special functions such as the gamma function, orthogonal polynomials, Bessel functions, and hypergeometric functions are introduced in Chapter 2. Our presentation also includes useful engineering functions like the step function, rectangle function, and delta (impulse) function. An introduction to matrix methods and linear vector spaces is presented in Chapter 3, the ideas of which are used repeatedly throughout the text. Chapters 4 and 5 are devoted to vector and tensor analysis, respectively. Vectors are used in the study of electromagnetic theory and to describe the motion of an object moving through space. Tensors are useful in studies of continuum mechanics like elasticity, and in describing various properties of anisotropic materials like crystals. In Chapters 6 and 7 we present a fairly detailed discussion of analytic functions of a complex variable. The Cauchy-Riemann equations are developed in Chapter 6 along with the mapping properties associated with analytic functions. The Laurent series representation of complex functions and the residue calculus presented in Chapter 7 are powerful tools that can be used in a variety of applications, such as the evaluation of nonelementary integrals associated with various integral transforms. Fourier series and eigenvalue problems are discussed in Chapter 8, followed by an introduction to the Fourier transform in Chapter 9. Generally speaking, the Fourier series representation is useful in describing spectral properties of power signals, whereas the Fourier transform is used in the same fashion for energy signals. However, through the development of formal properties associated with the impulse function, the Fourier transform can also be used for power signals. Other integral transforms are discussed in Chapter 10: the Laplace transform associated with initial value problems, the Hankel transform for circularly symmetric functions, and the Mellin transform for more specialized applications. A brief discussion of discrete transforms ends this chapter. We present some of the classical problems associated with the calculus of variations in Chapter 11, including the famous brachistochrone problem which is similar to Fermat's principle for light. In Chapter 12 we give an introductory treatment of partial differential equations, concentrating primarily on the separation of variables method and transform methods applied to the heat equation, wave equation, and Laplace's equation. 
Basic probability theory is introduced in Chapter 13, followed by a similar treatment of random processes in Chapter 14. The theory of random processes is essential to the treatment of random noise as found, for example, in the study of statistical communication systems. Chapter 15 is a collection of applications that involve a number of the mathematical techniques introduced in the first 14 chapters. Some additional applications are also presented throughout the text in the various chapters. In addition to the classical mathematical topics mentioned above, we also include a cursory introduction to some more specialized areas of mathematics that are of growing interest to engineers and scientists. These other topics include the fractional Fourier transform (Chapter 9), wavelets (Chapter 9), and the Walsh transform (Chapter 10). Except for Chapter 15, each chapter is a condensed version of a subject ordinarily expanded to cover an entire textbook. Consequently, the material found here is necessarily less comprehensive, and also generally less formal (i.e., it is presented in somewhat of a tutorial style). We discuss the main ideas that we feel are essential to each chapter topic and try to relate the mathematical techniques to a variety of applications, many of which are commonly associated with electrical and optical engineering, e.g., communications, imaging, radar, antennas, and optics, among others. Nonetheless, we believe the general exposition and choice of topics should appeal to a wide audience of applied practitioners. Last, we wish to thank our reviewers Christopher Groves-Kirkby and Andrew Tescher for their careful review of the manuscript and helpful suggestions. Larry C. Andrews Ronald L. Phillips Orlando, Florida (USA) © 2003 Society of Photo-Optical Instrumentation Engineers
{"url":"http://ebooks.spiedigitallibrary.org/book.aspx?bookid=213","timestamp":"2014-04-21T12:10:16Z","content_type":null,"content_length":"90148","record_id":"<urn:uuid:a177cda6-a723-43e1-ba34-b6671c6b7c90>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Watkinsville Math Tutor Find a Watkinsville Math Tutor ...I have expert-level knowledge and teaching experience in most Microsoft Office components, holding a MOUS (Microsoft Office User Specialist) certification. I have taught computer networking classes including certification preparation for MCSE and MCA certifications. I have an A+ and Net+ certification from CompTIA. 28 Subjects: including algebra 2, physics, precalculus, statistics ...I can help high school and college students who need help with algebra, geometry, pre-calculus and calculus. I can make mathematics easier than you think and help you make sense of it. I am a patient, effective, and knowledgeable math tutor. 8 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have worked with students as young as four years and as old as adults in their fifties. I love to help students with their studies and take great satisfaction in creating success for my students. I do require 24-hour notice of cancellation; however, I will not request payment from a student who is not satisfied with my services. 33 Subjects: including algebra 1, ACT Math, geometry, prealgebra ...I have taught one section of Precalculus and two sections of Calculus for non-STEM majors at UGA. In addition, I have taught two sections of Elementary Statistics and one section of Precalculus at Piedmont College. It appears to me that the reason why most people have trouble with math is the fear fo... 20 Subjects: including prealgebra, differential equations, linear algebra, logic ...I'm currently a 4th year at the University of Georgia, and I work for America Reads as the math tutor. I very much enjoy everything that comes with tutoring, and I hope to teach college for a living some day. What I've had the most experience tutoring is College Algebra and Calculus. 20 Subjects: including calculus, geometry, GRE, reading
{"url":"http://www.purplemath.com/Watkinsville_Math_tutors.php","timestamp":"2014-04-16T07:52:25Z","content_type":null,"content_length":"23908","record_id":"<urn:uuid:fc6ab870-ae70-430b-bdab-9704658bf75d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Trying to script the General Beta 2 CDF in MATLAB

July 17th 2011, 04:11 PM

I'm having trouble scripting a function in MATLAB. The function is the General Beta of the Second Kind cumulative distribution function:

$\frac{\left(\frac{(x/b)^a}{1+(x/b)^a}\right)^p}{p\,B(p,q)} \; {}_2F_1\!\left(p,\,1-q;\;p+1;\;\frac{(x/b)^a}{1+(x/b)^a}\right)$

It involves the beta function and a hypergeometric series as subroutines in the code. p, a, b, q are parameters; x is the independent variable (a 12x1 column vector in my case). The MATLAB script I've written is:

CDFGB2 = (term.^p/(p*beta(p,q)))*F

The output I get is a 12x12 matrix when I should just get a 12x1 column vector. If anyone has some insight I'd appreciate it.

July 17th 2011, 06:30 PM
Re: Trying to script the General Beta 2 CDF in MATLAB

CDFGB2 = (term.^p/(p*beta(p,q))).*F

July 18th 2011, 11:55 AM
Re: Trying to script the General Beta 2 CDF in MATLAB

I like your style CB. You're like the Zorro of math.
{"url":"http://mathhelpforum.com/math-software/184725-trying-script-general-beta-2-cdf-matlab-print.html","timestamp":"2014-04-18T06:05:53Z","content_type":null,"content_length":"7290","record_id":"<urn:uuid:8e81cacf-923c-483d-b510-a3a72fc8abba>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
The counter-propagating Rossby-wave perspective on baroclinic instability. I: Mathematical basis

Heifetz, E., Bishop, C. H., Hoskins, B. J. and Methven, J. (2004) The counter-propagating Rossby-wave perspective on baroclinic instability. I: Mathematical basis. Quarterly Journal of the Royal Meteorological Society, 130 (596). pp. 211-231. ISSN 1477-870X (Part A) DOI: 10.1002/qj.200413059610

It is shown that Bretherton's view of baroclinic instability as the interaction of two counter-propagating Rossby waves (CRWs) can be extended to a general zonal flow and to a general dynamical system based on material conservation of potential vorticity (PV). The two CRWs have zero tilt with both altitude and latitude and are constructed from a pair of growing and decaying normal modes. One CRW has generally large amplitude in regions of positive meridional PV gradient and propagates westwards relative to the flow in such regions. Conversely, the other CRW has large amplitude in regions of negative PV gradient and propagates eastward relative to the zonal flow there. Two methods of construction are described. In the first, more heuristic, method a ‘home-base’ is chosen for each CRW and the other CRW is defined to have zero PV there. Consideration of the PV equation at the two home-bases gives ‘CRW equations’ quantifying the evolution of the amplitudes and phases of both CRWs. They involve only three coefficients describing the mutual interaction of the waves and their self-propagation speeds. These coefficients relate to PV anomalies formed by meridional fluid displacements and the wind induced by these anomalies at the home-bases. In the second method, the CRWs are defined by orthogonality constraints with respect to wave activity and energy growth, avoiding the subjective choice of home-bases. Using these constraints, the same form of CRW equations are obtained from global integrals of the PV equation, but the three coefficients are global integrals that are not so readily described by ‘PV-thinking’ arguments. Each CRW could not continue to exist alone, but together they can describe the time development of any flow whose initial conditions can be described by the pair of growing and decaying normal modes, including the possibility of a super-modal growth rate for a short period. A phase-locking configuration (and normal-mode growth) is possible only if the PV gradient takes opposite signs and the mean zonal wind and the PV gradient are positively correlated in the two distinct regions where the wave activity of each CRW is concentrated. These are easily interpreted local versions of the integral conditions for instability given by Charney and Stern and by Fjørtoft.
{"url":"http://centaur.reading.ac.uk/88/","timestamp":"2014-04-19T12:07:45Z","content_type":null,"content_length":"32249","record_id":"<urn:uuid:f8646e45-a119-407c-ae07-93cf24d45dce>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
CBSE Class IX - Mathematics - Number System 1.1 online test

Which is not a Number System?

On a number line, if you start from zero and go towards the right, which will you find? All positive numbers / All negative numbers / All zeros / None of the above

Which is not a Natural Number?

Which statement(s) is/are correct? A) All whole numbers are Natural Numbers B) If you take zero out of the Whole Numbers, the remaining numbers form the Natural Numbers C) All Natural Numbers are Whole Numbers D) Zero is a Natural Number
A and B / B and C / C and D / A and D / Only A / A and D

'Z' denotes which set of numbers?
Rational Numbers

If a number is represented by p/q, then which condition below makes it a Rational Number? p is not equal to 0 / q is not equal to 0 / p and q both are not equal to 0

How many rational numbers can be found between two whole numbers?

Which statement(s) is/are correct? A) Zero is a common number between Natural Numbers and Whole Numbers B) All rational numbers are Natural Numbers C) All whole numbers are Integers D) All Natural Numbers are Rational Numbers
A and B / A, B and C / A, C and D / All are correct / B, C and D / Only D

Which is not a number between 4 and 5?

Negative numbers are part of which number system?
N and Q / Z and W / W and Q / Q and Z

Answer all the questions. Send an email to teachorissa@gmail.com for your feedback and doubts.

For the questions like "which of the following are correct", the correct answer was not in the options. 941 days 8 hours 21 minutes ago
Let me know the question you are referring to. 940 days 23 hours 51 minutes ago
its very easy test 945 days 5 hours 27 minutes ago
class 9 1.1 test:- ANS. of Q.4 987 days 13 hours 40 minutes ago
Answer is B and C. Do u have any other thought? 987 days 12 hours 10 minutes ago
'Damn maths is really confusing! >:\ 1018 days 23 hours 34 minutes ago
Maths is logical and tricky.. Attend some of my classes if you have any doubts. 1007 days 3 hours 24 minutes ago
For Question 4, B and C is the correct answer. For Question 8, B, C and D is the correct answer. 1389 days 1 hours 2 minutes ago
sir but 1/2 isn't a natural number 1106 days 10 hours 14 minutes ago
FOR QUESTION 8, B is incorrect, please check your ans. again 987 days 13 hours 32 minutes ago
q8 ???? all options are wrong!!!! 280 days 4 hours 14 minutes ago
no, not true and false are wrong, so tell me what do u mean by that 1389 days 2 hours 32 minutes ago
0 is not a natural no......... which of the following r correct has got a wrong ans..... 1501 days 10 hours 42 minutes ago
{"url":"http://www.wiziq.com/online-tests/3525-cbse-class-ix-mathematics-number-system-1-1","timestamp":"2014-04-20T15:59:46Z","content_type":null,"content_length":"148727","record_id":"<urn:uuid:9d9202da-afea-4d91-9185-90d129629f8f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem with equation of independent events

Please explain the question posted in the file attached.

CaptainBlack

I know what A intersection B means, but what I don't understand is what the statement |A and B|/|B| has to do with the occurrence of two events. I mean, the author said that "The probability that A occurs is P(A) = |A|/6 = 3/6 = 1/2, while presuming B occurs, the probability that A occurs is |A and B|/|B|." What I am trying to understand is how the author formed this equation for this situation. I hope my question is clear.

The probability of A happening given that B has occurred is, in symbols, $P(A|B) = \frac{{P\left( {A \cap B} \right)}}{{P(B)}}$. That explains the intersection in your question. Thus if A and B are independent, that means that $P(A|B) = \frac{{P\left( {A \cap B} \right)}}{{P(B)}} = P(A)\quad \Rightarrow \quad P\left( {A \cap B} \right) = P(A)P(B)$.
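For what it's worth, here is a minimal Python sketch of the counting argument in the quoted passage, using the die example with P(A) = 3/6; the event B below is just an assumed example, not taken from the attachment:

    from fractions import Fraction

    # One roll of a fair die: the sample space.
    omega = {1, 2, 3, 4, 5, 6}
    A = {2, 4, 6}   # "roll is even", so P(A) = 3/6 = 1/2 as in the quoted text
    B = {4, 5, 6}   # an assumed event, purely for illustration

    def prob(event):
        # Probability by counting equally likely outcomes.
        return Fraction(len(event), len(omega))

    # Conditional probability by direct counting: |A and B| / |B|.
    p_A_given_B = Fraction(len(A & B), len(B))

    # The same number via the definition P(A|B) = P(A n B) / P(B).
    assert p_A_given_B == prob(A & B) / prob(B)

    print(p_A_given_B)        # 2/3 here, so this A and B are NOT independent
    print(prob(A) * prob(B))  # independence would need P(A n B) equal to this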
{"url":"http://mathhelpforum.com/statistics/16071-problem-equation-independent-events-print.html","timestamp":"2014-04-18T00:27:07Z","content_type":null,"content_length":"8641","record_id":"<urn:uuid:abe7cafe-6e57-4fd3-b948-0ea9710cff40>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Q: How far away is the edge of the universe? Physicist: If you ever hear a physicist talking about "the edge of the universe", what they probably mean is "the edge of the visible universe". The oldest light (from the most distant sources) is around 15 billion years old. Through a detailed and very careful study of cosmic inflation we can estimate that those sources should now be about 45 billion light years away. So if you define the size of the visible universe as the present physical distance (in terms of the "co-moving coordinates" which are stationary with respect to the cosmic microwave background) to the farthest things we can see, then the edge of the visible universe is 45 billion light years away (give or take). However, that "edge" doesn't mean a lot. It's essentially a horizon, in that it only determines how far you can see. Of course, if you wanted to know "how far can we see?" you would have asked that. The picture of the universe that most people have is of a universe enclosed in some kind of bubble. That is, the picture that most people have is of a universe that has an edge. However, there are some big problems with assuming that there's a boundary out there. If you decide that space suddenly ends at an edge, then you have to figure out how particles would interact with it. Obviously they can't keep going, but bouncing off or stopping both violate conservation of momentum, and disappearing violates conservation of mass/energy. Moreover, if you say that spacetime has a definite edge at a definite place then you're messing with relativistic equivalence (all of physics works the same in all positions and velocities). It may seem easy to just put an asterisk on relativity and say that there's an exception where the edge of the universe is concerned, but the math isn't nearly as forgiving. The nicest theories today suggest that there is no boundary to the universe at all. This leads to several options:

1) A negatively curved, infinite universe. This option has been ruled out by a study of the distribution of the Cosmic Microwave Background.

2) A flat (non-curved), infinite universe. The measurements so far (devotees may already know how to do these measurements) show that space is flat, or very very nearly flat. However, infinite universes make everyone nervous. An infinite universe will repeat everything in the visible universe an infinite number of times, as well as every possible tiny variation, as well as every vastly different variation. All philosophy aside, what really bothers physicists is that an infinite (roughly homogeneous) universe will contain an infinite amount of matter and energy. Also, the big bang (assuming that the Big Bang happened) would have had to happen everywhere at once. As bad as the mathematical descriptions of the Big Bang traditionally are, an infinitely large Big Bang is much worse.

3) A curved, finite universe. This is the best option. You can think of the universe as being a 3-dimensional space that is the surface of a 4-dimensional ball, in the same way that the surface of a balloon is a 2-dimensional space wrapped around a 3-dimensional ball. Of course, this immediately begs the question "what's inside the ball?". Well, keep in mind that what defines a space is how the things inside it relate to each other (the only thing that defines space is rulers). So even if you turned the "balloon" inside-out you'd still have the same space.
Or, if you're not a topologist, then remember that there's nothing outside of space, and the surface of the 4-d sphere is space. Now, be warned, the "3-d surface of a 4-d ball" description isn't entirely accurate. Right off the bat, we don't live in 3 dimensions, we live in 3+1 dimensions (not "space" but "spacetime"), and the metric for that is a little weird. Also, when you talk about "the shape of the universe", you probably mean "the shape of the universe right now", and sadly there's no way to universally agree on what "now" means in a universe with any rotating stuff in it. That being said, the "surface of a sphere" thing is still a good way to talk about the universe. Since our best measurements show that space is very flat, if the universe has taken the 3rd "curved, finite" path (it probably has), then it must be really really big. This is for the same reason that you can easily show that a ball is curved, but may have some difficulty showing that the Earth is curved. Also, to answer the original question: the universe doesn't have an edge.

18 Responses to Q: How far away is the edge of the universe?

1. couldn't it be a bubble with stuff in it, with the surface of the bubble (or edge of the universe) expanding at the speed of light? if that were the case wouldn't it remove the problem of 'how would particles interact with the edge' since particles could never reach the edge?

2. It might. However, it wouldn't clear up the mathematical difficulties (as much as they count), and that doesn't seem to be the way the universe is.

3. I really enjoy your blog; but does your account of "A curved, finite universe" still allow the universe to be infinite in extent? Most cosmologists I have read, James Bullock (of Irvine) as well as my research on WMAP, show that it is flat, infinite in extent and finite in volume. NASA WMAP reads this: "Thus the universe was known to be flat to within about 15% accuracy prior to the WMAP results. WMAP has confirmed this result with very high accuracy and precision. We now know that the universe is flat with only a 0.5% margin of error. This suggests that the Universe is infinite in extent; however, since the Universe has a finite age, we can only observe a finite volume of the Universe. All we can truly conclude is that the Universe is much larger than the volume we can directly observe."

4. Any positive curvature necessarily implies a finite universe. What that quote is referring to is the fact that, whether or not the universe is infinite, we can only see a finite part of it. When you hear about things like the "size of the universe" what you're almost always hearing about is the size of the visible universe.

5. Thank you Physicist. So, when it comes to the actual size of the universe (and not just the observable universe), are the most convincing theories positing that the universe is infinite or finite? Is it right for me to assume that, from everything I hear, it is unknown because it is unobservable, but it is "possibly" infinite in actuality? Also, does the curvature you suggest agree with the WMAP quote? I'm not a physicist, but a poet, so please bear with me.

6. Not sure what ".. flat to within about 15% accuracy.." means, but it looks like we more or less agree. The measurements so far show that the universe is extremely flat, so either it is infinite (definite possibility) or it's so big that its curvature is undetectable (so far). To do that, the universe would need to be so big we'd need a poet to describe it.

7.
So it has no edge but we know it is flat? If that is the case then it would be bigger in the x and y plane than the z plane. So in that respect wouldn't there be an edge in the z plane? Furthermore, in order to know, or even to imply or guess, that the universe is flat, you have to have some reason why you think one plane is smaller than the other 2. That could only be found by taking some reading of both edges of one plane and not the others. Example: if you are in a fish tank and a 1ft cube of water is all you can detect, you could say that the water is infinite in every direction. However, if on one side you detect the glass of the tank, then you know it is not infinite in all directions. Now, to think it is flat you would have to detect a barrier on 2 opposite sides of the tank. Just seems that saying it is flat but there is no edge is a paradox.

8. If we were traveling out from the centre of our solar system, when would we know we have reached the edge of it?

9. There are a lot of different ways of defining the boundary, so it depends who you ask. That's also why this is funny.

10. If from the vantage point of our Earth in the Milky Way galaxy, we look back and say that the beginning of the Universe started about 13.73 billion years ago. And if Hubble's 42 Law is correct, looking the other way, the universe is expanding at an accelerated rate of 42 miles per second per 3 million light years. So, as you say above: if the furthest point of the expanded universe is 45 billion light years away and light travels at a constant speed, time must be expanding or accelerating to make up for the 31.27 billion-year discrepancy. Am I mad?

11. @David Medlyn You sound happy enough. The extra distance comes from the fact that the object that emits a given photon continues to get farther away. If you were on a long road trip across an expanding planet, you'd find that (by the time you get where you're going) the distance you traveled is less than the present distance to your starting point.

12. So, that's pretty amazing in itself. The universe in around 14 billion years has travelled about 45 billion light years. It opens two questions that have bothered me: 1. At approximately what point (light years from the Big Bang) did the universe reach light speed? and 2. Could we see any of that matter after it has attained light speed, or are the photons coming towards us drawn back at a negative rate?

13. @David Medlyn There's a post here that tries to cover that. But in short: The expansion of the universe isn't described by a speed, it's described by a speed per distance. Right now it's around 70 kilometers per second per megaparsec. The speed of light is never really involved one way or the other, in large part because this isn't real speed so much as "the generation of more distance".

14. Thanks, that Wikipedia link was really interesting. Nine pages plus four pages of references. I am still wondering if anyone has worked out at what distance from the initiation of the universe the outer rim reached light speed. And at that point, would we be able to see light emitted from that matter, or would it be travelling faster in the opposite direction than the light travelling in our direction?

15. This article is wrong. The furthest thing we've ever taken a picture of is 13.1 billion light years away, not 45.

16. I don't think this Wikipedia article is necessarily comparing what's visible to us here on Earth.
What is important is the application of Hubble's law, which has the expansion of the universe at 42 miles per second per 3 million light years. Therefore at some time the outer sections of the universe reached light speed, and if the mathematical calculations are correct the place has travelled a distance of around 46 billion light years. And since light travels (normally) at a constant speed, space itself must be expanding. I know I'm applying basic Australian bush logic to what are obviously complex and advanced cosmic mathematics, but it is really fascinating territory. Lawrence Krauss said recently that Einstein had said that if you're moving faster than the speed of light, you're actually going backwards in time. This opens up the notion that if light in the outer edges of the universe is being "dragged" along by the surrounding expanding space, then is that outer rim going backwards in time? And if Hubble's acceleration law has no upper limit, does that mean that eventually the outer edge of the universe will go so far back in time that it will arrive back at the Big Bang? …. just a thought.

17. Wouldn't the correct answer be: No one knows, everyone just guesses.

18. I think new frontiers in all science often begin with a guess, an embryo of a new direction, then some mathematician ponders on the hypothesis and works some formulae around the notion. Usually they are good communicators so once one starts, others join in. Eventually, a mathematical solution is completed and may remain the new benchmark until proven wrong - which often happens. That's the joy of it - constant advancement of knowledge and improvement. I think the Japanese have a name for it - Kaizen. Surely my initial questions are within current mathematical knowledge: 1. When did the outer edge of the Universe reach light speed? 2. Can we see objects beyond that point? (by visible light or other methods) 3. Does Hubble's 42 Law have an upper accelerated speed limit? 4. Does Einstein's theory that "if you are going faster than the speed of light, you're actually going backwards in time" continue to apply exponentially? 5. If so, will the outer edge of the Universe eventually go so far back in time that it will end up at the Big Bang? I know the last question is a bit out there, but the whole concept is fascinating and I'm hoping someone can fill in the gaps.

This entry was posted in -- By the Physicist, Astronomy, Physics, Relativity. Bookmark the permalink.
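As a rough illustration of the figures quoted in the comment thread (about 70 km/s per megaparsec), here is a back-of-the-envelope Python calculation of the distance at which the naive recession speed v = H0 * d would reach the speed of light; a proper answer to the commenters' question needs a full expansion history, so treat this as a sketch only:

    # Back-of-the-envelope numbers only; a rigorous answer needs the full
    # expansion history, not a single constant.
    c = 299_792.458        # speed of light, km/s
    H0 = 70.0              # Hubble constant, km/s per megaparsec (approx.)
    ly_per_mpc = 3.2616e6  # light years in one megaparsec

    # Distance at which the naive recession speed v = H0 * d reaches c
    # (the "Hubble radius"):
    d_mpc = c / H0
    print(f"{d_mpc:.0f} Mpc ~ {d_mpc * ly_per_mpc / 1e9:.1f} billion light years")
    # roughly 4283 Mpc, i.e. about 14 billion light years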
{"url":"http://www.askamathematician.com/2010/01/q-how-far-away-is-the-edge-of-the-universe/","timestamp":"2014-04-19T20:43:57Z","content_type":null,"content_length":"151380","record_id":"<urn:uuid:33d5d7ab-f006-44bd-a73c-8efb6880c241>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the set {x, sin x, sin 2x} in C[0,1] linearly dependent or independent?

Is the set {x, sin x, sin 2x} in C[0,1] linearly dependent or independent? I understand the problem is to show whether, in this case, c1x + c2 sin x + c3 sin 2x = 0 has more than the trivial solution in the space of all continuous functions on [0,1]. In other words, can I find at least one non-zero c coefficient to satisfy this equation? What is the best way to approach this? Can I plug in numbers for x and see if any of the c's are non-zero? The non-trivial solution should work for all x's, so if I show it doesn't work for specific x's in the interval, then the solution must be trivial? Or can I define another continuous function on the interval = function 1 (dot product) function 2, then say these are orthogonal, hence = 0, and find the c's that way? My intuition tells me they are independent, but math is all about showing proof. It's just that when I look at the graph of these on the [0,1] interval I can't see how any of the functions can be a linear combination of the other two.

If $\{x,\sin x,\sin 2x\}$ is linearly dependent on $[0,1]$ then it means $c_1x + c_2\sin x + c_3\sin 2x = 0$ has a non-trivial solution (not all three coefficients are zero) for all $x\in [0,1]$. Differentiating, we find $c_1 + c_2\cos x + 2c_3 \cos 2x = 0$, and differentiating again we find $0 - c_2\sin x - 4c_3\sin 2x = 0$. Thus, we have shown that:
$\left\{ \begin{array}{c}c_1x + c_2\sin x + c_3\sin 2x = 0\\c_1 + c_2\cos x + 2c_3 \cos 2x = 0\\ 0c_1 - c_2\sin x - 4c_3\sin 2x = 0\end{array} \right.$
has a non-trivial solution, and so, it must be that,
$\left| \begin{array}{ccc} x & \sin x & \sin 2x \\ 1 & \cos x & 2\cos 2x \\ 0 & -\sin x & -4\sin 2x \end{array} \right| = 0 \text{ for all }x\in [0,1]$.
Try to argue that this is impossible by expanding out the determinant.

Simpler trig solution? I've been reviewing trig because some of the identities may be helpful and time-saving ways to show linear dependence for trig functions such as the functions in my original question. Which brings me to my next question: can I use the identity sin 2x = 2 sin x cos x to show that x, sin x, sin 2x are linearly dependent? I can choose c1 = 0 and c2 = 2 cos x? sin 2x = 0x + 2 cos x sin x. Is my assumption that associative properties hold incorrect? Been a while since I took a trig class.
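A quick way to carry out the suggested determinant check is symbolically; the sketch below (using SymPy, and relying on the standard fact that a Wronskian that is nonzero at even one point forces linear independence) also shows why the trig identity in the follow-up doesn't give dependence: the coefficients c1, c2, c3 must be constants, and 2 cos x is not.

    import sympy as sp

    x = sp.symbols('x')
    funcs = [x, sp.sin(x), sp.sin(2*x)]

    # Rows: the functions, then their first and second derivatives.
    W = sp.Matrix([[sp.diff(f, x, k) for f in funcs] for k in range(3)])
    det = sp.simplify(W.det())

    print(det)                     # not identically zero ...
    print(det.subs(x, 1).evalf())  # ... e.g. about -0.37 at x = 1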
{"url":"http://mathhelpforum.com/advanced-algebra/78141-set-x-sinx-sin2x-c-0-1-linearly-dependent-independent.html","timestamp":"2014-04-20T18:37:22Z","content_type":null,"content_length":"39548","record_id":"<urn:uuid:2f686541-9026-45cf-87bd-7c04a5f0f50f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Solvable congruence

How do you prove that $-1 \equiv x^4 \pmod p$ is solvable iff $(-1)^{(p-1)/d} \equiv 1 \pmod p$, where $d = \gcd(4, p-1)$? I was trying to use Euler's theorem, that $(-1)^{p-1} \equiv 1 \pmod p$, but how else can I continue?

Re: Solvable congruence

I take it you want $p$ to be prime. The case $p=2$ is trivial, so we'll assume $p$ to be odd. Then $d=2$ if $p\equiv3\mod4$ and $d=4$ if $p\equiv1\mod 4.$

If $p\equiv3\mod4$ then by Euler's criterion $\left(\frac{-1}p\right)=(-1)^{\frac{p-1}2}=-1.$ In this case $-1$ is not a quadratic residue modulo $p$ (and so it certainly can't be a quartic residue modulo $p$); we also have $(-1)^{\frac{p-1}d}=(-1)^{\frac{p-1}2}=-1\not\equiv1\mod p.$

Consider $p\equiv1\mod4.$

(i) Suppose $x^4\equiv-1\mod p.$ As $d=4$ in this case, $-1\equiv x^d\mod p \implies (-1)^{\frac{p-1}d}\equiv x^{p-1}\mod p\equiv1\mod p$ by Fermat's little theorem.

(ii) Conversely suppose $(-1)^{\frac{p-1}d}=(-1)^{\frac{p-1}4}\equiv1\mod p.$ Then $\frac{p-1}4$ is even, i.e. $8\mid p-1.$ Let $k$ be a primitive root $\mod p$ and set $x=k^{\frac{p-1}8}.$ Then $x^8\equiv1\mod p \implies p$ divides $x^8-1=\left(x^4+1\right)\left(x^4-1\right).$ But $p$ cannot divide $x^4-1$, otherwise $k^{\frac{p-1}2}\equiv1\mod p$, which would contradict the fact that, as a primitive root, $k$ has order $p-1$ in the multiplicative group of the integers modulo $p.$ Hence $p$ divides $x^4+1$ and we are done.
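A brute-force numerical check of the claimed criterion is easy and reassuring; the Python sketch below compares the criterion against direct search for small odd primes (my own illustration, not from the thread):

    from math import gcd

    def solvable_x4(p):
        # Brute force: does x**4 = -1 (mod p) have a solution?
        return any(pow(x, 4, p) == p - 1 for x in range(1, p))

    def criterion(p):
        # (-1)**((p-1)/d) = 1 (mod p), with d = gcd(4, p-1).
        d = gcd(4, p - 1)
        return pow(-1, (p - 1) // d, p) == 1

    for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]:
        assert solvable_x4(p) == criterion(p), p
    print("criterion matches brute force for all odd primes tested")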
{"url":"http://mathhelpforum.com/number-theory/197614-solvable-congruent.html","timestamp":"2014-04-18T00:28:35Z","content_type":null,"content_length":"38167","record_id":"<urn:uuid:44ed26af-20a3-4282-b8b7-e6fcfa7c7af9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
An overview of the MIZAR project. Available on the Web as http://web.cs.ualberta.ca/~piotr/Mizar/MizarOverview.ps - In International Joint Conference on Automated Reasoning, 2006

Cited by 17 (0 self)

Abstract. The HOL Light prover is based on a logical kernel consisting of about 400 lines of mostly functional OCaml, whose complete formal verification seems to be quite feasible. We would like to formally verify (i) that the abstract HOL logic is indeed correct, and (ii) that the OCaml code does correctly implement this logic. We have performed a full verification of an imperfect but quite detailed model of the basic HOL Light core, without definitional mechanisms, and this verification is entirely conducted with respect to a set-theoretic semantics within HOL Light itself. We will duly explain why the obvious logical and pragmatic difficulties do not vitiate this approach, even though it looks impossible or useless at first sight. Extension to include definitional mechanisms seems straightforward enough, and the results so far allay most of our practical worries. 1 Introduction: quis custodiet ipsos custodes? Mathematical proofs are subjected to peer review before publication, but there ...

, 2000

Cited by 6 (0 self)

"... Abstraction is in a precise sense a converse operation to application. Given a variable x and a term t, which may or may not contain x, one can construct the so-called lambda-abstraction λx. t, which means 'the function of x that yields t'. (In HOL's ASCII concrete syntax the backslash is used, e.g. \x. t.) For example, λx. x + 1 is the function that adds one to its argument. Abstractions are not often seen in informal mathematics, but they have at least two merits. First, they allow one to write anonymous function-valued expressions without naming them (occasionally one sees x ↦ t[x] used for this purpose), and since our logic is avowedly higher order, it's desirable to place functions on an equal footing with first-order objects in this way. Secondly, they make variable dependencies and binding explicit; by contrast in informal mathematics one often writes f(x) in situations where one really means λx. f(x). We should give some idea of how ..."
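As a side note, the λx. t notation in the excerpt corresponds directly to anonymous functions in most modern languages; purely as an analogy (Python lambdas are not HOL Light terms), the two points in the excerpt read:

    # The abstraction "λx. x + 1" from the excerpt, as a Python lambda:
    succ = lambda x: x + 1
    assert succ(41) == 42

    # "x ↦ f(x)" versus plain f: the dependency on x made explicit.
    f = abs
    g = lambda x: f(x)
    assert all(f(x) == g(x) for x in range(-3, 4))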
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2727951","timestamp":"2014-04-18T12:21:00Z","content_type":null,"content_length":"15747","record_id":"<urn:uuid:bace3c02-31b2-41ea-8dde-9a499e23719b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
A remark on pseudo-exponentiation Seminar Room 1, Newton Institute There are a number of interesting open problems about definability in the field of complex numbers with exponentiation. Zilber has proposed a novel approach. He constructed a nonelementary class of exponential algebraically closed fields and showed that in this class definable subsets of the field are countable or co-countable. He also showed the class is categorical in all uncountable cardinalities. The natural question is whether the complex numbers are the unique model in this class of size continuum. In this talk I will show that, assuming Schanuel's Conjecture, the simplest case of Zilber's strong exponential closure axiom is true in the complex numbers.
{"url":"http://www.newton.ac.uk/programmes/MAA/seminars/2005022411001.html","timestamp":"2014-04-16T16:02:28Z","content_type":null,"content_length":"4147","record_id":"<urn:uuid:b1672438-b9c8-4da2-85f0-e0268bd4a70e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory of Everything

This website is dedicated to teaching the cosmological aspects of the Theory of Everything, including a few additional topics. A pdf copy of the most recent version of "The Theory of Everything: Foundations, Applications and Corrections to General Relativity" can be downloaded.

Part I: Ruling Out an Expanding Universe (HD)

Description: This is the first of several videos based on "The Theory of Everything". The first half rules out the 2011 Nobel Prize, awarded "for the discovery of the accelerating expansion of the Universe through observations of distant supernovae". The second half discusses the correct model of the universe. Proof arises from measuring how large distant galaxies and clusters appear from Earth. With recent observations of the amounts of mergers between galaxies, the big bang theory can be conclusively ruled out. The inferred accelerated expansion is instead an illusion due to being in a localized universe, i.e. the gravitational field of the universe forces local paths to deflect towards the center of the universe. The paper provides an in-depth discussion.

What is "The Theory of Everything"?

The theory of everything is a diverse collection of subjects, which range from the underlying nature of matter to the universe as a whole. It explains all crucial aspects of existence in a rigorous, logical form that can be verified experimentally. The current version of the theory of everything includes only topics covering general relativity and cosmology. An extended edition will be released later this year including additional topics on agriculture and climate change.

Was Einstein Wrong?

Although Einstein is commonly regarded as one of the key contributors to our understanding of the universe, it turns out that his theory of general relativity is wrong. Einstein firmly believed that particles exist as point-like objects and rejected the results offered by quantum mechanics. This led him down a long path of failed attempts and incorrect theories, which further progressed into what is now modern theory. With the correct model of the universe now in hand, it is possible to conclusively rule out Einstein's view of general relativity. The central proof against Einstein's theory of general relativity arises from the observed temperature of the cosmic background radiation. From the central core's 3000 K black body spectrum, it is clear that neither an event horizon nor Hawking radiation exists. For example, it is known that the local universe originated from the central core in the form of a relativistic jet, also known as the dark flow. This means that the central core must be larger than any local objects, including galaxies and clusters. However, a 3000 K black body temperature with respect to Hawking radiation would require the central core to be less massive than the Moon. Since this is clearly impossible, any theory that predicts event horizons from finite energy can be conclusively ruled out. This includes Einstein's field equations, the Maxwell-Einstein field equations, the Einstein-Cartan theory, bimetric gravitational theories, scalar-tensor theories and any other theory which predicts event horizons.

What is Next?

With respect to physics, the only remaining aspect is the unified field theory. Vacuum field theory, which is part of the theory of everything, is based upon the unified field theory. The postulates used to derive vacuum field theory require the underlying essence of matter to arise from Planck-scale fluctuations of space itself.
This can also be viewed as an elastic medium, similar to springs attached to masses. Quantum field theory arises from similar foundations. However, the geometric component is missing due to the current interpretation of general relativity. Connections do exist between the standard model and vacuum field theory, with both being compatible with each other.
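For the record, the mass figure implied in the "Was Einstein Wrong?" section can be checked against the standard Hawking-temperature formula T = ħc³/(8πGMk_B); the sketch below only verifies that piece of arithmetic and takes no position on the page's surrounding claims:

    import math

    hbar = 1.054571817e-34  # J s
    c = 2.99792458e8        # m/s
    G = 6.67430e-11         # m^3 kg^-1 s^-2
    kB = 1.380649e-23       # J/K

    # Hawking temperature T = hbar c^3 / (8 pi G M kB), inverted for M:
    T = 3000.0              # K, the black-body temperature quoted above
    M = hbar * c**3 / (8 * math.pi * G * kB * T)

    M_moon = 7.342e22       # kg
    print(f"M = {M:.2e} kg ({M / M_moon:.1e} lunar masses)")
    # about 4e19 kg, i.e. a few ten-thousandths of the Moon's mass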
{"url":"http://www.thecontinuousuniverse.com/","timestamp":"2014-04-16T16:11:05Z","content_type":null,"content_length":"7139","record_id":"<urn:uuid:5ac49f01-2c79-4b0c-86c6-d3cd75b7c190>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with calculating geometric sequence

I hope this is the right place to post it... I need help with the following sequence: (it's 2 to the power of 2 to the power of i). I need to find i as a function of N, meaning how far along in the sequence I need to go in order to get N. Any help would be greatly appreciated.
Last edited by Stormey; June 28th 2013 at 01:32 AM.

Re: Help with calculating geometric sequence

Hey Stormey. Did you mean as a function of j? I don't know of a closed-form solution, but you should look into techniques like the Euler-Maclaurin series and their relationship to integrals.

Re: Help with calculating geometric sequence

Hi chiro. Yes, as a function of j, sorry. The truth is that it is actually a question from data structures in computer science, so I'm almost sure the solution is supposed to be accomplished with discrete math. (Using some manipulation to compute the sum of a simpler sequence, for which the sum is known or easy to calculate, and then calculating the sum of the original sequence; I also tried to think about some change of variable [index in this case] that would help simplify this problem.)

Re: Help with calculating geometric sequence

You might want to consider a bit-wise representation (i.e. binary) and how that relates to N.
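Following up on the bit-wise hint: if N is the j-th term itself, i.e. N = 2^(2^j) (my reading of the question), then N has a single 1-bit at position 2^j, so j = log2(log2 N), which two bit_length calls recover exactly:

    def index_of(N):
        # Assuming the j-th term is a_j = 2**(2**j) and N = a_j exactly:
        # N has a single 1-bit at position 2**j, so j = log2(log2(N)).
        e = N.bit_length() - 1   # e = 2**j
        return e.bit_length() - 1

    for j in range(5):
        assert index_of(2 ** (2 ** j)) == j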
{"url":"http://mathhelpforum.com/calculus/220202-help-calculating-geometric-sequence.html","timestamp":"2014-04-17T16:34:50Z","content_type":null,"content_length":"38829","record_id":"<urn:uuid:16a34165-7f56-472f-8f2d-92fc54ce1ea3>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Whippany Science Tutor Find a Whippany Science Tutor ...I also have recently tutored in chemistry. I have five years experience teaching high school geometry. I have also tutored in geometry and in SAT prep (which involves quite a bit of geometry). I have also recently taken a college level geometry class (for my teaching certification). Many skills needed in pre-algebra are then used in algebra and geometry. 8 Subjects: including chemistry, geometry, algebra 1, algebra 2 ...Through my experience both at Princeton High School and as a private tutor, I have also worked extensively with students who have special needs and students who speak limited or no English. My goal is to impart the knowledge I have gained through diligent study and daily practice, and to help ot... 37 Subjects: including physical science, anthropology, reading, English ...I love tutoring, and I make each lesson dynamic, engaging, and enjoyable. I've worked with students of many different backgrounds, including non-native English speakers and students with learning disabilities. I'm able to explain both the most basic concepts and the most challenging questions with clarity, and I tailor my instruction to each student's personal learning style. 10 Subjects: including ACT Science, SAT math, SAT reading, SAT writing ...During the time I coached Yale, the team was ranked first in the nation and the world. I coached the Choate team to its first appearance at the World Championships. I have experience coaching the following debate formats: Lincoln-Douglas (LD), Public Form (PF), Parliamentary, and Student Congress. 25 Subjects: including philosophy, astronomy, English, reading ...While obtaining my Ph.D., I have taught genetics topics to classes of first year graduate students. I am quite patient, and am willing to try multiple approaches to ensure that my student is satisfied that he or she is comfortable with the material learned.I have a B.Sc. in Biology and a Ph.D. i... 2 Subjects: including biology, genetics
{"url":"http://www.purplemath.com/whippany_science_tutors.php","timestamp":"2014-04-19T17:23:10Z","content_type":null,"content_length":"23941","record_id":"<urn:uuid:fd80f5a2-84fd-4a23-9ef2-a96af0043452>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Differential Forms Problem

I'm reading Flanders' Differential Forms with Applications to the Physical Sciences and I have some issues with problems 2 and 3 in chapter 3, which appear to ask the reader to compute the pullback of a mapping from X to Y applied to a form over X, and I'm not sure how to interpret such a thing. Problem 3 reads: Consider the mapping [tex]\phi : (x,y)\rightarrow (xy,1)[/tex] on E^2 into E^2. Compute [tex]\phi^{*}(dx)[/tex], [tex]\phi^{*}(dy)[/tex], and [tex]\phi^{*}(ydx)[/tex]. What I'm tempted to do is to consider phi as a mapping from [tex](x,y)[/tex] to [tex](x',y')[/tex], and instead compute [tex]\phi^{*}(dx')[/tex], [tex]\phi^{*}(dy')[/tex], and [tex]\phi^{*}(y'dx')[/tex], which would make everything easy as pie since then I'd just be computing the pullback of a form defined over the image space. But I'm not sure if that dumb little notational issue is all there is to it, or maybe I'm supposed to do something else, like first apply the identity map to [tex](dx,dy)[/tex] and then apply the pullback, which I must admit makes my head spin a little. I have essentially the same issue with problem 2, which asks for the pullback of a form defined on the domain, and I'm tempted to say that's just the identity transformation (i.e. that the pullback is a projection operator). I also did a funky calculation and found just that, but I'm far from confident that I even understood what the question was. Thanks for your help. I apologize for the hideous appearance; I'm still wrestling with LaTeX. I also realized a bit too late that I should have posted this in "homework help" even though I'm not in a class, so sorry for that too.
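For what it's worth, the poster's first instinct is the standard reading: φ*(dx') means the pullback of the coordinate form dx' on the image space, computed by substituting the component functions of φ and taking differentials. A small SymPy-flavored sketch (my own, not from Flanders):

    import sympy as sp

    x, y = sp.symbols('x y')

    # phi : (x, y) -> (x', y') = (x*y, 1)
    xp, yp = x * y, sp.Integer(1)

    def d(f):
        # Total differential of f as (coefficient of dx, coefficient of dy).
        return (sp.diff(f, x), sp.diff(f, y))

    print(d(xp))  # phi*(dx') = y dx + x dy, i.e. coefficients (y, x)
    print(d(yp))  # phi*(dy') = 0

    # phi*(y' dx') = (y' composed with phi) * phi*(dx') = 1 * (y dx + x dy)
    print(tuple(yp * c for c in d(xp)))  # again (y, x)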
{"url":"http://www.physicsforums.com/showthread.php?t=502596","timestamp":"2014-04-18T00:21:02Z","content_type":null,"content_length":"25466","record_id":"<urn:uuid:06e26615-c5af-4d5c-9717-0484d75a7f0a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
My First App Review: Dragonbox

I just got my very first smartphone. The first app I bought was Dragonbox. I've heard lots of good things about it, and I think I played around with a free online version a while back (although it doesn't seem to exist now). It is fun to play with, even though I know my algebra. I still need to test it with kids who haven't learned algebra yet, to see how much sense it makes to them, and how well it transfers to paper-and-pen(cil) algebra. I did find a few bugs, and I hope the makers will set up some sort of program to get users to report bugs, so they can fix them.

Bug #1: Here's the screen. I wanted to subtract a/5 from both sides, but that's not possible. I had to multiply both sides by 5, subtract a, and then divide both sides by 5.

Bug #2: I was penalized for changing c+c to 2c. Not sure why.

Bug #3: I divided both sides by x, and got x = 1/3. Dragonbox said that was right. But that leaves out the other possible answer, which is x = 0. Yikes! I think dividing by x needs to be a wrong step in the game.

I bought the version for age 5 and up ($5.99) by mistake. I'm looking forward to checking out the version for age 12 and up ($9.99), too. Even with bugs, this game is great. I'm impressed.

4 comments:

1. Sue, DragonBox developer here. Thanks for your review. Here are some answers to your questions: * the first issue isn't an issue. Not all rules are available from the start. That said, this turns the level into a puzzle. Even if you know Maths, you have to think a bit. Note the missing rule exists in the 12+ version. * not sure about the second one (which chapter/level did it happen?) * the third one, yes, DragonBox doesn't yet support solving quadratic equations and some levels could have been designed better.

2. Wow! I was going to try to figure out how to email you, but you found me first. I'm impressed. Thanks for responding! I was thinking about that first one, and I agree with you. It was intriguing to figure out an alternate way to resolve it. As I mentioned, I'm new to my smartphone. I couldn't get screenshots while I was playing yesterday, so I'm not sure where the second one happened. Maybe toward the end of chapter one? If that is clearly wrong, then I just don't know. I remember that the c+c was on the left side. I was experimenting when I pulled the one on top of the other, and I thought it was so cool that it changed to 2c. So when I got too many steps, I figured it was earlier in the problem. I did it again. It might have been my 4th try before I left it alone. Mathematically, that one's not a real problem, but it would be more consistent for the game to allow (require?) the 2c. So on the third one, shouldn't you pull that one out? You can send updates out to people's devices, right? I don't know much about all this, so I hope I'm understanding it correctly.

3. Sue, regarding the quadratic level definition: * this level in the game allows for people to engage in discussion. We have a similar scenario in DragonBox 12+ where people can end up with 4 = 5 in one level. * see also http://support.dragonboxapp.com/knowledgebase/articles/85364-bug-all-versions-imperfect-level-definitions and We can send updates, but I am not sure we will update that one. Feel free to contact us at support at dragonboxapp.com If you ever try dragonbox 12+, let us know what you think!

4. Thanks for this feedback Sue. I haven't really tried this app but have read several times about it.

Comments with links unrelated to the topic at hand will not be accepted.
(I'm moderating comments because some spammers made it past the word verification.)
{"url":"http://mathmamawrites.blogspot.com/2013/05/my-first-app-review-dragonbox.html","timestamp":"2014-04-18T13:06:24Z","content_type":null,"content_length":"108450","record_id":"<urn:uuid:76b35512-7a0f-43b8-9326-2bde0f5856a9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Point Calculator Camarilla Calculator

As you can see, there are TWO Camarilla Calculators. This is because one is the standard formula and the other is a proprietary formula which some companies charge a monthly fee to use. Hence you can own it yourself and save $1000s a year on subscription fees. Camarilla (1) Camarilla (2)

So what is Camarilla? Just input the Open, High, Low and Closing price of your chosen market, then click the Calculate button to obtain the price levels for the following day. The Camarilla Equation tells you the High and Low range for the next day. It also shows you the possible turning points and the breakout levels, the most important of these being the H3, H4, L3 and L4 levels, which are used to gauge the likely retracement and break-out points. The main way to use the Camarilla equation is to wait for price to approach L3 or H3. When it does, it's expected that the market will reverse at those levels. So a position is opened (against the trend) and a protective stop loss is placed outside the L4 or H4 level. A reversal candlestick pattern is all that's needed to give confirmation of a turning market. (Candle reversal patterns are included in the manual which accompanies this software.)

The RoadMap

Traders are often right about direction, but enter at the wrong time. This explains why so many traders enter a trade, quickly get stopped out, and then the market resumes in the original direction of their trade. They failed to TIME the market correctly. You can be very educated about the markets, but if you don't know exactly when and where to get in and out, all the education in the world won't do you much good. There are only 2 dimensions on a chart: the price axis and the time axis. And of the two, W.D. Gann said that time is more important than price. Nearly all technical analysis focuses on price. Very few traders learn how to TIME the market with precision ... yet it is fully 1/2 of the information available on every chart. Therefore most traders ignore fully 1/2 of the information available to them on every chart! This is like trading with impaired vision. The way to uncover the secret of timing the markets is by unveiling the hidden power of market cycles. It's amazing that while all traders desperately want to know exactly where and when to enter and exit a trade, they don't even have a market cycle indicator on their charts.

The RoadMap was the most complex of all the formulas to put into a workable calculator. It's based on W.D. Gann's Square of Nine formula. And if you know anything about Gann then you'll understand how difficult this was to accomplish. However, I have to say that the results are truly astounding!!! We include free with the Traders Calculator scanned PDF documents of Gann's original 'How To Trade' and his 'Mathematical Formula'. Two truly fascinating books. And we do hope that once you've read some of his material you'll appreciate just how difficult it was to create a calculator which simplified the process. I'll give you a basic idea: 1. You input prices, which can be a high, low, close or price range. 2. You then input the number of Trading Days and/or Calendar Days. 3. Then, after you click the Calculate button, you will see if price and time have squared. In the trading manual which accompanies our software, we show you some amazing real-life examples which prove just how incredible the Road Map is. We'll show you how to predict, well in advance, the days, weeks and even months where the turning points of a market are likely to occur.
Also where the support and resistance levels will stall the market. These you will know days in advance. They can be used for possible entry points in a trend (should you have missed the initial entry signal). The Road Map will increase your level of trading confidence to such a high degree that you'll realize that you've been trading blind all this time. No other software or indicator can possibly match what the Road Map can do. They're simply not in the same league!

Order Information

You obviously recognize the value of using a scientific and carefully calculated approach to reduce risk and maximize profits. Once you own this Traders Calculator, you'll wonder just how you ever managed to trade without it. In fact, you shouldn't trade without it. All updates and additional functions will be sent to you automatically! To place your order and download the Traders Calculator and manual immediately... Click Here
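For readers curious about the non-proprietary version, the most commonly published "standard" Camarilla coefficients are sketched below in Python. I cannot verify which exact variant this particular product implements, so treat these coefficients as the usual textbook ones rather than this vendor's:

    def camarilla(high, low, close):
        # The most commonly quoted coefficients for levels 1..4.
        r = high - low
        levels = {}
        for k, coef in zip((1, 2, 3, 4), (1.1/12, 1.1/6, 1.1/4, 1.1/2)):
            levels[f"H{k}"] = close + coef * r
            levels[f"L{k}"] = close - coef * r
        return levels

    print(camarilla(high=105.0, low=95.0, close=100.0))
    # e.g. H3 = 102.75, H4 = 105.5, L3 = 97.25, L4 = 94.5 for this sample bar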
{"url":"http://www.traderscalculator.com/","timestamp":"2014-04-19T10:00:47Z","content_type":null,"content_length":"33205","record_id":"<urn:uuid:2ace753d-85d6-48c5-a50c-846751f252d7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you calculate the momentum of two objects immediately after a perfectly elastic collision? Do you need to find the velocities of the two objects after the collision and then use the formula p = mv, where p is the momentum vector, m is the mass of the object, and v is the velocity vector? Or is there another way to find the linear momentum?

I have a problem. Object 1: m = 10 kg, v = (6, 1), p = mv = (60, 10). Object 2: m = 100 kg, v = (-6, -2), p = mv = (-600, -200). They collide; what's the momentum of each object immediately after the collision?

So what I got from those two links was basically that I do need to calculate the final velocities. \[v_{1f}={(m_1 - m_2)v_{1i} + 2m_2 v_{2i} \over m_1+m_2}\] \[v_{2f}={(m_2-m_1)v_{2i}+2m_1v_{1i} \over m_1+m_2}\] Then apply the momentum formula \[\mathbf {\vec p}=m {\mathbf{\vec v}}\]. Kinda strange, since the next question asks me to find the velocities. You'd think they would put it in the right order to help you learn the method.

The problem you posted is a 2-dimensional collision, and momentum is still conserved, but you have to account for the momentum in 2 dimensions. That's a bit tougher and I admit that I haven't done one of those in a while :)
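Putting the thread's formulas together, here is a small Python sketch that applies the 1-D elastic-collision result component-wise to the posted numbers and checks momentum conservation. As the last reply notes, a genuine 2-D collision is only handled this way if it is head-on along each axis; in general you must resolve along the line of impact:

    def elastic_1d(m1, v1, m2, v2):
        # Final velocities for a perfectly elastic head-on (1-D) collision.
        v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
        v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
        return v1f, v2f

    m1, m2 = 10.0, 100.0
    v1, v2 = (6.0, 1.0), (-6.0, -2.0)

    pairs = [elastic_1d(m1, a, m2, b) for a, b in zip(v1, v2)]
    v1f = tuple(p[0] for p in pairs)
    v2f = tuple(p[1] for p in pairs)

    p1f = tuple(m1 * c for c in v1f)  # p = m v, component-wise
    p2f = tuple(m2 * c for c in v2f)

    # Total momentum must be conserved in each component.
    p_before = tuple(m1 * a + m2 * b for a, b in zip(v1, v2))
    p_after = tuple(a + b for a, b in zip(p1f, p2f))
    assert all(abs(a - b) < 1e-9 for a, b in zip(p_before, p_after))
    print(v1f, v2f, p1f, p2f)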
{"url":"http://openstudy.com/updates/5050011de4b0e55c0098b3a1","timestamp":"2014-04-17T15:57:30Z","content_type":null,"content_length":"36159","record_id":"<urn:uuid:95b4f0dc-6cd6-488e-ab32-2fea804bb948>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Computing pi (or not)

Re: Computing pi (or not)
Posted: Sep 13, 2012 10:31 AM

Joe Niederberger wrote (in part):
> Is it not outputting all the real numbers between 0 and 10? By tracing a suitable path down the tree we can find any real number we care to. (Yes, no, maybe?)
> On the other hand, there is a hypothesis that the digits of pi are "normal" -- containing all possible 2-digit sequences (not only, but they all occur 1/100 of the time.) Likewise all possible 3-digit sequences, etc. If true, then pi contains somewhere in its decimal expansion, sequentially, an encoding of the complete works of Shakespeare (any edition), complete encodings of the bible (any edition, any translation.) The sum total of human output all in one number, not just past knowledge, but all books yet to be written as well! [Imagine the enormity of the knowledge contained in the whole tree! Perhaps we should call it the "god tree (tm)".]

The tree "paradox" is a standard technique/idea in set theory and logic, and a good starting point for those interested would be to google "infinite binary tree" and "binary tree" AND "real numbers". The messages-in-pi idea showed up in Carl Sagan's book "Contact" (but not in the movie made from the book), something I've posted a fair amount about in the past, for example these two posts:

sci.math: "Contact"/pi [23 October 2000]
math-teach: Pi & contact [24 April 2001]

- ---------------- begin technical math aside ------------------
Incidentally, for pi to have this property, a much weaker hypothesis than "pi is a normal number" suffices. Almost all real numbers, in the sense of Lebesgue measure, are normal (i.e. the set of non-normal numbers has Lebesgue measure zero), but the opposite is true in the case of Baire category (almost all real numbers, in the sense of Baire category, are NOT normal). Thus, while the set of normal numbers is really big in the sense of Lebesgue measure, it's also really small in the sense of Baire category. On the other hand, the property of having all finite strings of decimals appearing in a real number's decimal expansion holds for real numbers that form a much larger set of real numbers. Indeed, this set of real numbers is really big in the sense of Lebesgue measure AND really big in the sense of Baire category. In fact, the distinction is even more extreme than this, as I indicate in the post below.

sci.math: Omni-transcental numbers [19 February 2003]
- ----------------- end technical math aside -------------------

Dave L. Renfro
------- End of Forwarded Message
{"url":"http://mathforum.org/kb/message.jspa?messageID=7889958","timestamp":"2014-04-20T03:23:45Z","content_type":null,"content_length":"28860","record_id":"<urn:uuid:8c414410-c34d-4ba1-8d91-4dd2698b92fa>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph::TransitiveClosure::Matrix - create and query transitive closure of graph

    use Graph::TransitiveClosure::Matrix;
    use Graph::Directed; # or Undirected

    my $g = Graph::Directed->new;
    $g->add_...(); # build $g

    # Compute the transitive closure matrix.
    my $tcm = Graph::TransitiveClosure::Matrix->new($g);

    # Being reflexive is the default,
    # meaning that null transitions are included.
    my $tcm = Graph::TransitiveClosure::Matrix->new($g, reflexive => 1);

    # is_reachable(u, v) is always reflexive.
    $tcm->is_reachable($u, $v)

    # The reflexivity of is_transitive(u, v) depends on the reflexivity
    # of the transitive closure.
    $tcm->is_transitive($u, $v)

    my $tcm = Graph::TransitiveClosure::Matrix->new($g, path_length => 1);
    my $n = $tcm->path_length($u, $v)

    my $tcm = Graph::TransitiveClosure::Matrix->new($g, path_vertices => 1);
    my @v = $tcm->path_vertices($u, $v)

    my $tcm = Graph::TransitiveClosure::Matrix->new($g, attribute_name => 'length');
    my $n = $tcm->path_length($u, $v)

    my @v = $tcm->vertices

You can use Graph::TransitiveClosure::Matrix to compute the transitive closure matrix of a graph and optionally also the minimum paths (lengths and vertices) between vertices, and after that query the transitiveness between vertices by using the is_reachable() and is_transitive() methods, and the paths by using the path_length() and path_vertices() methods.

If you modify the graph after computing its transitive closure, the transitive closure and minimum paths may become invalid.

new($g)
Construct the transitive closure matrix of the graph $g.

new($g, options)
Construct the transitive closure matrix of the graph $g with options as a hash. The known options are:

attribute_name
By default the edge attribute used for distance is w. You can change that by giving another attribute name with the attribute_name attribute to the new() constructor.

reflexive
By default the transitive closure matrix is not reflexive: that is, the adjacency matrix has zeroes on the diagonal. To have ones on the diagonal, use true for the reflexive option. NOTE: this behaviour has changed from Graph 0.2xxx: transitive closure graphs were by default reflexive.

path_length
By default the path lengths are not computed, only the boolean transitivity. By using true for path_length also the path lengths will be computed; they can be retrieved using the path_length() method.

path_vertices
By default the paths are not computed, only the boolean transitivity. By using true for path_vertices also the paths will be computed; they can be retrieved using the path_vertices() method.

is_reachable($u, $v)
Return true if the vertex $v is reachable from the vertex $u, or false if not.

path_length($u, $v)
Return the minimum path length from the vertex $u to the vertex $v, or undef if there is no such path.

path_vertices($u, $v)
Return the minimum path (as a list of vertices) from the vertex $u to the vertex $v, or an empty list if there is no such path, OR also return an empty list if $u equals $v.

has_vertices(@v)
Return true if the transitive closure matrix has all the listed vertices, false if not.

is_transitive($u, $v)
Return true if the vertex $v is transitively reachable from the vertex $u, false if not.

vertices
Return the list of vertices in the transitive closure matrix.

path_predecessor($u, $v)
Return the predecessor of vertex $v in the transitive closure path going back to vertex $u.

For path_length() the return value will be the sum of the appropriate attributes on the edges of the path, weight by default. If no attribute has been set, one (1) will be assumed.

If you try to ask about vertices not in the graph, undefs and empty lists will be returned.
The transitive closure algorithm used is Warshall and Floyd-Warshall for the minimum paths, which is O(V**3) in time, and the returned matrices are O(V**2) in space.

Jarkko Hietaniemi <jhi@iki.fi>

This module is licensed under the same terms as Perl itself.
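For readers outside Perl, a minimal Python sketch of the boolean Floyd-Warshall closure the paragraph above refers to (illustrative only, not the module's source) makes the O(V**3) time and O(V**2) space costs visible:

    def transitive_closure(vertices, edges, reflexive=False):
        reach = {u: {v: False for v in vertices} for u in vertices}
        for u, v in edges:
            reach[u][v] = True
        if reflexive:
            for v in vertices:
                reach[v][v] = True
        for k in vertices:            # three nested loops: O(V**3) time
            for u in vertices:
                for v in vertices:
                    reach[u][v] = reach[u][v] or (reach[u][k] and reach[k][v])
        return reach                  # a V x V table: O(V**2) space

    r = transitive_closure("abc", [("a", "b"), ("b", "c")])
    assert r["a"]["c"] and not r["c"]["a"]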
{"url":"http://search.cpan.org/dist/Graph/lib/Graph/TransitiveClosure/Matrix.pm","timestamp":"2014-04-21T15:57:30Z","content_type":null,"content_length":"18785","record_id":"<urn:uuid:e95c1e94-5490-49f3-80a1-7f2c24c9fd7d>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
please graph these 2 graphs

Hello. Please graph these two: f(x) = (x^2-1)(1-x/2). Also, if you could, find the real zeros of this polynomial: f(x) = x^2+3x+2/x-1. Thanks.

Hi, you can factor the term of the function completely: f(x) = (x² - 1)(1 - x/2) = (x - 1)(x + 1)(1 - x/2). A product equals zero if one of the factors is zero. Therefore solve for x: x + 1 = 0 or x - 1 = 0 or 1 - x/2 = 0. You'll get the zeros: -1, 1, 2. I've attached the graph of the function. EB

Hi, I'm not quite certain how to read this problem. I assume that you mean: f(x) = x^2+3x+2/(x-1). The graph has a vertical asymptote at x = 1 because the function is not defined at x = 1. To calculate the zeros you have to solve an equation of 3rd degree: x³ + 2x² - 3x + 2 = 0. The solution is possible if you use Cardano's formula. It's a lot of work so I let my computer do it. You get only one zero at x ≈ -3.15275... I've attached a diagram of the graph. The asymptote is painted red. EB

Hi, I'm not quite certain how to read this problem. I assume that you mean: f(x) = x^2+3x+2/x-1. The graph has a vertical asymptote at x = 0 because the function is not defined at x = 0. To calculate the zeros you have to solve an equation of 3rd degree: x³ + 3x² - x + 2 = 0. The solution is possible if you use Cardano's formula. It's a lot of work so I let my computer do it. You get only one zero at x ≈ -3.4566... I've attached a diagram of the graph. The asymptote is painted red. EB
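Nowadays you can skip Cardano entirely and let a numerical root finder do the work; a short NumPy sketch reproducing both answers above:

    import numpy as np

    # First reading: zeros straight from the factorization (x^2-1)(1-x/2).
    f = np.poly1d([1, 0, -1]) * np.poly1d([-0.5, 1])
    print(f.roots)                     # [ 2.  1. -1.]

    # Second reading, f(x) = x^2 + 3x + 2/(x-1): clearing the denominator
    # gives x^3 + 2x^2 - 3x + 2 = 0; no need for Cardano by hand:
    p = np.poly1d([1, 2, -3, 2])
    real_zeros = [r.real for r in p.roots if abs(r.imag) < 1e-9]
    print(real_zeros)                  # [-3.15275...], matching EB's value

    # To draw either graph, sample it:
    # xs = np.linspace(-4, 4, 400); ys = f(xs)   # then plot xs against ys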
{"url":"http://mathhelpforum.com/calculus/12215-please-graph-these-2-graphs.html","timestamp":"2014-04-17T09:15:32Z","content_type":null,"content_length":"45002","record_id":"<urn:uuid:915b4e25-a06e-4d52-bf6d-f6f5dc72c22a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Re: Some important demonstrations on negative numbers > a MACS

Posted: Dec 15, 2012 2:27 PM

Clyde Greeno says:
>By failing to so develop the concept of "slope" from the multiplication tables for the whole numbers,

Is that true? I'm not exactly sure who you are referring to, but I agree that the 4-quadrant graph and viewing multiplication as a family of lines through the origin is very desirable, a great unifying picture. They can tie together multiplication and division concepts (whole number, fractional, real number, and signed), similar triangles, rates of change, and later, variable rates of change. Slope seems to get stuck in at some point as an almost separate topic that comes with a formula for calculation, and is just a precursor to "formulas for lines in a plane."

For example, it is easy to illustrate the distributive law with such a picture (a worked version follows below). Most often, the distributive law is pictured as an N x M grid divided into two smaller rectangular grids. Now, I've not seen the corresponding picture (based on sloped lines) used at all. A Google search for images related to "distributive law" does not show this picture: the 1st quadrant (or all 4), a line of slope 3 representing the values of "3x" for any x, and two of what I call "knight's moves" (over and up). So, 3(3+2) can be illustrated as two such moves (over 3 and up, over 2 more and up) and of course you come to the same place as if you had moved over 5 and up originally. It can be shown pictorially, then, if we want the distributive law to work in all 4 quadrants, that "multiplication" needs to be extended to keep the nice straight lines going as they are.

So, I'm with you that those kinds of illustrations should perhaps be used more and earlier, augmenting standard "number line" pictures as soon as multiplication is developed. On the other hand, I wouldn't call such aids "a common sense understanding" of the sign rules. I'm not convinced there is any common sense understanding of the signed number system (integers), if common sense means "readily understandable how to map to everyday concerns". But that's OK; it's not as if it's a truism that everyone should have a common sense understanding of how a watch works. They don't, and they need not. But a watch can be taken apart piece by piece and an appreciation generated for how the various simple elements work together. A problem with the integer system is that, unlike a watch, it's not even clear to most people what the composite object is for! As Jonathan Crabtree has pointed out, it's not really necessary for everyday bookkeeping duties.

Clyde says:
>What now are commonly called "negative" numbers are more accurately called "negator numbers."

How about "anti-numbers", similar to anti-matter? When we add like quantities of opposite sign, we get a kind of annihilation. That's nice, but with the sign rules and multiplication, we also get a new capability: reversal, with our new "toggling operator" (-1). Now that *is* a somewhat common sense notion, of sequentially alternating between opposites: night and day, love and hate, etc. It's wrapped up in some strange formalism, but it is recognizable.
Joe N
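To make the "knight's moves" picture concrete, here is the arithmetic behind the example above (a worked line added for illustration; the numbers follow the post's description of the line y = 3x):

3(3+2) = 3*3 + 3*2 = 9 + 6 = 15

Moving over 3 and up 9 reaches (3, 9) on the line; moving over 2 more and up 6 reaches (5, 15), the same point you reach by moving over 5 and up 15 in one step. That coincidence of endpoints is exactly the distributive law.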
{"url":"http://mathforum.org/kb/message.jspa?messageID=7937484","timestamp":"2014-04-19T21:28:32Z","content_type":null,"content_length":"20299","record_id":"<urn:uuid:770fef83-057c-46e6-b6f2-46637de6a032>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
GATE 2014 Syllabus for Electronics and Communication Engineering (EC)

Engineering Mathematics

Linear Algebra: Matrix Algebra, Systems of linear equations, Eigen values and eigen vectors.

Calculus: Mean value theorems, Theorems of integral calculus, Evaluation of definite and improper integrals, Partial Derivatives, Maxima and minima, Multiple integrals, Fourier series. Vector identities, Directional derivatives, Line, Surface and Volume integrals, Stokes, Gauss and Green's theorems.

Differential equations: First order equation (linear and nonlinear), Higher order linear differential equations with constant coefficients, Method of variation of parameters, Cauchy's and Euler's equations, Initial and boundary value problems, Partial Differential Equations and variable separable method.

Complex variables: Analytic functions, Cauchy's integral theorem and integral formula, Taylor's and Laurent's series, Residue theorem, solution integrals.

Probability and Statistics: Sampling theorems, Conditional probability, Mean, median, mode and standard deviation, Random variables, Discrete and continuous distributions, Poisson, Normal and Binomial distribution, Correlation and regression analysis.

Numerical Methods: Solutions of non-linear algebraic equations, single and multi-step methods for differential equations.

Transform Theory: Fourier transform, Laplace transform, Z-transform.

Electronics and Communication Engineering

Networks: Network graphs: matrices associated with graphs; incidence, fundamental cut set and fundamental circuit matrices. Solution methods: nodal and mesh analysis. Network theorems: superposition, Thevenin and Norton's, maximum power transfer, Wye-Delta transformation. Steady state sinusoidal analysis using phasors. Linear constant coefficient differential equations; time domain analysis of simple RLC circuits. Solution of network equations using Laplace transform: frequency domain analysis of RLC circuits. 2-port network parameters: driving point and transfer functions. State equations for networks.

Electronic Devices: Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon: diffusion current, drift current, mobility, and resistivity. Generation and recombination of carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS capacitor, MOSFET, LED, p-i-n and avalanche photo diode, Basics of LASERs. Device technology: integrated circuits fabrication process, oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process.

Analog Circuits: Small Signal Equivalent circuits of diodes, BJTs, MOSFETs and analog CMOS. Simple diode circuits, clipping, clamping, rectifier. Biasing and bias stability of transistor and FET amplifiers. Amplifiers: single- and multi-stage, differential and operational, feedback, and power. Frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal oscillators; criterion for oscillation; single-transistor and op-amp configurations. Function generators and wave-shaping circuits, 555 Timers. Power supplies.

Digital circuits: Boolean algebra, minimization of Boolean functions; logic gates; digital IC families (DTL, TTL, ECL, MOS, CMOS). Combinatorial circuits: arithmetic circuits, code converters, multiplexers, decoders, PROMs and PLAs. Sequential circuits: latches and flip-flops, counters and shift-registers. Sample and hold circuits, ADCs, DACs. Semiconductor memories. Microprocessor (8085): architecture, programming, memory and I/O interfacing.
Signals and Systems: Definitions and properties of Laplace transform, continuous-time and discrete-time Fourier series, continuous-time and discrete-time Fourier Transform, DFT and FFT, z-transform. Sampling theorem. Linear Time-Invariant (LTI) Systems: definitions and properties; causality, stability, impulse response, convolution, poles and zeros, parallel and cascade structure, frequency response, group delay, phase delay. Signal transmission through LTI systems.

Control Systems: Basic control system components; block diagrammatic description, reduction of block diagrams. Open loop and closed loop (feedback) systems and stability analysis of these systems. Signal flow graphs and their use in determining transfer functions of systems; transient and steady state analysis of LTI control systems and frequency response. Tools and techniques for LTI control system analysis: root loci, Routh-Hurwitz criterion, Bode and Nyquist plots. Control system compensators: elements of lead and lag compensation, elements of Proportional-Integral-Derivative (PID) control. State variable representation and solution of state equation of LTI control systems.

Communications: Random signals and noise: probability, random variables, probability density function, autocorrelation, power spectral density. Analog communication systems: amplitude and angle modulation and demodulation systems, spectral analysis of these operations, superheterodyne receivers; elements of hardware, realizations of analog communication systems; signal-to-noise ratio (SNR) calculations for amplitude modulation (AM) and frequency modulation (FM) for low noise conditions. Fundamentals of information theory and channel capacity theorem. Digital communication systems: pulse code modulation (PCM), differential pulse code modulation (DPCM), digital modulation schemes: amplitude, phase and frequency shift keying schemes (ASK, PSK, FSK), matched filter receivers, bandwidth consideration and probability of error calculations for these schemes. Basics of TDMA, FDMA, CDMA and GSM.

Electromagnetics: Elements of vector calculus: divergence and curl; Gauss' and Stokes' theorems, Maxwell's equations: differential and integral forms. Wave equation, Poynting vector. Plane waves: propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart; impedance matching; S parameters, pulse excitation. Waveguides: modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Basics of propagation in dielectric waveguide and optical fibers. Basics of Antennas: Dipole antennas; radiation pattern; antenna gain.
{"url":"http://gate.iitkgp.ac.in/gate2014/syllabus.php?pap=2f53e6f3f2acb041a4e0737e58c45321","timestamp":"2014-04-20T01:00:18Z","content_type":null,"content_length":"17789","record_id":"<urn:uuid:1887a9cc-097a-49d6-a6fa-eaf1fb16d99d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Watkinsville Math Tutor

Find a Watkinsville Math Tutor

...I have expert level knowledge and teaching experience in most Microsoft Office components, holding a MOUS (Microsoft Office User Specialist) certification. I have taught computer networking classes including certification preparation for MCSE and MCA certifications. I have an A+ and Net+ certification from CompTia.
28 Subjects: including algebra 2, physics, precalculus, statistics

...I can help high school and college students who need help with algebra, geometry, pre-calculus and calculus. I can make mathematics easier than you think and help you make sense of it. I am a patient, effective, and knowledgeable math tutor.
8 Subjects: including algebra 1, algebra 2, calculus, geometry

...I have worked with students as young as four years and as old as adults in their fifties. I love to help students with their studies and take great satisfaction in creating success for my students. I do require 24 hour notice of cancellation; however, I will not request payment from a student who is not satisfied with my services.
33 Subjects: including algebra 1, ACT Math, geometry, prealgebra

...I have taught one section of Precalculus and two sections of Calculus for non-STEM majors at UGA. In addition, I have taught two sections of Elementary Statistics and one section of Precalculus at Piedmont College. It appears to me that the reason why most people have trouble with math is the fear fo...
20 Subjects: including prealgebra, differential equations, linear algebra, logic

...I'm currently a 4th year at the University of Georgia, and I work for America Reads as the math tutor. I very much enjoy everything that comes with tutoring, and I hope to teach college for a living some day. What I've had the most experience tutoring is College Algebra and Calculus.
20 Subjects: including calculus, geometry, GRE, reading

Nearby Cities With Math Tutor

Arnoldsville Math Tutors, Bishop, GA Math Tutors, Bogart Math Tutors, Bostwick, GA Math Tutors, Colbert, GA Math Tutors, Crawford, GA Math Tutors, Farmington, GA Math Tutors, High Shoals, GA Math Tutors, Lexington, GA Math Tutors, Maxeys Math Tutors, N High Shoals, GA Math Tutors, North High Shoals, GA Math Tutors, Stephens, GA Math Tutors, Winterville, GA Math Tutors, Woodville, GA Math Tutors
{"url":"http://www.purplemath.com/Watkinsville_Math_tutors.php","timestamp":"2014-04-16T07:52:25Z","content_type":null,"content_length":"23908","record_id":"<urn:uuid:fc6ab870-ae70-430b-bdab-9704658bf75d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
Gravity: the well of spacetimeliness

Early last century Einstein tackled a problem that everyone else thought was already solved: gravity. Ever since Newton published his universal law of gravitation in 1687, anyone with a slide rule and a half decent education could calculate the force of gravity on a falling body or a planet in orbit.

Newton saw gravity as an attractive force between all bits of matter. So every speck of matter in the universe is literally attracting every other speck of matter to it. The heavier a thing is, the more attractive it is, and the force always gets stronger up close. So you and planet Earth pull on each other equally, but because it's such a heavy lump of a planet, you're stuck to the Earth while even your most Newtonian attraction doesn't make it budge noticeably. But if a decent sized planet swung our way you'd soon see Earth moving towards the thing. So fickle.

Newton's gravity was beautifully simple, and his law had worked spectacularly well for calculating gravitational force for a couple of centuries. Obviously something had to be done!

Einstein tackled gravity from a completely different angle. Already on a roll with special relativity, where he showed that mass and energy were really two versions of the same thing (read more about E=mc^2), he couldn't get the idea that gravity wasn't any different from uniform acceleration out of his head. It started with an elevator in space.

Relativity, gravity and space elevators

Einstein's thinking went like this. If you're in a stationary elevator and you drop a ball (theoretical physicists never travel without one), the ball will fall to the floor of the lift at 9.8 metres per second per second — that's the rate of gravitational acceleration for everything that falls to Earth.

Now if some nefarious fiend cut the cables on your elevator, you and the ball would go into freefall. For a few seconds you'd both float in the elevator, because it's falling away beneath you just as fast as you're falling towards Earth. You'd experience the majesty of weightlessness, right up until your thigh bones rammed through your shoulders. But focus on the weightlessness, people!

Meanwhile someone else is in another elevator way out in space that's accelerating upwards at 9.8 metres per second per second (clearly for all his genius the man couldn't imagine a rocket, let alone a Tardis). When they drop their compulsory travel ball, it will fall to the floor of their speeding elevator exactly the same way yours did — even though there's no gravity in sight. And if their space elevator came to a halt, your mate and their ball would go into weightlessness that looks and feels exactly like your freefall, minus the imminent skeletal realignment.

Most of us would just leave an idea like that right where we found it, maybe dragging it out for the odd philosophical dinner party. But Einstein wasn't one for leaving well enough alone. The idea that there's no difference between the effect of gravity and the effect of uniform acceleration became known as the equivalence principle. And together with the idea of spacetime, it's the basis of Einstein's take on gravity — his theory of general relativity.

Spacetime is where (and when) it's at, man

In Newton's world it's a gravitational force that causes bits of matter to accelerate towards one another. Einstein put the acceleration down to the fact that matter warps spacetime. Spacetime isn't an actual thing, it's the geometry that the universe works in.
We're all used to space and to time — events always involve a where and a when. So the idea of four dimensions (the up/down, left/right and front/back of space, plus time) seems pretty natural.

But spacetime is more than just a bundling of coordinates. Space and time are part of the same deal: they mix and match and morph into one another. If you change space, you affect time. And the one thing that's guaranteed to mess with spacetime is matter. Every bit of matter in the universe is distorting the bit of spacetime it exists in. And distortions in spacetime affect the way matter and energy (like light) move through space and time.

It's the mass of matter that's the real spacetime bender, so the heavier matter is, the more it distorts (or warps, curves or bends) spacetime. And thanks to E=mc^2, the more energy a thing has, the more mass, so energy distorts spacetime too. Planets make big dents in spacetime, and massive things like stars cause enormous wells — like the one in the two-dimensional diagram at the start of the story.

It's hard to find a graphics program that renders four-dimensional spacetime accurately, but for our 2D-loving brains, this image shows how curved spacetime draws things towards massive objects. The rocket passing the star, and all who sail in her, will feel a strong pull towards it. That tug isn't a force, it's the acceleration you get when you scoot along a curved bit of spacetime. It's gravity. (Get your Play School craft gear and follow these instructions for a hands-on version of curved spacetime.)

Curvy spacetime is a pretty out there explanation for gravity, and in science the wackier the idea, the more hard core the maths you need to back it up. Einstein had the heavy duty equations to explain his theory for anyone who could follow them. And in most situations his calculations gave the same results as Newton's much simpler law. But Einstein's gravity did much more.

Relativistic gravity applied beautifully when things got very fast (near light speed), or very massive (star size), where Newton's law was a dud. And it accounted for why light bends near massive objects (it does pretty much the same thing the alien tennis ball does in the diagram above). Newtonian gravity only applied to objects with mass — matter, not energy like light.

More impressive still, general relativity predicted the effect that gravity has on the time part of spacetime: time literally slows down in curvy spacetime. The curvier the spacetime, the stronger the gravity, the slower the time. It's called gravitational time dilation, and along with the other predictions it's been well and truly checked off the 'things to prove' list.

In 2010 a couple of ridiculously accurate atomic clocks were set at slightly different heights, with one 33 centimetres higher than the other. The difference in the curvature of spacetime due to the Earth's mass was noticeable even over that ridiculously small distance — the higher clock will gain the equivalent of 90 billionths of a second over the next 80 years.

And there's another relativistic gravitational time dilation experiment that you conduct every time you use a GPS. GPS relies on satellites orbiting high above us, where Earth's gravity is weaker (spacetime is less warped by Earth's mass the further away you go). So the ultra-precise atomic clocks in the satellites run 45 millionths of a second faster per day than clocks here on the ground, deep in the Earth's gravitational well.
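As a rough back-of-envelope check of that claim (this calculation is added here and is not from the article): GPS positions come from signal travel times, so an uncorrected clock offset of 45 millionths of a second maps to a ranging error of roughly the speed of light times the offset.

```python
# Back-of-envelope: position error from an uncorrected GPS clock offset.
c = 299_792_458          # speed of light in m/s
dt = 45e-6               # 45 millionths of a second (one day's drift)
print(c * dt / 1000)     # ~13.5 km of ranging error per day
```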
If those millionths of a second weren't taken into account when the satellite signals were synced, your GPS coordinates would be out by more than 10 kilometres. Between that and the annoying voices GPS would never have taken off. Einstein's take on gravity has been incredibly successful from our dashboards to our space programs. It's gone where no Newtonian equation could go. And it might have been the end of the "what exactly is gravity?" question if it wasn't for one thing — it doesn't work or play well with the physics at the other end of the scale: the laws of quantum mechanics. The two theories can't both be right. And no doubt Einstein would be happy to know that thousands of research physicists have been tackling the mismatch for almost a century, and the hunt for a single explanation of the massive and the tiny is still on. More on that next time. Thanks to Prof David Jamieson from the School of Physics at The University of Melbourne. Published 25 April 2012
{"url":"http://www.abc.net.au/science/articles/2012/04/25/3488717.htm?site=science/basics","timestamp":"2014-04-23T18:13:44Z","content_type":null,"content_length":"46056","record_id":"<urn:uuid:fae34037-3149-46b8-a95b-64daa989bf4f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverse Skorokhod Embedding Problem

The Skorokhod Embedding Problem is well known and has many documented solutions in the literature. Now suppose we are given a Brownian stochastic basis (satisfying the usual hypotheses), a diffusion $X_t$ (with explicit SDE or transition semigroup, or infinitesimal generator), and a stopping time $\tau$ (let's say a.s. finite).

I was wondering if (or when) it is possible to find a process $Y_t$ such that $Y_1 = X_{\tau}$ and where the dynamics of $Y_t$ are explicitly known (i.e., an explicit SDE for $Y$, or its transition semigroup).

Best Regards

Tags: stochastic-processes, stochastic-calculus, pr.probability

Comments:

The question as it is asked is really general, so do not hesitate to give particular cases where a solution is attainable. – The Bridge Mar 1 '11 at 9:48

Maybe this could do the (theoretical) trick: define $\mathcal{G}_t = \mathcal{F}_t \vee \sigma(\tau)$, the augmented filtration of the original stochastic basis; then defining $Y_t = E[X_\tau \mid \mathcal{G}_t]$ would be a correct answer. Anyway, this is still theoretical, since it doesn't give an analytical form for the $Y$ process. – The Bridge Mar 1 '11 at 14:57
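One caveat worth making explicit (this note is a sanity check added alongside the thread, not part of it): the tower-property construction in the comment pins down $Y_1$ only when $X_\tau$ is measurable at time 1, so in general one would first rescale time so that the stopping time fits into $[0,1]$:

$$Y_1 = \mathbb{E}\big[X_\tau \,\big|\, \mathcal{G}_1\big] = X_\tau \quad \text{whenever } X_\tau \text{ is } \mathcal{G}_1\text{-measurable, e.g. if } \tau \le 1 \text{ a.s.}$$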
{"url":"http://mathoverflow.net/questions/56980/inverse-skorokhod-embedding-problem","timestamp":"2014-04-19T00:04:53Z","content_type":null,"content_length":"48856","record_id":"<urn:uuid:02a9be1c-f2bb-47c7-9cc9-00a380b91d67>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Braingle: 'How Far Can You Go?' Brain Teaser

How Far Can You Go?

Math brain teasers require computations to solve.

Puzzle ID: #31758
Category: Math
Submitted By: MarcM1098

Your assignment is to make a delivery from the depot to your base camp, 1600 miles away. The trip begins normally, but unfortunately, halfway to base camp, your supply truck breaks down. You have no way to call for help.

Luckily, in addition to medical supplies, the truck is carrying a Desert Patrol Vehicle (DPV) and two barrels of fuel. The DPV has a full 10 gallon tank, but to make room for the medical supplies, it can only carry one of the 45 gallon barrels at a time. While you can't transfer fuel between barrels, you can refill the tank from the barrels. Assume the DPV can get 12 miles per gallon, regardless of the load it carries.

How far can you go? Is it far enough to deliver the medical supplies and the DPV to your base camp?
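A quick feasibility check one can run before attempting the puzzle (added here as a sketch; it only bounds the answer, ignores the finer tank-capacity logistics, and deliberately stops short of the full solution): count the total fuel-miles available and compare them with the driving required if the second barrel must be shuttled part of the way.

```python
# Upper-bound check for the breakdown puzzle (logistics details ignored).
mpg = 12
fuel_gal = 10 + 45 + 45            # full tank plus two barrels
fuel_miles = mpg * fuel_gal        # total driving the fuel allows
remaining = 1600 / 2               # miles left to base camp after breakdown

# If both barrels are shuttled forward over the first x miles
# (forward, back, forward = 3x of driving) and one barrel is carried
# the last (remaining - x) miles, total driving is 2x + remaining.
x_max = (fuel_miles - remaining) / 2
print(fuel_miles, remaining, x_max)  # 1200 800.0 200.0
```

So the fuel allows exactly 1200 miles of driving against 800 miles of remaining distance, leaving at most 200 miles of two-barrel shuttling; whether the 10 gallon tank and the no-transfer rule let you realize that bound is the puzzle.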
{"url":"http://www.braingle.com/brainteasers/31758/how-far-can-you-go.html","timestamp":"2014-04-20T13:59:35Z","content_type":null,"content_length":"24992","record_id":"<urn:uuid:e6eead4b-76ae-47a4-be8c-3ba137a5d70a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
Sharp error bounds in approximating the Riemann-Stieltjes integral by a generalised trapezoid formula and applications

Keywords: Riemann-Stieltjes integral; trapezoid rule; integral inequalities; weighted integrals

1 Introduction

In [1], in order to approximate the Riemann-Stieltjes integral $\int_a^b f(t)\,du(t)$ by the generalised trapezoid formula (1.1), the authors considered the associated error functional and proved a sharp bound for it, provided that $f$ is of bounded variation on $[a,b]$ and $u$ is of $r$-$H$-Hölder type, that is, satisfies the condition
$$|u(t) - u(s)| \le H\,|t - s|^{r} \quad \text{for any } t, s \in [a,b],$$
where $H > 0$ and $r \in (0,1]$ are given.

The dual case, namely, when $f$ is of $q$-$K$-Hölder type and $u$ is of bounded variation, has been considered by the authors in [2], in which they obtained a corresponding bound.

The case where $f$ is monotonic and $u$ is of $r$-$H$-Hölder type, which provides a refinement of (1.3), and respectively the case where $u$ is monotonic and $f$ is of $q$-$K$-Hölder type, were considered by Cheung and Dragomir in [3], while the case where one function is of Hölder type and the other is Lipschitzian was considered in [4]. For other recent results in estimating the error for absolutely continuous integrands $f$ and integrators $u$ of bounded variation, see [5] and [6].

The main aim of the present paper is to investigate the error bounds in approximating the Stieltjes integral by a different generalised trapezoid rule than the one from (1.1), in which the value $u(x)$, $x \in [a,b]$, is replaced with the integral mean $\frac{1}{b-a}\int_a^b u(s)\,ds$. Applications in approximating the weighted integrals are also provided.

2 Representation results

We consider the error functional (2.1) obtained by approximating the Riemann-Stieltjes integral $\int_a^b f(t)\,du(t)$ by the generalised trapezoid formula built with the integral mean of $u$. Introducing the associated functions attached to $f$ and $u$, we observe the relation (2.2). The following representation result can be stated.

Theorem 1. Let $f, u : [a,b] \to \mathbb{R}$ be bounded on $[a,b]$ and such that the Riemann-Stieltjes integral $\int_a^b f(t)\,du(t)$ and the Riemann integral $\int_a^b u(t)\,dt$ exist. Then we have the identities (2.3).

Proof. Integrating the Riemann-Stieltjes integral by parts, the first equality in (2.3) is proved. The second and third identities are obvious by the relation (2.2). For the last equality, we use the fact that, for any bounded functions for which the Riemann-Stieltjes integral and the Riemann integral exist, we have the representation (2.5) (see, for instance, [7]). The proof is now complete. □

In the case where $u$ is an integral, the following identity can be stated.

Corollary 1. Let $p, h : [a,b] \to \mathbb{R}$ be continuous on $[a,b]$ and let $f$ be Riemann integrable. Then we have the identity (2.6).

Proof. Since $p$ and $h$ are continuous, the function defined by their integral is differentiable, with derivative equal to its integrand, for each $x$. Integrating by parts and then using the definition of the error functional in (2.1), we deduce the first part of (2.6). The second part of (2.6) follows by (2.3). □

Remark 1. In a particular case of Corollary 1, we obtain the equality (2.7).

3 Some inequalities for f convex

The following result concerning the nonnegativity of the error functional can be stated.

Theorem 2. If $u$ is monotonic nonincreasing and $f$ is such that the Riemann-Stieltjes integral $\int_a^b f(t)\,du(t)$ exists and the condition (3.1) holds, then the error functional is nonnegative. A sufficient condition for (3.1) to hold is that $f$ is convex on $[a,b]$.

Proof. The condition (3.1) is equivalent to a pointwise inequality holding at each point of the interval; combining it with the identity of Theorem 1 gives the stated nonnegativity. If $f$ is convex, the pointwise inequality holds, namely, the condition (3.1) is satisfied. □

Corollary 2. Let $p, h$ be continuous on $[a,b]$ and let $f$ be Riemann integrable. If the stated pointwise sign condition holds for any $t \in [a,b]$ and $f$ satisfies (3.1) (or, sufficiently, $f$ is convex on $[a,b]$), then the corresponding nonnegativity inequality holds.

We are now able to provide some new results.
Theorem 3. Assume that $p$ and $h$ are continuous and synchronous (asynchronous) on $[a,b]$, i.e.,
$$\big(p(t) - p(s)\big)\big(h(t) - h(s)\big) \ \ge\ (\le)\ 0 \quad \text{for any } t, s \in [a,b].$$
If $f$ satisfies (3.1) and is Riemann integrable on $[a,b]$ (or, sufficiently, $f$ is convex on $[a,b]$), then the inequality (3.5) holds.

Proof. We use the Čebyšev inequality (3.7), which holds for synchronous (asynchronous) functions $p$, $h$ and nonnegative $\alpha$ for which the involved integrals exist. Now, on applying the Čebyšev inequality (3.7) for a suitable choice of $\alpha$ and utilising the representation result (2.6), we deduce the desired inequality (3.5). □

We also have the following theorem.

Theorem 4. Assume that $f$ is Riemann integrable and satisfies (3.1) (or, sufficiently, $f$ is concave on $[a,b]$). Then, for $p$ continuous, we have the bound (3.8), with the quantities appearing there taken over $[a,b]$. In particular, a corresponding sup-norm bound follows.

Proof. Observe that the estimate follows directly from the representation of the error functional, and the inequality (3.8) is proved. Further, by the Hölder inequality, we also obtain the corresponding bound for conjugate exponents $\alpha, \beta > 1$ with $1/\alpha + 1/\beta = 1$, and the theorem is proved. □

Remark 2. The above result can be useful for providing error estimates in approximating the weighted integral $\int_a^b f(t)w(t)\,dt$ by the generalised trapezoid rule, provided $f$ satisfies (3.1) and is Riemann integrable (or, sufficiently, convex on $[a,b]$), while the weight is continuous on $[a,b]$. If the weight is nonnegative on $[a,b]$, then for some $f$ we also obtain a refined estimate.

Finally, we can state the following Jensen type inequality for the error functional.

Theorem 5. Assume $f$ is Riemann integrable and satisfies (3.1) (or, sufficiently, $f$ is convex on $[a,b]$), while $p$ is continuous. If the composing function is convex (concave), then the inequality (3.13) holds.

Proof. By the use of Jensen's integral inequality, we have (3.14). Since, by the identity (2.6), we have the stated representation, (3.14) is equivalent to the desired result (3.13). □

4 Sharp bounds via Grüss type inequalities

Due to the identity (2.3), the error functional can be represented through a Grüss type functional introduced in [8]; hence any sharp bound for the Grüss functional will be a sharp bound for the error functional. We can state the following result.

Theorem 6. Let $f, u : [a,b] \to \mathbb{R}$ be bounded functions on $[a,b]$.

(i) If there exist constants $n$, $N$ such that $n \le u(t) \le N$ for any $t \in [a,b]$, $u$ is Riemann integrable and $f$ is $K$-Lipschitzian ($K > 0$), then the bound (4.1) holds. The constant is best possible in (4.1).

(ii) If $f$ is of bounded variation and $u$ is $S$-Lipschitzian ($S > 0$), then (4.2) holds. The constant is best possible in (4.2).

(iii) If $f$ is monotonic nondecreasing and $u$ is $S$-Lipschitzian, then (4.3) holds. The constant is best possible in both inequalities.

(iv) If $f$ is monotonic nondecreasing and $u$ is of bounded variation and such that the Riemann-Stieltjes integral $\int_a^b f(t)\,du(t)$ exists, then (4.4) holds. The inequality (4.4) is sharp.

(v) If $f$ is continuous and convex on $[a,b]$ and $u$ is of bounded variation on $[a,b]$, then (4.5) holds. The constant is sharp (if the one-sided derivatives $f'_{+}(a)$ and $f'_{-}(b)$ are finite).

(vi) If $f$ is continuous and convex on $[a,b]$ and $u$ is monotonic nondecreasing on $[a,b]$, then (4.6) holds. The constants appearing in (4.6), including the constant 2, are best possible (if the one-sided derivatives $f'_{+}(a)$ and $f'_{-}(b)$ are finite).

Proof. The inequality (4.1) follows from the inequality (2.5) in [8] applied to the error functional, while (4.2) comes from (1.3) of [9]. The inequalities (4.3) and (4.4) follow from [7], while (4.5) and (4.6) are valid via the inequalities (2.8) and (2.1) from [10] applied to the functional. The details are omitted. □

If we consider the error functional in approximating the weighted integral $\int_a^b f(t)w(t)\,dt$ by the generalised trapezoid formula, namely the weighted counterpart of the functional above (see also (2.7)), then the following corollary provides various sharp bounds for its absolute value.

Corollary 3. Assume that $f$ and $u$ are Riemann integrable on $[a,b]$.

(i) If there exist constants $\gamma$, $\Gamma$ providing pointwise bounds for each $t \in [a,b]$, and $f$ is $K$-Lipschitzian on $[a,b]$, then (4.8) holds. The constant is best possible in (4.8).

(ii) If $f$ is of bounded variation and the weight is bounded for each $t \in [a,b]$, then (4.9) holds. The constant is best possible in (4.9).

(iii) If $f$ is monotonic nondecreasing and the weight is nonnegative on $[a,b]$, then (4.10) holds, where the relevant quantity is defined in Theorem 6. The constant is sharp in both inequalities.

(iv) If $f$ is monotonic nondecreasing and the weight satisfies the corresponding condition of Theorem 6, then (4.11) holds, where the relevant quantity is defined in Theorem 6.
The inequality (4.11) is sharp.

(v) If $f$ is continuous and convex on $[a,b]$ and the weight satisfies the corresponding condition, then (4.12) holds. The constant is sharp (if the one-sided derivatives $f'_{+}(a)$ and $f'_{-}(b)$ are finite).

(vi) If $f$ is continuous and convex on $[a,b]$ and the weight is nonnegative for $t \in [a,b]$, then (4.13) holds. The first inequality in (4.13) is sharp (if the one-sided derivatives are finite).

Proof. We only prove the first inequality in (4.13). Utilising the inequality (4.6) for the corresponding choice of the integrator, we get the required estimate. However, on integrating by parts, we obtain the stated representation. The rest of the inequality is obvious. □

Authors' contributions

PC and SSD have contributed to all parts of the article. Both authors read and approved the final manuscript.

Acknowledgements

Most of the work for this article was undertaken while the first author was at Victoria University, Melbourne, Australia.
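As a purely numerical illustration of the rule discussed in this paper (an added sketch: the specific formula below is one reading of "replace $u(x)$ by the integral mean" from the Introduction, not the paper's exact functional), one can compare a Riemann-Stieltjes integral with its generalised trapezoid value:

```python
# Sketch: compare int_a^b f du with the generalised trapezoid value
# built from the integral mean of u,
#   f(a) * (u_mean - u(a)) + f(b) * (u(b) - u_mean).
import numpy as np

def rs_integral(f, u, a, b, n=100000):
    # Riemann-Stieltjes sum: sum f(t_i) * (u(t_{i+1}) - u(t_i)).
    t = np.linspace(a, b, n + 1)
    return float(np.sum(f(t[:-1]) * np.diff(u(t))))

def gen_trapezoid(f, u, a, b, n=100000):
    t = np.linspace(a, b, n + 1)
    u_mean = float(np.trapz(u(t), t)) / (b - a)   # integral mean of u
    return f(a) * (u_mean - u(a)) + f(b) * (u(b) - u_mean)

f = np.cos
u = lambda t: t**2
a, b = 0.0, 1.0
exact = rs_integral(f, u, a, b)
approx = gen_trapezoid(f, u, a, b)
print(exact, approx, abs(exact - approx))  # the last number is the error
```

The difference printed on the last line is the quantity the paper's error functional measures and bounds.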
{"url":"http://www.journalofinequalitiesandapplications.com/content/2013/1/53","timestamp":"2014-04-21T07:03:48Z","content_type":null,"content_length":"196235","record_id":"<urn:uuid:5b458a56-c09c-42f7-80b5-b0c0b506452e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Dynamic Analysis of Partially Embedded Structures Considering Soil-Structure Interaction in Time Domain

Mathematical Problems in Engineering, Volume 2011 (2011), Article ID 534968, 23 pages

Research Article

School of Civil Engineering, University of Tehran, Tehran, Iran

Received 7 June 2011; Revised 8 August 2011; Accepted 12 August 2011

Academic Editor: Delfim Soares Jr.

Copyright © 2011 Sanaz Mahmoudpour et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Analysis and design of structures subjected to arbitrary dynamic loadings, especially earthquakes, have been studied during past decades. In practice, the effects of soil-structure interaction on the dynamic response of structures are usually neglected. In this study, the effect of soil-structure interaction on the dynamic response of structures has been examined. The substructure method using the dynamic stiffness of the soil is used to analyze the soil-structure system. A coupled model based on the finite element method and the scaled boundary finite element method is applied. The finite element method is used to analyze the structure, and the scaled boundary finite element method is applied in the analysis of the unbounded soil region. Due to the analytical solution in the radial direction, the radiation condition is satisfied exactly. The material behavior of soil and structure is assumed to be linear. The soil region is considered as a homogeneous half-space. The analysis is performed in the time domain. A computer program is prepared to analyze the soil-structure system. Comparing the results with those in the literature shows the exactness and competency of the proposed method.

1. Introduction

In a dynamic soil-structure interaction problem, the structure is supported by an unbounded soil medium subjected to a dynamic load like an earthquake. The dynamic response of the structure is affected by the interaction between the structure, foundation, and soil. In dynamic soil-structure interaction analysis, usually the higher modes of the structure are affected significantly by soil-structure interaction (SSI) effects. As the influence of higher modes on the seismic response of flexible, tall structures with small mass remains small, the SSI effects are negligible for these structures. On the other hand, for stiff and massive structures on relatively soft ground, the effects of SSI are noticeable and lead to an increase in the natural period and a change in the damping ratio of the system [1–3].

Effects of interaction can be expressed as inertial interaction and kinematic interaction. The interaction effect associated with the stiffness of the structure is termed kinematic interaction, and the corresponding mass-related effect is called inertial interaction [4]. Jennings and Bielak [5], Veletsos and Nair [6], and Bielak [7] studied the effects of inertial interaction, and Todorovska and Trifunac [8], Aviles and Perez-Rocha [9], Betti et al. [10], and Aviles et al. [11] studied the effects of kinematic interaction.

In dynamic soil-structure interaction problems, analysis methods can be classified into three groups [12]: (1) time domain and frequency domain analysis methods, (2) substructure method and direct method, (3) rigorous methods and approximate simple physical models.
Time domain methods are capable of studying the nonlinear behavior of the soil medium, the effects of pore water, and nonlinear conditions along the interface between soil and structure. In the frequency domain, the solving procedure is easier than in the time domain, but it can deal only with linear aspects.

In the substructure method, the whole medium is represented by an impedance matrix which can be attached to the dynamic stiffness of the structure. This hypothesis renders the soil-structure interaction problem simpler and reduces the analysis effort. In the direct method, the soil region near the structure is modeled directly; hence, complex geometry, variations of soil properties, and nonlinear behavior of the medium can be considered. As mentioned, in this method the unbounded soil medium is replaced by a bounded region with artificial boundaries. It should be considered that, in the numerical modeling of unbounded media, the boundaries should be expressed so that the radiation condition is satisfied exactly and the wave energy dissipates in the medium. Several studies have been performed, and methods to impose a wave-absorbing boundary condition have been proposed [13–17].

Simple physical models can be applied to help the analyst identify the key parameters of the dynamic system for preliminary design or to investigate alternative designs. They are used to check the results of more rigorous procedures determined with sophisticated computer programs [12].

To solve soil-structure interaction problems, several analytical and numerical methods have been developed. Applying analytical methods is limited to simple structures and uniform soil media, while numerical methods such as the finite element method (FEM), the infinite element method, and the boundary element method (BEM) are widely used. The FEM is well suited for nonhomogeneous, anisotropic materials of arbitrarily shaped structures with nonlinear behavior [18]. BE methods require a fundamental solution satisfying the governing differential equations exactly [19–22]. This analytical solution is often complicated, exhibiting singularities. BE methods also exhibit certain shortcomings in modeling nonhomogeneous soil media. Cone models have been used to determine the dynamic stiffness of foundations and the seismic effective foundation input motion as an alternative to rigorous boundary-element solutions [23–30]. The concept of the infinite element method was introduced by Ungless [31] and Bettess [32, 33]. The concepts and formulation procedure in this method are similar to those of FEM methods.

The scaled boundary finite element method (SBFEM), which is a semianalytical computational procedure, can be used for modeling bounded and unbounded media, considering nonhomogeneous and incompressible material properties. This method has been applied to soil-structure interaction problems both in the time and frequency domains by Wolf and Song [34, 35].

Combined models are used in soil-structure interaction analysis. The most widely used combined model is the coupled finite element and boundary element method, in both time and frequency domains [36–38]. Qian et al. [39], Estorff and Prabucki [40], and Israil and Banerjee [41] used the coupled FEM-BEM model for the analysis of homogeneous media. Zhang et al. presented the analysis in the time domain for layered soils [42]. Tanikulu et al. extended the BEM formulation for infinite nonhomogeneous media [43]; they could model only three different layers.
Coupled finite element-infinite element models have been used in dynamic soil-structure interaction analysis [44–46]. A coupled finite element/boundary element/scaled boundary finite element model [47] has been used to solve soil-structure interaction problems.

Jeremić et al. [48] have studied the effects of nonuniformity of soils in large structures, where they developed various models to simulate wave propagation through soils with elastoplastic behavior. Ghannad and Mahsuli [49] studied the effect of foundation embedment using a simplified single degree of freedom model with idealized bilinear behavior for the structure, and considered the soil as a homogeneous half-space, using a discrete model based on cone model concepts. The foundation is modeled as a rigid cylinder embedded in the soil.

The scaled boundary finite element method is a boundary-element method based on finite elements. This method combines the advantages of the boundary and finite element methods. It also combines the advantages of the numerical and analytical procedures. This method can be applied in both frequency and time domains [35]. This method is a semianalytical procedure which transforms the partial differential equation to an ordinary differential equation using a virtual work statement as in finite elements. In this method, no fundamental solution is required, and no singular integrals occur. Only the boundary is discretized, which results in a reduction of the spatial discretization by one. The analytical solution in the radial direction permits the boundary condition at infinity to be satisfied exactly [35]. A computer program named SIMILAR based on SBFEM is presented by Wolf and Song [50]. This program calculates the dynamic stiffness of the unbounded media in frequency and time domains.
The equation of motion of the structure in total displacements in time domain is formulated as follows [35]: Considering damping matrix of the structure, , the above equation is written as follows: where , , and are the acceleration, velocity, and displacement vectors of the structure. Subscripts are used to denote the nodes of the discretized system. As shown in Figure 1, nodes on the foundation structure interface are denoted by , and the remaining nodes related to the structure are denoted by . denotes the interaction forces of the unbounded soil acting on the interface nodes of soil-structure system. The interaction forces of the soil depend upon the motion relative to the effective foundation input motion. The interaction force-displacement relationship in the time domain is formulated as: where is called the displacement unit impulse response matrix in time domain. The interaction force-displacement relationship can alternatively be written as in which is the acceleration unit-impulse response matrix in time domain. Superscript denotes the unbounded medium. For an unbounded medium initially at rest, we have Substituting (2.6) in (2.1) results the equation of motion in total displacement [35]: in which is the ground motion acceleration induced to the base of the structure during an earthquake. In this paper, the equation of motion of soil-structure system in relative displacement is used As can be seen in (2.7), the unit impulse response matrix should be obtained a priori. The dynamic load on the right hand side of the equation is calculated a posteriori. In the next section, the unit impulse response matrix is obtained applying scaled boundary finite element method [35]. 3. Obtaining Acceleration Unit-Impulse Response Matrix The force displacement relationship in the frequency domain could be written as follows [35]: where and are force and displacement in frequency domain. is denoted as acceleration dynamic stiffness matrix in the frequency domain. The relationship between the acceleration and displacement dynamic stiffness matrices is The scaled boundary finite element equation in dynamic stiffness for the unbounded medium is formulated as follows [35]: in which , and are coefficient matrices in the Scaled Boundary Finite Element method introduced in [35]. Dividing (3.3) by and substituting (3.2) yields [35] Applying the inverse Fourier transformation to (3.4) results in The positive definite coefficient matrix [] is decomposed by Cholesky’s method as follows: where is an upper-triangular matrix. Substituting (3.6) in (3.5) and premultiplying by and postmultiplying by yields where And the coefficient matrices are Once obtained from (3.7), the acceleration unit-impulse response matrix is obtained as In this paper, is obtained using the program SIMILAR presented by Jeremić et al. [48]. 4. Calculating Dynamic Load The dynamic load on the right hand side of (2.7) could be written as follows: where represents the dynamic load due to ground motion, respectively, and affects the total nodes of the system, while is the dynamic load related to interaction effects and affects the nodes on the foundation-structure interface denoted by in Figure 2. The dynamic load vector on the right hand side of (2.7) could be written as follows: Comparing (4.1) and (4.2) results where is the total mass matrix of structure, and is the acceleration unit-impulse response matrix. The dynamic load could be written in discrete form as follows [51]: where is the dynamic load at th step. 
Here $\{\ddot{u}\}_n$ and $\{\dot{u}\}_n$ are the acceleration and velocity at the corresponding time step. In this paper, an iterative method is adopted to calculate the dynamic load. It is supposed that the acceleration is constant within each time step, so (4.5) can be simplified accordingly. For the first time step, the dynamic load is calculated assuming a zero initial acceleration. Equation (2.7) is solved applying the Newmark scheme, and the acceleration, velocity, and displacement vectors are obtained. The dynamic load is then recalculated using the computed acceleration. Equation (2.7) is solved again, and the magnitudes of acceleration, velocity, and displacements are obtained. This procedure is iterated until convergence is achieved. In this study, the tolerance between two successive iterations is taken as 0.001. The above procedure is outlined in Table 1 (a schematic sketch of the loop is also given below, after Example 5.1). According to the above algorithm, a FORTRAN program is prepared to examine the dynamic behavior of the structure considering interaction effects. Numerical examples are presented in the next section.

5. Examples

2D frames on soft ground have been analyzed applying coupled scaled boundary finite element-finite element models. The analysis is performed in the time domain, and the material behavior of soil and structure is assumed to be linear. The soil-structure system is subjected to sine excitations and to the El Centro and Tabas ground motions. The displacement and base shear are calculated. Base shear is assumed to be the algebraic summation of the horizontal forces induced in the structure. Results are compared with those obtained by the cone model.

Example 5.1. As the first example, the frames shown in Figures 2 and 3 are used in the analysis. The damping ratio of the structure is considered as five percent of the mass matrix. The structure properties and the soil properties are assumed as given.

Firstly, a dynamic analysis is performed (ignoring SSI effects). The natural frequencies and periods of the structures are calculated and presented in Tables 3 and 4. Then the soil-structure system is subjected to sine excitations with unit amplitude. The loading frequency is considered to be variable and is selected so that it is close to the natural frequencies of the structure. The harmonic load used in the analysis can be expressed as $P(t) = \sin(2\pi t/T)$, where $T$ is the period of the sine function.

The SSI effect on the dynamic response of the structure is examined. Figures 4-16 show the results obtained in the analysis. Figures 17-20 show the variation of displacement and base shear versus the period of the dynamic load. The maximums (peaks) represent the magnitudes obtained with loading periods close to the first and the second natural periods of the structure. As can be seen, considering SSI effects, the maximum displacement and base shear are decreased. In Tables 5 and 6, the percentage of the relative reduction of displacement and base shear due to the first and second modes is presented. It is observed that considering the SSI effect leads to a reduction in displacement and base shear. The reduction in displacement and base shear is more significant when the loading frequency is close to the natural frequencies of the structure. As shown in Tables 5 and 6, the percentage of relative reduction in displacement and base shear is more significant for the second mode than for the first one. It could be concluded that the SSI effect is more pronounced for higher modes of the structure. Comparing the results presented in Tables 5 and 6 shows that the relative reduction is more significant for frame no. 2. It could be concluded that SSI effects are more significant for stiff structures.
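Before the remaining examples, here is a minimal Python sketch of the iterative Newmark scheme of Section 4 (an editorial illustration, not the authors' FORTRAN code; the single-degree-of-freedom setting, the placeholder impulse response, the sign convention of the interaction force, and all numerical values are simplifying assumptions):

```python
# Sketch of the iterative Newmark scheme of Section 4 for one DOF:
# m*a + c*v + k*u = p(t) + r(t), where the interaction force r is a
# discrete convolution of the acceleration unit-impulse response M_inf.
import numpy as np

m, c, k = 1.0, 0.05, 40.0            # assumed SDOF properties
dt, nsteps, tol = 0.01, 1000, 1e-3   # tolerance as in the paper (0.001)
beta, gamma = 0.25, 0.5              # average-acceleration Newmark

M_inf = np.exp(-np.arange(nsteps) * dt)               # placeholder impulse response
p = np.sin(2 * np.pi * np.arange(nsteps) * dt / 0.5)  # unit-amplitude sine load

u = np.zeros(nsteps); v = np.zeros(nsteps); a = np.zeros(nsteps)
keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)

for n in range(1, nsteps):
    a_new = a[n - 1]                  # initial guess: previous acceleration
    for _ in range(50):               # iterate until interaction force settles
        # r_n = -dt * sum_j M_inf[n-j] * a_j, including the guessed a_n term
        r = -dt * (np.dot(M_inf[1:n + 1][::-1], a[:n]) + M_inf[0] * a_new)
        peff = (p[n] + r
                + m * (u[n-1]/(beta*dt**2) + v[n-1]/(beta*dt)
                       + a[n-1]*(1/(2*beta) - 1))
                + c * (gamma/(beta*dt)*u[n-1] + (gamma/beta - 1)*v[n-1]
                       + dt*(gamma/(2*beta) - 1)*a[n-1]))
        u_n = peff / keff             # Newmark displacement update
        a_next = ((u_n - u[n-1])/(beta*dt**2) - v[n-1]/(beta*dt)
                  - a[n-1]*(1/(2*beta) - 1))
        converged = abs(a_next - a_new) < tol
        a_new = a_next
        if converged:
            break
    u[n], a[n] = u_n, a_new
    v[n] = v[n-1] + dt*((1 - gamma)*a[n-1] + gamma*a[n])

print(u[-1])  # final displacement of the sketch model
```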
Example 5.2. In the second example, the frames are subjected to the El Centro ground motion. It is worth noting that the predominant period of the El Centro ground motion is 0.56 (sec), which is close to the first natural period of frame no. 2. The results are given in Figures 21-24. As can be observed in Tables 7 and 8, the relative reduction in displacement and base shear is more significant for frame no. 2. It can be concluded that when the predominant period of the earthquake is close to a natural period of the structure, considering SSI effects leads to a more significant reduction, and the dynamic response of the structure is more affected.

Example 5.3. In the third example, the frames are subjected to the Tabas ground motion. The predominant period of the Tabas ground motion is 0.2 (sec), which is close to the second natural period of frame no. 2. The results are given in Figures 25-28. As observed, considering the SSI effect has a pronounced influence on the results (Tables 9 and 10).

6. Conclusion

Analysis and design of structures subjected to arbitrary dynamic loadings, especially earthquakes, have been studied during past decades. In practice, the effects of soil-structure interaction on the dynamic response of structures are usually neglected. In this paper, a coupled scaled boundary finite element-finite element model is presented to examine the dynamic response of the structure considering soil-structure interaction. The substructure method is used to analyze the soil-structure interaction problem. The analysis is performed in the time domain. The material behavior of soil and structure is assumed to be linear. The scaled boundary finite element method is used to calculate the dynamic stiffness of the soil, and the finite element method is applied to analyze the dynamic behavior of the structure. 2D frames have been analyzed using the proposed model. The results are compared with those obtained by the cone model.

Considering the SSI effect leads to a reduction in displacement and base shear. When the system is subjected to sine excitation, the reduction in displacement and base shear is more significant when the loading frequency is close to the natural frequencies of the structure. The reduction in displacement and base shear is more significant for the second mode than for the first one; thus, considering SSI in the dynamic analysis of the structure affects the higher modes more significantly. It is observed that when the soil-structure system is subjected to an earthquake whose predominant period is close to a natural period of the structure, considering SSI effects leads to a more significant reduction, and the dynamic response of the structure is more affected. It is obvious that considering SSI effects results in a more effective design without decreasing safety.

References

1. J. E. Luco, “Soil-structure interaction and identification of structural models,” in Proceedings of the ASCE Specialty Conference in Civil Engineering and Nuclear Power, Tenn, USA, 1980.
2. J. P. Wolf, Dynamic Soil-Structure Interaction, Prentice Hall, Englewood Cliffs, NJ, USA, 1985.
3. J. Avilés and L. E. Pérez-Rocha, “Evaluation of interaction effects on the system period and the system damping due to foundation embedment and layer depth,” Soil Dynamics and Earthquake Engineering, vol. 15, no. 1, pp. 11–27, 1996.
4. R. W. Clough and J.
Penzien, Dynamics of Structures, McGraw-Hill, New York, NY, USA, 2003.
5. P. C. Jennings and J. Bielak, “Dynamics of building-soil interaction,” Bulletin of the Seismological Society of America, vol. 63, pp. 9–48, 1973.
6. A. S. Veletsos and V. V. D. Nair, “Seismic interaction of structures on hysteretic foundations,” Journal of the Structural Division, vol. 101, no. 1, pp. 109–129, 1975.
7. J. Bielak, “Dynamic behavior of structures with embedded foundations,” Earthquake Engineering and Structural Dynamics, vol. 3, no. 3, pp. 259–274, 1975.
8. M. I. Todorovska and M. D. Trifunac, “The system damping, the system frequency and the system response peak amplitudes during in-plane building-soil interaction,” Earthquake Engineering and Structural Dynamics, vol. 21, no. 2, pp. 127–144, 1992.
9. J. Aviles and L. E. Perez-Rocha, “Effects of foundation embedment during building-soil interaction,” Earthquake Engineering and Structural Dynamics, vol. 27, no. 12, pp. 1523–1540, 1998.
10. R. Betti, A. M. Abdel-Ghaffar, and A. S. Niazy, “Kinematic soil-structure interaction for long-span cable-supported bridges,” Earthquake Engineering and Structural Dynamics, vol. 22, no. 5, pp. 415–430, 1993.
11. J. Aviles, M. Suarez, and F. J. Sanchez-Sesma, “Effects of wave passage on the relevant dynamic properties of structures with flexible foundation,” Earthquake Engineering and Structural Dynamics, vol. 31, pp. 139–159, 2002.
12. J. P. Wolf, Foundation Vibration Analysis Using Simple Physical Models, Prentice Hall, Englewood Cliffs, NJ, USA, 1994.
13. J. Lysmer and R. L. Kuhlemeyer, “Finite dynamic model for infinite media,” Journal of Engineering Mechanics Division, vol. 95, no. 4, pp. 859–877, 1969.
14. W. D. Smith, “A nonreflecting plane boundary for wave propagation problems,” Journal of Computational Physics, vol. 15, no. 4, pp. 492–503, 1974.
15. R. Clayton and B. Engquist, “Absorbing boundary conditions for acoustic and elastic wave equations,” Bulletin of the Seismological Society of America, vol. 67, pp. 1529–1540, 1977.
16. W. White, S. Valliappan, and I. K. Lee, “Unified boundary for finite dynamic models,” Journal of Engineering Mechanics, vol. 103, no. 5, pp. 949–964, 1977.
17. Z. P. Liao and H. L. Wong, “A transmitting boundary for the numerical simulation of elastic wave propagation,” International Journal of Soil Dynamics and Earthquake Engineering, vol. 3, no. 4, pp. 174–183, 1984.
18. F. Medina and R. L. Taylor, “Finite element techniques for problems of unbounded domains,” International Journal for Numerical Methods in Engineering, vol. 19, no. 8, pp. 1209–1226, 1983.
19. G. D. Manolis and D. E. Beskos, Boundary Element Methods in Elastodynamics, Unwin Hyman, London, UK, 1988.
20. J. Dominguez, Boundary Elements in Dynamics, Computational Mechanics Publications, Southampton, UK, 1993.
21. D. E. Beskos, “Boundary element methods in dynamic analysis: part II (1986–1996),” Applied Mechanics Reviews, vol. 50, no. 3, pp. 149–197, 1997.
22. W. S. Hall and G.
Oliveto, Boundary Element Methods for Soil-Structure Interaction, Kluwer Academic, Dodrecht, The Netherlands, 2003. 23. J. W. Meek and J. P. Wolf, “Cone models for homogeneous soil. I,” Journal of the Geotechnical Engineering Division, vol. 118, no. 5, pp. 667–685, 1992. View at Publisher · View at Google Scholar · View at Scopus 24. J. W. Meek and J. P. Wolf, “Cone models for soil layer on rigid rock. II,” Journal of the Geotechnical Engineering Division, vol. 118, no. 5, pp. 686–703, 1992. View at Publisher · View at Google Scholar · View at Scopus 25. J. W. Meek and J. P. Wolf, “Cone models for nearly incompressible soil,” Earthquake Engineering and Structural Dynamics, vol. 22, no. 8, pp. 649–663, 1993. View at Publisher · View at Google Scholar · View at Scopus 26. J. W. Meek and J. P. Wolf, “Cone models can present the elastic half-space,” Earthquake Engineering and Structural Dynamics, vol. 22, no. 9, pp. 759–771, 1993. View at Publisher · View at Google Scholar · View at Scopus 27. J. W. Meek and J. P. Wolf, “Cone models for embedded foundation II,” Journal of the Geotechnical Engineering Division, vol. 120, no. 1, pp. 60–80, 1994. View at Publisher · View at Google Scholar · View at Scopus 28. J. P. Wolf and J. W. Meek, “Cone models for a soil layer on a flexible rock half-space,” Earthquake Engineering and Structural Dynamics, vol. 22, no. 3, pp. 185–193, 1993. View at Publisher · View at Google Scholar · View at Scopus 29. J. P. Wolf and J. W. Meek, “Rotational cone models for a soil layer on flexible rock half-space,” Earthquake Engineering and Structural Dynamics, vol. 23, no. 8, pp. 909–925, 1994. View at Publisher · View at Google Scholar · View at Scopus 30. J. P. Wolf and J. W. Meek, “Dynamic stiffness of foundation on layered soil half-space using cone frustums,” Earthquake Engineering and Structural Dynamics, vol. 23, no. 10, pp. 1079–1095, 1994. View at Publisher · View at Google Scholar · View at Scopus 31. R. F. Ungless, An infinite element, M.S. thesis, University of British Columbia, 1973. 32. P. Bettess, “Infinite elements,” International Journal for Numerical Methods in Engineering, vol. 11, no. 1, pp. 53–64, 1977. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 33. P. Bettes, Infinite Elements, Penshaw Press, Sunderland, UK, 1992. 34. J. P. Wolf and C. Song, Finite Element Modeling of Unbounded Media, Wiley, UK, 1996. 35. J. P. Wolf, The Scaled Boundary Finite Element Method, Wiley, UK, 2003. 36. D. L. Karabalis and D. E. Beskos, “Dynamic response of 3-D flexible foundations by time domain BEM and FEM,” International Journal of Soil Dynamics and Earthquake Engineering, vol. 4, no. 2, pp. 91–101, 1985. View at Publisher · View at Google Scholar · View at Scopus 37. O. V. Estorff, “Dynamic response of elastic blocks by time domain BEM and FEM,” Computers and Structures, vol. 38, no. 3, pp. 289–300, 1991. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 38. F. Guan and M. Novak, “Transient response of an elastic homogeneous half-space to suddenly applied rectangular loading,” Transactions of the ASME, vol. 61, no. 2, pp. 256–263, 1994. View at Publisher · View at Google Scholar · View at Scopus 39. J. Qian, L. G. Tham, and Y. K. Cheung, “Dynamic cross-interaction between flexible surface footings by combined BEM and FEM,” Earthquake Engineering and Structural Dynamics, vol. 25, no. 5, pp. 509–526, 1996. View at Publisher · View at Google Scholar · View at Scopus 40. O. Von Estorff and M. 
J. Prabucki, “Dynamic response in the time domain by coupled boundary and finite elements,” Computational Mechanics, vol. 6, no. 1, pp. 35–46, 1990. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 41. A. S. M. Israil and P. K. Banerjee, “Effects of geometrical and material properties on the vertical vibration of three-dimensional foundations by BEM,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 14, no. 1, pp. 49–70, 1990. View at Publisher · View at Google Scholar · View at Scopus 42. X. Zhang, J. L. Wegner, and J. B. Haddow, “Three-dimensional dynamic soil-structure interaction analysis in the time domain,” Earthquake Engineering and Structural Dynamics, vol. 28, no. 12, pp. 1501–1524, 1999. View at Publisher · View at Google Scholar · View at Scopus 43. A. H. Tanrikulu, H. R. Yerli, and A. K. Tanrikulu, “Application of the multi-region boundary element method to dynamic soil-structure interaction analysis,” Computers and Geotechnics, vol. 28, no. 4, pp. 289–307, 2001. View at Publisher · View at Google Scholar · View at Scopus 44. C. B. Yun, D. K. Kim, and J. M. Kim, “Analytical frequency-dependent infinite elements for soil-structure interaction analysis in two-dimensional medium,” Engineering Structures, vol. 22, no. 3, pp. 258–271, 2000. View at Publisher · View at Google Scholar · View at Scopus 45. D. K. Kim and C. B. Yun, “Time domain soil-structure interaction analysis in two dimensional medium based on analytical frequency-dependent infinite elements,” International Journal for Numerical Methods in Engineering, vol. 47, no. 7, pp. 1241–1261, 2000. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 46. D. K. Kim and C. B. Yun, “Earthquake response analysis in the time domain for 2D soil-structure systems using analytical frequency-dependent infinite elements,” International Journal for Numerical Methods in Engineering, vol. 58, no. 12, pp. 1837–1855, 2003. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 47. M. C. Genes and S. Kocak, “Dynamic soil-structure interaction analysis of layered unbounded media via a coupled finite element/boundary element/scaled boundary finite element model,” International Journal for Numerical Methods in Engineering, vol. 62, no. 6, pp. 798–823, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus 48. B. Jeremić, G. Jie, M. Preisig, and N. Tafazzoli, “Time domain simulation of soil-foundation-structure interaction in non-uniform soils,” Earthquake Engineering and Structural Dynamics, vol. 38, no. 5, pp. 699–718, 2009. View at Publisher · View at Google Scholar · View at Scopus 49. M. Mahsuli and M. A. Ghannad, “The effect of foundation embedment on inelastic response of structures,” Earthquake Engineering and Structural Dynamics, vol. 38, no. 4, pp. 423–437, 2009. View at Publisher · View at Google Scholar · View at Scopus 50. J. P. Wolf and Ch. Song, Finite-Element Modelling of Unbounded Media, John Wiley & Sons, Chichester, UK, 1996. 51. L. Lehmann, Wave Propagation in Infinite Domains with Applications to Structure Interaction, Springer, Berlin, Germany, 2007.
{"url":"http://www.hindawi.com/journals/mpe/2011/534968/","timestamp":"2014-04-18T07:41:59Z","content_type":null,"content_length":"297451","record_id":"<urn:uuid:2d78b7fa-c14f-45fd-b35f-e2979f4e7952>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: rigor and intuition
Vladimir Sazonov V.Sazonov at csc.liv.ac.uk
Tue Feb 12 08:50:09 EST 2002

Matthew Frank wrote:

> In response to the quote from Kit Fine that
> > > when there is a clash between intuition and rigour,
> > > when one's sense of rigour prevents one from saying
> > > what, from an intuitive point of view, it seems that one can say,
> > > then it is rigour and not intuition that should give way.

I cannot agree. Without rigor there is no mathematics at all.

> Arnon Avron responded that
> > Both logic and experience have taught me that whenever there is
> > a clash between intuition and rigour it means that something is
> > wrong with the intuition. Rigour simply can't be wrong.

Completely agree! Formalisms (which embody mathematical rigor) are our instruments, mechanisms (for thought) which cannot be wrong in principle. They may only be more or less useful, convenient, or intuitive.

> I think both of the above misidentify the conflict. Rigor and intuition
> need not be in conflict; one should not evaluate either rigor or intuition
> as right or wrong.

Some conflict is inevitable, as is shown by the example of the quite intuitive Axiom of Choice leading to non-measurable sets and other counterintuitive consequences.

> Rather, given a rigorous formal system and some intuitions, one should ask
> whether the formal system articulates or accords well with the intuitions,
> and whether the intuitions are useful to us in finding proofs in the
> formal system.

We should always try, but, actually, there is no hope for a complete harmony between the Procrustean bed of a formalism and the intuition which we try to formalize. Moreover, after formalizing, the initial intuition may change radically. We can even forget what the initial intuition was before formalization, because it usually disappears and afterwards can exist only TOGETHER with, or DUE TO, the formalization. What was our intuition about natural numbers before studying Peano arithmetic? Who can recall? Did we think from the beginning of potential infinity, or did we imagine a biggest natural number before a teacher told us (we being only obedient children) how we MUST think? Vice versa, any reasonable mathematical formalism always has some "underlying" intuition without which it would be impossible to use it.

> There is also a question about rigor and intuition in proofs, somewhat
> different from the question previously being discussed. Proofs may be
> more or less rigorous and more or less intuitive. Some prominent
> mathematicians (notably the geometers Gromov and Thurston) have
> preferred to find intuitive proofs and present their proofs intuitively,
> and think there are more important things to do than worry about how to
> present the proofs rigorously. This is perhaps a more interesting example
> of rigor giving way.

I agree with Vladik Kreinovich (Hi, Vladik!) who already replied to this. I can only repeat that without rigor (formalisms) there is no mathematics.

> We--and especially we who are interested in foundations of math--should
> nurture our intuitions. This could be an important function for
> foundations of math: to help articulate various mathematical intuitions
> in such a way as to make them more useful in mathematical practice.

Yes, of course! But always remembering that mathematical intuition cannot exist in a pure form, only with and due to formalisms. Otherwise it is not a mathematical intuition.
> --Matt

Vladimir Sazonov
V.Sazonov at csc.liv.ac.uk
Department of Computer Science    tel: (+44) 0151 794-6792
University of Liverpool           fax: (+44) 0151 794 3715
Liverpool L69 7ZF, U.K.
http://www.csc.liv.ac.uk/~sazonov
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-February/005220.html","timestamp":"2014-04-20T01:14:11Z","content_type":null,"content_length":"6330","record_id":"<urn:uuid:cce193b0-e3a1-4137-aa30-b23460ae8e5c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
comp.lang.c FAQ list · Question 14.5

Q: What's a good way to check for ``close enough'' floating-point equality?

A: Since the absolute accuracy of floating point values varies, by definition, with their magnitude, the best way of comparing two floating point values is to use an accuracy threshold which is relative to the magnitude of the numbers being compared. Rather than

	double a, b;
	...
	if(a == b)	/* WRONG */

use something like

	#include <math.h>

	if(fabs(a - b) <= epsilon * fabs(a))

where epsilon is a value chosen to set the degree of ``closeness'' (and where you know that a will not be zero). The precise value of epsilon may still have to be chosen with care: its appropriate value may be quite small and related only to the machine's floating-point precision, or it may be larger if the numbers being compared are inherently less accurate or are the results of a chain of calculations which compounds accuracy losses over several steps. (Also, you may have to make the threshold a function of b, or of both a and b.)

A decidedly inferior approach, not generally recommended, would be to use an absolute threshold:

	if(fabs(a - b) < 0.001)	/* POOR */

Absolute ``fuzz factors'' like 0.001 never seem to work for very long, however. As the numbers being compared change, it's likely that two small numbers that should be taken as different happen to be within 0.001 of each other, or that two large numbers, which should have been treated as equal, differ by more than 0.001. (And, of course, the problems merely shift around, and do not go away, when the fuzz factor is tweaked to 0.005, or 0.0001, or any other absolute number.)

Doug Gwyn suggests using the following ``relative difference'' function. It returns the relative difference of two real numbers: 0.0 if they are exactly the same, otherwise the ratio of the difference to the larger of the two.

	#define Abs(x)    ((x) < 0 ? -(x) : (x))
	#define Max(a, b) ((a) > (b) ? (a) : (b))

	double RelDif(double a, double b)
	{
		double c = Abs(a);
		double d = Abs(b);

		d = Max(c, d);

		return d == 0.0 ? 0.0 : Abs(a - b) / d;
	}

Typical usage is

	if(RelDif(a, b) <= TOLERANCE) ...

References: Knuth Sec. 4.2.2 pp. 217-8
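(A quick numeric illustration of the point above — written in Python rather than C, purely for brevity; rel_dif() mirrors the RelDif() function, and the threshold 1e-6 is an arbitrary choice for the demonstration.)

	def rel_dif(a: float, b: float) -> float:
	    """Relative difference, mirroring the C RelDif() above."""
	    d = max(abs(a), abs(b))
	    return 0.0 if d == 0.0 else abs(a - b) / d

	# An absolute fuzz factor misbehaves at both ends of the scale:
	print(abs(1e-9 - 2e-9) < 0.001)         # True  -- yet the values differ by a factor of 2
	print(abs(1e12 - (1e12 + 1)) < 0.001)   # False -- yet the values agree to 12 digits

	# A relative threshold behaves sensibly at both scales:
	print(rel_dif(1e-9, 2e-9) <= 1e-6)      # False -- correctly treated as different
	print(rel_dif(1e12, 1e12 + 1) <= 1e-6)  # True  -- correctly "close enough"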
{"url":"http://c-faq.com/fp/fpequal.html","timestamp":"2014-04-20T10:47:26Z","content_type":null,"content_length":"5271","record_id":"<urn:uuid:535353fa-3c5d-4c0c-a4f0-8dd9f609b2b9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
From Encyclopedia of Mathematics

An expression of the form

$$f(x_1,\dots,x_n)=\sum A_{k_1\dots k_n}\,x_1^{k_1}\cdots x_n^{k_n},$$

where the coefficients $A_{k_1\dots k_n}$ are numbers and the exponents $k_i$ are non-negative integers, is called a polynomial in the variables $x_1,\dots,x_n$; the individual summands are called the terms of the polynomial. The order of the terms, and also the order of the factors in each term, can be changed arbitrarily; in precisely the same way it is possible to introduce or omit terms with zero coefficients and, in each individual term, zero powers. When the polynomial has one, two or three terms it is called a monomial, binomial or trinomial. With regard to the coefficients of a polynomial one assumes that they belong to a field, for example, the field of rational, real or complex numbers. Two terms of a polynomial are called similar if the powers of the same variables in them are equal. Terms similar to each other can be replaced by one term (reduction of similar terms). Two polynomials are called equal if, after reduction, all terms with non-zero coefficients are pairwise identical (but, possibly, written in a different order), and also if all the coefficients of both of these polynomials turn out to be zero. In the latter case the polynomial is called identically zero and is denoted by the symbol 0. The sum of the powers of any term of a polynomial is called the degree of that term. If the polynomial is not identically zero, then among the terms with non-zero coefficients (it is assumed that similar terms have been reduced) there is at least one of highest degree: this highest degree is called the degree of the polynomial. The zero polynomial does not have a degree. A polynomial of degree zero reduces to a single term — a non-zero constant.

The roots of a polynomial in one variable over a field are the solutions of the corresponding algebraic equation (cf. Algebraic equation). The roots of a polynomial are related to its coefficients by Viète's formulas (see Viète theorem). The set of all polynomials in the variables $x_1,\dots,x_n$ with coefficients from a given field forms a ring with respect to the naturally defined operations of addition and multiplication. The ring of polynomials in an infinite set of variables can also be considered. A ring of polynomials is an associative-commutative ring without zero divisors (that is, a product of non-zero polynomials cannot be 0). For two given polynomials $f$ and $g$, with $g \neq 0$, there is division with remainder, $f = qg + r$ with the degree of $r$ smaller than that of $g$; division by a linear binomial can be carried out conveniently by the Horner scheme. By repeated application of this operation it is possible to find the greatest common divisor of two polynomials (cf. Euclidean algorithm). Two polynomials with greatest common divisor equal to 1 are called coprime. A polynomial which can be represented as a product of polynomials of smaller degree with coefficients from a given field is called reducible (over that field); otherwise it is called irreducible. The irreducible polynomials play a role in the ring of polynomials similar to that played by the prime numbers in the ring of integers. For example, the following theorem holds: if a product of two polynomials is divisible by an irreducible polynomial, then at least one of the factors is divisible by it. Every polynomial in one variable of positive degree with complex coefficients has at least one complex root, and hence decomposes over the complex numbers into linear factors (see Algebra, fundamental theorem of). For two or more variables this is no longer true: over any field there exist irreducible polynomials in several variables of arbitrarily high degree. If the variables take numerical values, a polynomial defines a function, the simplest example of an algebraic function (cf. Algebraic function). One of the most important properties of polynomials is that any continuous function on a compact subset of the complex plane can be approximated by a polynomial within arbitrarily small error (see Weierstrass theorem).
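A small, self-contained illustration of two operations just mentioned — the Horner scheme and the Euclidean algorithm for the greatest common divisor. This sketch (in Python, with coefficients as floats listed from the highest degree down) is an editorial addition, not part of the encyclopedia article:

	def horner(f, x):
	    """Horner scheme: evaluate the polynomial f at x with deg(f) multiplications."""
	    acc = 0.0
	    for c in f:
	        acc = acc * x + c
	    return acc

	def poly_divmod(f, g):
	    """Division with remainder: returns (q, r) with f = q*g + r, deg r < deg g.
	    Assumes g is non-empty with a non-zero leading coefficient."""
	    q, r = [], list(f)
	    while len(r) >= len(g):
	        c = r[0] / g[0]                      # next coefficient of the quotient
	        q.append(c)
	        r = [a - c * b for a, b in zip(r, g + [0.0] * (len(r) - len(g)))][1:]
	    while r and abs(r[0]) < 1e-12:           # drop leading (numerically) zero coefficients
	        r = r[1:]
	    return q, r

	def poly_gcd(f, g):
	    """Euclidean algorithm: repeated division with remainder.
	    The result is determined only up to a non-zero constant factor."""
	    while g:
	        _, r = poly_divmod(f, g)
	        f, g = g, r
	    return f

	# gcd(x**2 - 1, x - 1) is x - 1 (up to a constant factor):
	print(poly_gcd([1.0, 0.0, -1.0], [1.0, -1.0]))   # [1.0, -1.0]
	print(horner([1.0, 0.0, -1.0], 3.0))             # 8.0, i.e. 3**2 - 1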
Special systems of polynomials, the orthogonal polynomials, are used in approximation theory as a means of representing functions by series.

This text originally appeared in Encyclopedia of Mathematics (originator: A.I. Markushevich).
{"url":"http://www.encyclopediaofmath.org/index.php/Polynomial","timestamp":"2014-04-21T09:35:45Z","content_type":null,"content_length":"31062","record_id":"<urn:uuid:95ebacef-6350-4066-a574-81ac137a2c4c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
- Proc. Latin American Theoretical Informatics, 2002
"A space-efficient algorithm is one in which the output is given in the same location as the input and only a small amount of additional memory is used by the algorithm. We describe four space-efficient algorithms for computing the convex hull of a planar point set."
Cited by 20 (1 self)

- Theoretical Computer Science
"Two linear-time algorithms for in-place merging are presented. Both algorithms perform at most m(t+1) + n/2^t + o(m) comparisons, where m and n are the sizes of the input sequences, m ≤ n, and t = ⌊log₂(n/m)⌋. The first algorithm is for unstable merging and it carries out no more than 3(n+m) + o(m) element moves. The second algorithm is for stable merging and it accomplishes at most 5n + 12m + o(m) moves. Key words: in-place algorithms, merging, sorting. (A preliminary and weaker version of this work appeared in Proceedings of the 20th Symposium on Mathematical Foundations of Computer Science, Lecture Notes in Computer Science 969, Springer-Verlag, Berlin/Heidelberg (1995), 211--220.)"
Cited by 14 (3 self)

- The Computer Journal, 1995

- 1990
"In an earlier research paper [HL1], we presented a novel, yet straightforward linear-time algorithm for merging two sorted lists in a fixed amount of additional space. Constant of proportionality estimates and empirical testing reveal that this procedure is reasonably competitive with merge routines free to squander unbounded additional memory, making it particularly attractive whenever space is a critical resource. In this paper, we devise a relatively simple strategy by which this efficient merge can be made stable, and extend our results in a nontrivial way to the problem of stable sorting by merging. We also derive upper bounds on our algorithms' constants of proportionality, suggesting that in some environments (most notably external file processing) their modest run-time premiums may be more than offset by the dramatic space savings achieved."
Cited by 8 (0 self)
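For orientation only — the textbook two-pointer merge below is not one of the in-place algorithms the papers describe (those avoid the O(m+n) extra buffer used here); Python is used purely for illustration:

	def merge(a, b):
	    """Stable merge of two sorted lists, using O(m+n) extra space and
	    at most m+n-1 comparisons. The papers above achieve the same
	    linear time with only a constant amount of extra memory."""
	    out, i, j = [], 0, 0
	    while i < len(a) and j < len(b):
	        if b[j] < a[i]:          # strict '<' keeps equal keys in order (stability)
	            out.append(b[j]); j += 1
	        else:
	            out.append(a[i]); i += 1
	    out.extend(a[i:])
	    out.extend(b[j:])
	    return out

	print(merge([1, 4, 9], [2, 3, 10]))   # [1, 2, 3, 4, 9, 10]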
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=360332","timestamp":"2014-04-20T05:22:57Z","content_type":null,"content_length":"19713","record_id":"<urn:uuid:519a2978-321f-4007-9ab1-3fce3b36600a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Western Carolina University

Sloan E. Despeaux
Mathematics and Computer Science Department
Associate Professor
Phone: 828-227-3825
Email: despeaux@email.wcu.edu
Office Address: Stillwell 471
Website: http://paws.wcu.edu/despeaux/Sloan_Despeaux.html

• Ph.D., Mathematics, University of Virginia
• M.S., Mathematics, Florida State University
• B.A., Mathematics, Francis Marion University

Areas of Interest: History of Science and History of Mathematics (in particular, nineteenth-century mathematics, mathematicians, and scientific journals in Britain).

Selected Publications:
• History of Algebra in the 19th and 20th Centuries, ed. Jeremy Gray and Karen Hunger Parshall (Providence: American Mathematical Society and London: London Mathematical Society, 2007).
• "'Very Full of Symbols': Duncan F. Gregory, the Calculus of Operations, and the Cambridge Mathematical Journal," in "Launching Mathematical Research without a Formal Mandate: The Role of University Affiliated Journals in Britain, 1837-1870," Historia Mathematica, 34 (1) (2007): 89-106.
• "Mathematics Sent Across the Channel and the Atlantic: Nineteenth-Century British Mathematical Contributions to International Scientific Journals," Annals of Science 65 (1) (2008): 73-99.
• "Mathematical Societies and Journals," in Mathematics in Victorian Britain, ed. Raymond Flood, Adrian Rice, and Robin Wilson (Oxford: Oxford University Press, forthcoming).

Courses Taught:
• MATH 101 Mathematical Concepts
• MATH 145 Trigonometry
• MATH 146 Precalculus
• MATH 153 Calculus I
• MATH 170 Applied Statistics
• MATH 250 Introduction to Logic and Proof
• MATH 256 Calculus III
• MATH 361 Introduction to Abstract Algebra
• MATH 301 History of the Scientific Revolution
• MATH 400/500 History of Mathematics
• MATH 461/561 Abstract Algebra
• MATH 507 Survey of Algebra
• MATH 661 Applied Abstract Algebra
{"url":"http://wcu.edu/academics/departments-schools-colleges/cas/casdepts/mathcsdept/mathematics-and-computer-science-faculty-and-staff/sloan-e.-despeaux.asp","timestamp":"2014-04-20T05:49:58Z","content_type":null,"content_length":"18322","record_id":"<urn:uuid:decb83c2-cfae-45a0-8ad7-1b215665e6f1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Chartley Math Tutor
Find a Chartley Math Tutor

...I have been tutoring high school algebra for more than twenty years. I like to help students learn strategies and effective studying techniques; this includes effective note taking, writing effective outlines, etc. Algebra 1 includes: equations, absolute value, inequalities, graphing linear and ...
15 Subjects: including geometry, algebra 1, algebra 2, biology

...Most of my tutoring experience was with physics and calculus material. There are a number of aspects of algebra, geometry, trigonometry, and pre-calculus that I have continued to use and are fundamental to calculus and other advanced mathematics. I have continued to work with teens as I have coached about 10 years in youth sports.
10 Subjects: including linear algebra, mechanical engineering, algebra 1, algebra 2

I am a Dartmouth College grad with 13 years of professional tutoring experience. I have particular expertise with standardized tests such as the SAT, ACT, SSAT and ISEE. I also tutor math and writing for middle school and high school students.
26 Subjects: including ACT Math, probability, linear algebra, algebra 1

...Logic is the study of the relationship between words and concepts. Exploration begins with understanding the basic ways concepts interact, with a special focus on "if, then" statements. Once this is mastered, we begin exploring how mathematics and the natural sciences "reason logically" and how our basic understandings of natural laws develop from basic relationships between words and
69 Subjects: including precalculus, logic, elementary (k-6th), physics

...I have been helping students achieve their educational goals for many years as a classroom teacher, and as a tutor. My love of children and learning has led me into teaching. Along with my certifications, I am trained as a Montessori teacher, and use this as my primary teaching method.
16 Subjects: including prealgebra, SAT math, reading, algebra 1
{"url":"http://www.purplemath.com/chartley_ma_math_tutors.php","timestamp":"2014-04-17T04:35:33Z","content_type":null,"content_length":"23831","record_id":"<urn:uuid:ddff61e8-668e-46f5-92b3-dd456defb9bc>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Reconstruction of homogeneous relational structures

Reconstruction results give conditions under which the automorphism group of a structure determines the structure up to bi-interpretability or bi-definability. Here we examine a large class of omega-categorical combinatorial structures, which was isolated by Herwig and contains K_n-free graphs, k-hypergraphs and Henson digraphs. Using a Baire category approach we show how to obtain reconstruction for this class, proving that a reconstruction condition, developed by M. Rubin, holds. The method rests on the existence of a generic pair of automorphisms.
{"url":"http://www.newton.ac.uk/programmes/MAA/Abstract1/barbina.html","timestamp":"2014-04-17T21:37:54Z","content_type":null,"content_length":"2649","record_id":"<urn:uuid:06debc76-12f6-4e44-a6a0-09d3650589f7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
practice 3 Practice Questions on growth and social systems, mathematical growth models, and wealth and poverty 1. The number of telephone subscribers is growing most rapidly in poor countries with competitive telephone industries. It is growing least rapidly in countries with state-run telephone monopolies. Is this consistent with what you have studied so far about growth? Explain 2. Many Internet businesses have failed. What does this say about the Internet as a catalyst for economic growth? 3. In an economy with a depreciation rate of 4 percent (.04), population growth of 2 percent, a savings rate of 30 percent, and growth in the efficiency of labor of 1 percent, □ What is the growth rate of capital per worker that will keep the economy on a balanced growth path? What will be the growth rate of output per worker? □ What is the capital-output ratio along the balanced growth path? 4. What evidence exists to show that poverty is decreasing?
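A worked sketch for Question 3, using the standard Solow-model balanced-growth conditions (these formulas are the usual textbook ones, not stated in the questions themselves). On the balanced growth path, capital per effective worker is constant, so capital per worker and output per worker both grow at the rate of labor-efficiency growth:

$$g_{K/L} = g_{Y/L} = g = 1\%.$$

The steady-state condition — saving just covers break-even investment — then pins down the capital-output ratio:

$$s\,Y = (n + g + \delta)\,K \;\Longrightarrow\; \frac{K}{Y} = \frac{s}{n + g + \delta} = \frac{0.30}{0.02 + 0.01 + 0.04} = \frac{0.30}{0.07} \approx 4.3.$$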
{"url":"http://arnoldkling.com/econ/practice3.html","timestamp":"2014-04-18T11:12:31Z","content_type":null,"content_length":"1523","record_id":"<urn:uuid:747eadfa-2b72-4d4a-be75-5a2f37986318>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Teacher2Teacher - Q&A #3699

From: Pat Ballew (for Teacher2Teacher Service)
Date: Apr 17, 2000 at 01:33:49
Subject: Re: Scatter plots

First, a really good site to learn from, and then some notes about general ideas.

This is a great site for statistical ideas, but it may be a little too deep in some spots for an eighth grader. But maybe if you went through the tough spots with her she could get most of the big ideas.

Second, I do not know if they are using the words in the same way in middle school, but in general, if both things get bigger together we call that "positive" correlation, and the line should go up from the bottom left to the upper right on the graph. Temp and heating costs would probably be a negative correlation, because as one (temp) goes lower the other (heating cost) goes higher.

Correlation is measured as a number from -1 to +1. The closer the value is to 1 or -1, the more the points on the graph lie in a straight line; the + and - just tell whether the relationship is positive or negative. Points that lie in a perfect line have correlation of 1 or -1. Points that seem to be like a cloud with no line in them have correlation near zero.

Just another note: you can have a definite pattern, like a curve, but still have a correlation near zero, because correlation is about fitting a LINE, not something else.

I am not sure how much of this the teacher wants them to know, but the web page will give some nice examples.

Good luck.
-Pat Ballew, for the Teacher2Teacher service
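To make that last point concrete — a perfectly clear pattern can still have zero linear correlation — here is a short self-contained Python check (an editorial illustration, not part of the original answer):

	def pearson(xs, ys):
	    """Sample Pearson correlation coefficient."""
	    n = len(xs)
	    mx, my = sum(xs) / n, sum(ys) / n
	    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
	    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
	    sy = sum((y - my) ** 2 for y in ys) ** 0.5
	    return cov / (sx * sy)

	xs = [-3, -2, -1, 0, 1, 2, 3]
	print(pearson(xs, [2 * x + 1 for x in xs]))  # 1.0  -> perfect positive line
	print(pearson(xs, [-x for x in xs]))         # -1.0 -> perfect negative line
	print(pearson(xs, [x * x for x in xs]))      # 0.0  -> clear pattern, zero LINEAR correlation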
{"url":"http://mathforum.org/t2t/message.taco?thread=3699&message=4","timestamp":"2014-04-21T02:57:15Z","content_type":null,"content_length":"5731","record_id":"<urn:uuid:83735468-3f77-4d6f-872c-2e3f54a8aeb3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Foundation Mathematics
Mathematical words

Having the same shape and size
Split a line or angle in half
A polygon with 8 sides
Theorem about the right-angled triangle
Reflection, Translation, Rotation or Enlargement
Measured in square units
A quadrilateral with opposite sides that are parallel
Flat shape which can be folded up into a 3D shape
Flat surface of a 3D shape
Patterns of shapes that fit together without any gaps
An angle measuring less than 90 degrees
A triangle with 3 equal sides and angles
Parallelogram with four equal sides and equal opposite angles
Distance around the outside of a shape
Amount of space occupied by a 3D object
At right angles to the horizon
A mathematical rule written using symbols
Sides or angles that are next to each other
Longest side on a right-angled triangle
Straight line joining two points on the circumference of a circle
A set of numbers arranged according to a rule
The distance around a circle
Unit equal to one hundredth of a metre
A polygon with 7 sides
Any angle between 180 and 360 degrees
{"url":"http://www.sporcle.com/games/tsmith1/foundation_mathematics","timestamp":"2014-04-21T05:09:07Z","content_type":null,"content_length":"56924","record_id":"<urn:uuid:f4e3a100-0d57-4abf-a413-1f09ac3b85cf>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
59 search hits

Molecular dynamics simulations and docking of non-nucleoside reverse transcriptase inhibitors (NNRTIs): a possible approach to personalized HIV treatment : from 7th German Conference on Chemoinformatics: 25 CIC-Workshop, Goslar, Germany, 6 - 8 November 2011 (2012)
Florian D. Roessler, Oliver Korb, Andreas Bender, Werner Mäntele, Peter J. Bond
The human immunodeficiency virus (HIV) is currently ranked sixth among the worldwide causes of death [1]. One treatment approach is to inhibit reverse transcriptase (RT), an enzyme essential for reverse transcription of viral RNA into DNA before integration into the host genome [2]. By using non-nucleoside RT inhibitors (NNRTIs) [3], which target an allosteric binding site, major side effects can be evaded. Unfortunately, the high genetic variability of HIV in combination with selection pressure introduced by drug treatment enables the virus to develop resistance against this drug class by developing point mutations. This situation necessitates treatment with alternative NNRTIs that target the particular RT mutants encountered in a patient. Previously, proteochemometric approaches have demonstrated some success in predicting binding of particular NNRTIs to individual mutants; however, a structure-based approach may help to further improve the predictive success of such models. Hence, our aim is to rationalize the experimental activity of known NNRTIs against a variety of RT mutants by combining molecular modeling, long-timescale atomistic molecular dynamics (MD) simulation sampling and ensemble docking. Initial control experiments on known inhibitor-RT mutant complexes using this protocol were successful, and the predictivity for further complexes is currently being evaluated. In addition to predictive power, MD simulations of multiple RT mutants are providing fundamental insight into the dynamics of the allosteric NNRTI binding site which is useful for the design of future inhibitors. Overall, work of this type is hoped to contribute to the development of predictive efficacy models for individual patients, and hence towards personalized HIV treatment options.

Hagedorn states and thermalization : XLIX International Winter Meeting on Nuclear Physics, 24 - 28 January 2011, Bormio, Italy (2011)
Carsten Greiner, Jacquelyn Noronha-Hostler, Jorge Noronha
In recent years, Hagedorn states have been used to explain the equilibrium and transport properties of a hadron gas close to the QCD critical temperature. These massive resonances are shown to lower η/s to near the AdS/CFT limit close to the phase transition. A comparison of the Hagedorn model to recent lattice results is made and it is found that the hadrons can reach chemical equilibrium almost immediately, well before the chemical freeze-out temperatures found in thermal fits for a hadron gas without Hagedorn states.

Direct photon emission in heavy ion collisions from microscopic transport theory and fluid dynamics : XLVIII International Winter Meeting on Nuclear Physics, BORMIO2010, January 25 - 29, 2010, Bormio, Italy (2010)
Björn Bäuchle, Marcus Bleicher
Direct photon emission in heavy-ion collisions is calculated within a relativistic micro+macro hybrid model and compared to the microscopic transport model UrQMD. In the hybrid approach, the high-density part of the collision is calculated by an ideal 3+1-dimensional hydrodynamic calculation, while the early (pre-equilibrium) and late (rescattering) phases are calculated with the transport model.
Different scenarios of the transition from the macroscopic description to the transport model description and their effects are studied. The calculations are compared to measurements by the WA98 collaboration and predictions for the future CBM experiment are made.

Implications on the collision dynamics via azimuthal sensitive HBT from UrQMD : the Seventh Workshop on Particle Correlations and Femtoscopy, September 20 - 24 2011, University of Tokyo, Japan (2011)
Gunnar Gräf, Elliot Mount, Michael Annan Lisa, Marcus Bleicher
We explore the shape and orientation of the freezeout region of non-central heavy ion collisions. For this we fit the freezeout distribution with a tilted ellipsoid. The resulting tilt angle is compared to the same tilt angle extracted via an azimuthally sensitive HBT analysis. This allows access to the tilt angle experimentally, which is not possible directly from the freezeout distribution. We also show a systematic study of the dependence of the system decoupling time on dN_ch/dη, using HBT results from the UrQMD transport model. In this study we found that the decoupling time scales with (dN_ch/dη)^{1/3} within each energy, but the scaling is broken across energies.

Strong coupling expansion for Yang-Mills theory at finite temperature (2007)
Jens Langelage, Gernot Münster, Owe Philipsen
Euclidean strong coupling expansion of the partition function is applied to lattice Yang-Mills theory at finite temperature, i.e. for lattices with a compactified temporal direction. The expansions have a finite radius of convergence and thus are valid only for β < β_c, where β_c denotes the nearest singularity of the free energy on the real axis. The accessible temperature range is thus the confined regime up to the deconfinement transition. We have calculated the first few orders of these expansions of the free energy density as well as the screening masses for the gauge groups SU(2) and SU(3). The resulting free energy series can be summed up and corresponds to a glueball gas of the lowest-mass glueballs up to the calculated order. Our result can be used to fix the lower integration constant for Monte Carlo calculations of the thermodynamic pressure via the integral method, and shows from first principles that in the confined phase this constant is indeed exponentially small.
We also discuss how to construct an effective theory for non-zero lattice coupling, which is valid to O(b). Screened perturbation theory for 3d Yang-Mills theory and the magnetic modes of hot QCD : International Workshop on QCD Green’s Functions, Confinement, and Phenomenology - QCD-TNT09, September 07 - 11 2009, ECT Trento, Italy (2009) Owe Philipsen Daniel Bieletzki York Schröder Perturbation theory for non-abelian gauge theories at finite temperature is plagued by infrared divergences which are caused by magnetic soft modes ~ g2T, corresponding to gluon fields of a 3d Yang-Mills theory. While the divergences can be regulated by a dynamically generated magnetic mass on that scale, the gauge coupling drops out of the effective expansion parameter requiring summation of all loop orders for the calculation of observables. Some gauge invariant possibilities to implement such infrared-safe resummations are reviewed. We use a scheme based on the non-linear sigma model to estimate some of the contributions ~ g6 of the soft magnetic modes to the QCD pressure through two loops. The NLO contribution amounts to ~ 10% of the LO, suggestive of a reasonable convergence of the series. Lattice calculations at non-zero chemical potential: the QCD phase diagram (2009) Owe Philipsen The so-called sign problem of lattice QCD prohibits Monte Carlo simulations at finite baryon density by means of importance sampling. Over the last few years, methods have been developed which are able to circumvent this problem as long as the quark chemical potential is m=T <~1. After a brief review of these methods, their application to a first principles determination of the QCD phase diagram for small baryon densities is summarised. The location and curvature of the pseudo-critical line of the quark hardon transition is under control and extrapolations to physical quark masses and the continuum are feasible in the near future. No definite conclusions can as yet be drawn regarding the existence of a critical end point, which turns out to be extremely quark mass and cut-off sensitive. Investigations with different methods on coarse lattices show the lightmass chiral phase transition to weaken when a chemical potential is switched on. If persisting on finer lattices, this would imply that there is no chiral critical point or phase transition for physical QCD. Any critical structure would then be related to physics other than chiral symmetry Towards a determination of the chiral critical surface of QCD (2009) Owe Philipsen The chiral critical surface is a surface of second order phase transitions bounding the region of first order chiral phase transitions for small quark masses in the fmu;d;ms;mg parameter space. The potential critical endpoint of the QCD (T;m)-phase diagram is widely expected to be part of this surface. Since for m = 0 with physical quark masses QCD is known to exhibit an analytic crossover, this expectation requires the region of chiral transitions to expand with m for a chiral critical endpoint to exist. Instead, on coarse Nt = 4 lattices, we find the area of chiral transitions to shrink with m, which excludes a chiral critical point for QCD at moderate chemical potentials mB < 500 MeV. First results on finer Nt = 6 lattices indicate a curvature of the critical surface consistent with zero and unchanged conclusions. We also comment on the interplay of phase diagrams between the Nf = 2 and Nf = 2+1 theories and its consequences for physical QCD. 
The finite-temperature phase structure of lattice QCD with twisted-mass Wilson fermions (2008) Ernst-Michael Ilgenfritz Karl Jansen Maria Paola Lombardo Michael Müller-Preussker Marcus Petschlies Owe Philipsen Lars Zeidlewicz We report progress in our exploration of the finite-temperature phase structure of two-flavour lattice QCD with twisted-mass Wilson fermions and a tree-level Symanzik-improved gauge action for a temporal lattice size Nt = 8. Extending our investigations to a wider region of parameter space we gain a global view of the rich phase structure. We identify the finite temperature transition/ crossover for a non-vanishing twisted-mass parameter in the neighbourhood of the zerotemperature critical line at sufficiently high b . Our findings are consistent with Creutz’s conjecture of a conical shape of the finite temperature transition surface. Comparing with NLO lattice cPT we achieve an improved understanding of this shape.
{"url":"http://publikationen.stub.uni-frankfurt.de/solrsearch/index/search/searchtype/all/start/0/rows/10/institutefq/Physik/doctypefq/conferenceobject","timestamp":"2014-04-16T22:14:58Z","content_type":null,"content_length":"43233","record_id":"<urn:uuid:ece88910-ddbd-4cf2-957c-17a3b327a06f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Wyandotte, MI ACT Tutor
Find a Wyandotte, MI ACT Tutor

...My undergraduate and graduate education degrees are from Eastern Michigan University in Ypsilanti. During my years in college as well as high school, I have maintained excellent grades and am professional in my work with students. A good relationship is a priority, and I have always participated and worked as a class sponsor and coach in many sports during my teaching.
6 Subjects: including ACT Math, geometry, algebra 1, algebra 2

...I have completed math courses through Calculus 3, Linear Algebra, and Differential Equations. I have successfully tutored students in Algebra personally and through the Holman Success Center at Eastern Michigan University. I have successfully tutored students in all levels of Algebra (including trig) both personally and through the Holman Success Center at Eastern Michigan
16 Subjects: including ACT Math, chemistry, physics, Spanish

...Being tutored is a private matter. Maintaining and improving the student's self-esteem is a must. Students are often very apprehensive about "being tutored" for a variety of reasons.
10 Subjects: including ACT Math, calculus, physics, ASVAB

...I see myself as an academic coach; I not only help with classwork and tests, but I emphasize the importance of study skills and preparation. Before working at City Year I was a student at Ithaca College. I received my bachelor's in Health Sciences and graduated summa cum laude in May of 2013.
6 Subjects: including ACT Math, chemistry, biology, algebra 1

...I worked with a college graduate in MI who had been offered a lucrative fellowship for her Masters program at a noted university in MI. She came to me with a lot of fear about the math portion of the GRE. We worked together on all her areas of weakness with only a month to prepare, and she gained the skills and confidence to take the test and make the necessary score on her first try!
22 Subjects: including ACT Math, English, reading, geometry
{"url":"http://www.purplemath.com/wyandotte_mi_act_tutors.php","timestamp":"2014-04-21T02:51:38Z","content_type":null,"content_length":"24038","record_id":"<urn:uuid:378143a8-2acd-495d-bd79-c245be56d31e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
pow(x, y[, z])

Returns x to the power y; if z is present, returns x to the power y, modulo z (computed more efficiently than pow(x, y) % z). The arguments must have numeric types. With mixed operand types, the coercion rules for binary arithmetic operators apply. For int and long int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. For example, 10**2 returns 100, but 10**-2 returns 0.01. (This last feature was added in Python 2.2. In Python 2.1 and before, if both arguments were of integer types and the second argument was negative, an exception was raised.) If the second argument is negative, the third argument must be omitted. If z is present, x and y must be of integer types, and y must be non-negative. (This restriction was added in Python 2.2. In Python 2.1 and before, floating 3-argument pow() returned platform-dependent results depending on floating-point rounding accidents.)
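A few concrete calls matching the rules above (the 100 and 0.01 results are stated in the text itself; the three-argument case follows from the definition):

	print(pow(10, 2))       # 100   -- int operands, non-negative exponent: int result
	print(pow(10, -2))      # 0.01  -- negative exponent: arguments converted to float
	print(pow(7, 128, 13))  # 3     -- modular exponentiation, computed without
	                        #          ever forming the huge intermediate 7**128
	# pow(10, -2, 13) raises an error: with z present, y must be non-negative.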
{"url":"http://www.effbot.org/pyref/pow.htm","timestamp":"2014-04-18T03:14:00Z","content_type":null,"content_length":"2550","record_id":"<urn:uuid:cb5ab743-9bc9-46c2-a005-89082a7b1b39>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Proof that norm of submatrix must be less than norm of matrix it's embedded in

1. The problem statement, all variables and given/known data
http://dl.dropbox.com/u/4027565/2010-10-10_194728.png

2. Relevant equations

3. The attempt at a solution
||B|| = ||M_1 * A * M_2||
So from an inequality satisfied by the norm, we can get...
||B|| <= ||M_1|| * ||A|| * ||M_2||.
Now, we know that B is a submatrix of A. So if A is 4x3, then M_1 must be 1x4 and M_2 must be 3x1 (I know that block matrices are more complicated than that, but this might work). What this also means is that the combined product of M_1 and M_2 must be <= 1. But beyond that, I'm stuck. Is there another step I should take?
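One possible way to close the gap (an editorial sketch, assuming $\|\cdot\|$ is the operator 2-norm and that $B$ is obtained from $A$ by selecting rows and columns): take $M_1$ to be the matrix whose rows are the standard basis vectors for the selected rows, and $M_2$ the matrix whose columns are the standard basis vectors for the selected columns. Then

$$M_1 M_1^{T} = I, \qquad M_2^{T} M_2 = I,$$

so $M_1$ has orthonormal rows and $M_2$ has orthonormal columns, which forces $\|M_1\| = \|M_2\| = 1$ (such maps never increase the 2-norm of a vector). Substituting into the inequality already derived,

$$\|B\| = \|M_1 A M_2\| \le \|M_1\|\,\|A\|\,\|M_2\| = \|A\|.$$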
{"url":"http://www.physicsforums.com/showpost.php?p=2926910&postcount=1","timestamp":"2014-04-21T12:22:51Z","content_type":null,"content_length":"9710","record_id":"<urn:uuid:142853b6-d40f-4f38-8f94-8c6180cdf85a>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: THREE-DIMENSIONAL FACE CAPTURING APPARATUS AND METHOD AND COMPUTER-READABLE MEDIUM THEREOF

Abstract: Disclosed is a 3D face capturing apparatus, method and computer-readable medium. As an example, the 3D face capturing method includes obtaining a face color image, obtaining a face depth image, aligning, by a computer, the face color image and the face depth image, obtaining, by the computer, a 3D face model by 2D modeling of the face color image and covering a modeled 2D face area on an image output by an image alignment module, removing, by the computer, depth noise of the 3D face model, and obtaining, by the computer, an accurate 3D face model by aligning the 3D face model and a 3D face template, and removing residual noise based on a registration between the 3D face model and the 3D face template.

Claims:

1. An apparatus capturing a three-dimensional (3D) face, the apparatus comprising: a color image obtaining unit to obtain a face color image; a depth image obtaining unit to obtain a face depth image; an image alignment module to align the face color image and the face depth image; a 3D face model generating module to perform two-dimensional (2D) modeling of the face color image, and to perform covering of a modeled 2D face area on an image output by the image alignment module to obtain a 3D face model; a first denoising module to remove depth noise of the 3D face model; and a second denoising module to align the 3D face model and a 3D face template, and to remove residual noise based on a registration between the 3D face model and the 3D face template to obtain an accurate 3D face model.

2. The apparatus of claim 1, wherein the 3D face model generating module performs the 2D modeling with respect to the face color image using an ellipse.

3. The apparatus of claim 2, wherein the first denoising module calculates a scale-independent variance (V) in a depth direction, and compares V with a first threshold to determine whether noise exists in the 3D face model.

4. The apparatus of claim 3, wherein V is calculated according to Equation 10:

$$V = \frac{\bar{V}}{V'}, \qquad V' = \sum_i (X_i - \bar{X})^{T} (X_i - \bar{X}), \qquad \bar{V} = \frac{\sum_i W(X_i - \bar{X}) \times (X_i - \bar{X})^{T} (X_i - \bar{X})}{\sum_i W(X_i - \bar{X})}, \tag{10}$$

wherein W denotes a weight function, X_i denotes an i-th vertex of the 3D face model, and X̄ denotes the average value of the vertices of the 3D face model.

5. The apparatus of claim 3, wherein the first threshold is selected based on experimentation.

6. The apparatus of claim 3, wherein, when the determining determines that the noise exists in the 3D face model, the first denoising module calculates a depth-direction average value of the vertices (z̄), compares a second threshold (T) with the difference between the depth of a vertex and z̄, and removes the vertex as noise when the difference is greater than T.

7. The apparatus of claim 6, wherein z̄ is calculated by iterative substitution based on Equation 11:

$$\bar{z}_{m+1} = \frac{\sum_i W(z_i - \bar{z}_m) \times z_i}{\sum_i W(z_i - \bar{z}_m)}, \tag{11}$$

wherein W denotes a weight function, z_i denotes the depth coordinate of the i-th vertex, the substitution stops when the difference between z̄_{m+1} and z̄_m after the m-th substitution is less than 0.0001, m being the iteration subscript of z̄, and the result of the convergence is z̄.

8. The apparatus of claim 6, wherein T satisfies Equation 12:

$$T = \mathrm{const} \times \frac{\sum_i W(z_i - \bar{z}) \times (z_i - \bar{z})^{2}}{\sum_i W(z_i - \bar{z})}, \tag{12}$$

wherein const is a constant.
9. The apparatus of claim 1, wherein the second denoising module performs: calculating a length factor; unification with respect to the input 3D face template based on the calculated length factor; a vertex registration with respect to the 3D face model and the 3D face template based on features; calculating a mobility coefficient and a rotation coefficient with respect to a vertex of the vertex registration; determining whether the mobility coefficient and the rotation coefficient converge; updating the length factor when the mobility coefficient and the rotation coefficient do not converge; updating the 3D face model based on the updated length factor; and repeating the calculating of the length factor, the unification, the vertex registration, the calculating of the mobility and rotation coefficients, the determining, the updating of the length factor and the updating of the 3D face model until the mobility coefficient and the rotation coefficient converge.

10. The apparatus of claim 9, wherein the features include 3D coordinates of a vertex, color information, a normal direction, and information associated with an area adjacent to the vertex.

11. The apparatus of claim 1, wherein the color image obtaining module is a CCD camera.

12. The apparatus of claim 1, wherein the depth image obtaining module is a depth camera.

13. A method of capturing a 3D face, the method comprising: obtaining a face color image; obtaining a face depth image; aligning, by a computer, the face color image and the face depth image; obtaining, by the computer, a 3D face model by 2D modeling of the face color image and covering a modeled 2D face area on an image output by an image alignment module; removing, by the computer, depth noise of the 3D face model; and obtaining, by the computer, an accurate 3D face model by aligning the 3D face model and a 3D face template, and removing residual noise based on a registration between the 3D face model and the 3D face template.

14. The method of claim 13, wherein the 2D modeling performs the 2D modeling with respect to the face color image using an ellipse.

15. The method of claim 14, wherein the removing comprises: calculating a scale-independent variance (V) in a depth direction; and comparing V and a first threshold to determine whether noise exists in the 3D face model.

16. The method of claim 15, wherein V is calculated according to Equation 13:

$$V = \frac{\bar{V}}{V'}, \qquad V' = \sum_i (X_i - \bar{X})^{T} (X_i - \bar{X}), \qquad \bar{V} = \frac{\sum_i W(X_i - \bar{X}) \times (X_i - \bar{X})^{T} (X_i - \bar{X})}{\sum_i W(X_i - \bar{X})}, \tag{13}$$

wherein W denotes a weight function, X_i denotes an i-th vertex of the 3D face model, and X̄ denotes the average value of the vertices of the 3D face model.

17. The method of claim 15, wherein the first threshold is selected based on experimentation.

18. The method of claim 15, wherein the removing comprises: calculating a depth-direction average value of the vertices (z̄) when the noise exists in the 3D face model; comparing a second threshold (T) with the difference between the depth of a vertex and z̄; and removing the vertex as noise when the difference is greater than T.

19. The method of claim 18, wherein z̄ is calculated by iterative substitution based on Equation 14:

$$\bar{z}_{m+1} = \frac{\sum_i W(z_i - \bar{z}_m) \times z_i}{\sum_i W(z_i - \bar{z}_m)}, \tag{14}$$

wherein W denotes a weight function, z_i denotes the depth coordinate of the i-th vertex, the substitution stops when the difference between z̄_{m+1} and z̄_m after the m-th substitution is less than 0.0001, m being the iteration subscript of z̄, and the result of the convergence is z̄.
20. The method of claim 18, wherein T satisfies Equation 15:

$$T = \mathrm{const} \times \frac{\sum_i W(z_i - \bar{z}) \times (z_i - \bar{z})^{2}}{\sum_i W(z_i - \bar{z})}, \tag{15}$$

wherein const is a constant.

21. The method of claim 13, wherein the aligning comprises: calculating a length factor; performing unification with respect to the input 3D face template based on the calculated length factor; performing a vertex registration with respect to the 3D face model and the 3D face template based on features; determining whether a mobility coefficient and a rotation coefficient converge by calculating the mobility coefficient and the rotation coefficient with respect to a vertex of the vertex registration; updating the length factor when the mobility coefficient and the rotation coefficient do not converge; updating the 3D face model based on the updated length factor; and repeating the calculating of the length factor, the unification, the vertex registration, the calculating of the mobility and rotation coefficients, the determining, the updating of the length factor and the updating of the 3D face model until the mobility coefficient and the rotation coefficient converge.

22. The method of claim 21, wherein the features include 3D coordinates of a vertex, color information, a normal direction, and information associated with an area adjacent to the vertex.

23. The method of claim 21, wherein, when a vertex that is not registered exists in the 3D face model, the performing of the vertex registration comprises removing the vertex as noise.

24. At least one non-transitory computer-readable recording medium comprising computer-readable instructions that control at least one processor to implement a method comprising: obtaining a face color image; obtaining a face depth image; aligning the face color image and the face depth image; obtaining a 3D face model by 2D modeling of the face color image and covering a modeled 2D face area on an image output by an image alignment module; removing depth noise of the 3D face model; and obtaining an accurate 3D face model by aligning the 3D face model and a 3D face template, and removing residual noise based on a registration between the 3D face model and the 3D face template.

This application claims the benefit of Korean Patent Application No. 10-2010-0042349, filed on May 6, 2010, in the Korean Intellectual Property Office, and Chinese Patent Application No. 200910168295.2, filed on Aug. 24, 2009, in the Korean Intellectual Property Office, the disclosures of which are incorporated herein by reference.

BACKGROUND

[0002] 1. Field

Example embodiments relate to a three-dimensional (3D) face capturing apparatus, method and computer-readable medium.

2. Description of the Related Art

Today's users are not satisfied with two-dimensional (2D) information. Users demand new experiences associated with a human interface, a natural game control, a 3D display, and the like. Accordingly, superior 3D contents and 3D facial information may be demanded. A laser scanner may be a general and convenient apparatus for capturing a 3D target object. A 3D surface may be accurately obtained using the laser. Some researchers have also attempted to capture the 3D face using a single camera or a plurality of cameras. U.S. Pat. No. 6,556,196, titled `Method and Apparatus for the Processing of Images`, discusses a 3D face being formed from a 2D image. The patent discusses modeling of a 3D face using a 2D image based on a morphable object model. A shape of a model is learned based on various accurate 3D face models obtained using the laser scanner.
A 3D face may be expressed based on a Principal Component Analysis (PCA) coefficient, and the PCA coefficient may be calculated by minimizing a difference between a 3D face projection and an input image. However, U.S. Pat. No. 6,556,196 uses only 2D images and thus, a reconstruction of the 3D face is unreliable. Also, the U.S. patent may use a manually marked feature point and may expend much time for calculation.

U.S. Patent Application US20090052748, titled `Method and System for Constructing a 3D Representation of a Face from a 2D Representation`, discusses that a 3D face is reconstructed using a single neutral frontal face image. Partially inputted facial features may be detected from a 2D input image. As a result, a difference between input features and composed 3D face features may be minimized. However, U.S. Patent Application US20090052748 may have a limit in inputs. The 3D face is reconstructed from the single face image and thus, the reconstructed 3D face may be unreliable.

Although a 3D face reconstructed based on a laser scanner is very accurate, there are a variety of problems. First, the interface is not sufficiently good for the 3D face reconstruction. Second, scanning is mechanically processed, and as a result much time may be expended. Third, the person being scanned must remain immobile while their head is scanned. Also, some users believe that the laser is harmful to human eyes. In addition, the laser scanner is too expensive to be widely used.

A method of modeling a 3D face based on an image is unreliable compared with the modeling using the laser scanner. The method of modeling a 3D face based on an image may incur a high cost and a long calculation time. Also, the method of modeling a 3D face based on an image may not produce a reliable and accurate 3D face model. To obtain an ideal result, the method of modeling a 3D face based on an image may need a feature point manually marked. The method may use a facial 3D image and a 2D pattern PCA model. The model may be trained using a 3D face database of the laser scanner and thus, the method is complex.

SUMMARY

[0012] Additional aspects and/or advantages will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the disclosure.

The foregoing and/or other aspects are achieved by providing a three-dimensional (3D) face capturing apparatus, the apparatus including a color image obtaining unit to obtain a face color image, a depth image obtaining unit to obtain a face depth image, an image alignment module to align the face color image and the face depth image, a 3D face model generating module to perform two-dimensional (2D) modeling of the face color image and to perform covering of a modeled 2D face area on an image output by the image alignment module to obtain a 3D face model, a first denoising module to remove depth noise of the 3D face model, and a second denoising module to align the 3D face model and a 3D face template, and to remove residual noise based on a registration between the 3D face model and the 3D face template to obtain an accurate 3D face model.
The foregoing and/or other aspects are achieved by providing a 3D face capturing method, the method including obtaining a face color image, obtaining a face depth image, aligning, by a computer, the face color image and the face depth image, obtaining, by the computer, a 3D face model by 2D modeling of the face color image and covering a modeled 2D face area on an image output by an image alignment module, removing, by the computer, depth noise of the 3D face model, and obtaining, by the computer, an accurate 3D face model by aligning the 3D face model and a 3D face template, and removing residual noise based on a registration between the 3D face model and the 3D face template.

According to example embodiments, a stable 3D face may be remodeled using relatively inexpensive hardware.

According to another aspect of one or more embodiments, there is provided at least one computer readable medium including computer readable instructions that control at least one processor to implement methods of one or more embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] These and/or other aspects and advantages will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, of which:

[0018] FIG. 1 is a block diagram illustrating a 3D face capturing apparatus according to example embodiments;

[0019] FIG. 2 is a diagram illustrating a face part separated from an RGB image according to example embodiments;

[0020] FIG. 3 is a flowchart illustrating a depth noise removing method using substitution according to example embodiments;

[0021] FIG. 4 illustrates a weight function used when a scale-independent variance is computed according to example embodiments;

[0022] FIG. 5 is a diagram illustrating an example of a 3D face template used in example embodiments;

[0023] FIG. 6 is a flowchart illustrating a multi-feature Iterative Closest Point (ICP) according to example embodiments; and

[0024] FIG. 7 is a flowchart illustrating a 3D face capturing method according to example embodiments.

DETAILED DESCRIPTION

[0025] Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below to explain the present disclosure by referring to the figures.

FIG. 1 illustrates a 3D face capturing apparatus according to example embodiments. As illustrated in FIG. 1, the 3D face capturing apparatus may include a charge-coupled device (CCD) camera 101 as an example of a color image obtaining unit, and may include a depth camera 102 and a data processor (not illustrated) as examples of a depth image obtaining unit. The data processor may include an image alignment module 103, a 3D face model generating module 104, a non-model based denoising module 105, and a model based denoising module 106.

The CCD camera 101 may obtain a face color image, namely, a face RGB image, for example, a color image of 1024 pixels * 768 pixels. The depth camera 102 may be a time-of-flight (TOF) camera and may obtain an intensity image and a depth image.

The CCD camera may obtain the face RGB image, and the depth camera may obtain the intensity image and the depth image. The RGB image and the depth image are obtained through two different cameras and thus may not be directly merged. A single image alignment module is provided in the example embodiments.
The image alignment module may align images of the two different cameras to output an image having six elements including R, G, B, x, y, and z. Specifically, the output image may have color information and depth information. A demarcation may be determined by a camera, and an internal parameter of the camera and an external parameter of the camera may be calculated. Subsequently, the image based on the six elements, namely, R, G, B, x, y, and z, may be output to the 3D face model generating module.

The 3D face model generating module may generate an approximate 3D face model based on the output image based on the six elements. A procedure is described below.

First, a face area is detected from the RGB image and two eyes are detected from the face area. The face area and the eyes may be accurately detected using a conventional Haar detection.

Second, the face area is separated from the RGB image. FIG. 2 is a diagram illustrating a face part separated from an RGB image according to example embodiments. According to a method of the separation of FIG. 2, a 2D ellipse modeling may be used, and an ellipse parameter may be calculated based on coordinates of the two eyes, namely, $(x_1, y_1)$ and $(x_2, y_2)$. A rotation of the face area may be adjusted so that $y_1 = y_2$. A separated face area forms an ellipse:

$$\frac{(x - \bar x)^2}{a^2} + \frac{(y - \bar y)^2}{b^2} = 1 \qquad \text{(Equation 1)}$$

A center of the ellipse is $(\bar x, \bar y)$.

$$a = a_1 \times D \qquad \text{(Equation 2)}$$

In Equation 2, $D$ denotes a distance between the two eyes and $a_1$ denotes an invariable parameter.

$$D = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} \qquad \text{(Equation 3)}$$

A long axis and a short axis are respectively determined as discussed below.

$$b = a_2 \times D \qquad \text{(Equation 4)}$$

In Equation 4, $a_1$ and $a_2$ are two invariable values. $a_1$ and $a_2$ may be calculated based on 2D face image modeling. Subsequently, the separated face area may be obtained.

Although a 3D face model may be obtained by covering the obtained ellipse face area on an image of six elements, the generated 3D face model may include a great amount of noise. Therefore, denoising with respect to the generated 3D face model may be performed. The denoising may be completed based on a model-based denoising module and a non-model based denoising module.

Noise in a direction of x and noise in a direction of y may be reduced in the separated face area that is separated using the ellipse model. The non-model based denoising module may remove noise in a direction of z, namely, depth noise.

[0048] FIG. 3 illustrates a depth noise removing method using a substitution according to embodiments, and FIG. 4 illustrates a weight function when a scale-independent variance is performed according to example embodiments.

The depth noise may be removed based on the substitution of FIG. 3. A key point of the substitution may be calculating a scale-independent variance (V) in a direction of z. When all sample points are multiplied by the same ratio factor, V may remain an invariable variance throughout the example embodiments.

In operation 301, V is calculated with respect to a 3D face model of a 3D face model generating module. V may be calculated as given in Equation 5:

$$V = \frac{\bar V}{V'}, \qquad V' = \sum_i (X_i - \bar X)^T (X_i - \bar X), \qquad \bar V = \frac{\sum_i W(X_i - \bar X)\,(X_i - \bar X)^T (X_i - \bar X)}{\sum_i W(X_i - \bar X)} \qquad \text{(Equation 5)}$$

In Equation 5, $W(d)$ denotes a weight function of FIG. 4. The weight may decrease as a difference value ($d$) gets larger, and the weight may decrease to zero when $d$ reaches a predetermined threshold. The weight function may be selected based on the described feature.
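As a rough illustration of Equations 1 through 4, the sketch below (Python; entirely hypothetical, since the patent gives no reference implementation) computes the ellipse mask from detected eye coordinates. The values of a1 and a2 are placeholders because the patent does not state its invariable parameters, and the ellipse center is taken here as the eye midpoint for simplicity, whereas the text only says the ellipse is centered at $(\bar x, \bar y)$.

import numpy as np

def ellipse_face_mask(eyes, shape, a1=1.4, a2=1.8):
    """Boolean mask of the elliptical face area (Equations 1-4).

    eyes  : ((x1, y1), (x2, y2)) pixel coordinates of the two eyes,
            assumed already rotated so that y1 == y2.
    shape : (height, width) of the aligned image.
    a1, a2: placeholder values for the patent's invariable parameters.
    """
    (x1, y1), (x2, y2) = eyes
    D = np.hypot(x1 - x2, y1 - y2)                 # Equation 3
    a, b = a1 * D, a2 * D                          # Equations 2 and 4
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # assumed ellipse center
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    # Equation 1: points inside the ellipse belong to the face area.
    return ((xs - cx) / a) ** 2 + ((ys - cy) / b) ** 2 <= 1.0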
$X_i$ denotes an $i$-th vertex of the generated 3D face model, and includes information associated with the X axis, Y axis, and Z axis. $\bar X$ denotes an average value of vertices.

In operation 302, it is determined whether V is greater than a first threshold. The first threshold may be selected based on experimentation. V with respect to a 3D face model having a small amount of noise, for example, a 3D face template, may be calculated, and the calculated V may be increased based on a ratio parameter. For example, the ratio parameter may be 1.2, which may be used as the first threshold. V may be adjusted to be small through several experiments and thus, an appropriate first threshold may be obtained.

When V is less than the first threshold, noise does not exist in the 3D face model. Therefore, the non-model based denoising may be performed. When V is greater than or equal to the first threshold, noise still exists in the 3D face model. Therefore, the noise is removed in operation 303. A 3D point that is relatively far from a center of the 3D face model may be regarded as noise and may be removed. Operations 301 through 303 may be iteratively performed after removing the noise in operation 303, until V is less than the first threshold and a fine 3D face model is produced in operation 304.

It may be determined whether the 3D point is relatively far from the center of the 3D face model based on the process below.

First, an average value of vertices ($\bar z$) in a direction of z may be calculated, namely a depth direction average value of vertices. Specifically, an average value of a depth information direction may be calculated to remove depth noise:

$$\bar z_{m+1} = \frac{\sum_i W(z_i - \bar z_m) \times z_i}{\sum_i W(z_i - \bar z_m)} \qquad \text{(Equation 6)}$$

According to example embodiments, $\bar z$ may be calculated based on substitution. $W$ in $W(z_i - \bar z_m)$ may be a weight function, and may be selected as described in FIG. 4. $z_i$ is a depth coordinate of an $i$-th vertex. $m$ is a subscript of $\bar z$. A parameter convergence may be represented, and the substitution may be stopped when a difference between $\bar z_{m+1}$ and $\bar z_m$ is sufficiently small, for example, when the difference is 0.0001, after an $m$-th substitution is performed.

Subsequently, a second threshold ($T$) may be compared with a difference between a depth of the vertex and $\bar z$. Specifically, it may be determined whether $|z_i - \bar z| > T$ is satisfied. When the difference is greater than $T$, the vertex may be regarded as being relatively far from the center of the 3D face model. Specifically, the vertex may be regarded as noise and may be removed. Conversely, when the difference is less than or equal to $T$, the vertex may not be noise and thus, the vertex may remain. $T$ may be calculated as follows:

$$T = \mathrm{const} \times \frac{\sum_i W(z_i - \bar z)\,(z_i - \bar z)^2}{\sum_i W(z_i - \bar z)} \qquad \text{(Equation 7)}$$

In Equation 7, const denotes a constant, for example, 3, and $\bar z$ is a result of the convergence of Equation 6.

The noise in a depth direction may be removed based on the above described process. After passing through the 3D face model generating module and the non-model based denoising module, most of the noise may be removed. As a result, a 3D face model having a small amount of noise may be obtained.

A face in the 3D face model and a face in the 3D template are highly similar to each other. The 3D face model and the 3D face template may be aligned, and noise may be removed based on a registration between the 3D face model and the 3D face template.
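Before moving on to the model-based stage, note that Equations 5 through 7 amount to a robust, iteratively reweighted estimate of the mean face depth followed by an outlier cut. The sketch below is one plausible reading of that loop; the Gaussian weight and the numeric constants are assumptions (the patent only requires a weight that decays to zero with distance, per FIG. 4), and the threshold T is computed exactly as Equation 7 prints it, even though an implementation might prefer its square root so that T carries the units of a distance.

import numpy as np

def gaussian_weight(d, sigma):
    # Any weight that decays to zero for large |d| would do (cf. FIG. 4).
    return np.exp(-0.5 * (d / sigma) ** 2)

def remove_depth_noise(z, sigma, const=3.0, tol=1e-4):
    """Equations 6-7: robust mean depth by substitution, then outlier cut.

    z : 1-D array of vertex depth coordinates.
    Returns a boolean mask of the vertices to keep.
    """
    z_bar = z.mean()
    while True:                                  # substitution of Equation 6
        w = gaussian_weight(z - z_bar, sigma)
        z_next = (w * z).sum() / w.sum()
        converged = abs(z_next - z_bar) < tol    # the patent's 0.0001 criterion
        z_bar = z_next
        if converged:
            break
    w = gaussian_weight(z - z_bar, sigma)
    T = const * (w * (z - z_bar) ** 2).sum() / w.sum()   # Equation 7
    return np.abs(z - z_bar) <= T                # drop far-away vertices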
Specifically, the 3D face model and the 3D face template may be aligned and the noise may be removed during the model-based denoising.

[0066] FIG. 5 illustrates an example of a 3D face template used in example embodiments.

A scale of the 3D face template and a scale of a 3D face model may be different from each other. However, a conventional alignment calculation, namely, an iterative closest point (ICP) calculation, may demand that a scale of an input model and a scale of a reference model be the same. Also, the conventional ICP calculation may not align the 3D face model and the 3D face template. Therefore, a new alignment calculation, namely, a multi-feature ICP, may be provided.

[0068] FIG. 6 illustrates a multi-feature Iterative Closest Point (ICP) calculation according to example embodiments.

According to the multi-feature ICP calculation, a scale factor is calculated with respect to an input 3D face model in operation 601 based on Equation 8:

$$s = \frac{\sum_i (X_i - \bar X)\,(X_i - \bar X)^T}{\sum_i (X_i' - \bar X')\,(X_i' - \bar X')^T} \qquad \text{(Equation 8)}$$

In Equation 8, $X_i$ denotes an $i$-th vertex of the 3D face model, and $\bar X$ denotes an average value of vertices of the 3D face model. Here, $X_i'$ denotes an $i$-th vertex of the 3D face template, and $\bar X'$ denotes an average value of vertices of the 3D face template.

Unlike the conventional ICP, the inputted 3D face model may include coordinates of a vertex and color information. A normal direction of the vertex and information associated with an area adjacent to the vertex may also be calculated.

In operation 602, unification with respect to the input 3D face template is performed based on the calculated scale factor. In operation 603, vertex registration with respect to the 3D face model and the 3D face template is performed, and points remaining after the registration between the 3D face model and the 3D face template, namely, noise points, are removed.

To calculate a robust vertex correspondence between the 3D face model and the 3D face template during the multi-feature ICP, the multi-feature may be selected to perform a point registration, and a correspondence point may be detected. The multi-feature may include 3D coordinates of the vertex, color information, a normal direction, and information associated with an area adjacent to the vertex. A location of an accurate correspondence may be promptly and stably determined based on the above information. Therefore, the vertex registration of the multi-feature ICP may be performed more accurately.

In operation 604, a mobility coefficient and a rotation coefficient with respect to the vertex of the vertex registration are calculated. In operation 605, it is determined whether the mobility coefficient and the rotation coefficient converge. When the mobility coefficient and the rotation coefficient converge, it may indicate that the registration between the 3D face model and the 3D face template has already been performed. Conversely, when the mobility coefficient and the rotation coefficient do not converge, the scale factor may be updated in operation 606. Specifically, the scale factor may be calculated again based on Equation 9:

$$s = \frac{\sum_i (X_i - \bar X)\,(X_i - \bar X)^T}{\sum_i (X_i' - \bar X')\,(X_i' - \bar X')^T} \qquad \text{(Equation 9)}$$

In Equation 9, $X_i$ denotes an $i$-th vertex of the 3D face model, and $\bar X$ denotes an average value of vertices of the 3D face model. $X_i'$ denotes an $i$-th vertex of the 3D face template, and $\bar X'$ denotes an average value of vertices of the 3D face template.
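Stripped of the feature machinery, the scale-updating loop of FIG. 6 (including the update step of operations 606-607, described next) might look like the skeleton below. Here register_vertices and solve_rigid stand for the multi-feature correspondence search of operation 603 and the usual least-squares rotation/translation fit of operation 604, neither of which the text specifies in detail; the square root reflects that Equation 8 compares squared spreads.

import numpy as np

def scale_factor(X, Xt):
    """Equation 8: ratio of the squared spreads of model X and template Xt."""
    num = ((X - X.mean(axis=0)) ** 2).sum()
    den = ((Xt - Xt.mean(axis=0)) ** 2).sum()
    return num / den

def multi_feature_icp(X, Xt, register_vertices, solve_rigid,
                      max_iter=50, tol=1e-6):
    """Skeleton of the multi-feature ICP of FIG. 6 (operations 601-607)."""
    for _ in range(max_iter):
        s = scale_factor(X, Xt)                  # operations 601 / 606
        Xt_s = Xt * np.sqrt(s)                   # operation 602: unify scales
        pairs = register_vertices(X, Xt_s)       # operation 603: multi-feature
        R, t = solve_rigid(X, Xt_s, pairs)       # operation 604
        X = X @ R.T + t                          # move the model
        if np.linalg.norm(t) < tol and np.allclose(R, np.eye(3), atol=tol):
            break                                # operation 605: converged
    return X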
In operation 607, the input 3D face model may be updated based on the updated scale factor. Operation 602 through operation 605 may be iteratively performed until the mobility coefficient and the rotation coefficient converge. Specifically, an alignment between the 3D face model and the 3D face template may be completed.

The multi-feature ICP may calculate a scale factor between two models and iteratively perform aligning of the two models. Although the two models have different scales, the multi-feature ICP may reliably align the 3D face model and the 3D face template.

According to example embodiments, a fine 3D face model may be obtained by a process of a 3D face capturing apparatus.

[0082] FIG. 7 illustrates a 3D face capturing method according to example embodiments.

A color image and a depth image are captured using a CCD camera and a depth camera. An image having six elements, namely, R, G, B, x, y, and z, is obtained through an image alignment in operation 701. In operation 702, a 3D face model is generated and most noise on an xy-plane is removed. In operation 703, most noise in a depth direction, namely, most noise in a z direction, is removed. In operation 704, the 3D face model and the 3D face template are aligned based on a multi-feature ICP calculation, and residual noise points decrease. The multi-feature ICP has been described above and thus, detailed descriptions thereof will be omitted. After these four processes, an accurate 3D face model may be obtained.

The 3D face capturing method is only an example, and the 3D face capturing method may be applicable to various 3D image capturing methods. Furthermore, the method may be applicable to 3D images of all animals or to 3D images of transportation, such as an aircraft, a vehicle, and the like. A time to remodel the 3D image may be decreased, and the remodeled 3D image may be more reliable. A 3D face capturing apparatus may be inexpensive and may be easily embodied.

The above-described embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. Examples of computer-readable media (computer-readable storage devices) include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM disks and DVDs; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. The computer-readable media may be a plurality of computer-readable storage devices in a distributed network, so that the program instructions are stored in the plurality of computer-readable storage devices and executed in a distributed fashion. The program instructions may be executed by one or more processors or processing devices. The computer-readable media may also be embodied in at least one application specific integrated circuit (ASIC) or Field Programmable Gate Array (FPGA). Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
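Looking back at the four operations of FIG. 7, the hypothetical helpers sketched above could be chained roughly as follows; capture and image alignment (operation 701) are left to the camera hardware, and every name here is illustrative rather than part of the patent.

def capture_3d_face(rgbxyz_image, eyes, template,
                    register_vertices, solve_rigid, sigma=10.0):
    """Toy end-to-end pipeline mirroring operations 702-704 of FIG. 7."""
    # 702: cover the ellipse face area on the aligned six-element image.
    mask = ellipse_face_mask(eyes, rgbxyz_image.shape[:2])
    verts = rgbxyz_image[mask][:, 3:6]       # keep the (x, y, z) channels
    # 703: remove depth-direction (z) noise.
    keep = remove_depth_noise(verts[:, 2], sigma)
    verts = verts[keep]
    # 704: align against the template; residual noise points decrease.
    return multi_feature_icp(verts, template, register_vertices, solve_rigid)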
The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described exemplary embodiments, or vice versa.

Although a few embodiments have been shown and described, it should be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the disclosure, the scope of which is defined in the claims and their equivalents.
{"url":"http://www.faqs.org/patents/app/20110043610","timestamp":"2014-04-18T12:49:17Z","content_type":null,"content_length":"65616","record_id":"<urn:uuid:424f70bc-23db-48cd-9065-5e22aeef21f2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove log of eigenvalues are dense in R?

Suppose you have the set of all possible $n \times n$ square adjacency matrices, where $n = \{1, 2, 3, 4, \dots\}$. For each matrix, compute the logarithm of the largest eigenvalue. Is it true that the set of logarithms you obtain is dense in $\mathbb{R}$? How do you begin to prove/disprove this?

Tags: linear-algebra, eigenvalue

"adjacency matrices" means entries are either 0 or 1? – 36min Sep 20 '12 at 5:33
What is an adjacency matrix to you? Does it have to have only entries of $0$ or $1$? Does it have to have zeroes along the diagonal? Does it have to be symmetric? – Qiaochu Yuan Sep 20 '12 at 5:52
Yes, it only has to have entries of $0$ or $1$. – Ivy Sep 20 '12 at 7:50
See also this question: mathoverflow.net/questions/23989/… – Felix Goldberg Sep 20 '12 at 8:54

2 Answers

Answer (accepted): I think you mean dense in $[0,\infty)$, since the spectral radius of a nonnegative integer matrix must be at least 1 (the product of all nonzero eigenvalues must be a nonzero integer). You are effectively asking whether Perron numbers are dense in $[1,\infty)$, and this is easy to see. For example, let $A_n$ be the companion matrix of $x^n - x - 1$ and $\lambda_n$ be its spectral radius. It's easy to check that $\log \lambda_n \to 0$, and that $k \log \lambda_n = \log \lambda_n^k$, where $\lambda_n^k$ is the spectral radius of $A_n^k$, so these numbers, as $n, k = 1, 2, 3, \dots$, are dense. Finally, one can recode the nonnegative integer matrix $A_n^k$ to a larger adjacency matrix with the same spectral radius using the standard idea called "higher block presentation" from symbolic dynamics (this is described in my book with Marcus called "An Introduction to Symbolic Dynamics and Coding").

I think you mean "I think you mean dense in $[1, \infty)$", right? ;) – Qfwfq Sep 20 '12 at 8:45
I think $[0,\infty)$ is right because the OP asked about the logarithms of the largest eigenvalues. – Andreas Blass Sep 20 '12 at 13:20
Oh yes, sure! – Qfwfq Sep 20 '12 at 17:15

Answer: In addition to Doug's nice answer above: it is probably even easier to show that the set of simple Parry numbers is dense in $(1,\infty)$. More precisely, let $\beta > 1$ and let $(d_n)_{n=1}^\infty$ be the greedy $\beta$-expansion of 1, i.e.,
$$1 = \sum_{n=1}^\infty d_n \beta^{-n},$$
where $d_1 = \lfloor \beta \rfloor$, $d_2 = \lfloor \beta\,\text{frac}(\beta) \rfloor$, $d_3 = \lfloor \beta\,\text{frac}(\beta\,\text{frac}(\beta)) \rfloor$, etc. (Here $\lfloor\cdot\rfloor$ stands for the integer part and frac$(\cdot)$ for the fractional part.) A number $\beta$ is called a simple Parry number (also known as a simple $\beta$-number) if $(d_n(\beta))_1^\infty$ has only a finite number of nonzero terms (i.e., ends with $0^\infty$).

It is known that any Parry number is a Perron number; also, it is obvious that the Parry numbers are dense, since for any $\beta$ with an infinite $(d_n(\beta))_1^\infty$ we can truncate this sequence at any term and get a $(d_n(\beta'))_1^\infty$ for some simple Parry number $\beta'$. Since $(d_n(\beta))_1^\infty$ and $(d_n(\beta'))_1^\infty$ are close (in the topology of coordinate-wise convergence), so are $\beta$ and $\beta'$. For more details and some references you may read the first couple of pages of this paper, for instance.
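A quick numerical check of the companion-matrix construction in the accepted answer is easy to run. The following sketch (the helper name is mine; it uses numpy) computes log λ_n as the log of the spectral radius of the companion matrix of x^n − x − 1, showing that the step sizes shrink toward 0 while their integer multiples reach arbitrarily far out, which is exactly the density claim:

import numpy as np

def log_spectral_radius(n):
    """log of the spectral radius of the companion matrix of x^n - x - 1."""
    coeffs = np.zeros(n + 1)
    coeffs[0] = 1.0      # x^n
    coeffs[-2] = -1.0    # -x
    coeffs[-1] = -1.0    # -1
    return np.log(max(abs(np.roots(coeffs))))

# log(lambda_n) -> 0, so {k * log(lambda_n) : n, k = 1, 2, 3, ...}
# forms an arbitrarily fine mesh of [0, infinity).
for n in (2, 5, 10, 50):
    step = log_spectral_radius(n)
    print(n, round(step, 6), [round(k * step, 4) for k in range(1, 5)])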
{"url":"http://mathoverflow.net/questions/107653/prove-log-of-eigenvalues-are-dense-in-r?sort=newest","timestamp":"2014-04-21T02:40:29Z","content_type":null,"content_length":"62756","record_id":"<urn:uuid:4bb442a0-079d-4ad0-b586-8e90ebbf5116>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
help with a problem of probability combinations

January 4th 2010, 04:42 PM

My first post; let's see. I was solving a problem of probability combinations, but I had a little problem dealing with one question. Here is the problem:

"A student must take a test that consists of 3 questions randomly selected from a list of 100 questions (each question has the same probability of being selected). To pass the test he needs to answer at least two questions correctly. What is the probability that the student passes the test if he only knows the answers to 90 questions on the list?"

I started by computing the following:

(i) (100 combination 3) = 161700 ........... number of possible groups of 3 questions
(ii) (90 combination 3) = 117480 ........... number of possible groups of 3 questions the student really knows

I did not know which is the total probability: 1/100 (for each question) or 1/(100 combination 3) (for each group of questions). (Worried)

Also: how do the 10 questions that the student does not know affect the probability?

Very grateful in advance for a response.

January 4th 2010, 04:50 PM

The entire space consists of all possible combinations of choosing three questions out of 100. There are, however, 90C3 and 90C2*10 possible combinations that will allow him to pass the test, so:

Total # of possible 3-question combinations: 100 choose 3
Total # of possible 3-question combinations where the student knows all the answers: 90C3
Total # of possible 3-question combinations where the student knows two answers and doesn't know the last one: (90 choose 2) * 10

Probability he will be given a 3-question combination he knows: $\frac{90C3 + 90C2 \cdot 10}{100C3}$

This assumes that the test is premade, and it's simply a matter of selecting one of the premade tests. If he is actually picking questions then it's much easier:

$\frac{90}{100}\cdot\frac{89}{99} + \frac{90}{100}\cdot\frac{89}{99}\cdot\frac{88}{98}$

January 4th 2010, 05:09 PM

I understand that 1-q = 90C2 ... but why "*10"?

January 4th 2010, 05:24 PM

There are ten questions he does not know - therefore, if you have 90C2, you need to multiply that number by 10 to get the overall number of 3-question tests he can get where he knows two of the answers (90C2) and doesn't know the last one (10 of those).

January 4th 2010, 05:36 PM

lol. It's true, I didn't think of that. Thank you very much.

January 4th 2010, 05:41 PM

Argh. My mistake on the second "scenario": it should actually be a binomial, with

$(100C3)(\frac{90}{100})^2(\frac{10}{100}) + (100C3)(\frac{90}{100})^3(\frac{10}{100})^0$

January 4th 2010, 07:17 PM

Hello, killertapia! Welcome aboard!

A student takes a test that has 3 questions randomly selected from a list of 100 questions (each question has the same probability of being selected).
To pass the test he needs to answer at least two questions correctly. What is the probability that the student passes the test if he knows the answers to only 90 questions on the list?

I got the same answer as ANDS!

What is the probability that he fails the test?

There are $_{100}C_{3} = 161,\!700$ possible tests.

How many of these tests contain no answers he knows? There are $_{10}C_3 = 120$ such tests.

How many of these tests have exactly one answer he knows? There are $\left(_{90}C_1\right)\left(_{10}C_2\right) = 4,\!050$ such tests.

Hence, he fails on $120 + 4,\!050 = 4,\!170$ of the tests.

And: $P(\text{fail}) = \frac{4,\!170}{161,\!700} = \frac{139}{5390}$

Therefore: $P(\text{pass}) = 1 - \frac{139}{5390} = \frac{5251}{5390}$

January 4th 2010, 07:23 PM

I'm sorry, but are you sure that is right? I mean, I know how the binomial function works, but with this data I get a very large number. On the other hand, thanks for the other scenario; I get the correct answer.
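For anyone who wants to verify the counts in this thread, a few lines of Python (standard library only; the variable names are mine) reproduce Soroban's exact hypergeometric answer and show why the binomial formula, as written with the extra 100C3 factor, blows up, while the plain binomial is only an approximation:

from math import comb
from fractions import Fraction

total = comb(100, 3)                              # 161,700 possible tests
fail = comb(10, 3) + comb(90, 1) * comb(10, 2)    # 0 or 1 known answers
p_pass = Fraction(total - fail, total)
print(p_pass, float(p_pass))                      # 5251/5390 ~ 0.9742

# Binomial approximation (questions treated as independent draws):
p = Fraction(90, 100)
p_binom = 3 * p**2 * (1 - p) + p**3
print(float(p_binom))                             # ~ 0.972, close but inexact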
{"url":"http://mathhelpforum.com/statistics/122435-help-problem-probability-combinations-print.html","timestamp":"2014-04-17T05:51:31Z","content_type":null,"content_length":"14389","record_id":"<urn:uuid:c7a736d0-d0d1-4eaf-ba6b-f08a32bde5ef>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
What is PASSHEMA?

PASSHEMA is the organization of the Pennsylvania State System of Higher Education (PASSHE) educators who are involved in courses or programs that are considered to be a part of mathematics or the mathematical sciences. This includes faculty members from the 14 universities of PASSHE in Mathematics Departments, Mathematics and Computer Science Departments, Developmental Mathematics Departments, and Education Departments who teach mathematics. All faculty in such departments are members of the Association. Currently, no dues are collected from the member departments for inclusion in PASSHEMA.

The purpose of PASSHEMA is for members to share information on the teaching and learning of mathematics, and on other research topics in the mathematical sciences. To quote the PASSHEMA Constitution, some of the goals of PASSHEMA are:

* "To further the propagation of mathematics and mathematical sciences education within member institutions throughout Pennsylvania.
* "To provide a forum for the discussion of common problems within the member institutions as related to mathematics and mathematical sciences education.
* "To encourage cooperation within the member institutions as related to mathematics and mathematical sciences education.
* "To encourage coordination of activities within the member institutions related to mathematics and mathematical sciences education."

To help PASSHEMA reach these goals, the Association holds a conference each year at one of the member institutions on a rotating basis. The 1999 conference was held at Shippensburg University, and future conferences are scheduled for Clarion University in 2000 and Lock Haven University in 2001.

The managing body of PASSHEMA is its Executive Board, consisting of one representative from each of the departments which are departmental members, plus the Program Chair and the Assistant Program Chair for the annual conference. The four officers of PASSHEMA (Chair, Vice-Chair, Secretary, and Treasurer) are elected from the members of the Executive Board.
{"url":"http://www.passhema.org/FAQs/indexFAQs.html","timestamp":"2014-04-20T05:53:03Z","content_type":null,"content_length":"3174","record_id":"<urn:uuid:91927a2e-d349-4d02-b6f9-e7aed4e62eed>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Cocycles for Differential Characteristic Classes

Posted by Urs Schreiber

The previous entry mentioned that Chern-Weil theory exists in every cohesive $\infty$-topos $\mathbf{H}$. For $\mathbf{H} = \infty LieGrpd$, the topos of smooth $\infty$-groupoids, this reproduces ordinary Chern-Weil theory and generalizes it from smooth principal bundles over Lie groups to principal $\infty$-bundles over Lie $\infty$-groups.

We have been trying to write up some basics of this $\infty$-Chern-Weil theory in the smooth context a bit more carefully. Presently the result is this:

• Domenico Fiorenza, Urs Schreiber, Jim Stasheff, Cocycles for differential characteristic classes - An $\infty$-Lie theoretic construction

Abstract. We define for every $L_\infty$-algebra $\mathfrak{g}$ a smooth $\infty$-group $G$ integrating it, and define $G$-principal $\infty$-bundles with connection. For every $L_\infty$-algebra cocycle of suitable degree we give a refined $\infty$-Chern-Weil homomorphism that sends these $\infty$-bundles to classes in differential cohomology that lift the corresponding curvature characteristic classes. As a first example we show that applied to the canonical 3-cocycle on a semisimple Lie algebra $\mathfrak{g}$, this construction reproduces the Cech-Deligne cocycle representative for the first differential Pontryagin class that was found by Brylinski-MacLaughlin. If its class vanishes, there is a lift to a $\mathrm{String}(G)$-connection on a smooth String-2-group principal bundle. As a second example we describe the higher Chern-Weil homomorphism applied to this String-bundle which is induced by the canonical degree 7 cocycle on $\mathfrak{g}$. This yields a differential refinement of the fractional second Pontryagin class which is not seen by the ordinary Chern-Weil homomorphism. We end by indicating how this serves to define differential String-structures.

Posted at November 7, 2010 7:10 PM UTC
{"url":"http://golem.ph.utexas.edu/category/2010/11/cocycles_for_differential_char.html","timestamp":"2014-04-17T10:52:58Z","content_type":null,"content_length":"15238","record_id":"<urn:uuid:aa2960da-0365-466d-93e4-8c284174bc49>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiple solutions to nonlinear Schrödinger equations with critical growth

In 2000, Cingolani and Lazzo (J. Differ. Equ. 160:118-138, 2000) studied nonlinear Schrödinger equations with competing potential functions and considered only the subcritical growth. They related the number of solutions to the topology of the set of global minima of a suitable ground energy function. In the present paper, we establish these results in the critical case. In particular, we remove a condition that is key in their paper. In the proofs we apply variational methods and Ljusternik-Schnirelmann theory.

MSC: 35J60, 35Q55.

Keywords: nonlinear Schrödinger equations; critical growth; variational methods

1 Introduction and main result

We investigate the nonlinear Schrödinger equation (1.1), which arises in quantum mechanics and provides a description of the dynamics of the particle in a non-relativistic setting. Here ħ is Planck's constant, m denotes the mass of the particle, V is the electric potential, g is the nonlinear coupling, and ψ is the wave function representing the state of the particle. A standing wave solution of equation (1.1) is a solution of the form $\psi(x,t) = e^{-iEt/\hbar} v(x)$. It is clear that ψ solves (1.1) if and only if v solves the corresponding stationary equation (1.2). For simplicity and without loss of generality, we normalize the physical constants, so that equation (1.2) is equivalent to (1.3).

A considerable amount of work has been devoted to investigating solutions of (1.3). The existence, multiplicity and qualitative properties of such solutions have been extensively studied. For single interior spike solutions in the whole space $\mathbb{R}^N$, please see [1-9] etc. For multiple interior spikes, please see [10,11] etc. For single boundary spike solutions with Neumann boundary condition, please see [6,12-15] etc. For multiple boundary spikes, please see [16-18] etc. In particular, Wang and Zeng [9] studied the existence and concentration behavior of solutions for NLS with competing potential functions. Cingolani and Lazzo in [19] obtained multiple solutions for the similar equation. In those papers only the subcritical growth was considered. In the present paper, we complete these studies by considering a class of nonlinearities with critical growth. In particular, we remove a condition that is key in [19].

In the sequel, we restrict ourselves to the critical case. More specifically, we study the problem (1.4), where the nonlinearity f satisfies hypotheses (f[1])-(f[5]); in particular:

(f[5]) the function $f(t)/t$ is strictly increasing in $t$ for any $t > 0$.

Our main result is the following theorem.

Theorem 1.1. Suppose that f satisfies (f[1])-(f[5]) and that V is a continuous function in $\mathbb{R}^N$. Then, when ε is sufficiently small, the problem (1.4) has at least $\mathrm{cat}(\Sigma)$ distinct nontrivial solutions. Here $\mathrm{cat}(\Sigma)$ denotes the Ljusternik-Schnirelmann category of Σ in Σ.

By definition (e.g., [20]), the category of A with respect to M, denoted by $\mathrm{cat}_M(A)$, is the least integer k such that $A \subseteq A_1 \cup \cdots \cup A_k$, with $A_j$ ($j = 1, \dots, k$) closed and contractible in M. We set $\mathrm{cat}_M(\emptyset) = 0$ and $\mathrm{cat}_M(A) = \infty$ if there are no integers with the above property. We will use the notation $\mathrm{cat}(\Sigma)$ for $\mathrm{cat}_\Sigma(\Sigma)$.

To prove Theorem 1.1, we mainly use the ideas of [15,19,21]. More precisely, we can show that the Palais-Smale condition holds in a suitable sublevel set of the energy functional (see (4.6)). Hence the standard Ljusternik-Schnirelmann category theory can be applied in this sublevel set to yield the existence of at least $\mathrm{cat}(\Sigma)$ critical points. We then construct two continuous mappings, $\Phi_\varepsilon$ from Σ into the sublevel set and a barycenter-type map β back toward Σ. A topological argument then asserts that the category of the sublevel set is at least $\mathrm{cat}(\Sigma)$. We will also prove that if u is a critical point with sufficiently low energy, then u cannot change sign. Hence we obtain at least $\mathrm{cat}(\Sigma)$ nontrivial critical points.
The paper is organized as follows. In Section 2, we collect some notations and preliminaries. A compactness result is given in Section 3, which is a key step in our proof. Finally, in Section 4, we prove Theorem 1.1. 2 Notations and preliminaries is the usual Sobolev space of real-valued functions defined by with the normal Let be the subspace of a Hilbert space with respect to the norm We denote by S the Sobolev constant for the embedding , namely where is the usual Sobolev space of real-valued functions defined by We say that a function is a weak solution of the problem (1.4) if In view of (f[2]) and (f[3]), we have that the associated functional given by is well defined. Moreover, with the following derivative: Hence, the weak solutions of (1.4) are exactly the critical points of . Let us recall some known facts about the limiting problem, namely the problem here acts as a parameter instead of an independent variable. Solutions of (2.2) will be sought in the Sobolev space as critical points of the functional The least positive critical value can be characterized as An associated critical point w actually solves equation (2.2) and is called a ground state solution or the least energy solution, i.e., w satisfies Moreover, there exist and such that For more details, please see [22,23]. For any , we denote . We need to estimate the super bound of . In order to do this, we estimate . We shall use a family of radial function defined by It is known [20] that Moreover, we have Set , where is a cut-off function satisfying if , if and . After a detailed calculation, we have the following estimates: Since , from (2.5)-(2.7), we conclude then the maximum value of the right-hand side is achieved at Hence we have We denote the Nehari manifold of by 3 Compactness result Proposition 3.1Let as . Assume that satisfies as . Then uniformly in , there exist a subsequence of (still denoted by ), and such that . Furthermore, converges strongly in tow, the positive ground state solution of equation (2.2). Proof Let be such that . Then, by a change of variable , we have This implies that is bounded in . Noting that since . Now we prove that there exists a sequence and constants such that Indeed, if this is not true, then the boundedness of in and a lemma due to Lions [[24], Lemma I.1] imply that in for all . Given , we can use (f[2]), (f[3]) and to get and consequently (3.2) yields However, recall the definition of S in (2.1), equivalent to , contradicting (3.6). Thus, (3.4) holds. Using the idea of [21,25], along a subsequence as , we may assume that We now consider such that (see (2.3)). By a change of variable , it follows that Hence , from which it follows that in . Since and are bounded in and in , the sequence is bounded. Thus, up to a subsequence, . If , then , which does not occur. Hence , and therefore the sequence satisfies By the Hölder inequality, Hence , the dual space of . Consequently, as , in implies , i.e., Since converges weakly to w in , is bounded in . Thus is bounded in . It then follows that there is a subsequence of , still denoted by , such that converges weakly to some in . Next we will show . Choose a sequence of open relatively compact subsets, with regular boundaries, of covering , i.e., . It is easy to see that, by compact embedding, in for any . Hence a.e. on . Hence a.e. on . By the Brezis and Lieb lemma [26], we conclude that strongly in . Thus a.e. on each , and then the diagonal rule implies a.e. on . 
Hence Similarly, we have By (f[2]) and (f[3]), Hence when R is large enough, we get Noting that in , . Therefore we have By (3.9)-(3.12), we derive that For any let us consider the measure sequence defined by We assume By the concentration-compactness lemma [24], there exists a subsequence of (denoted in the same way) satisfying one of the three following possibilities. Compactness: There exists a sequence such that for any there is a radius with the property that Dichotomy: There exists a number , , such that for any there is a number and a sequence with the following property: Given there are non-negative measures , such that We are going to rule out the last two possibilities so that compactness holds. Our first goal is to show that vanishing cannot occur. Otherwise, Now for the harder part. Let η be a smooth nonincreasing cut-off function, defined in , such that if ; if ; and . Also, let . We define a nondecreasing function on . Denote by . We show now that dichotomy does not occur. Otherwise there exists such that for some and the function splits into and with the following properties: If we denote (3.14) becomes Using Dichotomy (iii), we get which implies Now we observe that , therefore Recall that (see (2.3)), which implies Then using and in place of , respectively, we get There exists such that , i.e., By (f[2]) and (f[3]), , we see cannot go zero, that is, . If , by (3.21), (3.22) and (f[4]), we get since (f[5]). By (3.15), a contradiction. Thus . Assume that , we will show . By (3.21) and (3.22), we have Hence by the Lebesgue dominated convergence theorem, we get By (f[5]), we have . Similarly, . Using this together with (3.16), (3.17), (3.18) and (3.19), we obtain Contradiction! Thus dichotomy does not occur. With vanishing and dichotomy ruled out, we obtain the compactness of a sequence , i.e., there exist and for each , there exists such that Then must be bounded, for otherwise (3.25) would imply, in the limit , for some positive constants , independent of δ, which implies , contrary to (3.8). From the foregoing, it follows that there exist bounded nonnegative measures , on such that weakly and tightly as . Lemma I.1 in [27] declares that there exist sequences , such that where denotes a Dirac measure, . Take in the support of the singular part of , . We consider such that Choosing the test function , from , we have This reduces to hence (3.27)(3) states and hence which implies that the set J is at most finite. Here CardJ is the cardinal numbers of set J. Hence When n is large enough, recall (see (2.11)), together with (3.30) and (3.31), we obtain a contradiction. Therefore J is empty, that is, as . By the Brezis and Lieb lemma [26] again, we get Equation (3.25) and compact embedding theorem imply This together with (3.13), (3.20) and (3.32) allows us to deduce easily Since is a uniformly convex Banach space, hence From (3.32), (3.33) and (3.34), we can obtain i.e., w is the ground state solution of (2.2) in view of (3.13). The proof of Proposition 3.1 is complete.□ 4 Proof of Theorem 1.1 Proposition 4.1Supposefsatisfies (f[2])-(f[4]). Then satisfies the -condition for all , that is, every sequence in such that , , as , possesses a convergent subsequence. Proof Suppose that is a sequence in such that , , as . Using (f[4]), by a change of variable , we obtain that This implies that is bounded in . Therefore we may assume in and a.e. Let . Then and by the Brezis-Lieb lemma [26], For convenience, we denote by It is clear that It is easy to verify that . 
Hence we have and thus since by (f[2]) and (f[3]). If , then , hence as , and we can obtain the desired conclusion. Hence it remains to show that . By a change of variable, from we get By the Sobolev inequalities, Letting , we get , so either which contradicts (4.5) or .□ Let be fixed. Let η be a smooth nonincreasing cut-off function, defined in , such that if ; if ; and for some . For any , let where w is the positive ground state of (2.2). We may assume that is the unique positive number such that Let be any positive function tending to 0 as , we define the sublevel By Lemma 4.2 below, is not empty for ε sufficiently small. By noticing that , we can define as Lemma 4.2Uniformly in , we have Proof Let . Computing directly, we have By a change of variable , we obtain By the exponential decay of ω, we get uniformly for . Therefore, in the limit that ε is very small, thanks to (4.8) (4.9) and (4.11), we find On the other hand, following the idea of [21,25], from , by the change of variables , we get which contradicts (4.12). Thus, up to a subsequence, . Since f has subcritical growth and , it follows that . Thus, we can take the limit in (4.13) to obtain from which it follows that . Since w also belongs to , we conclude that . This and Lebesgue’s theorem imply that uniformly for . Noting , from (4.12), (4.16) and (4.17), we have Thus (4.7) is proved.□ Let be the center of mass of in terms of the norm: Proof By change of variable , we have By Proposition 3.1, converges strongly in to w, which is a positive ground state solution of equation (2.2). Thanks to the exponential decay of w (see (2.4)), we obtain as . This completes the proof of Lemma 4.3.□ Proof of Theorem 1.1 By Proposition 4.1, satisfies the -condition for all . Now let us choose a function such that as and such that is not a critical level for . For such , let us introduce the set as in (4.6). Then the standard Ljusternik-Schnirelmann theory implies that has at least critical points on (also see [21]). By Lemma 4.3, we can assume that for any , there exists such that for any . For such ε, by Lemma 4.2, we have uniformly for , thus . Recall , calculating directly, we get as uniformly for . Hence the map is homotopical equivalence to the inclusion for ε small enough. We denote . It is easy to verify that and (cf. [[19], Lemma 2.2]). Hence we have Next we show that if u is a critical point of satisfying , then u cannot change sign. Indeed, if with and , then from , we have By change of variable , we get Also, noting which is a contradiction. Therefore there exist at least nonzero critical points of and thus solutions of equation (1.4).□ Authors’ contributions WL carried out the genetic studies, participated in the sequence alignment and drafted the manuscript. PZ checked the references. This work was partially supported by the National Natural Science Foundation of China (11261052). The authors are grateful to Prof. Guowei Dai for pointing out several mistakes and valuable comments. Sign up to receive new article alerts from Boundary Value Problems
{"url":"http://www.boundaryvalueproblems.com/content/2013/1/199","timestamp":"2014-04-18T08:43:10Z","content_type":null,"content_length":"410867","record_id":"<urn:uuid:986da9ba-14ed-43be-bf36-c1a4d41bca94>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: RE: RE: Extracting B coefs and p's from "foreach" regression

From: "VISINTAINER PAUL" <PAUL_VISINTAINER@nymc.edu>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: RE: RE: RE: Extracting B coefs and p's from "foreach" regression
Date: Wed, 24 Nov 2004 11:52:22 -0500

I apologize to everyone for the multiple posting. Wendy, there is a carriage return after "replace" that is not registering in my postings.

Paul F. Visintainer, PhD
Professor and Program Director
Health Quantitative Sciences
School of Public Health
New York Medical College

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of VISINTAINER PAUL
Sent: Wednesday, November 24, 2004 11:50 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: RE: Extracting B coefs and p's from "foreach" regression

I'm not sure why the formatting was altered on my last posting. Hopefully this will be correct.

tempname out
postfile `out' <output variable names here> using outfile, replace
foreach var of varlist <variables to be analyzed> {
    qui ttest `var', by(case)
    post `out' ("`var'") (r(mu_2)) (r(sd_2)) (r(N_2)) (r(mu_1)) (r(sd_1)) (r(N_1)) (r(p))
}
postclose `out'
use outfile, clear
qui {
    for var case sd1 control sd2: format X %6.2f
    format p %4.3f
}
list, clean noobs

Paul F. Visintainer, PhD
Professor and Program Director
Health Quantitative Sciences
School of Public Health
New York Medical College

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of VISINTAINER PAUL
Sent: Wednesday, November 24, 2004 11:39 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: Extracting B coefs and p's from "foreach" regression

You might try something like the following. I use it to output a table comprising results from several t-tests. (Use -return list- to identify which factors are available for use after your regression.)

tempname out
postfile `out' <output variable names here> using outfile, replace
foreach var of varlist <variables to be analyzed> {
    qui ttest `var', by(case)
    post `out' ("`var'") (r(mu_2)) (r(sd_2)) (r(N_2)) (r(mu_1)) (r(sd_1)) /*
        */ (r(N_1)) (r(p))
}
postclose `out'
use outfile, clear
qui {
    for var case sd1 control sd2: format X %6.2f
    format p %4.3f
}
list, clean noobs

*At this point, I just copy the output to a Word document.

Hope this helps,

Paul F. Visintainer, PhD
Professor and Program Director
Health Quantitative Sciences
School of Public Health
New York Medical College

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Garrard, Wendy M.
Sent: Tuesday, November 23, 2004 4:12 PM
To: statalist@hsphsun2.harvard.edu
Subject: st: Extracting B coefs and p's from "foreach" regression

Hi All,

As part of preliminary descriptive analysis I need to examine pairwise correlations between the effect size and a large number of possible covariates. The tricky part is that I have discovered I need to examine pairwise correlations which take into account a random effects component. I can do this by specifying bivariate random effect regressions using a metaregW macro in a foreach command as below:

foreach var of varlist VarGrp1_* VarGrp_2* VarGrp_3* {
    metaregW EffectSize `var' [aw=weight], model(ml)
}

But, this produces lots of extraneous output (R^2, Qs, 95%CIs, etc.)
since I only need the B1_coef and p-value from each run. So, I am looking for a way to either capture the target values in a table/list via a modification of the above command, or an alternative method of creating a list of the B1_coefs and p's for this problem.

Any suggestions?

Much thanks,

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2004-11/msg00752.html","timestamp":"2014-04-19T22:54:50Z","content_type":null,"content_length":"9630","record_id":"<urn:uuid:1b6da35a-2cfd-4f1b-ab3e-08688872791c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
In hoc signo vinces (Was: Revamping the numeric classes)

William Lee Irwin III wli@holomorphy.com
Fri, 9 Feb 2001 12:55:12 -0800

Fri, 09 Feb 2001 10:52:39 +0000, Jerzy Karczmarczuk writes:
>> Again, a violation of the orthogonality principle. Needing division
>> just to define signum. And of course a completely different approach
>> do define the signum of integers. Or of polynomials...

On Fri, Feb 09, 2001 at 07:19:21PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> So what? That's why it's a class method and not a plain function with
> a single definition.
> Multiplication of matrices is implemented differently than
> multiplication of integers. Why don't you call it a violation of the
> orthogonality principle (whatever it is)?

Matrix rings actually manage to expose the inappropriateness of signum and abs' definitions and relationships to Num very well:

class (Eq a, Show a) => Num a where
    (+), (-), (*)  :: a -> a -> a
    negate         :: a -> a
    abs, signum    :: a -> a
    fromInteger    :: Integer -> a
    fromInt        :: Int -> a  -- partain: Glasgow extension

Pure arithmetic ((+), (-), (*), negate) works just fine. But there are no good injections to use for fromInteger or fromInt, the type of abs is wrong if it's going to be a norm, and it's not clear that signum makes much sense. So we have two totally inappropriate operations (fromInteger and fromInt), one operation which has the wrong type (abs), and an operation which doesn't have well-defined meaning (signum) on matrices.

If we want people doing graphics or linear algebraic computations to be able to go about their business with their code looking like ordinary arithmetic, this is, perhaps, a real concern. I believe that these applications are widespread enough to be concerned about how the library design affects their aesthetics.

<craving> Weak coffee is only fit for lemmas.
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2001-February/001557.html","timestamp":"2014-04-17T07:39:07Z","content_type":null,"content_length":"4109","record_id":"<urn:uuid:f467f88a-1171-4649-b354-4dc3cda79a4c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Von Neumann's

    If I have freedom in my love,
    And in my soul am free,
    Angels alone, that soar above,
    Enjoy such liberty.
                  Richard Lovelace, 1649

In quantum mechanics the condition of a physical system is represented by a state vector, which encodes the probabilities of each possible result of whatever measurements we may perform on the system. Since the probabilities are usually neither 0 nor 1, it follows that for a given system with a specific state vector, the results of measurements generally are not uniquely determined. Instead, there is a set (or range) of possible results, each with a specific probability. Furthermore, according to the conventional interpretation of quantum mechanics (the so-called Copenhagen Interpretation of Niels Bohr, et al), the state vector is the most complete possible description of the system, which implies that nature is fundamentally probabilistic (i.e., non-deterministic). However, some physicists have questioned whether this interpretation is correct, and whether there might be some more complete description of a system, such that a fully specified system would respond deterministically to any measurement we might perform. Such proposals are called 'hidden variable' theories.

In his assessment of hidden variable theories in 1932, John von Neumann pointed out a set of five assumptions which, if we accept them, imply that no hidden variable theory can possibly give deterministic results for all measurements. The first four of these assumptions are fairly unobjectionable, but the fifth seems much more arbitrary, and has been the subject of much discussion. (The parallel with Euclid's postulates, including the controversial fifth postulate discussed in Chapter 3.1, is striking.)

To understand von Neumann's fifth postulate, notice that although the conventional interpretation does not uniquely determine the outcome of a particular measurement for a given state, it does predict a unique 'expected value' for that measurement. Let's say a measurement of X on a system with a state vector f has an expected value denoted by <X;f>, computed by simply adding up all the possible results multiplied by their respective probabilities. Not surprisingly, the expected values of observables are additive, in the sense that

    <X+Y; f>  =  <X; f> + <Y; f>                                (1)

In practice we can't generally perform a measurement of X+Y without disturbing the measurements of X and Y, so we can't measure all three observables on the same system. However, if we prepare a set of systems, all with the same initial state vector f, and perform measurements of X+Y on some of them, and measurements of X or Y on the others, then the averages of the measured values of X, Y, and X+Y (over sufficiently many systems) will be related in accord with (1).

Remember that according to the conventional interpretation the state vector f is the most complete possible description of the system. On the other hand, in a hidden variable theory the premise is that there are additional variables, and if we specify both the state vector f AND the "hidden vector" H, the result of measuring X on the system is uniquely determined. In other words, if we let <X; f,H> denote the expected value of a measurement of X on a system in the state (f,H), then the claim of the hidden variable theorist is that the variance of individual measured values around this expected value is zero.

Now we come to von Neumann's controversial fifth postulate.
He assumed that, for any hidden variable theory, just as in the conventional interpretation, the averages of X+Y, X and Y evaluated over a set of identical systems are additive. (Compare this with Galileo's assumption of simple additivity for the composition of incommensurate speeds.) Symbolically, this is expressed as

    <X+Y; f,H>  =  <X; f,H> + <Y; f,H>                          (2)

for any two observables X and Y. On this basis he proved that the variance ("dispersion") of at least one observable's measurements must be greater than zero. (Technically, he showed that there must be an observable X such that <X^2> is not equal to <X>^2.) Thus, no hidden variable theory can uniquely determine the results of all possible measurements, and we are compelled to accept that nature is fundamentally non-deterministic.

However, this is all based on (2), the assumption of additivity for the expectations of identically prepared systems, so it's important to understand exactly what this assumption means. Clearly the words "identically prepared" mean something different under the conventional interpretation than they do in the context of a hidden variable theory. Conventionally, two systems are said to be identically prepared if they have the same state vector (f), but in a hidden variable theory two states with the same state vector are not necessarily "identical", because they may have different hidden vectors (H). Of course, a successful hidden variable theory must satisfy (1) (which has been experimentally verified), but must it necessarily satisfy (2)? The averages of <X; f,H>, etc., evaluated over all applicable hidden vectors H, must certainly reproduce (1), but does it necessarily follow that (2) is satisfied for every (or even for ANY) specific value of H?

To give a simple illustration, consider the following trivial set of data:

    System    f    H    X    Y    X+Y
      1       3    1    2    5     5
      2       3    2    4    3     9
      3       3    1    2    5     5
      4       3    2    4    3     9

The averages over these four "conventionally indistinguishable" systems are <X;3> = 3, <Y;3> = 4, and <X+Y;3> = 7, so relation (1) holds. However, if we examine the "identically prepared" systems taking into account the hidden components of the state, we really have two different states (those with H=1 and those with H=2), and we find that the results are not additive (but they are deterministic) in these fully-defined states. Thus, equation (1) clearly doesn't imply equation (2). (If it did, von Neumann could have said so, rather than taking it as an axiom.)

Of course, if our hidden variable theory is always going to satisfy (1), we must have some constraints on the values of H that arise among "conventionally indistinguishable" systems. For example, in the above table if we happened to get a sequence of systems all in the same condition as System #1 we would always get the results X=2, Y=5, X+Y=5, which would violate (1). So, if (2) doesn't hold, then at the very least we need our theory to ensure a distribution of the hidden variables H that will make the average results over a set of "conventionally indistinguishable" systems satisfy relation (1). (In the simple illustration above, we would just need to ensure that the hidden variables are equally distributed between H=1 and H=2.)
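As a sanity check on this illustration, here is a minimal sketch (not part of the original essay) that computes the ensemble averages from the table above and confirms that additivity holds on average, even though the individual hidden states are deterministic and non-additive. The object and names are purely illustrative.

object HiddenVariableTable {
  // (H, X, Y, X+Y) for each of the four systems, all with state vector f = 3.
  val systems = List((1, 2.0, 5.0, 5.0), (2, 4.0, 3.0, 9.0),
                     (1, 2.0, 5.0, 5.0), (2, 4.0, 3.0, 9.0))

  def avg(select: ((Int, Double, Double, Double)) => Double): Double =
    systems.map(select).sum / systems.size

  def main(args: Array[String]): Unit = {
    val (x, y, xy) = (avg(_._2), avg(_._3), avg(_._4))
    println(s"<X;3> = $x, <Y;3> = $y, <X+Y;3> = $xy")  // 3.0, 4.0, 7.0
    println(s"(1) holds on average: ${x + y == xy}")   // true
    // But within the fully specified state (f,H) = (3,1): X=2, Y=5, X+Y=5,
    // so the additivity assumed in (2) fails for the individual hidden states.
  }
}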
In Bohm's 1952 theory the hidden variables consist of precise initial positions for the particles in the system – more precise than the uncertainty relations would typically allow us to determine – and the distribution of those variables within the uncertainty limits is governed as a function of the conventional state vector, f. It's also worth noting that, in order to make the theory work, it was necessary for f to be related to the values of H for separate particles instantaneously in an explicitly non-local way. Thus, Bohm's theory is a counter-example to von Neumann's theorem, but not to Bell's (see below).

Incidentally, it may be worth noting that if a hidden variable theory is valid, and the variance of all measurements around their expectations is zero, then the terms of (2) are not only the expectations, they are the unique results of measurements for a given f and H. This implies that they are eigenvalues of the respective operators, whereas the expectations for those operators are generally not equal to any of the eigenvalues. Thus, as Bell remarked, "[von Neumann's] 'very general and plausible postulate' is absurd".

Still, Gleason showed that we can carry through von Neumann's proof even on the weaker assumption that (2) applies to commuting variables. This weakened assumption has the advantage of not being self-evidently false. However, careful examination of Gleason's proof reveals that the non-zero variances again arise only because of the existence of non-commuting observables, but this time in a "contextual" sense that may not be obvious at first glance. To illustrate, consider three observables X, Y, Z. If X and Y commute and X and Z commute, it doesn't follow that Y and Z commute. We may be able to measure X and Y using one setup, and X and Z using another, but measuring the value of X and Y simultaneously will disturb the value of Z. Gleason's proof leads to non-zero variances precisely for measurements in such non-commuting contexts.

It's not hard to understand this, because in a sense the entire non-classical content of quantum mechanics is the fact that some observables do not commute. Thus it's inevitable that any "proof" of the inherent non-classicality of quantum mechanics must at some point invoke non-commuting measurements, but it's precisely at that point where linear additivity can only be empirically verified on an average basis, not a specific basis. This, in turn, leaves the door open for hidden variables to govern the individual results.

Notice that in a "contextual" theory the result of an experiment is understood to depend not only on the deterministic state of the "test particles" but also on the state of the experimental apparatus used to make the measurements, and these two can influence each other. Thus, Bohm's 1952 theory escaped the no-hidden-variable theorems essentially by allowing the measurements to have an instantaneous effect on the hidden variables, which, of course, made the theory essentially non-local as well as non-relativistic (although Bohm and others later worked to relativize his theory). Ironically, the importance of considering the entire experimental setup (rather than just the arbitrarily identified "test particles") was emphasized by Niels Bohr himself, and it's a fundamental feature of quantum mechanics (i.e., objects are influenced by measurements no less than measurements are influenced by objects). As Bell said, even Gleason's relatively robust line of reasoning overlooks this basic insight.

Of course, it can be argued that contextual theories are somewhat contrived and not entirely compatible with the spirit of hidden variable explanations, but, if nothing else, they serve to illustrate how difficult it is to categorically rule out "all possible" hidden variable theories based simply on the structure of the quantum mechanical state space. 
In 1963 John Bell sought to clarify matters, noting that all previous attempts to prove the impossibility of hidden variable interpretations of quantum mechanics had been "found wanting". His idea was to establish rigorous limits on the kinds of statistical correlations that could possibly exist between spatially separate events under the assumption of determinism and what might be called "local realism", which he took to be the premises of Einstein, et al. At first Bell thought he had succeeded, but it was soon pointed out that his derivation implicitly assumed one other crucial ingredient, namely, the possibility of free choice.

To see why this is necessary, notice that any two spatially separate events share a common causal past, consisting of the intersection of their past light cones. This implies that we can never categorically rule out some kind of "pre-arranged" correlation between spacelike-separated events – at least not unless we can introduce information that is guaranteed to be causally independent of prior events. The appearance of such "new events", whose information content is at least partially independent of their causal past, constitutes a free choice. If no free choice is ever possible, then (as Bell acknowledged) the Bell inequalities do not apply.

In summary, Bell showed that quantum mechanics is incompatible with a quite peculiar pair of assumptions, the first being that the future behavior of some particles (i.e., the "entangled" pairs) involved in the experiment is mutually conditioned and coordinated in advance, and the second being that such advance coordination is in principle impossible for other particles involved in the experiment (e.g., the measuring apparatus). These are not quite each other's logical negations, but close to it. One is tempted to suggest that the mention of quantum mechanics is almost superfluous, because Bell's result essentially amounts to a proof that the assumption of a strictly deterministic universe is incompatible with the assumption of a strictly non-deterministic universe. He proved, assuming the predictions of quantum mechanics are valid (which the experimental evidence strongly supports), that not all events can be strictly consequences of their causal pasts, and in order to carry out this proof he found it necessary to introduce the assumption that not all events are strictly consequences of their causal pasts!

Bell identified three possible positions (aside from "just ignore it") that he thought could be taken with respect to the Aspect experiments: (1) detector inefficiencies are keeping us from seeing that the inequalities are not really violated, (2) there are influences going faster than light, or (3) the measuring angles are not free variables. Regarding the third possibility, he wrote:

    ...if our measurements are not independently variable as we supposed... even if chosen by apparently free-willed physicists... then Einstein local causality can survive. But apparently separate parts of the world become deeply entangled, and our apparent free will is entangled with them.

The third possibility clearly shows that Bell understood the necessity of assuming free acausal events for his derivation, but since this amounts to assuming precisely that which he was trying to prove, we must acknowledge that the significance of Bell's inequalities is less clear than many people originally believed. 
In effect, after clarifying the lack of significance of von Neumann's "no hidden variables proof" due to its assumption of what it meant to prove, Bell proceeded to repeat the mistake, albeit in a more subtle way. Perhaps Bell's most perspicacious remark was (in reference to von Neumann's proof) that the only thing proved by impossibility proofs is the author's lack of imagination.

This all just illustrates that it's extremely difficult to think clearly about causation, and the reasons for this can be traced back to the Aristotelian distinction between natural and violent motion. Natural motion consisted of the motions of non-living objects, such as the motions of celestial objects, the natural flows of water and wind, etc. These are the kinds of motion that people (like Bell) apparently have in mind when they think of determinism. Following the ancients, many people tend to instinctively exempt "violent motions" – i.e., motions resulting from acts of living volition – when considering determinism. In fact, when Bell contemplated the possibility that determinism might also apply to himself and other living beings, he coined a different name for it, calling it "super-determinism". Regarding the experimental tests of quantum entanglement he said:

    One of the ways of understanding this business is to say that the world is super-deterministic. That not only is inanimate nature deterministic, but we, the experimenters who imagine we can choose to do one experiment rather than another, are also determined. If so, the difficulty which this experimental result creates disappears.

But what Bell calls (admittedly on the spur of the moment) super-determinism is nothing other than what philosophers have always called simply determinism. Ironically, if confronted with the idea of vitalism – i.e., the notion that living beings are exempt from the normal laws of physics that apply to inanimate objects, or at least that living beings also entail some other kind of action transcending the normal laws of physics in physically observable ways – many physicists would probably be skeptical if not downright dismissive... and yet hardly any would think to question this very dualistic assumption underlying Bell's analysis.

Regardless of our conscious beliefs, it's psychologically very difficult for us to avoid bifurcating the world into inanimate objects that obey strict laws of causality, and animate objects (like ourselves) that do not. This dichotomy was historically appealing, and may even have been necessary for the development of classical physics, but it always left the nagging question of how or why we (and our constituent atoms) manage to evade the iron hand of determinism that governs everything else. This view affects our conception of science by suggesting to us that the experimenter is not himself part of nature, and is exempt from whatever determinism is postulated for the system being studied. Thus we imagine that we can "test" whether the universe is behaving deterministically by turning some dials and seeing how the universe responds, overlooking the fact that we and the dials are also part of the universe. This immediately introduces "the measurement problem": Where do we draw the boundaries between separate phenomena? What is an observation? How do we distinguish "nature" from "violence", and is this distinction even warranted? When people say they're talking about a deterministic world, they're almost always not. 
What they're usually talking about is a deterministic sub-set of the world that can be subjected to freely chosen inputs from a non-deterministic "exterior". But just as with the measurement problem in quantum mechanics, when we think we've figured out the constraints on how a deterministic test apparatus can behave in response to arbitrary inputs, someone says "but isn't the whole lab a deterministic system?", and then the whole building, and so on. At what point does "the collapse of determinism" occur, so that we can introduce free inputs to test the system? Just as the infinite regress of the measurement problem in quantum mechanics leads to bewilderment, so too does the infinite regress of determinism.

The other loop-hole that can never be closed is what Bell called "correlation by post-arrangement" or "backwards causality". I'd prefer to say that the system may violate the assumption of strong temporal asymmetry, but the point is the same. Clearly the causal pasts of the spacelike separated arms of an EPR experiment overlap, so all the objects involved share a common causal past. Therefore, without something to "block off" this region of common past from the emission and absorption events in the EPR experiment, we're not justified in asserting causal independence, which is required for Bell's derivation. The usual and, as far as I know, only way of blocking off the causal past is by injecting some "other" influence, i.e., an influence other than the deterministic effects propagating from the causal past. This "other" may be true randomness, free will, or some other concept of "free occurrence". In any case, Bell's derivation requires us to assert that each measurement is a "free" action, independent of the causal past, which is inconsistent with even the most limited construal of determinism.

There is a fascinating parallel between the ancient concepts of natural and violent motion and the modern quantum mechanical concepts of the linear evolution of the wave function and the collapse of the wave function. These modern concepts are sometimes termed U, for unitary evolution of the quantum mechanical state vector, and R, for reduction of the state vector onto a particular basis of measurement or observation. One could argue that the U process corresponds closely with Aristotle's natural (inanimate) evolution, while the R process represents Aristotle's violent evolution, triggered by some living act. As always, we face the question of whether this is an accurate or meaningful bifurcation of events. Today there are several "non-collapse" interpretations of quantum mechanics, including the famous "many worlds" interpretation of Everett and DeWitt. However, to date, none of these interpretations has succeeded in giving a completely satisfactory account of quantum mechanical processes, so we are not yet able to dispense with Aristotle's distinction between natural and violent motion.
{"url":"http://mathpages.com/rr/s9-06/9-06.htm","timestamp":"2014-04-20T19:28:45Z","content_type":null,"content_length":"38248","record_id":"<urn:uuid:c49a139c-aa9a-46b0-a39d-a32dac0a1634>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Dispersion Relation

I don't understand why the angular frequency [itex]\omega (\vec{k})[/itex] is important. Why is this information important?

I think you are getting lost in the details. Let me give you a bigger picture to always keep in mind. The reason we care to understand phonons is that they carry and transfer energy. Knowing how, and in what capacity, they carry energy allows us to calculate the heat capacity and thermal conductivity of any material, which are very important for engineers and applied physicists.

The angular frequency of the phonon is what determines HOW much energy it can carry: the higher its vibration frequency, the more energy it can carry. The dispersion relation allows you to know how many phonons there are and how much energy each of them carries:

E = [itex]\hbar \omega[/itex]

You will see this eventually when you get to thermal properties and start calculating density of states.
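To make the E = ħω point concrete, here is a small sketch (my own illustration, not from the thread) that evaluates the energy carried by a phonon in the textbook 1D monatomic chain model, whose dispersion relation is ω(k) = 2·sqrt(C/m)·|sin(ka/2)|. The spring constant C, atomic mass m, and lattice spacing a below are made-up illustrative values.

object PhononEnergy {
  val hbar = 1.054571817e-34 // reduced Planck constant, J*s

  // Dispersion relation omega(k) for a 1D monatomic chain.
  def omega(k: Double, c: Double, m: Double, a: Double): Double =
    2.0 * math.sqrt(c / m) * math.abs(math.sin(k * a / 2.0))

  // E = hbar * omega: higher vibration frequency means more energy carried.
  def energy(k: Double, c: Double, m: Double, a: Double): Double =
    hbar * omega(k, c, m, a)

  def main(args: Array[String]): Unit = {
    val (c, m, a) = (10.0, 4.0e-26, 3.0e-10) // N/m, kg, m -- illustrative only
    val kMax = math.Pi / a                   // zone-boundary wavevector
    println(f"E at zone boundary: ${energy(kMax, c, m, a)}%.3e J")
  }
}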
{"url":"http://www.physicsforums.com/showthread.php?t=686751","timestamp":"2014-04-20T23:39:06Z","content_type":null,"content_length":"31766","record_id":"<urn:uuid:05177e94-5b0a-443e-b5b3-28dbb13aa656>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Production and Operations Management (MGT613)
Lesson 32

Learning Objectives
Inventory Management is the procurement, use and distribution of inventory; some textbooks use the term Inventory Control for the same concept. The word control ensures that inputs, the process itself and the outputs are all manageable. This inventory control concept helps us to understand two important concepts of Operations Management, i.e. Supply Chain Management and Just In Time production systems. In this lecture we will study the ABC Classification System, inventory ordering and holding costs, and the Economic Order Quantity Model.

Key Inventory Terms
The key inventory terms we should know are lead time, holding (carrying) costs, ordering (setup) costs and shortage (stock-out) costs.
1. Lead time: Time interval between ordering and receiving the order.
2. Holding (carrying) costs: Cost to carry an item in inventory for a length of time, usually a year. Costs include interest, insurance, taxes, depreciation, obsolescence, deterioration, pilferage, breakage, warehousing costs and opportunity costs. Holding costs are stated in two ways: a. as a percentage of unit price, or b. as a rupee amount per unit.
3. Ordering costs: Costs of ordering and receiving inventory. These are the actual costs that vary with the actual placement of the order.
4. Shortage costs: Costs when demand exceeds supply.

ABC Classification System
An important aspect of Inventory Management is that items held in inventory are not of equal importance in terms of rupees invested, profit potential, sales or usage volume. The ABC Classification System controls inventories by dividing items into 3 groups: A, B and C.
1. Group A consists of high rupee (monetary) value items, which account for a small portion, about 10%, of the total inventory usage.
2. Group B consists of medium rupee (monetary) value items, which account for about 20% of the total inventory usage.
3. Group C consists of low rupee (monetary) value items, which account for a large portion, about 70%, of the total inventory usage.
4. The level of control reflects cost-benefit concerns.
5. Group A items are reviewed on a regular basis.
6. Group B items are reviewed less frequently than Group A items but more frequently than Group C items.
7. Group C items are not reviewed, and orders are placed directly.

Example: Classify inventory according to the ABC classification system, where rupee value up to Rs. 50K represents Group C and rupee value up to Rs. 500K represents Group B.

Cycle Counting
A physical count of items in inventory. Cycle counting management asks: How much accuracy is needed? When should cycle counting be performed? Who should do it?

Economic Order Quantity Models
1. Economic order quantity model
2. Economic production model
3. Quantity discount model

Assumptions of EOQ Model
Only one product is involved. Annual demand requirements are known. Demand is even throughout the year. Lead time does not vary. Each order is received in a single delivery. There are no quantity discounts.

The Inventory Cycle
[Figure: profile of inventory level over time - a sawtooth of quantity on hand, with the points where an order is placed and received, and the lead time between them, marked.]

Total Cost
Total cost is the sum of annual carrying cost and annual ordering cost:

    TC = (Q/2)H + (D/Q)S

Cost Minimization Goal
[Figure: carrying cost rises and ordering cost falls as the order quantity increases; total cost is minimized at the optimal order quantity Q0.]

Deriving the EOQ
Using calculus, we take the derivative of the total cost function, set the derivative (slope) equal to zero, and solve for Q:
    Q_OPT = sqrt( 2 x (Annual Demand) x (Order or Setup Cost) / (Annual Holding Cost) ) = sqrt(2DS/H)

Minimum Total Cost
The total cost curve reaches its minimum where the carrying and ordering costs are equal.

Example 2
A local distributor for an international aerobic exercise machine manufacturer expects to sell approximately 10,000 machines. Annual carrying cost is Rs. 2,500 per machine and order cost is Rs. 10,000. The distributor operates 300 days a year.
1. Find the EOQ.
2. How many times will the store reorder?
3. What is the length of an order cycle?
4. What is the total annual cost if the EOQ is ordered?

Given Data
D = 10,000 machines per year. H = annual carrying cost of Rs. 2,500 per machine. S = order cost of Rs. 10,000 per order. The distributor operates 300 days a year.

Calculation of EOQ
Q0 = sqrt(2DS/H) = sqrt((2 x 10,000 x 10,000) / 2,500) = sqrt(80,000) = 283 machines per order.

The number of times the store will reorder
D/Q0 = 10,000/283 = 35 times per year (approximately).

The Length of an Order Cycle
Q0/D = 283/10,000 = 0.0283 of a year = 0.0283 x 300 = 8.49 days.

The Total Annual Cost, if EOQ is ordered
TC = Carrying Cost + Ordering Cost = (Q0/2)H + (D/Q0)S = (283/2)(2,500) + (10,000/283)(10,000) = 353,750 + 353,357 = Rs. 707,107.

Summary
Inventory Management is simply the procurement, use and distribution of inventory. In our subsequent discussions of Inventory Management and Supply Chain Management we will find some similarities between these two important concepts. When we combine inventory management (control) with production and purchasing, we are more or less focusing on the Japanese philosophy of Just In Time production. Also, the basic EOQ model minimizes the sum of carrying (holding) costs and setup (ordering) costs.
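As a quick check of Example 2, here is a minimal sketch (not part of the original lecture notes) that recomputes the EOQ, the number of orders per year, the order-cycle length and the total annual cost from the given data; the object and variable names are my own.

object EOQ {
  // Q0 = sqrt(2DS/H), the order quantity minimizing carrying + ordering cost.
  def eoq(d: Double, s: Double, h: Double): Double = math.sqrt(2 * d * s / h)

  def main(args: Array[String]): Unit = {
    val (d, s, h, workingDays) = (10000.0, 10000.0, 2500.0, 300.0)
    val q0 = eoq(d, s, h)                        // ~283 machines per order
    val ordersPerYear = d / q0                   // ~35 orders per year
    val cycleDays = (q0 / d) * workingDays       // ~8.49 days
    val totalCost = (q0 / 2) * h + (d / q0) * s  // carrying + ordering
    println(f"Q0 = $q0%.0f, orders/year = $ordersPerYear%.1f, " +
            f"cycle = $cycleDays%.2f days, TC = Rs. $totalCost%.0f")
  }
}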
{"url":"http://www.zeepedia.com/read.php?inventory_management_abc_classification_system_cycle_counting_production_operations_management&b=55&c=32","timestamp":"2014-04-20T23:29:14Z","content_type":null,"content_length":"91210","record_id":"<urn:uuid:868f67af-3fa5-4a6a-91ef-1453a4c6c80d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 16

, 1996
Cited by 217 (44 self)
We present the linear type theory LLF as the formal basis for a conservative extension of the LF logical framework. (Appeared in the proceedings of the Eleventh Annual IEEE Symposium on Logic in Computer Science --- LICS'96, E. Clarke, editor, pp. 264--275, New Brunswick, NJ, July 27--30, 1996.) LLF combines the expressive power of dependent types with linear logic to permit the natural and concise representation of a whole new class of deductive systems, namely those dealing with state. As an example we encode a version of Mini-ML with references including its type system, its operational semantics, and a proof of type preservation. Another example is the encoding of a sequent calculus for classical linear logic and its cut elimination theorem. LLF can also be given an operational interpretation as a logic programming language under which the representations above can be used for type inference, evaluation and cut-elimination.

- HANDBOOK OF LOGIC IN AI AND LOGIC PROGRAMMING, VOLUME 5: LOGIC PROGRAMMING. OXFORD (1998)

- In Proceedings of 9th Annual IEEE Symposium On Logic In Computer Science, 1994
Cited by 86 (7 self)
The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement, Lolli [12], provide for various forms of abstraction (modules, abstract data types, higher-order programming) but lack primitives for concurrency. The logic programming language, LO (Linear Objects) [2] provides for concurrency but lacks abstraction mechanisms. In this paper we present Forum, a logic programming presentation of all of linear logic that modularly extends the languages λProlog, Lolli, and LO. Forum, therefore, allows specifications to incorporate both abstractions and concurrency. As a meta-language, Forum greatly extends the expressiveness of these other logic programming languages. To illustrate its expressive strength, we specify in Forum a sequent calculus proof system and the operational semantics of a functional programming language that incorporates such nonfunctional features as counters and references.

- Theoretical Computer Science, 1996
Cited by 85 (11 self)
The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement, Lolli [15], provide for various forms of abstraction (modules, abstract data types, and higher-order programming) but lack primitives for concurrency. The logic programming language, LO (Linear Objects) [2] provides some primitives for concurrency but lacks abstraction mechanisms. In this paper we present Forum, a logic programming presentation of all of linear logic that modularly extends λProlog, Lolli, and LO. Forum, therefore, allows specifications to incorporate both abstractions and concurrency. To illustrate the new expressive strengths of Forum, we specify in it a sequent calculus proof system and the operational semantics of a programming language that incorporates references and concurrency. We also show that the meta theory of linear logic can be used to prove properties of the object-languages specified in Forum.

- Theoretical Computer Science, 1997
Cited by 61 (19 self)
In order to reason about specifications of computations that are given via the proof search or logic programming paradigm one needs to have at least some forms of induction and some principle for reasoning about the ways in which terms are built and the ways in which computations can progress. The literature contains many approaches to formally adding these reasoning principles with logic specifications. We choose an approach based on the sequent calculus and design an intuitionistic logic FOλ∆IN that includes natural number induction and a notion of definition. We have detailed elsewhere that this logic has a number of applications. In this paper we prove the cut-elimination theorem for FOλ∆IN, adapting a technique due to Tait and Martin-Löf. This cut-elimination proof is technically interesting and significantly extends previous results of this kind.

- Theoretical Computer Science, 1996
Cited by 33 (10 self)
Intuitionistic and linear logics can be used to specify the operational semantics of transition systems in various ways. We consider here two encodings: one uses linear logic and maps states of the transition system into formulas, and the other uses intuitionistic logic and maps states into terms. In both cases, it is possible to relate transition paths to proofs in sequent calculus. In neither encoding, however, does it seem possible to capture properties, such as simulation and bisimulation, that need to consider all possible transitions or all possible computation paths. We consider augmenting both intuitionistic and linear logics with a proof theoretical treatment of definitions. In both cases, this addition allows proving various judgments concerning simulation and bisimulation (especially for noetherian transition systems). We also explore the use of infinite proofs to reason about infinite sequences of transitions. Finally, combining definitions and induction into sequent calculus proofs makes it possible to reason more richly about properties of transition systems completely within the formal setting of sequent calculus.

, 1995
Cited by 12 (3 self)
We show that using deductive systems to specify an offline partial evaluator allows its correctness to be mechanically verified. For a λ-mix-style partial evaluator, we specify binding-time constraints using a natural-deduction logic, and the associated program specializer using natural (aka "deductive") semantics. These deductive systems can be directly encoded in the Elf programming language --- a logic programming language based on the LF logical framework. The specifications are then executable as logic programs. This provides a prototype implementation of the partial evaluator. Moreover, since deductive system proofs are accessible as objects in Elf, many aspects of the partial evaluation correctness proofs (e.g., the correctness of binding-time analysis) can be coded in Elf and mechanically verified. This work illustrates the utility of declarative programming and of using deductive systems for defining program specialization systems: by exploiting the logical character of definit...

, 1994
Cited by 9 (4 self)
This document formally defines the syntax and semantics of the Extended ML language. It is based directly on the published semantics of Standard ML in an attempt to ensure compatibility between the two languages. (LFCS, Department of Computer Science, University of Edinburgh, Edinburgh, Scotland; Institute of Informatics, Warsaw University, and Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland.)

- In Proc. AMAST, LNCS 936, 1995
Cited by 7 (0 self)
We address here the problem of automatically translating the Natural Semantics of programming languages to Coq, in order to prove formally general properties of languages. Natural Semantics [18] is a formalism for specifying semantics of programming languages inspired by Plotkin's Structural Operational Semantics [22]. The Coq proof development system [12], based on the Calculus of Constructions extended with inductive types (CCind), provides mechanized support including tactics for building goal-directed proofs. Our representation of a language in Coq is influenced by the encoding of logics used by Church [6] and in the Edinburgh Logical Framework (ELF) [15, 3]. The motivation for our work is the need for an environment to help develop proofs in Natural Semantics. The interactive programming environment generator Centaur [17] allows us to compile a Natural Semantics specification of a given language into executable code (type-checkers, evaluators, compilers, program t...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=634747","timestamp":"2014-04-20T07:27:58Z","content_type":null,"content_length":"37081","record_id":"<urn:uuid:c0afdc7b-db40-4a1d-9b34-a8612df4eae7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
21 Mathematical Composition

The product of 2 or more terms, including units of measure, is conventionally indicated by a raised multiplication dot (·) (eg, 7 kg · m²) or by 2 or more characters closed up (eg, y = mx + b). However, in scientific notation the times sign (×) is used (eg, 3 × 10⁻¹⁰ cm) (see , Units of Measure, Use of Numerals With Units, Multiplication of Numbers). An asterisk should not be used to represent multiplication, despite its use in this role in computer programs. Note: However, there may be occasions on which the asterisk may be used to provide the reader ...
{"url":"http://www.amamanualofstyle.com/browse?pageSize=10&sort=titlesort&t1=AMAMOS_SECTIONS%3Amed-9780195176339-chapter-21","timestamp":"2014-04-16T04:29:34Z","content_type":null,"content_length":"87186","record_id":"<urn:uuid:dfc2d040-9040-49e3-b7fc-593ce70d44be>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Creates a new parallel iterator used to traverse the elements of this parallel collection. This iterator is more specific than the iterator returned by iterator, and is augmented with additional accessor and transformer methods.
a parallel iterator

The size of this mutable parallel set. Note: will not terminate for infinite-sized collections.
the number of elements in this mutable parallel set.
Definition Classes GenTraversableLike → GenTraversableOnce

Definition Classes ParSet → ParIterable → ParIterableLike → GenSet → GenIterable → GenTraversable → GenSetLike → Parallelizable → GenTraversableOnce

Clears the mutable parallel set's contents. After this operation, the mutable parallel set is empty.
Definition Classes Growable → Clearable

Removes a single element from this mutable parallel set.
the element to remove.
the mutable parallel set itself
Definition Classes ParSetLike → Shrinkable

Adds a single element to this mutable parallel set.
the element to add.
the mutable parallel set itself
Definition Classes ParSetLike → Growable

A mutable variant of ParSet.
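Before the member list that follows, here is a minimal usage sketch (my own, not from the scaladoc page) exercising a few of the documented members; it assumes the pre-2.13 standard-library location scala.collection.parallel.mutable.ParSet.

import scala.collection.parallel.mutable.ParSet

object ParSetDemo {
  def main(args: Array[String]): Unit = {
    val ps = ParSet(1, 2, 3)      // construct via the companion object
    ps += 4                       // adds a single element (+=)
    ps -= 1                       // removes a single element (-=)
    val evens = ps.filter(_ % 2 == 0)          // parallel filter
    val total = ps.aggregate(0)(_ + _, _ + _)  // combine results across partitions
    println(s"set = $ps, evens = $evens, total = $total")
  }
}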
Definition Classes ParIterableLike → GenTraversableLike 8. adds all elements produced by a TraversableOnce to this mutable parallel set . adds all elements produced by a TraversableOnce to this mutable parallel set . the TraversableOnce producing the elements to add. the mutable parallel set itself. Definition Classes 9. def +=(elem1: T, elem2: T, elems: T*): ParSet.this.type adds two or more elements to this mutable parallel set . adds two or more elements to this mutable parallel set . the first element to add. the second element to add. the remaining elements to add. the mutable parallel set itself Definition Classes 10. def -(elem: T): ParSet[T] 11. Removes all elements produced by an iterator from this mutable parallel set . Removes all elements produced by an iterator from this mutable parallel set . the iterator producing the elements to remove. the mutable parallel set itself Definition Classes 12. def -=(elem1: T, elem2: T, elems: T*): ParSet.this.type Removes two or more elements from this mutable parallel set . Removes two or more elements from this mutable parallel set . the first element to remove. the second element to remove. the remaining elements to remove. the mutable parallel set itself Definition Classes 13. def ->[B](y: B): (ParSet[T], B) 14. def /:[S](z: S)(op: (S, T) ⇒ S): S Applies a binary operator to a start value and all elements of this mutable parallel set , going left to right. Applies a binary operator to a start value and all elements of this mutable parallel set , going left to right. Note: /: is alternate syntax for foldLeft; z /: xs is the same as xs foldLeft z. Note that the folding function used to compute b is equivalent to that used to compute c. scala> val a = LinkedList(1,2,3,4) a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4) scala> val b = (5 /: a)(_+_) b: Int = 15 scala> val c = (5 /: a)((x,y) => x + y) c: Int = 15 Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered. or the operator is associative and commutative. the start value. the binary operator. the result of inserting op between consecutive elements of this mutable parallel set , going left to right with the start value z on the left: op(...op(op(z, x_1), x_2), ..., x_n) where x[1], ..., x[n] are the elements of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableOnce 15. def :\[S](z: S)(op: (T, S) ⇒ S): S Applies a binary operator to all elements of this mutable parallel set and a start value, going right to left. Applies a binary operator to all elements of this mutable parallel set and a start value, going right to left. Note: :\ is alternate syntax for foldRight; xs :\ z is the same as xs foldRight z. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered. or the operator is associative and commutative. Note that the folding function used to compute b is equivalent to that used to compute c. scala> val a = LinkedList(1,2,3,4) a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4) scala> val b = (a :\ 5)(_+_) b: Int = 15 scala> val c = (a :\ 5)((x,y) => x + y) c: Int = 15 the start value the binary operator the result of inserting op between consecutive elements of this mutable parallel set , going right to left with the start value z on the right: op(x_1, op(x_2, ... 
op(x_n, z)...)) where x[1], ..., x[n] are the elements of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableOnce 17. final def ==(arg0: Any): Boolean Test two objects for equality. Test two objects for equality. The expression x == that is equivalent to if (x eq null) that eq null else x.equals(that). true if the receiver object is equivalent to the argument; false otherwise. Definition Classes 18. def aggregate[S](z: S)(seqop: (S, T) ⇒ S, combop: (S, S) ⇒ S): S Aggregates the results of applying an operator to subsequent elements. Aggregates the results of applying an operator to subsequent elements. This is a more general form of fold and reduce. It has similar semantics, but does not require the result to be a supertype of the element type. It traverses the elements in different partitions sequentially, using seqop to update the result, and then applies combop to results from different partitions. The implementation of this operation may operate on an arbitrary number of collection partitions, so combop may be invoked arbitrary number of times. For example, one might want to process some elements and then produce a Set. In this case, seqop would process an element and append it to the list, while combop would concatenate two lists from different partitions together. The initial value z would be an empty set. pc.aggregate(Set[Int]())(_ += process(_), _ ++ _) Another example is calculating geometric mean from a collection of doubles (one would typically require big doubles for this). the type of accumulated results the initial value for the accumulated result of the partition - this will typically be the neutral element for the seqop operator (e.g. Nil for list concatenation or 0 for summation) an operator used to accumulate results within a partition an associative operator used to combine results from different partitions Definition Classes ParIterableLike → GenTraversableOnce 19. def andThen[A](g: (Boolean) ⇒ A): (T) ⇒ A Composes two instances of Function1 in a new Function1, with this function applied first. Composes two instances of Function1 in a new Function1, with this function applied first. the result type of function g a function R => A a new function f such that f(x) == g(apply(x)) Definition Classes 20. def apply(elem: T): Boolean Tests if some element is contained in this set. Tests if some element is contained in this set. This method is equivalent to contains. It allows sets to be interpreted as predicates. the element to test for membership. true if elem is contained in this set, false otherwise. Definition Classes GenSetLike → Function1 21. final def asInstanceOf[T0]: T0 Cast the receiver object to be of type T0. Cast the receiver object to be of type T0. Note that the success of a cast at runtime is modulo Scala's erasure semantics. Therefore the expression 1.asInstanceOf[String] will throw a ClassCastException at runtime, while the expression List(1).asInstanceOf[List[String]] will not. In the latter example, because the type argument is erased as part of compilation it is not possible to check whether the contents of the list are of the requested type. the receiver object. Definition Classes Exceptions thrown if the receiver object is not an instance of the erasure of type T0. 25. implicit def builder2ops[Elem, To](cb: Builder[Elem, To]): BuilderOps[Elem, To] 26. def canEqual(other: Any): Boolean 27. def clone(): ParSet[T] Create a copy of the receiver object. Create a copy of the receiver object. 
The default implementation of the clone method is platform dependent. a copy of the receiver object. Definition Classes Cloneable → AnyRef not specified by SLS as a member of AnyRef 28. [use case] Builds a new collection by applying a partial function to all elements of this mutable parallel set [use case] Builds a new collection by applying a partial function to all elements of this mutable parallel set on which the function is defined. the element type of the returned collection. the partial function which filters and maps the mutable parallel set . a new mutable parallel set resulting from applying the given partial function pf to each element on which it is defined and collecting the results. The order of the elements is preserved. Definition Classes ParIterableLike → GenTraversableLike 29. def combinerFactory[S, That](cbf: () ⇒ Combiner[S, That]): CombinerFactory[S, That] 30. Creates a combiner factory. Creates a combiner factory. Each combiner factory instance is used once per invocation of a parallel transformer method for a single collection. The default combiner factory creates a new combiner every time it is requested, unless the combiner is thread-safe as indicated by its canBeShared method. In this case, the method returns a factory which returns the same combiner each time. This is typically done for concurrent parallel collections, the combiners of which allow thread safe access. Definition Classes 31. The factory companion object that builds instances of class mutable.ParSet. 32. def compose[A](g: (A) ⇒ T): (A) ⇒ Boolean Composes two instances of Function1 in a new Function1, with this function applied last. Composes two instances of Function1 in a new Function1, with this function applied last. the type to which function g can be applied a function A => T1 a new function f such that f(x) == apply(g(x)) Definition Classes 33. def copyToArray[U >: T](xs: Array[U], start: Int, len: Int): Unit 34. def copyToArray(xs: Array[A], start: Int): Unit [use case] Copies values of this mutable parallel set to an array. [use case] Copies values of this mutable parallel set to an array. Fills the given array xs with values of this mutable parallel set , beginning at index start. Copying will stop once either the end of the current mutable parallel set is reached, or the end of the array is reached. Note: will not terminate for infinite-sized collections. the array to fill. the starting index. Definition Classes ParIterableLike → GenTraversableOnce Full Signature def copyToArray[U >: T](xs: Array[U], start: Int): Unit 35. def copyToArray(xs: Array[A]): Unit [use case] Copies values of this mutable parallel set to an array. [use case] Copies values of this mutable parallel set to an array. Fills the given array xs with values of this mutable parallel set . Copying will stop once either the end of the current mutable parallel set is reached, or the end of the array is reached. Note: will not terminate for infinite-sized collections. the array to fill. Definition Classes ParIterableLike → GenTraversableOnce Full Signature def copyToArray[U >: T](xs: Array[U]): Unit 36. def count(p: (T) ⇒ Boolean): Int Counts the number of elements in the mutable parallel set which satisfy a predicate. Counts the number of elements in the mutable parallel set which satisfy a predicate. the predicate used to test elements. the number of elements satisfying the predicate p. Definition Classes ParIterableLike → GenTraversableOnce 39. Computes the difference of this set and another set. 
Computes the difference of this set and another set. the set of elements to exclude. a set containing those elements of this set that are not also contained in the given set that. Definition Classes ParSetLike → GenSetLike 40. Selects all elements except first n ones. Selects all elements except first n ones. Note: might return different results for different runs, unless the underlying collection type is ordered. the number of elements to drop from this mutable parallel set . a mutable parallel set consisting of all elements of this mutable parallel set except the first n ones, or else the empty mutable parallel set , if this mutable parallel set has less than n Definition Classes ParIterableLike → GenTraversableLike 41. def dropWhile(pred: (T) ⇒ Boolean): ParSet[T] Drops all elements in the longest prefix of elements that satisfy the predicate, and returns a collection composed of the remaining elements. Drops all elements in the longest prefix of elements that satisfy the predicate, and returns a collection composed of the remaining elements. This method will use indexFlag signalling capabilities. This means that splitters may set and read the indexFlag state. The index flag is initially set to maximum integer value. the predicate used to test the elements a collection composed of all the elements after the longest prefix of elements in this mutable parallel set that satisfy the predicate pred Definition Classes ParIterableLike → GenTraversableLike 42. def empty: ParSet[T] Implicit information This member is added by an implicit conversion from ParSet[T] to Ensuring[ParSet[T]] performed by method any2Ensuring in scala.Predef. Definition Classes Implicit information This member is added by an implicit conversion from ParSet[T] to Ensuring[ParSet[T]] performed by method any2Ensuring in scala.Predef. Definition Classes 45. def ensuring(cond: Boolean, msg: ⇒ Any): ParSet[T] Implicit information This member is added by an implicit conversion from ParSet[T] to Ensuring[ParSet[T]] performed by method any2Ensuring in scala.Predef. Definition Classes Implicit information This member is added by an implicit conversion from ParSet[T] to Ensuring[ParSet[T]] performed by method any2Ensuring in scala.Predef. Definition Classes 47. Tests whether the argument (arg0) is a reference to the receiver object (this). Tests whether the argument (arg0) is a reference to the receiver object (this). The eq method implements an equivalence relation on non-null instances of AnyRef, and has three additional properties: □ It is consistent: for any non-null instances x and y of type AnyRef, multiple invocations of x.eq(y) consistently returns true or consistently returns false. □ For any non-null instance x of type AnyRef, x.eq(null) and null.eq(x) returns false. □ null.eq(null) returns true. When overriding the equals or hashCode methods, it is important to ensure that their behavior is consistent with reference equality. Therefore, if two objects are references to each other (o1 eq o2), they should be equal to each other (o1 == o2) and they should hash to the same value (o1.hashCode == o2.hashCode). true if the argument is a reference to the receiver object; false otherwise. Definition Classes 48. Compares this set with another object for equality. Compares this set with another object for equality. Note: This operation contains an unchecked cast: if that is a set, it will assume with an unchecked cast that it has the same element type as this set. Any subsequent ClassCastException is treated as a false result. 
the other object true if that is a set which contains the same elements as this set. Definition Classes GenSetLike → Equals → AnyRef → Any 49. Tests whether a predicate holds for some element of this mutable parallel set . Tests whether a predicate holds for some element of this mutable parallel set . This method will use abort signalling capabilities. This means that splitters may send and read abort signals. a predicate used to test elements true if p holds for some element, false otherwise Definition Classes ParIterableLike → GenTraversableOnce 50. def filter(pred: (T) ⇒ Boolean): ParSet[T] Selects all elements of this mutable parallel set which satisfy a predicate. Selects all elements of this mutable parallel set which satisfy a predicate. the predicate used to test elements. a new mutable parallel set consisting of all elements of this mutable parallel set that satisfy the given predicate p. Their order may not be preserved. Definition Classes ParIterableLike → GenTraversableLike 51. def filterNot(pred: (T) ⇒ Boolean): ParSet[T] Selects all elements of this mutable parallel set which do not satisfy a predicate. Selects all elements of this mutable parallel set which do not satisfy a predicate. the predicate used to test elements. a new mutable parallel set consisting of all elements of this mutable parallel set that do not satisfy the given predicate p. Their order may not be preserved. Definition Classes ParIterableLike → GenTraversableLike 52. def finalize(): Unit Called by the garbage collector on the receiver object when there are no more references to the object. Called by the garbage collector on the receiver object when there are no more references to the object. The details of when and if the finalize method is invoked, as well as the interaction between finalize and non-local returns and exceptions, are all platform dependent. Definition Classes @throws( classOf[java.lang.Throwable] ) not specified by SLS as a member of AnyRef 53. def find(pred: (T) ⇒ Boolean): Option[T] Finds some element in the collection for which the predicate holds, if such an element exists. Finds some element in the collection for which the predicate holds, if such an element exists. The element may not necessarily be the first such element in the iteration order. If there are multiple elements obeying the predicate, the choice is nondeterministic. This method will use abort signalling capabilities. This means that splitters may send and read abort signals. predicate used to test the elements an option value with the element if such an element exists, or None otherwise Definition Classes ParIterableLike → GenTraversableOnce 54. [use case] Builds a new collection by applying a function to all elements of this mutable parallel set [use case] Builds a new collection by applying a function to all elements of this mutable parallel set and using the elements of the resulting collections. For example: def getWords(lines: Seq[String]): Seq[String] = lines flatMap (line => line split "\\W+") The type of the resulting collection is guided by the static type of mutable parallel set . This might cause unexpected results sometimes. 
For example: // lettersOf will return a Seq[Char] of likely repeated letters, instead of a Set def lettersOf(words: Seq[String]) = words flatMap (word => word.toSet) // lettersOf will return a Set[Char], not a Seq def lettersOf(words: Seq[String]) = words.toSet flatMap (word => word.toSeq) // xs will be an Iterable[Int] val xs = Map("a" -> List(11,111), "b" -> List(22,222)).flatMap(_._2) // ys will be a Map[Int, Int] val ys = Map("a" -> List(1 -> 11,1 -> 111), "b" -> List(2 -> 22,2 -> 222)).flatMap(_._2) the element type of the returned collection. the function to apply to each element. a new mutable parallel set resulting from applying the given collection-valued function f to each element of this mutable parallel set and concatenating the results. Definition Classes ParIterableLike → GenTraversableLike 55. def flatten[B]: ParSet[B] [use case] Converts this mutable parallel set of traversable collections into a mutable parallel set formed by the elements of these traversable collections. The resulting collection's type will be guided by the static type of mutable parallel set. For example: val xs = List(Set(1, 2, 3), Set(1, 2, 3)) // xs == List(1, 2, 3, 1, 2, 3) val ys = Set(List(1, 2, 3), List(3, 2, 1)) // ys == Set(1, 2, 3) the type of the elements of each traversable collection. a new mutable parallel set resulting from concatenating all element mutable parallel sets. Definition Classes 56. def fold[U >: T](z: U)(op: (U, U) ⇒ U): U Folds the elements of this sequence using the specified associative binary operator. The order in which the elements are reduced is unspecified and may be nondeterministic. Note this method has a different signature than the foldLeft and foldRight methods of the trait Traversable. The result of folding may only be a supertype of this parallel collection's type parameter T. a type parameter for the binary operator, a supertype of T. a neutral element for the fold operation; it may be added to the result an arbitrary number of times, not changing the result (e.g. Nil for list concatenation, 0 for addition, or 1 for multiplication). a binary operator that must be associative. the result of applying the fold operator op between all the elements and z Definition Classes ParIterableLike → GenTraversableOnce 57. def foldLeft[S](z: S)(op: (S, T) ⇒ S): S Applies a binary operator to a start value and all elements of this mutable parallel set, going left to right. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered or the operator is associative and commutative. the start value. the binary operator. the result of inserting op between consecutive elements of this mutable parallel set, going left to right with the start value z on the left: op(...op(z, x_1), x_2, ..., x_n) where x_1, ..., x_n are the elements of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce 58. def foldRight[S](z: S)(op: (T, S) ⇒ S): S Applies a binary operator to all elements of this mutable parallel set and a start value, going right to left. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered or the operator is associative and commutative. the start value. the binary operator. the result of inserting op between consecutive elements of this mutable parallel set, going right to left with the start value z on the right: op(x_1, op(x_2, ... op(x_n, z)...)) where x_1, ..., x_n are the elements of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce 59. Tests whether a predicate holds for all elements of this mutable parallel set. This method will use abort signalling capabilities. This means that splitters may send and read abort signals. a predicate used to test elements true if p holds for all elements, false otherwise Definition Classes ParIterableLike → GenTraversableOnce 60. def foreach[U](f: (T) ⇒ U): Unit Applies a function f to all the elements of this mutable parallel set in an undefined order. the result type of the function applied to each element, which is always discarded function applied to each element Definition Classes ParIterableLike → GenTraversableLike → GenTraversableOnce 61. def formatted(fmtstr: String): String Returns a string formatted according to the given format string. Format strings are as for String.format (@see java.lang.String.format). Implicit information This member is added by an implicit conversion from ParSet[T] to StringFormat performed by method any2stringfmt in scala.Predef. Definition Classes 62. def genericBuilder[B]: Combiner[B, ParSet[B]] The generic builder that builds instances of mutable.ParSet at arbitrary element types. 63. def genericCombiner[B]: Combiner[B, ParSet[B]] 64. final def getClass(): Class[_] A representation that corresponds to the dynamic class of the receiver object. The nature of the representation is platform dependent. Definition Classes AnyRef → Any not specified by SLS as a member of AnyRef 65. Partitions this mutable parallel set into a map of mutable parallel sets according to some discriminator function. Note: this method is not re-implemented by views. This means when applied to a view it will always force the view and return a new mutable parallel set. the type of keys returned by the discriminator function. the discriminator function. A map from keys to mutable parallel sets such that the following invariant holds: (xs groupBy f)(k) = xs filter (x => f(x) == k) That is, every key k is bound to a mutable parallel set of those elements x for which f(x) equals k. Definition Classes ParIterableLike → GenTraversableLike 66. def hasDefiniteSize: Boolean 67. def hashCode(): Int The hashCode method for reference types. See hashCode in scala.Any. the hash code value for this object. Definition Classes GenSetLike → AnyRef → Any
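The fold family above is easiest to grasp with a concrete run. Here is a minimal sketch (assuming Scala 2.10–2.12, where parallel collections ship in the standard library; the element values and the object name are invented for the example):

import scala.collection.parallel.mutable.ParSet

object FoldDemo extends App {
  val s = ParSet(1, 2, 3, 4, 5)

  // fold needs an associative operator and a neutral element z:
  // chunks of the set may be folded on different threads and the
  // partial results combined in an unspecified order, so z must be
  // safe to mix in any number of times (0 is neutral for +).
  println(s.fold(0)(_ + _))       // 15

  // foldLeft threads an accumulator left to right; on an unordered
  // parallel set its result is only deterministic when the operator
  // is associative and commutative, as the entries above warn.
  println(s.foldLeft(0)(_ + _))   // 15
}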
def head: T Selects the first element of this mutable parallel set . Selects the first element of this mutable parallel set . Note: might return different results for different runs, unless the underlying collection type is ordered. the first element of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableLike Exceptions thrown if the mutable parallel set is empty. 69. def headOption: Option[T] Optionally selects the first element. Optionally selects the first element. Note: might return different results for different runs, unless the underlying collection type is ordered. the first element of this mutable parallel set if it is nonempty, None if it is empty. Definition Classes ParIterableLike → GenTraversableLike 71. def init: ParSet[T] Selects all elements except the last. Selects all elements except the last. Note: might return different results for different runs, unless the underlying collection type is ordered. a mutable parallel set consisting of all elements of this mutable parallel set except the last one. Definition Classes ParIterableLike → GenTraversableLike Exceptions thrown if the mutable parallel set is empty. 72. def initTaskSupport(): Unit 73. def intersect(that: GenSet[T]): ParSet[T] Computes the intersection between this set and another set. Computes the intersection between this set and another set. the set to intersect with. a new set consisting of all elements that are both in this set and in the given set that. Definition Classes 74. Tests whether the mutable parallel set is empty. Tests whether the mutable parallel set is empty. true if the mutable parallel set contains no elements, false otherwise. Definition Classes ParIterableLike → GenTraversableOnce 75. final def isInstanceOf[T0]: Boolean Test whether the dynamic type of the receiver object is T0. Test whether the dynamic type of the receiver object is T0. Note that the result of the test is modulo Scala's erasure semantics. Therefore the expression 1.isInstanceOf[String] will return false, while the expression List(1).isInstanceOf[List[String]] will return true. In the latter example, because the type argument is erased as part of compilation it is not possible to check whether the contents of the list are of the specified type. true if the receiver object is an instance of erasure of type T0; false otherwise. Definition Classes 76. def isParIterable: Boolean 78. def isParallel: Boolean 79. def isStrictSplitterCollection: Boolean Denotes whether this parallel collection has strict splitters. Denotes whether this parallel collection has strict splitters. This is true in general, and specific collection instances may choose to override this method. Such collections will fail to execute methods which rely on splitters being strict, i.e. returning a correct value in the remaining method. This method helps ensure that such failures occur on method invocations, rather than later on and in unpredictable ways. Definition Classes 80. final def isTraversableAgain: Boolean Tests whether this mutable parallel set can be repeatedly traversed. 81. def iterator: Splitter[T] Creates a new split iterator used to traverse the elements of this collection. Creates a new split iterator used to traverse the elements of this collection. By default, this method is implemented in terms of the protected splitter method. a split iterator Definition Classes ParIterableLike → GenIterableLike 82. def last: T Selects the last element. Selects the last element. 
Note: might return different results for different runs, unless the underlying collection type is ordered. The last element of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableLike Exceptions thrown If the mutable parallel set is empty. 83. def lastOption: Option[T] Optionally selects the last element. Note: might return different results for different runs, unless the underlying collection type is ordered. the last element of this mutable parallel set if it is nonempty, None if it is empty. Definition Classes ParIterableLike → GenTraversableLike 84. def map[B](f: (A) ⇒ B): ParSet[B] [use case] Builds a new collection by applying a function to all elements of this mutable parallel set. the element type of the returned collection. the function to apply to each element. a new mutable parallel set resulting from applying the given function f to each element of this mutable parallel set and collecting the results. Definition Classes ParIterableLike → GenTraversableLike Full Signature def map[S, That](f: (T) ⇒ S)(implicit bf: CanBuildFrom[ParSet[T], S, That]): That 85. def max: A [use case] Finds the largest element. the largest element of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce Full Signature def max[U >: T](implicit ord: Ordering[U]): T 86. def maxBy[S](f: (T) ⇒ S)(implicit cmp: Ordering[S]): T 87. def min: A [use case] Finds the smallest element. the smallest element of this mutable parallel set Definition Classes ParIterableLike → GenTraversableOnce Full Signature def min[U >: T](implicit ord: Ordering[U]): T 88. def minBy[S](f: (T) ⇒ S)(implicit cmp: Ordering[S]): T 89. def mkString: String Displays all elements of this mutable parallel set in a string. a string representation of this mutable parallel set. In the resulting string the string representations (w.r.t. the method toString) of all elements of this mutable parallel set follow each other without any separator string. Definition Classes ParIterableLike → GenTraversableOnce 90. Displays all elements of this mutable parallel set in a string using a separator string. the separator string. a string representation of this mutable parallel set. In the resulting string the string representations (w.r.t. the method toString) of all elements of this mutable parallel set are separated by the string sep. Definition Classes ParIterableLike → GenTraversableOnce 1. List(1, 2, 3).mkString("|") = "1|2|3" 91. Displays all elements of this mutable parallel set in a string using start, end, and separator strings. the starting string. the separator string. the ending string. a string representation of this mutable parallel set. The resulting string begins with the string start and ends with the string end. Inside, the string representations (w.r.t. the method toString) of all elements of this mutable parallel set are separated by the string sep. Definition Classes ParIterableLike → GenTraversableOnce 1. List(1, 2, 3).mkString("(", "; ", ")") = "(1; 2; 3)" 92. Equivalent to !(this eq that).
Equivalent to !(this eq that). true if the argument is not a reference to the receiver object; false otherwise. Definition Classes 93. The builder that builds instances of type mutable.ParSet[A] 95. Tests whether the mutable parallel set is not empty. Tests whether the mutable parallel set is not empty. true if the mutable parallel set contains at least one element, false otherwise. Definition Classes ParIterableLike → GenTraversableOnce 96. final def notify(): Unit Wakes up a single thread that is waiting on the receiver object's monitor. Wakes up a single thread that is waiting on the receiver object's monitor. Definition Classes not specified by SLS as a member of AnyRef 97. final def notifyAll(): Unit Wakes up all threads that are waiting on the receiver object's monitor. Wakes up all threads that are waiting on the receiver object's monitor. Definition Classes not specified by SLS as a member of AnyRef 98. Returns a parallel implementation of this collection. Returns a parallel implementation of this collection. For most collection types, this method creates a new parallel collection by copying all the elements. For these collection, par takes linear time. Mutable collections in this category do not produce a mutable parallel collection that has the same underlying dataset, so changes in one collection will not be reflected in the other one. Specific collections (e.g. ParArray or mutable.ParHashMap) override this default behaviour by creating a parallel collection which shares the same underlying dataset. For these collections, par takes constant or sublinear time. All parallel collections return a reference to themselves. a parallel implementation of this collection Definition Classes ParIterableLike → CustomParallelizable → Parallelizable 99. The default par implementation uses the combiner provided by this method to create a new parallel collection. The default par implementation uses the combiner provided by this method to create a new parallel collection. a combiner for the parallel collection of type ParRepr Definition Classes CustomParallelizable → Parallelizable 100. def partition(pred: (T) ⇒ Boolean): (ParSet[T], ParSet[T]) Partitions this mutable parallel set in two mutable parallel set s according to a predicate. Partitions this mutable parallel set in two mutable parallel set s according to a predicate. the predicate on which to partition. a pair of mutable parallel set s: the first mutable parallel set consists of all elements that satisfy the predicate p and the second mutable parallel set consists of all elements that don't. The relative order of the elements in the resulting mutable parallel set s may not be preserved. Definition Classes ParIterableLike → GenTraversableLike 101. def product: A [use case] Multiplies up the elements of this collection. [use case] Multiplies up the elements of this collection. the product of all elements in this mutable parallel set of numbers of type Int. Instead of Int, any other type T with an implicit Numeric[T] implementation can be used as element type of the mutable parallel set and as result type of product. Examples of such types are: Long, Float, Double, BigInt. Definition Classes ParIterableLike → GenTraversableOnce Full Signature def product[U >: T](implicit num: Numeric[U]): U 102. def reduce[U >: T](op: (U, U) ⇒ U): U Reduces the elements of this sequence using the specified associative binary operator. Reduces the elements of this sequence using the specified associative binary operator. 
The order in which operations are performed on elements is unspecified and may be nondeterministic. Note this method has a different signature than the reduceLeft and reduceRight methods of the trait Traversable. The result of reducing may only be a supertype of this parallel collection's type parameter T. A type parameter for the binary operator, a supertype of T. A binary operator that must be associative. The result of applying the reduce operator op between all the elements if the collection is nonempty. Definition Classes ParIterableLike → GenTraversableOnce Exceptions thrown if this mutable parallel set is empty. 103. def reduceLeftOption[U >: T](op: (U, T) ⇒ U): Option[U] Optionally applies a binary operator to all elements of this mutable parallel set, going left to right. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered or the operator is associative and commutative. the binary operator. an option value containing the result of reduceLeft(op) if this mutable parallel set is nonempty, None otherwise. Definition Classes ParIterableLike → GenTraversableOnce 104. def reduceOption[U >: T](op: (U, U) ⇒ U): Option[U] Optionally reduces the elements of this sequence using the specified associative binary operator. The order in which operations are performed on elements is unspecified and may be nondeterministic. Note this method has a different signature than the reduceLeftOption and reduceRightOption methods of the trait Traversable. The result of reducing may only be a supertype of this parallel collection's type parameter T. A type parameter for the binary operator, a supertype of T. A binary operator that must be associative. An option value containing the result of applying the reduce operator op between all the elements if the collection is nonempty, and None otherwise. Definition Classes ParIterableLike → GenTraversableOnce 105. def reduceRight[U >: T](op: (T, U) ⇒ U): U Applies a binary operator to all elements of this mutable parallel set, going right to left. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered or the operator is associative and commutative. the binary operator. the result of inserting op between consecutive elements of this mutable parallel set, going right to left: op(x_1, op(x_2, ..., op(x_{n-1}, x_n)...)) where x_1, ..., x_n are the elements of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce Exceptions thrown if this mutable parallel set is empty. 106. def reduceRightOption[U >: T](op: (T, U) ⇒ U): Option[U] Optionally applies a binary operator to all elements of this mutable parallel set, going right to left. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered or the operator is associative and commutative. the binary operator. an option value containing the result of reduceRight(op) if this mutable parallel set is nonempty, None otherwise. Definition Classes ParIterableLike → GenTraversableOnce 107. def repr: ParSet[T] 108. def reuse[S, That](oldc: Option[Combiner[S, That]], newc: Combiner[S, That]): Combiner[S, That] Optionally reuses an existing combiner for better performance. By default it doesn't - subclasses may override this behaviour. The provided combiner oldc that can potentially be reused will be either some combiner from the previous computational task, or None if there was no previous phase (in which case this method must return newc). The combiner that is the result of the previous task, or None if there was no previous task. The new, empty combiner that can be used. Either newc or oldc. Definition Classes 109. [use case] Checks if the other iterable collection contains the same elements in the same order as this mutable parallel set. Note: might return different results for different runs, unless the underlying collection type is ordered. Note: will not terminate for infinite-sized collections. the collection to compare with. true, if both collections contain the same elements in the same order, false otherwise. Definition Classes ParIterableLike → GenIterableLike 110. def scan(z: T)(op: (T, T) ⇒ T): ParSet[T] [use case] Computes a prefix scan of the elements of the collection. Note: The neutral element z may be applied more than once. neutral element for the operator op the associative operator for the scan a new mutable parallel set containing the prefix scan of the elements in this mutable parallel set Definition Classes ParIterableLike → GenTraversableLike Full Signature def scan[U >: T, That](z: U)(op: (U, U) ⇒ U)(implicit bf: CanBuildFrom[ParSet[T], U, That]): That 111. def scanBlockSize: Int 112. def scanLeft[S, That](z: S)(op: (S, T) ⇒ S)(implicit bf: CanBuildFrom[ParSet[T], S, That]): That Produces a collection containing cumulative results of applying the operator going left to right. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered. the actual type of the resulting collection the initial value the binary operator applied to the intermediate result and the element an implicit value of class CanBuildFrom which determines the result class That from the current representation type Repr and the new element type B. collection with intermediate results Definition Classes ParIterableLike → GenTraversableLike 113. def scanRight[S, That](z: S)(op: (T, S) ⇒ S)(implicit bf: CanBuildFrom[ParSet[T], S, That]): That Produces a collection containing cumulative results of applying the operator going right to left. The head of the collection is the last cumulative result. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered.
List(1, 2, 3, 4).scanRight(0)(_ + _) == List(10, 9, 7, 4, 0) the actual type of the resulting collection the initial value the binary operator applied to the intermediate result and the element an implicit value of class CanBuildFrom which determines the result class That from the current representation type Repr and and the new element type B. collection with intermediate results Definition Classes ParIterableLike → GenTraversableLike 115. def slice(unc_from: Int, unc_until: Int): ParSet[T] Selects an interval of elements. Selects an interval of elements. The returned collection is made up of all elements x which satisfy the invariant: from <= indexOf(x) < until Note: might return different results for different runs, unless the underlying collection type is ordered. the lowest index to include from this mutable parallel set . the lowest index to EXCLUDE from this mutable parallel set . a mutable parallel set containing the elements greater than or equal to index from extending up to (but not including) index until of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableLike 116. Splits this mutable parallel set into a prefix/suffix pair according to a predicate. Splits this mutable parallel set into a prefix/suffix pair according to a predicate. This method will use indexFlag signalling capabilities. This means that splitters may set and read the indexFlag state. The index flag is initially set to maximum integer value. the predicate used to test the elements a pair consisting of the longest prefix of the collection for which all the elements satisfy pred, and the rest of the collection Definition Classes ParIterableLike → GenTraversableLike 117. Splits this mutable parallel set into two at a given position. Splits this mutable parallel set into two at a given position. Note: c splitAt n is equivalent to (but possibly more efficient than) (c take n, c drop n). Note: might return different results for different runs, unless the underlying collection type is ordered. the position at which to split. a pair of mutable parallel set s consisting of the first n elements of this mutable parallel set , and the other elements. Definition Classes ParIterableLike → GenTraversableLike 118. def stringPrefix: String Defines the prefix of this object's toString representation. Defines the prefix of this object's toString representation. a string representation which starts the result of toString applied to this mutable parallel set . By default the string prefix is the simple name of the collection class mutable parallel set Definition Classes ParSet → ParIterable → GenTraversableLike 119. Tests whether this set is a subset of another set. Tests whether this set is a subset of another set. the set to test. true if this set is a subset of that, i.e. if every element of this set is also an element of that. Definition Classes 120. def sum: A [use case] Sums up the elements of this collection. [use case] Sums up the elements of this collection. the sum of all elements in this mutable parallel set of numbers of type Int. Instead of Int, any other type T with an implicit Numeric[T] implementation can be used as element type of the mutable parallel set and as result type of sum. Examples of such types are: Long, Float, Double, BigInt. Definition Classes ParIterableLike → GenTraversableOnce Full Signature def sum[U >: T](implicit num: Numeric[U]): U 121. final def synchronized[T0](arg0: ⇒ T0): T0 122. def tail: ParSet[T] Selects all elements except the first. 
Note: might return different results for different runs, unless the underlying collection type is ordered. a mutable parallel set consisting of all elements of this mutable parallel set except the first one. Definition Classes ParIterableLike → GenTraversableLike Exceptions thrown if the mutable parallel set is empty. 123. Selects first n elements. Note: might return different results for different runs, unless the underlying collection type is ordered. the number of elements to take from this mutable parallel set. a mutable parallel set consisting only of the first n elements of this mutable parallel set, or else the whole mutable parallel set, if it has less than n elements. Definition Classes ParIterableLike → GenTraversableLike 124. def takeWhile(pred: (T) ⇒ Boolean): ParSet[T] Takes the longest prefix of elements that satisfy the predicate. This method will use indexFlag signalling capabilities. This means that splitters may set and read the indexFlag state. The index flag is initially set to maximum integer value. the predicate used to test the elements the longest prefix of this mutable parallel set of elements that satisfy the predicate pred Definition Classes ParIterableLike → GenTraversableLike 125. implicit def task2ops[R, Tp](tsk: SSCTask[R, Tp]): TaskOps[R, Tp] 126. The task support object which is responsible for scheduling and load-balancing tasks to processors. Definition Classes See also 127. Changes the task support object which is responsible for scheduling and load-balancing tasks to processors. A task support object can be changed in a parallel collection after it has been created, but only during a quiescent period, i.e. while there are no concurrent invocations to parallel collection methods. Here is a way to change the task support of a parallel collection: import scala.collection.parallel._ val pc = mutable.ParArray(1, 2, 3) pc.tasksupport = new ForkJoinTaskSupport( new scala.concurrent.forkjoin.ForkJoinPool(2)) Definition Classes See also 128. def to[Col[_]]: Col[A] [use case] Converts this mutable parallel set into another by copying all elements. Note: will not terminate for infinite-sized collections. The collection type to build. a new collection containing all elements of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce 129. def toArray: Array[A] [use case] Converts this mutable parallel set to an array. Note: will not terminate for infinite-sized collections. an array containing all elements of this mutable parallel set. A ClassTag must be available for the element type of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce Full Signature def toArray[U >: T](implicit arg0: ClassTag[U]): Array[U] 130. def toBuffer[U >: T]: Buffer[U] Converts this mutable parallel set to a mutable buffer. Note: will not terminate for infinite-sized collections. a buffer containing all elements of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce
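A short sketch tying the tasksupport entries above to the conversion methods that follow (same standard-library assumption as the earlier sketch; the pool size of 2 and all values are arbitrary choices for illustration):

import scala.collection.parallel._
import scala.collection.parallel.mutable.ParSet

object TaskSupportDemo extends App {
  val ps = ParSet(10, 20, 30, 40)

  // Swap the scheduler during a quiescent period, exactly as the
  // tasksupport_= entry above prescribes for ParArray.
  ps.tasksupport = new ForkJoinTaskSupport(
    new scala.concurrent.forkjoin.ForkJoinPool(2))

  // take on an unordered parallel set guarantees only the count;
  // which two elements you get is unspecified.
  println(ps.take(2).size)     // 2

  // The to* conversions copy back into sequential collections.
  println(ps.toBuffer.sorted)  // ArrayBuffer(10, 20, 30, 40)
  println(ps.toArray.length)   // 4 (a ClassTag for Int is found implicitly)
}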
Converts this mutable parallel set to an indexed sequence. Converts this mutable parallel set to an indexed sequence. Note: will not terminate for infinite-sized collections. an indexed sequence containing all elements of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableOnce 132. Converts this mutable parallel set to an iterable collection. Converts this mutable parallel set to an iterable collection. Note that the choice of target Iterable is lazy in this default implementation as this TraversableOnce may be lazy and unevaluated (i.e. it may be an iterator which is only traversable once). Note: will not terminate for infinite-sized collections. an Iterable containing all elements of this mutable parallel set . Definition Classes ParIterable → ParIterableLike → GenTraversableOnce 133. Returns an Iterator over the elements in this mutable parallel set . Returns an Iterator over the elements in this mutable parallel set . Will return the same Iterator if this instance is already an Iterator. Note: will not terminate for infinite-sized collections. an Iterator containing all elements of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableOnce 134. def toList: List[T] Converts this mutable parallel set to a list. Converts this mutable parallel set to a list. Note: will not terminate for infinite-sized collections. a list containing all elements of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableOnce 135. [use case] Converts this mutable parallel set to a map. [use case] Converts this mutable parallel set to a map. This method is unavailable unless the elements are members of Tuple2, each ((T, U)) becoming a key-value pair in the map. Duplicate keys will be overwritten by later keys: if this is an unordered collection, which key is in the resulting map is undefined. Note: will not terminate for infinite-sized collections. a map of type immutable.Map[T, U] containing all key/value pairs of type (T, U) of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableOnce 136. def toParArray: ParArray[T] 137. def toParCollection[U >: T, That](cbf: () ⇒ Combiner[U, That]): That 138. def toParMap[K, V, That](cbf: () ⇒ Combiner[(K, V), That])(implicit ev: <:<[T, (K, V)]): That 139. def toSeq: ParSeq[T] Converts this mutable parallel set to a sequence. Converts this mutable parallel set to a sequence. As with toIterable, it's lazy in this default implementation, as this TraversableOnce may be lazy and unevaluated. Note: will not terminate for infinite-sized collections. a sequence containing all elements of this mutable parallel set . Definition Classes ParIterable → ParIterableLike → GenTraversableOnce 140. Converts this mutable parallel set to a set. Converts this mutable parallel set to a set. Note: will not terminate for infinite-sized collections. a set containing all elements of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableOnce 141. def toStream: Stream[T] Converts this mutable parallel set to a stream. Converts this mutable parallel set to a stream. Note: will not terminate for infinite-sized collections. a stream containing all elements of this mutable parallel set . Definition Classes ParIterableLike → GenTraversableOnce 142. def toString(): String Creates a String representation of this object. Creates a String representation of this object. The default representation is platform dependent. 
On the java platform it is the concatenation of the class name, "@", and the object's hashcode in hexadecimal. a String representation of the object. Definition Classes ParIterableLike → AnyRef → Any 143. Converts this mutable parallel set to an unspecified Traversable. Will return the same collection if this instance is already Traversable. Note: will not terminate for infinite-sized collections. a Traversable containing all elements of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce 144. def toVector: Vector[T] Converts this mutable parallel set to a Vector. Note: will not terminate for infinite-sized collections. a vector containing all elements of this mutable parallel set. Definition Classes ParIterableLike → GenTraversableOnce 145. Transposes this mutable parallel set of traversable collections into a mutable parallel set of mutable parallel sets. the type of the elements of each traversable collection. an implicit conversion which asserts that the element type of this mutable parallel set is a Traversable. a two-dimensional mutable parallel set of mutable parallel sets which has as nth row the nth column of this mutable parallel set. Definition Classes (Changed in version 2.9.0) transpose throws an IllegalArgumentException if collections are not uniformly sized. Exceptions thrown if all collections in this mutable parallel set are not of the same size. 146. def union(that: GenSet[T]): ParSet[T] Computes the union between this set and another set. the set to form the union with. a new set consisting of all elements that are in this set or in the given set that. Definition Classes ParSetLike → GenSetLike 147. def unzip[A1, A2](implicit asPair: (T) ⇒ (A1, A2)): (ParSet[A1], ParSet[A2]) Converts this mutable parallel set of pairs into two collections of the first and second half of each pair. the type of the first half of the element pairs the type of the second half of the element pairs an implicit conversion which asserts that the element type of this mutable parallel set is a pair. a pair of mutable parallel sets, containing the first, respectively second, half of each element pair of this mutable parallel set. Definition Classes 148. def unzip3[A1, A2, A3](implicit asTriple: (T) ⇒ (A1, A2, A3)): (ParSet[A1], ParSet[A2], ParSet[A3]) Converts this mutable parallel set of triples into three collections of the first, second, and third element of each triple. the type of the first member of the element triples the type of the second member of the element triples the type of the third member of the element triples an implicit conversion which asserts that the element type of this mutable parallel set is a triple. a triple of mutable parallel sets, containing the first, second, respectively third, member of each element triple of this mutable parallel set. Definition Classes 150. final def wait(): Unit 151. final def wait(arg0: Long, arg1: Int): Unit 152. final def wait(arg0: Long): Unit 153. def wrap[R](body: ⇒ R): NonDivisible[R]
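The set-algebra methods (diff near the top of this list, intersect, and the union entry just above) behave on ParSet exactly as on sequential sets; a quick illustration with invented values:

import scala.collection.parallel.mutable.ParSet

object SetAlgebraDemo extends App {
  val a = ParSet(1, 2, 3, 4)
  val b = ParSet(3, 4, 5)

  println(a union b)      // the elements 1..5; same as a | b
  println(a intersect b)  // the elements 3 and 4; same as a & b
  println(a diff b)       // the elements 1 and 2; same as a &~ b

  // subsetOf, documented earlier, composes naturally with these:
  println((a intersect b) subsetOf a)  // true
}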
[use case] Returns a mutable parallel set formed from this mutable parallel set and another iterable collection by combining corresponding elements in pairs. [use case] Returns a mutable parallel set formed from this mutable parallel set and another iterable collection by combining corresponding elements in pairs. If one of the two collections is longer than the other, its remaining elements are ignored. Note: might return different results for different runs, unless the underlying collection type is ordered. the type of the second half of the returned pairs The iterable providing the second half of each result pair a new mutable parallel set containing pairs consisting of corresponding elements of this mutable parallel set and that. The length of the returned collection is the minimum of the lengths of this mutable parallel set and that. Definition Classes ParIterableLike → GenIterableLike 155. def zipAll[B](that: Iterable[B], thisElem: A, thatElem: B): ParSet[(A, B)] [use case] Returns a mutable parallel set formed from this mutable parallel set and another iterable collection by combining corresponding elements in pairs. [use case] Returns a mutable parallel set formed from this mutable parallel set and another iterable collection by combining corresponding elements in pairs. If one of the two collections is shorter than the other, placeholder elements are used to extend the shorter collection to the length of the longer. Note: might return different results for different runs, unless the underlying collection type is ordered. the type of the second half of the returned pairs The iterable providing the second half of each result pair the element to be used to fill up the result if this mutable parallel set is shorter than that. the element to be used to fill up the result if that is shorter than this mutable parallel set . a new mutable parallel set containing pairs consisting of corresponding elements of this mutable parallel set and that. The length of the returned collection is the maximum of the lengths of this mutable parallel set and that. If this mutable parallel set is shorter than that, thisElem values are used to pad the result. If that is shorter than this mutable parallel set , thatElem values are used to pad the result. Definition Classes ParIterableLike → GenIterableLike Full Signature def zipAll[S, U >: T, That](that: GenIterable[S], thisElem: U, thatElem: S)(implicit bf: CanBuildFrom[ParSet[T], (U, S), That]): That 156. def zipWithIndex: ParSet[(A, Int)] [use case] Zips this mutable parallel set with its indices. [use case] Zips this mutable parallel set with its indices. Note: might return different results for different runs, unless the underlying collection type is ordered. A new mutable parallel set containing pairs consisting of all elements of this mutable parallel set paired with their index. Indices start at 0. Definition Classes ParIterableLike → GenIterableLike Full Signature def zipWithIndex[U >: T, That](implicit bf: CanBuildFrom[ParSet[T], (U, Int), That]): That 1. List("a", "b", "c").zipWithIndex = List(("a", 0), ("b", 1), ("c", 2)) 157. Computes the union between this set and another set. Computes the union between this set and another set. Note: Same as union. the set to form the union with. a new set consisting of all elements that are in this set or in the given set that. Definition Classes 158. 
def →[B](y: B): (ParSet[T], B) Implicit information This member is added by an implicit conversion from ParSet[T] to ArrowAssoc[ParSet[T]] performed by method any2ArrowAssoc in scala.Predef. Definition Classes ArrowAssoc
Implicit information The + member is added by an implicit conversion from ParSet[T] to StringAdd performed by method any2stringadd in scala.Predef. This implicitly inherited member is shadowed by one or more members of this class. To access this member you can use a type ascription: (parSet: StringAdd).+(other)
Implicit information The self member is added by the implicit conversions from ParSet[T] to StringAdd (any2stringadd) and to StringFormat (any2stringfmt) in scala.Predef. These implicitly inherited members are ambiguous, and calling self may produce an ambiguous implicit conversion compiler error. To access one of them you can use a type ascription: (parSet: StringAdd).self or (parSet: StringFormat).self
A syntactic sugar for out of order folding. See fold. scala> val a = LinkedList(1,2,3,4) a: scala.collection.mutable.LinkedList[Int] = LinkedList(1, 2, 3, 4) scala> val b = (a /:\ 5)(_+_) b: Int = 15 Definition Classes Deprecated (Since version 2.10.0) use fold instead
Implicit information The x member is added by the implicit conversions from ParSet[T] to ArrowAssoc[ParSet[T]] (any2ArrowAssoc) and to Ensuring[ParSet[T]] (any2Ensuring) in scala.Predef. Both implicitly inherited members are ambiguous; to access one of them you can use a type ascription: (parSet: ArrowAssoc[ParSet[T]]).x or (parSet: Ensuring[ParSet[T]]).x Deprecated (Since version 2.10.0) Use leftOfArrow instead / Use resultOfEnsuring instead
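One caveat about the zip family documented just above is worth spelling out: because a parallel set is unordered, the pairing produced by zip and zipWithIndex is not deterministic. A minimal sketch (values invented):

import scala.collection.parallel.mutable.ParSet

object ZipDemo extends App {
  val ps = ParSet("a", "b", "c")

  // zipWithIndex attaches the indices 0, 1, 2 -- but which element
  // receives which index depends on the unspecified traversal order.
  println(ps.zipWithIndex.size)                            // 3

  // zipAll pads the shorter side with the given placeholder elements.
  println(ps.zipAll(List(1, 2, 3, 4, 5), "pad", -1).size)  // 5
}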
{"url":"http://www.scala-lang.org/api/current/scala/collection/parallel/mutable/ParSet.html","timestamp":"2014-04-16T08:16:56Z","content_type":null,"content_length":"460073","record_id":"<urn:uuid:7af7a103-2402-46fb-b7fc-690d3a2c7a75>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the "right" definition of a ring? up vote 13 down vote favorite This is somewhat related to Greg's question about groups and abelian groups. Suppose you met someone who was well-acquainted with groups, but who was unwilling to accept rings as a meaningful object of study. How would you describe rings to them in a natural way given that they like talking about groups? (Admittedly this is not really the question the title asks.) soft-question ra.rings-and-algebras 1 All the answers so far have motivated noncommutative ring theory. I'd be interested in seeing motivation for commutative rings (in the spirit of mathoverflow.net/questions/2551/…). – Eric Wofsey Oct 27 '09 at 3:09 A strange question (that's good :-). – Wlodzimierz Holsztynski Apr 8 '13 at 6:32 Actually, algebraists of the past ... E Artin, ... Jacobson ... already answered this question a long time ago -- that's how they have approached the structure theory of rings. – Wlodzimierz Holsztynski Apr 8 '13 at 6:36 (I mean that they provided at least a significant partial answer). – Wlodzimierz Holsztynski Apr 8 '13 at 6:39 add comment 8 Answers active oldest votes Well rings are naturally the objects which act on abelian groups - indeed composition always endows the endomorphisms of an abelian group with the structure of a ring. So if one is interested in the endomorphisms of groups one is actually interested in rings. One can make this analogy more precise especially if one picks a particular ring and looks at the forgetful functor to abelian groups from its module category. This analogy can then be used again for instance to motivate the definition of plethory which are the natural objects which act on rings. To address Eric's comment about commutative rings there is an analogue in this case of something which was mentioned in the discussion of groups versus abelian groups. Indeed one can up vote 21 obtain commutative rings by considering the identity in an additive symmetric monoidal category. In this case the endomorphisms of the tensor unit are endowed with an abelian group down vote structure via the augmentation over abelian groups and the Eckmann-Hilton argument applied to tensoring endomorphisms and composing endomorphims forces the composition to be abelian. So accepted from this point of view commutative rings are the gadgets which naturally act on the hom-sets of additive symmetric monoidal categories. Since I mentioned this one can take this slightly further. If one considers such a category together with an autoequivalence (for instance if we take a tensor triangulated category) then one can consider the graded endomorphism ring of the identity. This naturally gives rise to an integer graded ring which is commutative up to some unit which squares to the identity and which has a natural action on the category. add comment Here is some more support for rings as objects that act on abelian groups, as already mentioned by Greg Stevenson. Someone well-acquainted with groups would likely know that representations of groups are important to their study. Given a group G, a representation of G over a field k is an action of G on a k-vector space V as a group of linear automorphisms. Familiarity with rings allows us to realize that this is the same as a ring homomorphism from the group ring kG into End[k](V). Then one can immediately begin to investigate group actions by asking questions about the structure of the group ring kG. 
In fact, one can even show that the category of G-modules (representations of G) is equivalent to the category of (say left) modules over the ring kG. up vote 15 down vote From this perspective, rings are important because they act on modules. In this vein, every ring can be realized as an endomorphism ring of a module: for a ring R, the right module R[R] satisfies R ≅ End(R[R]). (In analogy with the terminology for group representations, R[R] is sometimes referred to as the regular representation of R.) To go one step further, the endomorphism ring of an object in any abelian (or even preadditive) category is a ring. (Though from Greg's post, it sounds as if one can go even further than this!) So we see that rings greatly generalize the notion of groups acting on objects with additive structure. Suppose this hypothetical person understood on an intuitive level the notion of symmetries of objects in Euclidean space but was unwilling to accept the formal definition of a vector space (more precisely the part where a field is necessary)... – Qiaochu Yuan Oct 27 '09 at 6:47 add comment Much like abelian groups and groups, commutative rings and non-commutative rings have different motivations in my mind. As people have said, non-commutative rings are naturally endomorphisms of abelian groups. The first non-commutative ring people should have in their head is M_n, I think. up vote 14 down vote However, I'm surprised no one has brought up that commutative rings are naturally the set of functions on something. Granted, it takes a couple of semesters of algebraic geometry to make this true, but it's the idea that motivates the theory. The first commutative ring people should have in their head is C[x], with Z as a second example where it seems tantalizingly bizarre that everything still works (prime factorization, ideals, etc). Certainly, when I try to convince a skeptic that rings are awesome (which I have done a couple of times now), I wave my hands wildly and talk about how cool rings of functions on things are. 2 Well-- the set of ring-valued functions. – Tim Campion Dec 7 '10 at 6:45 add comment I'll offer another "explanation" for rings: a ring (see here) is a monoid in the monoidal category of abelian groups (with respect to the standard tensor product of abelian groups). This perspective is useful in that it shows what the right generalizations and categorifications of rings are. This is a general phenomenon: when you want to know which of several equivalent definitions is the fundamental or right one, check for which of these you can find natural oo-categorical versions. The more natural a concept, the easier it generalizes this way. For rings, we notice that the category of abelian groups is the archetypical abelian category.
There is also insight into the nature of rings to be gained from their horizontal categorification: a monoid in Ab, hence a ring, is equivalently an enriched category with a single object over the category of abelian groups: write pt for the single object of an Ab-enriched category; then Hom(pt,pt) is an abelian group equipped with a homomorphism of abelian groups Hom(pt,pt) ⊗ Hom(pt,pt) → Hom(pt,pt) that is associative and unital. So Hom(pt,pt) is some ring, and every ring arises this way. So a general Ab-enriched category may be thought of as a ringoid, if you wish. add comment If they agreed that the representation theory of groups was interesting (and if they didn't agree to this, I might contest their claim to be well-acquainted with groups...) I would argue that thinking about modules for the group ring C[G] is a very clean way to do representation theory. (On preview, Manny Reyes wrote a more complete answer along the same lines while I started this, so I'll stop here.) up vote 5 down vote add comment I'll add in another little piece of the puzzle to help motivate the study of commutative rings once we believe that we are interested in endomorphisms of modules. There's an exercise in Rotman's Homological Algebra book that if R and S are (non-commutative) rings such that the module categories R-mod and S-mod are equivalent, then Z(R) and Z(S) are isomorphic as (commutative) rings. Here, of course, Z(R) means the center of R. up vote 3 down vote It follows from this that if R and S are commutative rings, then R ≅ S if and only if R-mod and S-mod are equivalent. So if modules are what we want to study, it makes sense to single out those rings which have such a close relationship with their module categories. 1 Of course, a different lesson from this is that Morita equivalence is not as interesting if you restrict yourself to the commutative world. – Chris Phan Jun 14 '10 at 9:36 add comment One simple answer is that rings are modeled after the ring of integers. Admittedly, it is a commutative ring with many special properties, but it's a simple intuitive model. up vote 0 down vote 1 This is the usual explanation, but as far as this hypothetical person is concerned, Z is the free group on one element. What's this multiplication nonsense? – Qiaochu Yuan Oct 27 '09 at 14:25 add comment Today I came across an expository paper which reminded me of this particular question. The paper is STANDARD DEFINITIONS CONCERNING RINGS by KEITH CONRAD. up vote 0 down vote This is an answer to the title and clearly not to the body of the question. This explains why I sometimes forget to read the body, only to find that (after I have typed up an answer) the body was a different story – Unknown Feb 17 '11 at 18:07 This could have better gone here: mathoverflow.net/questions/22579/… – Unknown Feb 17 '11 at 18:15 add comment
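Several answers above lean on the fact that the endomorphisms of an abelian group form a ring; for the skeptic of the question, the verification is short enough to write out (a routine check, added here for illustration rather than quoted from any answer). For an abelian group A, define on End(A)

$$(f+g)(a) = f(a) + g(a), \qquad (f \circ g)(a) = f(g(a)).$$

Addition is associative and commutative because A is abelian, composition is associative with identity $\mathrm{id}_A$, and the left distributive law uses that f is a homomorphism:

$$\bigl(f \circ (g+h)\bigr)(a) = f\bigl(g(a)+h(a)\bigr) = f(g(a)) + f(h(a)) = (f \circ g + f \circ h)(a),$$

while $(f+g)\circ h = f\circ h + g\circ h$ holds pointwise with no assumption on the maps. Thus $\mathrm{End}(A)$ is a (generally non-commutative) ring, and an R-module structure on A is precisely a ring homomorphism $R \to \mathrm{End}(A)$.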
{"url":"http://mathoverflow.net/questions/2748/what-is-the-right-definition-of-a-ring","timestamp":"2014-04-16T11:18:05Z","content_type":null,"content_length":"90877","record_id":"<urn:uuid:82c4035d-fe8c-4c5c-b613-4971b13142a8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Introductory Algebra for College Students Plus NEW MyMathLab with Pearson eText -- Access Card ... Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/bk-detail?isbn=9780321828187","timestamp":"2014-04-19T04:22:54Z","content_type":null,"content_length":"37334","record_id":"<urn:uuid:58364df8-1fe3-4222-a5c2-fc4f1523b5e2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Fulshear Trigonometry Tutor ...I have seen the look regarding subjects from an elementary level to a collegiate level. Regardless of what subject I am working with someone on, I will strive to make sure the student understands. Here is a list of the subjects I've have taught or am capable of teaching: Math- Pre-Algebra ... 38 Subjects: including trigonometry, reading, calculus, chemistry ...My success stories: * A 6th grader struggling with grammar and math improved her 'C' to 'A-' * A high school student who was scared of even attempting her math quizzes and exam conquered her lifelong fear of math within three months and did well on her final exam * An MBA student scored well o... 20 Subjects: including trigonometry, reading, writing, geometry ...So I decided whenever I had spare time I would try and be a tutor. However, being a tutor wasn't easy in college with the different responsibilities from high school. So after taking three semesters in one year (spring, fall, summer) I decided to take a break for a semester and earn tuition. 32 Subjects: including trigonometry, reading, chemistry, English ...Algebra I and II are trivial to me. I have taken and tutored a great many mathematics courses, and algebra II is one of the simpler courses that I can tutor. While the technical subjects are my greatest strength and specialty, I can also offer tutoring in the social sciences. 37 Subjects: including trigonometry, chemistry, calculus, writing ...I have worked with students of all abilities including learner disabilities. I like to bring the best out each student and help her believe in herself and in the power of mathematics. Sometimes, I even convince the student that math can be fun.At another private tutoring company, I tutored several students for the ISEE test. 14 Subjects: including trigonometry, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/Fulshear_Trigonometry_tutors.php","timestamp":"2014-04-18T19:09:15Z","content_type":null,"content_length":"24034","record_id":"<urn:uuid:04c650dd-8c61-4886-9c32-5b60218a709a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Thoughts On Teaching

I know, I arrived late on this scene. Only last summer, I caught the Sudoku virus. It's totally under control now, but for a while I couldn't go to bed unless I'd worked through whatever Sudoku I was on. Actually, I still can't leave a puzzle half finished, but I don't spend time solving five or more of them each day.

Explain Your Thinking

That's the kind of explanation I want to hear from my students about literature, quotations on the board at the beginning of class, political beliefs, school policies, any opinion they have. But they often don't consider other possibilities and tend to voice their arguments as if the rest of the world already agrees with them, as if they don't need to explain their opinion.

Can Sudoku teach logic? Will working on Sudoku have an impact on logic expressed in other arguments? What if class started with Sudoku one day a week?

Be sure to pick an easy puzzle at the beginning. Even more important, be sure you know the solution before you begin so that you can redirect if needed. As a class, work through why each box has certain numbers in it. After talking about a few boxes, making sure you get rid of all the "obvious" ones, students write silently for a few minutes, then share out. Here are the directions:

Now that we've talked about a few numbers and filled in some of the boxes in this puzzle, you need to fill in three more boxes on your own. Describe which box you're filling in (3rd row, 4th column or something similar), tell which number fits in the box, and discuss why. Just so you're sure, briefly run through the reasons that box can be no other number than the one you pick. You should end up writing about two sentences for each box you fill in, a short paragraph in total. Be prepared to share your reasoning with others.

Where To Go For Sudoku

Web Sudoku. You can just click and type numbers with ease, hitting "Clear" between classes to reset the whole thing. Don't use their print feature, hoping to get the puzzle onto an overhead. It looks like a good idea, but it doesn't work well. Puzzles print across two pages. But shrinking things down to fit on a single page causes some of the grid lines to disappear. Neither case is ideal.

Daily SuDoku. Their print feature works great. The site downloads a PDF that you can print onto an overhead. The PDF should be on your desktop within seconds. Click to open it and you have a puzzle that fits onto a single page and looks nice in the end.

Another possibility is to use Excel. I put together a template for you that creates one big Sudoku to use on a projector and automatically creates smaller versions of that puzzle to give your students, six to a page. Find a Sudoku generator (again, Web Sudoku and Daily SuDoku are good sources, but books work well, too) and type the numbers in on your own. Since you have complete control using this method, you can either print out the spreadsheet on an overhead or project it in the classroom from your computer.

1 comment

Ryan says: Another good game for teaching logic is Mastermind. A version that works well on a projector can be found at: It works well when played as a group and they have to discuss their reasoning.
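For teachers who want to mirror in code the elimination argument the students are asked to write out in words, here is a small illustrative Python sketch (my own, not from the post) that answers "why can this box be no other number?":

```python
def candidates(grid, row, col):
    """Return the set of digits still legal for grid[row][col].

    grid is a 9x9 list of lists, with 0 marking an empty cell.
    A digit is legal only if it appears nowhere in the same row,
    column, or 3x3 box -- exactly the reasoning students verbalize.
    """
    seen = set(grid[row])                           # digits in the same row
    seen |= {grid[r][col] for r in range(9)}        # digits in the same column
    br, bc = 3 * (row // 3), 3 * (col // 3)         # top-left corner of the box
    seen |= {grid[r][c] for r in range(br, br + 3)
                        for c in range(bc, bc + 3)} # digits in the same box
    return {d for d in range(1, 10) if d not in seen}
```

If candidates(...) returns a single digit, the cell is forced, and listing the row, column, and box digits it rules out is precisely the short paragraph the directions ask for.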
{"url":"http://www.toddseal.com/rodin/2007/08/sudoku-logic/","timestamp":"2014-04-18T00:51:39Z","content_type":null,"content_length":"14694","record_id":"<urn:uuid:d5da4c50-5a50-4807-b02c-32b87cb72a47>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Please help for when a limit doesn't exist

March 20th 2008, 08:38 AM
Ok this is the problem: Find the limit as x approaches zero of x^3 + x/x. Ok.. so I know it's zero over zero so it's undefined, but how can I further distribute the problem? I remember my professor saying that we could take it a step further but I don't know how to do it in this case. Hopefully this is enough information to get MUCH needed help.

March 20th 2008, 08:57 AM
What is your function? 1) $x^3 + \frac{x}{x}$ 2) $\frac{x^3 + x}{x}$ 3) something else?

March 20th 2008, 09:00 AM
This problem is a ripe candidate for L'Hopital's Rule. Are you allowed to use it for this problem?

March 20th 2008, 09:02 AM
It's #2 on your reply. And once I plugged the zero into the equation I got 0/0. In a similar problem done in class our professor further simplified the answer by distributing the original function then cancelling out the like terms from the denominator. But I am somewhat fuzzy on the rest and don't know how to further simplify the answer. (Worried)

March 20th 2008, 09:06 AM
Yes, I would assume so... He told us that 0/0 would not suffice as an answer. Please don't laugh but I've never even heard of that rule... I am researching it now (Sadsmile)

March 20th 2008, 09:36 AM
L'Hopital's rule requires you to know derivatives, which you may have not learned yet. Take a numerical approach to a limit when you are lost...

$f(0.5)=\frac{0.125+0.5}{0.5} \Rightarrow 1.25$
$f(0.1)=\frac{0.001+0.1}{0.1} \Rightarrow \frac{0.101}{0.1} = 1.01$
$f(0.01)=\frac{0.000001+0.01}{0.01} \Rightarrow \frac{0.010001}{0.01} = 1.0001$

It is obviously going to one. Now, think about it intuitively. What is happening? Very small values that are cubed are comparable to zero. What is left? Just $x/x$, which simplifies to one. Also, note that you can just simplify the function:

$f(x)=\frac{x^3+x}{x} \Rightarrow x^2+1$ for $x \neq 0$

Limits do not take the value of a function at the limiting value, but merely the value an infinitesimally small distance away from the limiting value, so doing this primitive simplification is allowable since the limit never looks at $f(0)$, which so happens to be undefined.

March 20th 2008, 02:46 PM
mr fantastic: Sorry, but l'Hospital's Rule is the lazy way.
Quote: you can just simplify the function: $f(x)=\frac{x^3+x}{x} \Rightarrow x^2+1$ for $x \neq 0$. Limits do not take the value of a function at the limiting value, but merely the value an infinitesimally small distance away from the limiting value, so doing this primitive simplification is allowable since the limit never looks at $f(0)$, which so happens to be undefined.
{"url":"http://mathhelpforum.com/calculus/31565-please-help-when-limit-doesnt-exist-print.html","timestamp":"2014-04-21T07:49:00Z","content_type":null,"content_length":"13108","record_id":"<urn:uuid:657be5a0-495d-4c69-aa9a-e486e22b6fd9>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
what is the area of the circle with center E?

August 17th 2012, 04:14 PM
ok so the problem wants me to find the area of the circle with center E. it was pretty easy to figure out that the diameter of the small circle was 6. once i figured out the diameter i could use that relationship to say that the area of the circle with center A and center C is 36pi. but this is where im stuck. im not sure how im supposed to use these relationships to figure out the area of the circle with center E. any thoughts or help would be greatly appreciated. Attachment 24534

August 17th 2012, 04:42 PM
Re: what is the area of the circle with center E?
$BC = 3$, so $AC = 6$. It is easy to show that triangle ACD is equilateral (since the side lengths are all radii of either the circle with center A or the circle with center C). Using some 30-60-90 triangles, $BD = 3\sqrt{3}$, so $ED = 2BD = 6\sqrt{3}$. ED is a radius of the circle with center E, so the area of this circle is $\pi (6 \sqrt{3})^2 = 108 \pi$, answer choice (C).
{"url":"http://mathhelpforum.com/geometry/202272-what-aera-circle-center-e-print.html","timestamp":"2014-04-19T12:55:30Z","content_type":null,"content_length":"5300","record_id":"<urn:uuid:55c52815-2186-4b86-9af8-3a8c212dbfc7>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there a faster way to solve Algebra Equations?

Would you be interested in learning how to directly solve even the difficult algebra questions (in just one step)?

Learn the handy tricks to verify (double-check) your answer so that you can avoid making those silly calculation errors (and get that 100% score in your Algebra test).

In short, would you like to discover the fastest and easiest way to master Algebra (from basic to advanced level)?
{"url":"http://www.glad2teach.co.uk/26_Math_tricks_to_learn_algebra_fast.htm","timestamp":"2014-04-18T03:05:20Z","content_type":null,"content_length":"19529","record_id":"<urn:uuid:404c8e04-052b-4551-8b37-b13a0bdd80e5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Mental Speed Test - Version 1 The following test is meant to assess your mental speed - how quickly you can process information and make decisions based upon that information. The exercise consists of word/image pairs and simple mathematical equations or number sequences. If a pair matches, click the "Correct" button (the left arrow key on your keyboard). If the pair does not match, click the "Incorrect" button (the right arrow key on your keyboard). However, if the word "Opposite" appears at the top of the screen, you need to reverse your answer. In the first example (pear and star), the answer would be incorrect ("Incorrect"). For the second example, although it is an exact match, the word Opposite appears at the top of the screen, so rather than choosing "Correct" you would have to choose "Incorrect" For the mathematical equations simply indicate whether the answer is correct or incorrect. For the number sequences indicate whether the number in red correctly completes the sequence. As you can see, the first equation is wrong - the answer should be 5, so in this case, you would choose "incorrect". However, although the number sequence is also wrong (the answer should be 22), the word Opposite requires you to reverse your answer, so in this case you would choose "Correct" rather than "Incorrect". Remember, you are being timed, so try to answer as quickly as possible - and remember to reverse your answer when the word "Opposite" appears. After finishing the test, you will find out how accurate and fast you were. Have fun!
{"url":"http://www.queendom.com/tests/access_page/index.htm?idRegTest=1242","timestamp":"2014-04-16T10:16:57Z","content_type":null,"content_length":"47365","record_id":"<urn:uuid:80b2118c-d9bb-4e56-a8bc-f9735f468dca>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: If f(x) = 4 tan(x) / x, find f′(x).

Best Response:
\[ f(x)=\frac{4\tan x}{x} \;\Rightarrow\; f'(x)=\frac{x\,(4\tan x)' - 4\tan x\,(x)'}{x^2} = \frac{4x\sec^2 x - 4\tan x}{x^2} \]
{"url":"http://openstudy.com/updates/51214585e4b06821731cf6f6","timestamp":"2014-04-20T13:57:36Z","content_type":null,"content_length":"27669","record_id":"<urn:uuid:192987e6-d838-4f43-a975-e3e85722ff5c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Monte Carlo studies on a spreadsheet
Volume 13, Issue 2, 1999
Guy Judge, University of Portsmouth

1 Introduction

A large number of papers have appeared in CHEER promoting the use of spreadsheets in teaching and learning economics. Judge (1990) and Taylor (1990) were amongst the first, while Wilder (1999) and Whigham and Whyte (1999) are among the most recent. This paper suggests yet another possible application - for implementing simple Monte Carlo studies on introductory econometrics courses.

The availability of relatively inexpensive computing power has allowed Monte Carlo studies to become an important part of modern econometrics. Researchers can investigate the properties (especially the small sample properties) of estimators and test procedures where results cannot be derived theoretically. Using a computer, a large number of artificial or simulated data sets can be created according to a known data generation process. Then an estimator or test procedure can be applied to the artificial data sets so that the pattern of results obtained can be analysed and compared with the (known) features that were designed into the data. In this way investigators can get a measure of the extent of any inherent biases in the estimators or in the power of the test procedures under various conditions.

Students of economics and econometrics, even at undergraduate level, ought to be aware of the important contribution that such studies are having to the development of the subject. Indeed, there may be some educational value in getting them to conduct their own simple Monte Carlo exercises. This point has been recognised by the authors of such recent texts as Kennedy (1998) and Thomas (1997), both of whom provide a number of suggested Monte Carlo exercises for readers to work through. The purpose of such exercises is not only to teach students about the use of Monte Carlo studies in a research context, for discovering properties of estimators and test procedures in situations where they cannot be derived analytically. It is also to help beginning student learners to understand concepts (such as the sampling distribution of a least squares estimator) which may be difficult for them to grasp when they have to rely solely on their imagination.

A modern spreadsheet package provides a very convenient environment for undergraduate students to use for simple Monte Carlo experiments. Most students will be comfortable in using such packages and they provide built-in functions that can be used to generate the random disturbances for the data generating process. They contain built-in least squares regression estimation procedures (or matrix tools if it is required to construct other estimators or test statistics). In addition they can provide summary statistics and graphical displays to enable students to assess patterns in the results obtained. It might be argued that other specialist econometrics software (or even packages like Mathematica or Maple) might be more natural tools for a researcher to use for Monte Carlo studies. However it can make sense for a student to work with a tool with which she is already familiar unless it is insufficient for the task in hand.

A couple of years ago I decided to include a simple Monte Carlo exercise, based on the use of the spreadsheet package Excel, in my second year undergraduate course Introduction to Econometrics. In Section 2 I describe the exercise and make some comments on the reactions of the students.
Section 3 gives a few other ideas for simple spreadsheet based Monte Carlo exercises. Section 4 briefly concludes.

2 A simple Monte Carlo exercise

The purpose of this exercise was to help students to understand the meaning of the sampling distribution of a least squares regression estimator, and the way in which the properties of the sampling distribution reflect the characteristics of the regression model itself. In addition it was hoped to convey to the students something of the flavour of Monte Carlo studies in general. The exercise was the third of twelve weekly exercises given to students on a one semester second year undergraduate course on econometrics. On entering the course the students should have had a basic understanding of the bivariate regression model (from their previous semester statistics course) and of the use of spreadsheets (from their Computing Skills for Economists and Economics Workshop).

As Kennedy notes there are four stages in a Monte Carlo study. The first stage is to construct a model of the data generation process. The students on my course were asked to assume that it took the form

    Y[i] = a + bX[i] + u[i]        [1]

with a = 20 and b = 0.6, and where u[i] is N(0,1). A fixed set of 25 values for X was given as shown in Table 1. Any set of values could of course be used, but these values have a clearly recognizable mean and variance.

Table 1
 i  |  1   2   3   4   5   6   7   8   9  10  11  12  13
 Xi | 88  89  90  91  92  93  94  95  96  97  98  99 100
 i  | 14  15  16  17  18  19  20  21  22  23  24  25
 Xi |101 102 103 104 105 106 107 108 109 110 111 112

The second stage is to create sets of data. The students were asked to use the computer's random number generator to generate 100 sets of 25 values of the random disturbance u by taking random drawings from the standard normal distribution (mean = 0, standard deviation = 1). In Excel version 5 this can be achieved by selecting Random Number Generation from the Data Analysis option on the Tools menu. So in the dialog box the Number of Variables is set at 100, the Number of Random Numbers is set at 25 and the Distribution is set to be Normal (see Figure 1).

Figure 1: Generating the disturbance values

Now the student must create 100 sets of Y values to go with the u values, based on equation 1. This process is relatively straightforward, if a little tedious, with the student needing to enter the formula for the first observation of Y in each sample and then use the spreadsheet's copying feature to produce the other 24 data points. Of course the X values are designed to remain fixed over the different samples.

In stage 3 the estimator is used with the artificial data sets to estimate the parameters of the model. I asked the students to run 100 regressions, using each of the samples of 25 observations on X and Y, and to extract the estimated slope coefficients and put them into a table.

Stage 4 is the analysis of the results. The students were asked to examine the distribution of the set of estimates obtained, calculating the mean and variance. They were asked to produce a frequency table and histogram of the values. In Excel you can do this by selecting Tools, Data Analysis, Histogram from the Menu. For the Input Range you put the cells where the slope estimates are to be found. For the Output Range you can put any empty cell with plenty of blank space below and to the right of it. Make sure that you also check the Chart Output option.
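The original exercise is deliberately spreadsheet based, but the same four stages are easy to script. The following Python sketch is my own illustration, not part of Judge's article: it draws the r = 100 samples from equation [1] with the fixed X values of Table 1 and collects the least squares slope estimates.

```python
import numpy as np

rng = np.random.default_rng(seed=1)   # any seed; see the discussion of seeds below

a, b, sigma = 20.0, 0.6, 1.0          # true intercept, slope, and error s.d.
X = np.arange(88, 113, dtype=float)   # the 25 fixed X values from Table 1
n, r = len(X), 100                    # sample size and number of replications

slopes = np.empty(r)
for k in range(r):
    u = rng.normal(0.0, sigma, size=n)        # stage 2: draw the disturbances
    Y = a + b * X + u                         # stage 2: build the sample
    slope, intercept = np.polyfit(X, Y, 1)    # stage 3: least squares fit
    slopes[k] = slope

print(slopes.mean(), slopes.var(ddof=1))      # stage 4: summarise the estimates
```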
You should get a column of frequencies next to a Bin column; Excel will automatically select a suitable set of limits for the classes in the frequency table - and then to the right of that you should get a histogram showing the distribution in a visual way. Students were invited, if they wished, to calculate a suitable measure of skewness.

Now the students were asked to consider a number of questions and to write a brief report on their findings to bring to the class:

- How does the mean slope estimate compare with the true known value of the parameter (0.6)?
- How, if at all, is the variance of these estimates of b related to the variance of the X values?
- Does the distribution of the values as illustrated in the histogram appear to be normal?
- How different do you think your answers would have been if you had created 1000, or 10000, rather than 100 samples of 25 observations?
- Explain what is meant by the Sampling Distribution of the Least Squares Slope estimate.

In preparing their reports the students were asked to consult their notes and textbooks on the theoretical properties of the sampling distribution of the slope coefficient, its mean and variance, and to consider questions of bias (and unbiasedness).

Overall the exercise was a success in that it provided a concrete focus for a discussion about the properties of the sampling distribution of the slope estimate in a way that was more meaningful than had been possible in previous years (where I could only appeal to students to imagine large numbers of samples of fixed size n being taken from data generated from equation 1). It enabled me also to discuss with the students the benefits both of being able to establish theoretical (asymptotic) properties of estimators analytically and the use of Monte Carlo studies in cases where this was not possible. It allowed the students to join in the process in an active way, as well as getting them to think about the concepts involved.

The exercise was not without its difficulties, however. More students needed to consult me either by e-mail or in my office hour about this exercise than for any of the other 11 weekly exercises. Some students were initially confused about the difference between the size of each sample (n=25) and the number of replications to be analysed (r=100). This may have been my fault for not explaining things clearly enough in the lecture or accompanying notes. I found that I also had to explain what is meant by a seed when generating random numbers, having failed to mention this point in my lecture. A few students found it difficult to handle within the spreadsheet the large quantity of data generated in the exercise. However, this led to an interesting discussion with one group of students about how one might design software specifically for Monte Carlo exercises of this type. Since I was about to introduce the students to the use of PcGive in the following week (for regression analysis, not Monte Carlo studies) in some ways it was helpful to be able to point out to students the limitations of spreadsheet packages as well as their advantages, and to discuss the virtues of general purpose and more dedicated software packages.

In retrospect, given that over 80 students take this course, I wish that I had provided slightly different specifications of the exercise to various groups of students (perhaps giving everyone different sets of X values, or even varying the values of b, n, r and σu^2, the disturbance variance).
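The benchmark against which the first two questions are marked is the classical sampling distribution of the least squares slope. This is a standard textbook result, stated here for reference rather than quoted from the article; in the notation of equation [1]:

```latex
\hat{b} \sim N\!\left(b,\; \frac{\sigma_u^2}{\sum_{i=1}^{n}(X_i-\bar{X})^2}\right),
\qquad E(\hat{b}) = b = 0.6 .
```

So the mean of the 100 slope estimates should sit near 0.6, and a more spread-out set of X values makes the denominator larger and hence the variance of the estimates smaller.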
Obviously this would have produced a more varied set of results and it might have allowed more students to contribute to the discussion.

3 Other examples of spreadsheet based Monte Carlo studies

As a follow-up to this exercise I did offer a Monte Carlo study as one of the options for the students' assessed coursework towards the end of the course (as an alternative to two other more standard modelling projects). Students were invited to specify their own Monte Carlo study to investigate the effects of autocorrelation or non-normality in the error structure. Only about 10 to 12% of the students opted for this assignment. As a matter of fact the reports produced included both one of the worst (a rather poor rerun of the exercise described above) and one of the best (a thorough treatment of the effect on the properties of least squares estimators of varying the assumptions about the distribution of the error term - considering uniform and t as well as normal distributions, and including the consequences of simple forms of autocorrelation and heteroskedasticity).

Other Monte Carlo exercises of this type can be designed to be given to students. For example you could get them to look at the effect on least squares estimators of measurement error, omitted variables or simultaneous equations bias. They could also examine the standard error of the Y estimate (to see why the degrees of freedom are used in the formula rather than n or n-1). At a simpler level it might be worth adopting the approach described here to demonstrate to students on a statistics course the validity of the Central Limit Theorem.

4 Conclusion

Students taking introductory courses in econometrics ought to have an awareness of the use of Monte Carlo studies in the subject. It may help their understanding, not only of the use of Monte Carlo studies themselves but also of important but difficult concepts such as the sampling distribution of an estimator, if they were to undertake a simple Monte Carlo study of their own. Such a study can be undertaken using a standard spreadsheet package such as Excel. Students should, however, be aware of the limitations of using a spreadsheet package for simulations of this type and recognise the benefits of using dedicated statistics and econometrics software tools for more advanced work in the subject.

References

Judge G (1990) Chaos on a spreadsheet. CHEER, Number 11, pp 8-11.
Kennedy P (1998) A Guide to Econometrics. Fourth Edition. Blackwell Publishers.
Taylor P (1990) Numerical Analysis Using Spreadsheets. CHEER, Number 11, pp 3-7.
Thomas R L (1997) Modern Econometrics. Addison-Wesley.
Wilder L (1999) Investigating Parameter Time Invariance or Stationarity Using Excel Graphs. CHEER, Volume 13, Issue 1, pp 4-10.
Whigham D and Whyte J (1999) Explaining Input-Output and Equilibrium Relationships Using Excel Display Facilities. CHEER, Volume 13, Issue 1, pp 11-15.
{"url":"http://www.economicsnetwork.ac.uk/cheer/ch13_2/ch13_2p12.htm","timestamp":"2014-04-20T10:51:29Z","content_type":null,"content_length":"18500","record_id":"<urn:uuid:4de6d797-977d-4005-a020-c5e8f9ae635f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Maxwell Lagrangian

Hello, where can I find a good explanation (book) of the derivation, via Noether's theorem, of the three-momentum and angular momentum operators of the usual Maxwell Lagrangian? Thank you!

Reply: This is standard QFT (actually QED) material; any thorough book should have it. Check out a nice treatment in Chapter 2 of F. Gross' "Relativistic Quantum Mechanics and Field Theory", Wiley, 1999.

Reply: In a purely classical context (no operators), advanced electrodynamics books should also have this.

Original poster: I've been looking at the book and yes, the book treats them, but it doesn't derive them. He just announces them and performs some calculations with them.

Reply: Can you calculate $T^{\mu\nu}$ and $M^{\lambda}{}_{\mu\nu}$ from the Lagrangian and the general Noether formula, which for the energy-momentum 4-tensor reads

$$T^{\mu}{}_{\nu} = \left(\frac{\partial \mathcal{L}}{\partial(\partial_{\mu}A_{\rho})}\,\partial_{\lambda}A_{\rho} - \mathcal{L}\,\delta^{\mu}_{\lambda}\right)\frac{\partial x'^{\lambda}}{\partial\epsilon^{\nu}}, \qquad x'^{\mu} = x^{\mu} + \epsilon^{\mu} ?$$

Original poster:

$$T^{\mu\nu} = -F^{\mu\nu}\partial^{\nu}A_{\rho} + \frac{1}{4}F^{2}g^{\mu\nu}$$

And now? How do I relate this to the momentum and total angular momentum operators?

Reply: The momentum should be $T^{0i}$, just like energy is $T^{00}$. For angular momentum, you should derive the general formula using the linearized version of a general Lorentz transformation (i.e. a linearized space-time rotation): $x'^{\mu} = x^{\mu} + \epsilon^{\mu}{}_{\nu}\,x^{\nu}$, where $\epsilon^{\mu\nu} = -\epsilon^{\nu\mu}$.

Reply: A minor change: $T^{\mu\nu} = -F^{\mu\rho}\partial^{\nu}A_{\rho} + \frac{1}{4}F^{2}g^{\mu\nu}$.

Reply: I too would be interested in seeing this for EM angular momentum. Every place I have looked seems to use the result in some form without actually deriving it.
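Since the thread leaves the angular-momentum step unfinished, here is the standard canonical result one obtains from Noether's theorem for a linearized Lorentz transformation, stated as a reference point rather than as the thread's own derivation (signs and index placements vary between textbooks, so treat this as a sketch under one common convention):

```latex
% Noether currents for L = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} (canonical, gauge-variant form):
M^{\lambda\mu\nu} = x^{\mu}T^{\lambda\nu} - x^{\nu}T^{\lambda\mu} + S^{\lambda\mu\nu},
\qquad
S^{\lambda\mu\nu} = F^{\lambda\nu}A^{\mu} - F^{\lambda\mu}A^{\nu},
\\[4pt]
P^{\nu} = \int d^{3}x\, T^{0\nu},
\qquad
J^{\mu\nu} = \int d^{3}x\, M^{0\mu\nu}.
```

The spin part $S^{\lambda\mu\nu}$ comes from the variation of the vector index of $A^{\mu}$ under the rotation, which is exactly the piece the replies above say is rarely written out.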
{"url":"http://www.physicsforums.com/showthread.php?p=4159459","timestamp":"2014-04-18T00:31:11Z","content_type":null,"content_length":"45126","record_id":"<urn:uuid:9f32a6dd-7d85-44fb-99e2-4ab1bbe87b63>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
NA Digest Sunday, April 21, 1991 Volume 91 : Issue 16

Today's Editor: Cleve Moler

Today's Topics:

From: Mike Heath <heath@csrd.uiuc.edu>
Date: Sun, 14 Apr 91 21:23:36 CDT
Subject: Last Call for SIAG/LA Prize

This is a reminder that the deadline is April 30 for nominations of papers for the SIAG/LA Prize for the best paper in applicable linear algebra for the years 1988-1990. Please see previous announcements for details and eligibility rules. Nominations should consist of a complete bibliographic citation and a brief statement justifying the nomination, and should be directed to the Prize Committee Chairman at the following address:

Michael T. Heath
Center for Supercomputing Research and Development
305 Talbot Laboratory, University of Illinois
104 South Wright Street, Urbana, IL 61801-2932
Phone: 217-244-6915  Fax: 217-244-1351  Email: heath@csrd.uiuc.edu

From: Rob MacLeod <macleod@vissgi.cvrti.utah.edu>
Date: Sun, 14 Apr 91 21:28:58 MDT
Subject: Sparse Solvers for Overdetermined Systems

Hello na-netters, I am looking for a solver that will work on a slightly overdetermined linear system which is very sparse and reasonably large (ca. 3024 by 2832). It arises from a minimization problem with which I can perform interpolation of electric potential values over a three-dimensional surface (the human thorax). The matrix is not symmetric, or shouldn't be in theory, and I have not yet checked diagonal dominance or positivity. In the past, I have used direct QR-solvers on the full matrix in problems of this type which were a bit smaller (1112 by 1048) in very tolerable amounts of time (6 minutes) on an IBM RS/6000 520. Unfortunately, the scaled-up version employing this approach for a 3024 by 2832 case ran for 12 hours before I gave up and stopped it. I need something considerably faster that will run as a two-step process so that I can either backsubstitute numerous right hand sides back into the decomposed matrix, or do an iterative computation that will run in a few minutes. Any suggestions out there?

Rob MacLeod, Ph.D.
Nora Eccles Harrison Cardiovascular Research and Training Institute (CVRTI)
Building 500, University of Utah, Salt Lake City, Utah 84112

From: Tilak Ratnanather <sg391@city.ac.uk>
Date: Mon, 15 Apr 91 12:28:59 +0100
Subject: Block Tridiagonal Matrices

The following problem arising in magnetism was brought to my attention by colleagues who'd greatly appreciate input from experienced numerical analysts. First question: is there a program that will compute the eigenvalues of a BLOCK TRIDIAGONAL (Hermitian) MATRIX? Attempts to search for one in netlib failed to yield a program. (Fortran or even C is the preferred language.) Second question: at a deeper level, the block tridiagonal Hermitian matrix M may be written in the shorthand form (B*, A, B), where A and B are 5x5 matrices, * indicates the complex conjugate, and M is 100x100. Is there a way of reducing this "large" problem to a sequence of reduced (5x5) problems? One step that has been suggested is to make use of the diagonalised form of A. The entries of A and B may be changed depending on the physical problem. Any ideas or pointers to references would be greatly appreciated. It may be possible to summarise the responses at a later date.
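(A modern aside, not part of the 1991 digest: MacLeod's question, a large, sparse, slightly overdetermined least squares problem with many right-hand sides, is exactly the setting iterative solvers such as LSQR were designed for. A minimal illustration in today's Python/SciPy, with random stand-in data rather than his thorax matrix:)

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

# Stand-in for the 3024 x 2832 interpolation matrix: random sparse demo data.
rng = np.random.default_rng(0)
A = sp.random(3024, 2832, density=1e-3, random_state=0, format="csr")
b = rng.standard_normal(3024)

# LSQR solves min ||Ax - b||_2 iteratively, touching A only through
# matrix-vector products, so each new right-hand side is another cheap call.
x, istop, itn, normr = lsqr(A, b, atol=1e-8, btol=1e-8)[:4]
print(itn, normr)
```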
Thanks a lot Tilak Ratnanather Dept of Mathematics City University Northampton Square London EC1V 0HB e-mail: sg391@city.ac.uk From: Roland England <R_ENGLAND@vax.acs.open.ac.uk> Date: Tue, 16 APR 91 14:19:12 GMT Subject: Shampine/Gordon Integrator in C In connection with Kathryn Brenan's enquiry, we were also about to start rewriting the Shampine/Gordon integration software (Adams numerical integration code for ordinary differential equations) in C. We would greatly welcome any available information, if this has already been done elsewhere, and particularly, of course, if there is a public domain version in existence which we could use. Thanks for any replies. Roland England Tel. +44-908-65-2329 The Open University Faculty of Mathematics Milton Keynes MK7 6AA Great Britain From: Steve Nichols <gt3930b@prism.gatech.edu> Date: 20 Apr 91 13:57:35 GMT Subject: C Source for a Stiff ODE Integrator I'm looking for C source code for a stiff ODE integrator. If anyone has converted LSODE (from the netlib) to C, this would be perfect. Thanks for any help, Steve Nichols Georgia Tech Physics Department From: Nick Higham <mbbgsnh@cms.manchester-computing-centre.ac.uk> Date: Sat Apr 13 14:31:37 PDT 1991 Subject: Three Measures of Precision in Floating Point Arithmetic Three measures of precision in floating point arithmetic by Nick Higham This note is about three quantities that relate to the precision of floating point arithmetic. For t-digit, rounded base b arithmetic the quantities are (1) machine epsilon (eps), defined as the distance from 1.0 to the smallest floating point number bigger than 1.0 (and given by eps = b**(1-t), which is the spacing of the floating point numbers between 1.0 and b), (2) mu = smallest floating point number x such that fl(1 + x) > 1, and (3) unit roundoff u = b**(1-t)/2 (which is a bound for the relative error in rounding a real number to floating point form). The terminology I have used is not an accepted standard; for example, the name machine epsilon is sometimes given to the quantity in (2). My definition of unit roundoff is as in Golub and Van Loan's book `Matrix Computations' [1] and is widely used. I chose the notation eps in (1) because it conforms with MATLAB, in which the permanent variable eps is the machine epsilon. [Ed. note: Well, not quite. See my comments below. --Cleve] The purpose of this note is to point out that it is not necessarily the case that mu = eps, or that mu = u, as is sometimes claimed in the literature, and that, moreover, the precise value of mu is difficult to It is helpful to consider binary arithmetic with t = 3. Using binary notation we have 1 + u = 1.00 + .001 = 1.001, which is exactly half way between the adjacent floating point numbers 1.00 and 1.01. Thus fl(1 + u) = 1.01 if we round away from zero when there is a tie, while fl(1 + u) = 1.00 if we round to an even last digit on a tie. It follows that mu <= u with round away from zero (and it is easy to see that mu = u), whereas mu > u for round to even. I believe that round away from zero used to be the more common choice in computer arithmetic, and this may explain why some authors define or characterize u as in (2). However, the widely used IEEE standard 754 binary arithmetic uses round to even. So far, then it is clear that the way in which ties are resolved in rounding affects the value of mu. Let us now try to determine the value of mu with round to even. A little thought may lead one to suspect that mu <= u(1+eps). 
For in the b = 2, t = 3 case we have x = u*(1+eps) = .001*(1+.01) = .00101 => fl(1 + x) = fl( 1.00101 ) = 1.01, assuming ``perfect rounding''. I reasoned this way, and decided to check this putative value of mu in 386-MATLAB on my PC. MATLAB uses IEEE standard 754 binary arithmetic, which has t = 53 (taking into account the implicit leading bit of 1). Here is what I found:

>> format compact; format hex
>> x = 2^(-53)*(1+2^(-52)); y = [1+x 1 x]
y = 3ff0000000000000 3ff0000000000000 3ca0000000000001
>> x = 2^(-53)*(1+2^(-11)); y = [1+x 1 x]
y = 3ff0000000000000 3ff0000000000000 3ca0020000000000
>> x = 2^(-53)*(1+2^(-10)); y = [1+x 1 x]
y = 3ff0000000000001 3ff0000000000000 3ca0040000000000

Thus the guess is wrong, and it appears that mu = u*(1+2^(42)*eps) in this environment! What is the explanation? The answer is that we are seeing the effect of ``double-rounding'', a phenomenon that I learned about from an article by Cleve Moler [2]. The Intel floating-point chips used on PCs implement internally the optional extended precision arithmetic described in the IEEE standard, with 64 bits in the mantissa [3]. What appears to be happening in the example above is that `1+x' is first rounded to 64 bits; if x = u*(1+2^(-i)) and i > 10 then the least significant bit is lost in this rounding. The extended precision number is now rounded to 53 bit precision; but when i > 10 there is a rounding tie (since we have lost the original least significant bit) which is resolved to 1.0, which has an even last bit. The interesting fact, then, is that the value of mu can vary even between machines that implement IEEE standard arithmetic.

Finally, I'd like to stress an important point that I learned from the work of Vel Kahan: the relative error in addition and subtraction is not necessarily bounded by u. Indeed on machines such as Crays that lack a guard digit this relative error can be as large as 1. For example, if b = 2 and t = 3, then subtracting from 1.0 the next smaller floating point number we have

Exactly:                          1.00 - .111 = .001
Computed, without a guard digit:  1.00 - .11  = .01

The least significant bit is dropped. The computed answer is too big by a factor 2 and so has relative error 1! According to Vel Kahan, the example I have given mimics what happens on a Cray X-MP or Y-MP, but the Cray 2 behaves differently and produces the answer zero. Although the relative error in addition/subtraction is not bounded by the unit roundoff u for machines without a guard digit, it is nevertheless true that fl(a + b) = a(1+e) + b(1+f), where e and f are bounded in magnitude by u.

[1] G. H. Golub and C. F. Van Loan, Matrix Computations, Second Edition, Johns Hopkins Press, Baltimore, 1989.
[2] C. B. Moler, Technical note: Double-rounding and implications for numeric computations, The MathWorks Newsletter, Vol 4, No. 1 (1990), p. 6.
[3] R. Startz, 8087/80287/80387 for the IBM PC & Compatibles, Third Edition, Brady, New York, 1988.

Editor's addendum: I agree with everything Nick has to say, and have a few more comments. MATLAB on a PC has IEEE floating point with extended precision implemented in an Intel chip. The C compiler generates code with double rounding. MATLAB on a Sun Sparc also has IEEE floating point with extended precision, but it is implemented in a Sparc chip. The C compiler generates code which avoids double rounding.
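(Another modern aside, not in the original digest: the two quantities Higham distinguishes are easy to probe empirically today. A small Python sketch follows; the value it finds for mu depends on the platform's rounding behaviour, which is exactly his point.)

```python
import numpy as np

# eps: the gap from 1.0 to the next representable double (the "geometric" definition).
eps = np.nextafter(1.0, 2.0) - 1.0    # 2**-52 for IEEE double precision

# mu, probed only over powers of two with the "1 + x > 1" trick:
x = 1.0
while 1.0 + x / 2 > 1.0:
    x /= 2
# With round-to-even, 1 + 2**-53 ties back to 1.0, so the loop stops at
# x = 2**-52; the true mu sits just above 2**-53, between the powers of two.
print(eps, x)
```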
On both the PC and the Sparc

eps = 2^(-52) = 3cb0000000000000 = 2.220446049250313e-16

However, on the PC

mu = 2^(-53)*(1+2^(-10)) = 3ca0040000000000 = 1.111307226797642e-16

while on the Sparc

mu = 2^(-53)*(1+2^(-52)) = 3ca0000000000001 = 1.110223024625157e-16

Note that mu is not 2 raised to a negative integer power. MATLAB on a VAX usually uses "D" floating point (there is also a "G" version under VMS). Compared to IEEE floating point, the D format has 3 more bits in the fraction and 3 fewer bits in the exponent. So eps should be 2^(-55), but MATLAB says eps is 2^(-56). It is actually using the 1+x > 1 trick to compute what we're now calling mu. There is no extended precision or double rounding, and ties between two floating point values are chopped, so we can find mu by just trying powers of 2. On the VAX with D float

eps = 2^(-55) = 2.775557561562891e-17
mu = 2^(-56) = 1.387778780781446e-17

The definition of "eps" as the distance from 1.0 to the next floating point number is a purely "geometric" quantity depending only on the structure of the floating point numbers. The point Nick is making is that the more common definition of what we here call mu involves a comparison between 1.0 + x and 1.0, and subtle rounding properties of floating point addition. I now much prefer the simple geometric definition, even though I've been as responsible as anybody for the popularity of the definition involving addition. -- Cleve

From: Milo Dorr <dorr@hyperion.llnl.gov>
Date: Sun, 14 Apr 91 15:34:53 PDT
Subject: Meeting on Mathematics, Computations, and Reactor Physics

The American Nuclear Society (Mathematics & Computation Division and Reactor Physics Division) International Topical Meeting: April 28 - May 2, 1991, Green Tree Marriott, Pittsburgh, PA. The banquet speaker on Wednesday, May 1 will be George E. Lindamood of the Gartner Group. The title of his lecture will be "Why Supercomputing Matters: A Perspective of the Proposed Federal High Performance Computing and Communication Program". There will also be a plenary session at 9:30am on Monday, April 29, consisting of a panel discussion on the topic "Perspectives on Advances in Supercomputing Performance" moderated by James R. Kasdorf from Westinghouse Corporate Computer Services and the Pittsburgh Supercomputing Center. The panelists and the titles of their prepared presentations are:

Gregory J. McRae (Carnegie Mellon University) "Grand Challenges in Computational Science"
W. B. Barker (BBN Advanced Computers, Inc.) "Parallel Computing: Past, Present, and Future"
M. L. Barton (Intel Supercomputing Systems Division) "Technology Development for Supercomputing 2000"
Kenichi Miura (Fujitsu America, Inc.) "Perspectives on Advances in Supercomputer Performance - Fujitsu's View"
Steve Nelson (Cray Research, Inc.) "Heterogeneous Supercomputing: Now and Then"

For a copy of the Technical Program or further information, please contact the Program Chairman:

I. K. Abu-Shumays
RT-Mathematics, 34F, Bettis Atomic Power Laboratory
P. O. Box 79, West Mifflin, PA 15122-0079
(412) 476-6469, FAX (412) 476-5151

From: Biswa Datta <dattab@math.niu.edu>
Date: Mon, 15 Apr 91 22:40:59 CDT
Subject: Second NIU Conference

Second NIU Conference on Linear Algebra, Numerical Linear Algebra and Applications
May 3 - 5, 1991, Holmes Student Center, Northern Illinois University

Organizer and Chairman - Biswa Datta, Northern Illinois University
Advisor: Hans Schneider, University of Wisconsin - Madison
Sponsored by The Institute for Mathematics and Its Applications (Minnesota), the International Linear Algebra Society (ILAS), and Northern Illinois University

The purpose of the conference is to bring together researchers in linear algebra, numerical linear algebra, and those working in various application areas for an effective exchange of ideas and discussion of recent developments and future directions of research. All events will take place at the Holmes Student Center, Northern Illinois University. Registration will begin at 7pm on Thursday, May 2. The meeting itself will be Friday, May 3, through Sunday, May 5.

Invited speakers include:
R. Thompson, University of California-Santa Barbara
R. Plemmons, Wake Forest University
Clyde Martin, Texas Tech University
Roger Horn, The Johns Hopkins University
Charles R. Johnson, College of William and Mary
Daniel Hershkowitz, Technion-Israel Institute of Technology
Robert Grossman, University of Illinois at Chicago
James R. Bunch, University of California-San Diego
Floyd Hanson, University of Illinois at Chicago
Chris Bischof, Argonne National Laboratory
William Gragg, Naval Post Graduate School at Monterey, California
Lothar Reichel, Naval Postgraduate School
Richard Brualdi, University of Wisconsin - Madison
Thomas Laffey, University College, Dublin, Ireland
Jose Dias de Silva, University of Lisbon, Lisbon, Portugal
S. Campbell, North Carolina State University
Kenneth Clark, Army Research Office
William Hager, University of Florida
George Cybenko, University of Illinois at Urbana-Champaign
Patricia Eberlein, University of Buffalo/SUNY
Kermit Sigmon, University of Florida
Daniel Boley, University of Minnesota
Homer Walker, Utah State University
Roland Freund, RIACS, NASA AMES Research Center
A. Yeremin, USSR Academy of Sciences
David Young, University of Texas, Austin
Michael Neumann, University of Connecticut
Avi Berman, Technion-Israel Institute of Technology
Chris Byrnes, Washington University
S. P. Bhattacharyya, Texas A & M University
Bijoy Ghosh, Washington University
Rama K. Yedavalli, Ohio State University
S. Friedland, University of Illinois at Chicago
S. Wright, Argonne National Laboratory
Pradip Misra, Wright State University
Mohsen Pouramdi, Northern Illinois University

For more information about the program, contact faculty coordinator Biswa Nath Datta, Department of Mathematical Sciences, at (815) 753-6759. For more information about the logistics, contact Margaret Shaw, College of Continuing Education, at (815) 753-1458.

From: J. C. Butcher <butcher@mat.aukuni.ac.nz>
Date: Wed, 17 Apr 91 10:13:28 NZS
Subject: Position at the University of Auckland

Applied and Computational Mathematics Unit. The University of Auckland invites applications for a lectureship in the Applied and Computational Mathematics Unit within the Department of Mathematics and Statistics. Applicants should have a proven record in teaching and research in some branch of Applied or Computational Mathematics.
Applications from candidates with expertise in a field that will strengthen and enhance the existing research interests of the Applied and Computational Mathematics Unit in differential equations and their numerical solutions, and applications of scientific computing to physical and other problems, are particularly welcome. The University of Auckland has a student population of about 17,000 and is the only university in Auckland, a city with about one million inhabitants. Mathematics and Statistics is the largest department. University positions are organised in a system similar to that of the UK. Thus a lectureship is intended to be a permanent appointment although, in the first instance, it is for a four year term. Most appointments are continued after that period. The salary scale for a lecturer begins at $NZ37,440 and increases in annual increments. (Details of the salary scale and prospects for promotion are available on request.) It is hoped that the successful applicant will take up his or her duties by 1 August 1991 or soon after. Applications, in accordance with the "Method of Application" (available on request), should be submitted as soon as possible and no later than 30 June 1991. The University of Auckland is an Equal Opportunity Employer. For further information or enquiries about the position please contact the Head of the Applied and Computational Mathematics Unit, Professor J. C. Butcher, using email: butcher@mat.aukuni.ac.nz or na.butcher@na-net.ornl.gov

From: Alan Craig <Alan.Craig@durham.ac.uk>
Date: Thu, 18 Apr 91 16:35:46 BST
Subject: Postgraduate Opportunity at University of Durham

The Department of Mathematical Sciences, University of Durham and the British Gas Research Station in Northumberland have a CASE studentship in Adaptive Finite Element Analysis for Nonlinear Elastic Problems. The award is for three years, tenable from October 1991, and for a programme of research leading to a PhD.

Outline of project: the finite element method is the computational tool for the solution of structural analysis problems. The technique relies on replacing the underlying differential equations by an approximating set of equations. However the correct choice of this approximating set is by no means a trivial procedure. In recent years adaptive methods have been developed which attempt to choose the approximation in an automatic way linked to the problem. The analysis and implementation of these methods for linear problems is now well understood. The situation for nonlinear systems is, however, less satisfactory, and the interest in adaptive techniques for general nonlinear response stems from the need to maintain structural integrity in hazardous situations. In particular British Gas are interested in techniques which can be used to analyse their offshore structures.

The project will maintain a careful balance between theoretical analysis and practical implementation. Experience has shown that in this area, as in many others, the implementation can be guided and enriched by the analysis. The ultimate aim is to produce high quality algorithms for nonlinear problems. As part of the project the successful applicant will work in the British Gas Research Station for periods totaling three months under the supervision of the Industrial Supervisor, Mrs. Jane Haswell. The applicant will have, or will be about to obtain, a good honours degree in Mathematics, Engineering or a related discipline. In addition to the standard SERC award, British Gas will make an extra payment of 1620 pounds per annum to the student.
Further information can be obtained from Dr. Alan Craig at the address below and informal enquiries are welcomed. Application should be made to the same address and should include the names and addresses of at least two referees, one of whom should be able to judge the applicant's suitability for research.

Department of Mathematical Sciences
University of Durham
South Road, Durham DH1 3LE

From: Antonio Pizarro
Date: Fri, 19 Apr 91 08:35:56 IST
Subject: Position at Centenary College of Louisiana

Applications are invited for two tenure-track positions in MATH. A Ph.D. in Mathematics is required. Linear Algebra, Topology or Applied Math are preferred fields, but all fields are considered. Beginning Fall 1991. Send resume and three letters of recommendation to: Antonio G. Pizarro, Chair, Math Department, Centenary College of Louisiana, Shreveport LA 71134, USA.

From: Anthony Skjellum <tony@helios.llnl.gov>
Date: Fri, 19 Apr 91 11:15:54 PDT
Subject: Position at Lawrence Livermore Laboratory

Postdoctoral Position in Parallel Computation - Lawrence Livermore. A postdoctoral position in scientific parallel algorithms research is presently available in the Numerical Mathematics Group (NMG) at the Lawrence Livermore National Laboratory, Computing and Mathematics Research Division. Candidates should be well versed in parallel computation, particularly distributed memory machines and issues. Experience with large-scale parallel scientific algorithms and applications is strongly desired. In-depth experience with the Unix operating system and the C/C++ languages is also desired. The candidate will participate in the growing NMG research effort in parallel computation, but also will enjoy freedom to explore his or her own allied research interests. A one year position is offered, renewable for a second year if both NMG and the fellow concur. The candidate should have completed his or her Ph.D. in an appropriate discipline prior to October 1, 1991. Starting date is on or after October 1, 1991. US citizenship is required. Send three letters of recommendation, resume, and publications list to Dr. Anthony Skjellum, LLNL L-316, PO Box 808, Livermore, CA 94550. (415) 422-1161. e-mail: tony@helios.llnl.gov.

End of NA Digest
{"url":"http://www.netlib.org/na-digest-html/91/v91n16.html","timestamp":"2014-04-18T08:04:31Z","content_type":null,"content_length":"28281","record_id":"<urn:uuid:a291cdcd-f31f-4449-96ee-99bdb3f3a9b9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: Related Rates question?
{"url":"http://openstudy.com/updates/509897e7e4b085b3a90d73b9","timestamp":"2014-04-19T10:21:27Z","content_type":null,"content_length":"305338","record_id":"<urn:uuid:fb1ee74c-b952-453b-b155-0b4ebf2f09c9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
Trochor – Animated Virtual Harmonograph

The pattern traced out by Trochor is what you'd get if you took a pencil moving in ellipses, and used it to draw on a sheet of paper that's also moving in ellipses. It's a bit like a spirograph, but not constrained in quite the same ways. It's more like a harmonograph; more on that later. Have fun, play around with the settings, especially 'ratio'; that's probably the best way of figuring out what's going on. 'Eccentricity', by the way, is a measure of how flattened an ellipse is – an ellipse with 0 eccentricity is a circle, one with 1 is a line.

The Long Version

I first stumbled on trochoids by playing around with plotting trigonometric functions. I wondered what would happen if you took the basic parametric equations for plotting a circle -

x = radius * cos (θ)
y = radius * sin (θ)

- and added them to the equation for another circle, turning more than one circle in the time it takes to draw the first one, so we get something like the trails left by a point on a wheel rolling around another wheel (an epitrochoid or hypotrochoid), with these equations:

x = radius[1] * cos (θ) + radius[2] * cos (ratio * θ)
y = radius[1] * sin (θ) + radius[2] * sin (ratio * θ)

This makes some pleasing shapes, so I thought I would try animating it. The most obvious thing to do is to change the relative phase of the first and second circles, but this just turns the whole thing round. What's more interesting is to animate the phase of the x component (the cosine) in the opposite direction to the y component (sine):

x = radius[1] * cos (θ) + radius[2] * cos (ratio * θ + f)
y = radius[1] * sin (θ) + radius[2] * sin (ratio * θ – f)

This makes the circle shrink to a line, then grow into a circle flowing the opposite way, and then go through the same cycle again. Combined with the first circle, this makes the sort of animations you can see above.

Pleased with the results of this, in 2001 I made an applet to let people play with it, and that's where things stood till September 2004. Then I read a gorgeously produced wee book called Harmonograph: A Visual Guide to the Mathematics of Music. Harmonograph provides an impressively clear and thorough introduction to the basics of music theory, and ways of visualising music. It does this in 50-odd pages, replete with beautiful illustrations of harmony made by harmonographs, kaleidophones, Chladni plates and the like.

A harmonograph is an instrument invented in the mid-nineteenth century, using two or more pendulums to produce beautiful pictures. The pictures are especially beautiful when the ratios of the frequencies of the pendulums are close to whole numbers, just as chords are especially beautiful when the ratios of the frequencies of the notes making them up are close to whole numbers. The pictures I was making with Trochor were almost the same pictures produced by a rotary harmonograph, except that the pendulums of a harmonograph steadily wind down as they draw, and Trochor was arbitrarily restricted to whole-number ratios between the two drivers.

A kaleidophone, like a harmonograph, creates images of harmonics, but these are made only fleetingly, in light. It consists of a metal rod fixed in a stand, with a reflective bead on top, which is struck and then stroked with a bow. A beam of light reflected from the top of it casts patterns onto a screen, rather like those of a rotary harmonograph, but more complex.
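(A quick way to see these curves without the applet: the Python/matplotlib sketch below is my own illustration, using the animated-trochoid equations just quoted, and traces one frame of the animation for a fixed phase f; the parameter values are arbitrary choices, not ones from the post.)

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 2 * np.pi, 4000)
r1, r2, ratio, f = 1.0, 0.6, 7.0, 0.8   # illustrative values only

# The animated-trochoid equations above, for a single value of f.
x = r1 * np.cos(theta) + r2 * np.cos(ratio * theta + f)
y = r1 * np.sin(theta) + r2 * np.sin(ratio * theta - f)

plt.plot(x, y, linewidth=0.7)
plt.axis("equal")
plt.show()
```

Stepping f through a range of values and redrawing reproduces the shrink-to-a-line, grow-the-other-way cycle described above.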
The biggest difference, judging by the illustrations I have seen, is that it is not restricted to circular motion: free to vibrate in any direction, it produces patterns composed of interacting linear waves and ellipses.

I have incorporated the idea of using ellipses, together with the damped spiralling motion and non-integer ratios of the harmonograph, into Trochor. With 0 eccentricity, we are back to the rotary harmonograph; with 1, we have the linear harmonograph, producing Lissajous-type figures. The equations of this version are as follows:

x = (1-damping)^n * (axis[1a] * cos (θ) + axis[2a] * cos (ratio * θ + f))
y = (1-damping)^n * (axis[1b] * sin (θ) + axis[2b] * sin (ratio * θ – f))

The code for this applet is available for anyone curious – but bear in mind I wrote most of this around eight years ago, and the rest three years or so after that, and it's not necessarily the best code! I should probably re-do this in Processing really.

Comments:

Wow, amazing

Pretty neato.

beautiful and perplexing animations these, i often imagine visual music interfaces and find your work very interesting along with some other stuff i have found online, might be sending you a message someday, asking for some code. must learn processing first though… i just read the harmonograph book and it has just occurred to me that it is possible to reverse engineer the concept and make sounds from your applets, generate sounds from your drawings?

Absolutely love these things of beauty. No idea about the maths. I would imagine you have heard of krazydad.com, really nice flash toys. So you know, safari 3.2.1 doesn't handle this site very well, but Firefox 3.5.2 does fine. I found the site by googling harmonograph.

Again it is a good thing that you have made this site. i cannot understand it. i am not good in mathematics. please give me a pointer on how to become good in trigonometry!!!!!

Hi kerth, you might like to have a read of my trigonometry page at http://oolong.co.uk/trig.htm …that's about the best I can offer you!

Here is another really cool online harmonograph. Move and zoom with the mouse scroll wheel
{"url":"http://oolong.co.uk/play/trochor","timestamp":"2014-04-21T04:30:44Z","content_type":null,"content_length":"30726","record_id":"<urn:uuid:e24df31e-82ac-47c2-8c12-d53164831b6e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
A string of symbols that makes a mathematical statement. Listed alphabetically below are some of the common formulas utilized in building asset management to establish expenditure quantum. In other words, they answer the question: "How much money will we need?" Reinvestment formulas can be grouped into three general categories, as follows:
{"url":"http://www.assetinsights.net/Glossary/G_Formula.html","timestamp":"2014-04-18T13:37:59Z","content_type":null,"content_length":"5449","record_id":"<urn:uuid:adb29fc6-6c1d-46dc-9da0-fae1816914a5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with evaluating Trig Limits

March 9th 2009, 04:17 PM

Need help with evaluating Trig Limits

Hey, this is my first time posting here so I will try to keep this brief! So I am redoing pre-calculus to upgrade my math for post-secondary, and it's been a year since I did this in grade 12 so I'm pretty rusty. I am going through limits at the moment and I came across the equation

$\lim_{x \rightarrow 0} \frac{\sin Ax}{Bx} = \frac{A}{B}$

or, in its basic form,

$\lim_{x \rightarrow 0} \frac{\sin x}{x} = 1$

Now this has helped me greatly in evaluating limits, but I remember there being a cosine limit something like this that also equals 1, which I could use to isolate and simplify the problem. I just can't remember it, and I can't seem to find it in my textbooks. Does anyone know of anything like this? Thank you very much for the help!

March 9th 2009, 05:28 PM

I'm not sure if there is one for cosine, but it is true that:

$\lim_{x \rightarrow 0} \frac{x}{\sin{x}}=1$

March 10th 2009, 08:02 PM

Further thinking about this made me realize that there is not one with cosine. The reason is that in calculus you will learn (or have learned) that $\lim_{x \rightarrow 0} \frac{x}{\sin{x}}=1$ is in an indeterminate form, so you can use l'Hôpital's rule to get the limit. $\lim_{x \rightarrow 0} \frac{x}{\cos{x}}$ is not in an indeterminate form; it evaluates directly to $0/1 = 0$. If this doesn't make sense now, just wait until you take calculus.

March 11th 2009, 02:17 AM

I believe you are thinking of $\lim_{x\rightarrow 0} \frac{1- \cos(x)}{x}= 0$
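For reference (this derivation is my addition, not part of the original thread), the cosine limit follows from the sine limit alone, without l'Hôpital:

$\frac{1-\cos x}{x} = \frac{(1-\cos x)(1+\cos x)}{x(1+\cos x)} = \frac{\sin^2 x}{x(1+\cos x)} = \frac{\sin x}{x} \cdot \frac{\sin x}{1+\cos x} \longrightarrow 1 \cdot \frac{0}{2} = 0$ as $x \to 0$.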
{"url":"http://mathhelpforum.com/pre-calculus/77808-need-help-evaluating-trig-limits-print.html","timestamp":"2014-04-21T14:54:52Z","content_type":null,"content_length":"9271","record_id":"<urn:uuid:164c6490-570b-42a6-be82-119ef1c37bed>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
How to optimize the following algorithm?

I have a large matrix M, and the rows have been sorted based on the value of some entry. Next I want to identify all rows that are dominated, i.e., row #i is dominated by row #j if j<i and M(j,:)<=M(i,:). If row #i is dominated, we can remove row i. I noticed that removing row #i one at a time takes a lot of time for a large matrix, so I set up an index vector idx, and each time a row is dominated I set idx(i)=1. At the end I remove all the dominated rows at once with M(idx==1,:) = []. I have run my code several times, and found that the bottleneck is the pair of nested loops. Below I show some code:

for i = 3:N
    j = 2;  % looping j from i-1 downward may be faster
    while j <= i-1
        if all(M(j,1:N) <= M(i,1:N))
            idx(i) = 1; break   % row i is dominated by row j
        end
        j = j + 1;
    end
end

I am wondering if someone can help me to optimize the code? Can we do better here? Thanks.

2 Comments

Is idx preallocated? idx = zeros(N,1);

Yes, Sean, you are right. That is the index that records which row is dominated; I will delete those rows at the end.

No products are associated with this question.
{"url":"http://www.mathworks.com/matlabcentral/answers/78220","timestamp":"2014-04-24T16:53:09Z","content_type":null,"content_length":"24078","record_id":"<urn:uuid:ec44e324-2775-4dfe-b36f-47982cc6e2d5>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Excel: Calculate Sliding Scale Tax/Commission. Calculate Bracket Tax/Commission

See Also: Calculate Sliding Scale Tax Custom Excel Function VBA. Download Working Examples

Calculating tax, or commission, that is based on a sliding scale, or by bracket, can be complicated. The formula below is one that can be built from Excel's own functions, that is, the IF function and the SUM function. The formula that can be used is:

=IF(A5>Level4Tax,SUM((A5-Level4Tax)*Level4TaxRate,Level3TaxAmount*Level3TaxRate,Level2TaxAmount*Level2TaxRate,Level1TaxAmount*Level1TaxRate),IF(A5>Level3Tax,SUM((A5-Level3Tax)*Level3TaxRate,Level2TaxAmount*Level2TaxRate,Level1TaxAmount*Level1TaxRate),IF(A5>Level2Tax,SUM((A5-Level2Tax)*Level2TaxRate,Level1TaxAmount*Level1TaxRate),IF(A5>Level1Tax,(A5-Level1Tax)*Level1TaxRate,0))))

This formula uses Named Ranges for easier reading and modification. A named range can be created by selecting the cell, then typing the name wanted in the Name Box (left of the formula bar) and pushing Enter. To make this easier to read, I have placed the cell names next to their named cells. In the formula above, only the grey cells are being used. The last column (Amount of Tax Payable on) is the result of subtracting each Level*Tax from the Level*Tax one row down. For example, $13,000.00 (Level1TaxAmount) is derived by subtracting Level1Tax (12000) from Level2Tax (25000). That is: 25000 - 12000 = 13000.

If you prefer, these key numbers can become Named Constants as opposed to Named Ranges. For example, to create the Named Constant Level1Tax you would go to Insert>Name>Define and type: Level1Tax in the Names in workbook: box, then: =12000 in the Refers to: box, then click Add.

Using Excel Vlookup Formula

There is another way, which some may prefer, where the VLOOKUP function/formula is used. This method relies on some "quick deductions" being pre-calculated and placed at the end of the white & grey table shown above. This is best seen by Downloading Working Example. Thanks to Albert Tsang for this excellent method.

We can go one step further toward simplifying the calculation by using Named Formulas for each tax level calculation. After doing this, we can then use;

Here are the steps to achieve this. BTW, this method can also be used to overcome the 7 nested IF Function limitation.

1) Create Named Ranges, or Named Constants, that will hold the figures needed. See screen shot above.

2) Place your gross pays in cell A1 down.

3) Select cell B1 and go to Insert>Name>Define.

4) In the Names in workbook: box type: Level1TaxCalc Then, in the Refers to: box type: =SUM((A1-Level1Tax)*Level1TaxRate) Then click Add. **Note how we have referred to cell A1. This now makes the Named Formula (Level1TaxCalc) always look on the same row, in the immediate column to the left, for the gross pay.

5) Repeat step 4 using the names and formulas shown below;

It is important to know that Level*TaxCalc will ALWAYS look on the same row, but in the left column, for the gross pay figure.
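For readers who want the same bracket logic outside Excel, here is a minimal Python sketch (my illustration; only the 12,000 and 25,000 thresholds come from the article, and the rates are assumed):

def sliding_scale_tax(amount, brackets):
    """Compute tax on a sliding scale.

    brackets: list of (threshold, rate) pairs in ascending order.
    Each rate applies only to the slice of `amount` above its threshold
    and below the next one -- the same logic as the nested-IF formula.
    """
    tax = 0.0
    for i, (threshold, rate) in enumerate(brackets):
        if amount <= threshold:
            break
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        tax += (min(amount, upper) - threshold) * rate
    return tax

# Example with the two thresholds quoted in the article (rates assumed):
brackets = [(12000, 0.10), (25000, 0.20)]
print(sliding_scale_tax(30000, brackets))  # (25000-12000)*0.10 + (30000-25000)*0.20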
{"url":"http://www.ozgrid.com/Excel/sliding-bracket.htm","timestamp":"2014-04-16T23:05:08Z","content_type":null,"content_length":"11800","record_id":"<urn:uuid:5203ae35-1d73-4011-a11c-7348356bac08>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Fallacy of the undistributed middle

The fallacy of the undistributed middle is a logical fallacy that is committed when the middle term in a categorical syllogism is not distributed in either premise. It is thus a syllogistic fallacy.

The fallacy of the undistributed middle takes the following form:

1. All Zs are Bs
2. Y is a B
3. Therefore, Y is a Z

It may or may not be the case that "all Zs are Bs," but in either case it is irrelevant to the conclusion. What is relevant to the conclusion is whether it is true that "all Bs are Zs," which is ignored in the argument.

The fallacy is similar to affirming the consequent and denying the antecedent in that if the terms were swapped around in either the conclusion or the first co-premise, then it would no longer be a fallacy. For example:

1. All students carry backpacks.
2. My grandfather carries a backpack.
3. Therefore, my grandfather is a student.

1. All students carry backpacks.
2. My grandfather carries a backpack.
3. Everyone who carries a backpack is a student.
4. Therefore, my grandfather is a student.

The middle term is the one that appears in both premises — in this case, it is the class of backpack carriers. It is undistributed because neither of its uses applies to all backpack carriers. Therefore it can't be used to connect students and my grandfather — both of them could be separate and unconnected divisions of the class of backpack carriers. Note how "carries a backpack" is truly undistributed: my grandfather is someone who carries a backpack; a student is someone who carries a backpack.

Specifically, the structure of this example results in affirming the consequent. However, if the latter two statements were switched, the syllogism would be valid:

1. All students carry backpacks.
2. My grandfather is a student.
3. Therefore, my grandfather carries a backpack.

In this case, the middle term is the class of students, and the first use clearly refers to 'all students'. It is therefore distributed across the whole of its class, and so can be used to connect the other two terms (backpack carriers, and my grandfather). Again, note that "student" is distributed: my grandfather is a student and thus carries a backpack.
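As a small illustration (my addition, not part of the entry), the counterexample can be checked mechanically with sets: both premises hold while the conclusion fails.

# Universe of students (Z) and backpack carriers (B), plus one grandfather.
students = {"alice", "bob"}                           # Z
backpack_carriers = {"alice", "bob", "grandfather"}   # B

# Premise 1: all students carry backpacks (Z is a subset of B) -- True
premise1 = students <= backpack_carriers

# Premise 2: grandfather carries a backpack (y is in B) -- True
premise2 = "grandfather" in backpack_carriers

# "Conclusion": grandfather is a student (y is in Z) -- False
conclusion = "grandfather" in students

print(premise1, premise2, conclusion)  # True True False: true premises, false conclusion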
{"url":"http://www.reference.com/browse/Fallacy+of+equivication","timestamp":"2014-04-21T04:51:57Z","content_type":null,"content_length":"80694","record_id":"<urn:uuid:fedb495d-d265-456a-a830-f6eca7b20290>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Economic phenomenon predicting and analyzing system using neural network - Mitsubishi Denki Kabushiki Kaisha

1. Field of the Invention

This invention relates to a system for predicting and analyzing economic phenomena, such as variations in stock prices, bond prices and exchange rates, using a neural network.

2. Description of the Related Art

FIG. 2 of the accompanying drawings shows an economic phenomenon predicting and analyzing system, which is disclosed in, for example, "Stock Market Prediction System with Modular Neural Networks", by T. Kimoto and K. Asakawa, in Proceedings of International Joint Conference on Neural Networks, June 1990. This system predicts variations in TOPIX and analyzes causes for the variations. TOPIX (Tokyo Stock Exchange Prices Index) is a kind of stock index used in Japan and is a stock index for stock on the market in Japan.

This system comprises two subsystems: a prediction system 1 and an analysis system 2. The prediction system 1 is composed of a preparation module 3, a number of neural networks 4 connected to the back stage of the preparation module 3, and a unification module 5 for obtaining a weighted average of the outputs of the neural networks 4.

The preparation module 3 inputs time series data 6 indicating the time variation of TOPIX in a predetermined past period. The preparation module 3 further inputs various time series data 7-1-7-n indicating the past time variations such as of turnover, interest rate, foreign exchange rate and New York Dow-Jones average. Moreover, the preparation module 3 performs a logarithmic arithmetic operation, a normalization arithmetic operation and an error function arithmetic operation on the input time series data and then supplies the results to the individual neural networks 4.

Each neural network 4 has a hierarchical structure including an input layer 8, a hidden layer 9 and an output layer. In FIG. 2, each of the output layers is composed of a single output neuron 10. Each neural network 4 cannot be used for prediction until it is provided with learning. The learning of the neural network 4 is provided according to a so-called back propagation method. During this process, learning data including two kinds of data, i.e. input data and teaching data, have to be given to the neural network 4. Data indicating economic phenomena that have actually occurred in the past are used as the input data, and data indicating economic phenomena that have actually occurred following the past economic phenomena are used as the teaching data.

More specifically, the input data are time series data indicating variations of TOPIX in a period and data indicating variations such as of turnover, interest rate, foreign exchange rate and New York Dow-Jones average in the same period, and the teaching data are data indicating the actual variations of TOPIX in a period following the previous period.

In the system of FIG. 2, since the preparation module 3 is located on the front stage of the neural networks 4, the time series data indicating variations of TOPIX of the input data are given to the preparation module 3 as input data 6 and are then input to the neural networks 4. Likewise, of the input data, the time series data indicating variations such as of turnover, interest rate, foreign exchange rate and New York Dow-Jones average are given to the preparation module 3 as input data 7-1-7-n and are then input to the neural networks 4. For learning, all prepared learning data is repeatedly input to the neural networks 4.
As all learning data is thus repeatedly given according to the back propagation method, the individual neural networks 4 are self-organized. Upon termination of this learning, each neural network 4 will be an organization which is provided with learning of past economic phenomena by experience. Therefore, after termination of the learning process, the economic time series data 6, 7-1-7-n are given successively to the preparation module 3, so that the result of prediction of a future economic phenomenon based on the experience of the past economic phenomena can be obtained from the neural networks 4 via the unification module 5.

The unification module 5 calculates the weighted average of the outputs of the output neurons 10 of the individual neural networks 4. Specifically, the unification module 5 performs the following arithmetic operation.

Firstly, the rate of increase of TOPIX at a time t is represented by TOPIX(t)/TOPIX(t-1), where TOPIX(t) is the stock index at week t. The unification module 5 obtains a logarithm, usually a natural logarithm, of this value. In other words, γ_t is obtained by equation (1):

γ_t = ln( TOPIX(t) / TOPIX(t-1) )   (1)

Secondly, the return γ_N(t) at a time t is obtained using the following equation (2):

γ_N(t) = Σ_{i=1}^{N} φ^i · γ_{t+i}   (2)

where φ^i is the weight applied to γ_{t+i}, the natural logarithm of the rate of increase of TOPIX at time t+i. φ is determined within a range of 0.5 to 1, so that the weight φ^i decreases as i becomes large. Since a large i means a more distant future, equation (2) is a weighted-average operating equation which weights the contribution to the return γ_N(t) less heavily the further it lies in the future. The unification module 5 outputs the return γ_N(t) obtained as the result of this weighting arithmetic operation.

Accordingly, the output 11 of the unification module 5 will be an index indicating the weighted-average rate of increase of TOPIX in a predetermined period after the present time. This value will be positive if the stock price increases in the future and negative if the stock price decreases in the future. From the unification module 5, the return γ_N(t), as significant data for estimating economic trends from the present time (after a time t), can be obtained as the output 11. A period to give the data 6, 7-1-7-n to the prediction system 1 should preferably be a week.

Further, the above-identified publication discloses a method of analyzing the causal relationship between the time series input data 6, 7-1-7-n and the output value 11 using the individual neural networks 4 provided with learning. In this method, the number of neurons of each hidden layer 9 is predetermined to be small (e.g., five). For analyzing the causal relationship between the time series input data 6, 7-1-7-n and the output of the individual neural networks 4, the time series input data 6, 7-1-7-n for learning are input to the analysis system 2 as input data 12, and the corresponding outputs of the individual hidden layers 9 are input to the analysis system 2 as hidden layer outputs 13. A multivariate analysis is performed over the input data 12 as independent variables and the outputs 13 as dependent variables. Specifically, a cluster analysis is made over vectors of the outputs 13 of the individual hidden layers 9. When various learning data 12 are given in the learning process, similar outputs 13 can be obtained from the hidden layers 9 for different learning data 12. The set of such learning data is called a cluster.
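A minimal sketch of the unification arithmetic in equations (1) and (2) above (my illustration; the horizon N and the value of φ are assumptions, the publication constraining φ only to lie between 0.5 and 1):

import math

def weighted_forward_return(topix, t, N=4, phi=0.8):
    """Equations (1)-(2): weighted average of future log-returns.

    topix : list of index values, topix[t] is the level at week t
    phi   : discount in (0.5, 1); phi**i shrinks more distant weeks
    """
    total = 0.0
    for i in range(1, N + 1):
        log_return = math.log(topix[t + i] / topix[t + i - 1])  # eq. (1)
        total += (phi ** i) * log_return                        # eq. (2)
    return total

# toy series: a gently rising index
series = [100 + 0.5 * k for k in range(12)]
print(weighted_forward_return(series, t=3))  # positive -> rising market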
The analysis system 2 is a system for obtaining the causal relation between the time series input data 6, 7-1-7-n and the output value 11 by sorting the clusters. Generally, the greater the number of neurons constituting each of the hidden layers 9, the higher the degree of prediction accuracy of the prediction system 1 that will be obtained. On the other hand, when the number of neurons constituting each of the hidden layers 9 is set to be as small as the number of clusters, redundancy will be eliminated, and therefore a cluster analysis using the outputs 13 of the hidden layers 9 as shown in FIG. 2 will become easy. The number of clusters is usually several, and hence the number of neurons of each of the hidden layers 9 is determined to be small so as to meet the number of clusters. Thus the redundant neurons are eliminated.

This analysis system 2 is convenient for providing a detailed analysis of economic phenomena. Specifically, by researching the learning data 12 sorted into individual clusters by the analysis system 2, a more detailed analysis of economic phenomena can be achieved. For example, by researching the dates of the learning data 12 belonging to an individual cluster on the time series data of stock indices, it is possible to determine the kind of market (i.e., a bull market, a stagnant market or a bear market) corresponding to that cluster. Further, by researching the frequency distribution of the input data (learning data) belonging to an individual cluster, it is possible to research the main factor of occurrence of the market trend corresponding to that cluster. Namely, if the input data (learning data 12) sorted into a cluster simply followed the overall distribution of the input data, nothing particular could be concluded; but where the distribution within the cluster is deviated, the values of the input data (learning data 12) near the deviated values can be considered to be among the factors of occurrence of the market trend corresponding to the cluster.

In the prior system, it is preferable that the number of neurons constituting each of the hidden layers be as many as the clusters into which the learning data is to be sorted. However, it is not possible to anticipate in advance how many clusters the input data should actually be sorted into. Therefore, the number of neurons has to be determined by a trial-and-error experimental method. Since the number of clusters is only a few, the number of hidden layer neurons was restricted in order to satisfy the demand of the analysis system. However, to improve the prediction performance, the number of hidden layer neurons is preferably large. In the prior system, therefore, the demand for removing redundant neurons for the sake of the analysis system has been the bottleneck in improving the prediction performance.

Further, with the prior system, analyses such as a technical analysis could not be performed. A technical analysis, like fundamental analysis, is one of the classic economic analyses; it is a method of checking the variation patterns of stock prices in the past using, for example, a chart, and of grasping variations of stock prices after the present from those past variations. This method adopts conventional statistical techniques and enables the obtaining of the causal relation between the variations of input data and the output. However, with the prior system using neural networks, since a technical analysis could not be performed, neither the causal relationship between the past variation pattern and the future variation nor the causal relation between the variations of input data and the output could be obtained.
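Looking back at the cluster analysis described above, a rough sketch of the idea (my illustration; the publication does not name a specific clustering algorithm, so k-means stands in):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# hidden_outputs: one 5-dimensional hidden-layer vector per learning datum
hidden_outputs = rng.random((200, 5))      # stand-in for real activations

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(hidden_outputs)

# Learning data landing in the same cluster produced similar hidden outputs;
# inspecting their dates/inputs reveals which market regime a cluster captures.
for c in range(4):
    print(f"cluster {c}: {np.sum(labels == c)} learning samples")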
However, with the prior system using neural networks, since the technical analysis could not be performed, the causal relationship between the past variation pattern and the future variation and the causal relation between the variations of input data and the output would not be obtained. A first object of this invention is to provide an economic phenomenon predicting and/or analyzing system using neural networks which enables the determination of the number of hidden layer neurons without using a trial-and-error experimental method. A second object of the invention is to provide an economic phenomenon predicting and/or analyzing system which enables the increase of the number of hidden layer neurons of predicting neural networks, thereby improving the prediction performance. A third object of the invention is to provide an economic phenomenon predicting and/or analyzing system which enables the obtaining of the causal relationship between the past variation patterns and possible variations and the causal relationship between input data variations and outputs, thus enabling a technical analysis. According to a first aspect of the invention, there is provided a neural network, for predicting economic phenomena, comprising: (a) an input layer having a plurality of input layer neurons for inputting signals indicating respective economic phenomena; (b) a predetermined number of hidden layers having a plurality of hidden layer neurons, respectively, each of said input neurons being adapted to make synaptic combinations with arbitrary ones of the hidden layer neurons; and (c) an output layer having a predetermined number of output layer neurons, each of said hidden layer neurons being adapted to make synaptic combinations with arbitrary ones of the output layer neurons, each of the output layer neurons being adapted to output a signal; (d) wherein weights of the synaptic combinations between the input layer neurons and hidden layer neurons and weights of the synaptic combinations between the hidden layer neurons and said output layer neurons are organized by learning in such a manner that when fundamental analysis data and technical analysis data are input to the input layer neurons represent a result of prediction of the economic phenomenon. In the neural network having the above-mentioned construction, when predicting an economic phenomenon, information for fundamental analysis and that for technical analysis are input to the input layer neurons. The neural network is provided in advance with learning by the fundamental analysis and technical analysis information obtained in the past as well as information concerning the objective economic phenomenon corresponding to these past data. Therefore the output of the output layer neuron is the result of prediction based on both the fundamental analysis information and the technical analysis information. 
Further, it is possible to perform a principal analysis based on the structure of the hidden layer (i.e., the weighting of synaptic combination with the output layer According to a second aspect of the invention, there is provided a neural network for predicting economic phenomena, comprising: (a) an input layer having a plurality of input layer neurons for inputting signals indicating respective economic phenomena; (b) a predetermined number of hidden layers having respective hidden layer neurons, each of said input neurons being adapted to make a synaptic combination with an arbitrary one of said hidden layer (c) an output layer having a predetermined number of output layer neurons, each of said hidden layer neurons being adapted to make a synaptic combination with an arbitrary one of said output layer neurons, each of said output layer neurons being adapted to output a signal; and (d) weighting of the synaptic combinations between said input layer neurons and hidden layer neurons and weighting of the synaptic combinations of said hidden layer neurons and output layer-neurons being organized by leaning such a manner that when data indicating variations of various principal economic indices including an economic phenomenon to be predicted and data indicating a variation pattern of the economic phenomenon to be predicted are input to the input layer neurons, said signal output from said output layer neurons represents a result of prediction of the economic In the neural network having this construction, when predicting an economic phenomenon, data indicating variations of various principal economic indices and data indicating a pattern of variations of an objective economic phenomenon to be predicted are input to the input layer neurons. The various principal economic indices include the objective economic phenomena to be predicted. The neural network is provided in advance with learning by data indicating trends of various principal economic indices obtained in the past and a pattern of variations of the objective economic phenomenon to be predicted as well as information concerning the objective economic phenomenon corresponding to these data. Therefore the output of the output layer neurons is the result of prediction based on both the trends of various principal economic indices and the pattern of variations of the objective economic phenomenon to be predicted. Further, it is possible to perform a principal analysis based on the structure of the hidden layers (i.e., the weights of synaptic combinations with the output layer neurons). Thus the causal relationship between the past variation pattern and future variations of the objective economic phenomenon and the causal relationship between the input data to the neural network and its output data can be noticed by the principal analysis. This is true because the input information to the input layer neurons includes information indicating variations of various principal economic indices, i.e. differential information. Also, since the input information to the input layer neurons includes data indicating a pattern of variation of the objective economic phenomenon to be predicted, a so-called technical analytical explanation can be achieved. 
According to a third aspect of the invention, there is provided an economic phenomenon predicting system comprising:
(a) a predicting neural network including (a1) an input layer having a plurality of input layer neurons for inputting signals indicating respective economic phenomena, (a2) a predetermined number of hidden layers having a plurality of hidden layer neurons, respectively, each of the input neurons being adapted to make synaptic combinations with arbitrary ones of the hidden layer neurons, and (a3) an output layer having a predetermined number of output layer neurons, each of the hidden layer neurons being adapted to make synaptic combinations with arbitrary ones of the output layer neurons, each of the output layer neurons being adapted to output a signal, (a4) wherein weights of the synaptic combinations between the input layer neurons and hidden layer neurons and weights of the synaptic combinations between the hidden layer neurons and the output layer neurons are organized by learning in such a manner that when data indicating trends of various principal economic phenomena including an economic phenomenon to be predicted and data indicating a variation pattern of the economic phenomenon to be predicted are input to the input layer neurons, the signals output from the output layer neurons represent a result of prediction of the economic phenomenon;
(b) moving-average-value arithmetic means for inputting time series data indicating various principal economic indices and for obtaining moving-average values for a plurality of recent predetermined periods, the moving-average-value arithmetic means being adapted to supply the obtained moving-average values, as part of the data indicating the trends of the various principal economic indices including the economic phenomenon, to the input layer neurons;
(c) difference arithmetic means for obtaining differences between the moving-average values, which are obtained in the individual common period, for at least from the first to the n-th order differences, the difference arithmetic means being adapted to supply the obtained differences, as part of the data indicating the trends of the various principal economic indices including the economic phenomenon to be predicted, to the input layer neurons;
(d) trend removing means for removing trends from time series data indicating the economic phenomenon to be predicted by subtracting from the time series data the individual moving-average value of the economic phenomenon for any of the predetermined periods; and
(e) pattern-sorting means for sorting the time series data indicating the economic phenomenon to be predicted, after removing the trends, into patterns, the pattern-sorting means being adapted to output the patterns, which are obtained from the sorting, as data indicating a variation pattern of the economic phenomenon to be predicted, to the input layer neurons.

In the system having this construction, firstly the time series data indicating various principal economic indices (including the objective economic phenomenon to be predicted) is input to the moving-average-value arithmetic means. The objective economic phenomenon is a stock index, e.g. TOPIX, indicating prices of stocks which may be sold or bought. Various principal economic indices other than the objective economic phenomenon are turnover, various interest rates, principal merchandise prices, financial indices, trade indices, price indices, etc.
The moving-average-value arithmetic means obtains moving averages of the input time series data over the most recent predetermined periods, performing the moving-average arithmetic using weights set, for example, so that their sum is 1. The moving-average-value arithmetic means also supplies the obtained moving averages to the input layer neurons. Further, the difference arithmetic means obtains at least the first to n-th order differences of the values obtained in the same period, out of the moving averages. The difference arithmetic means supplies the obtained differences to the input layer neurons. Although n may be 1 or more, a second order is preferable.

The moving averages thus obtained and their differences are data indicating trends of various principal economic indices. Namely, these data are information usable for fundamental analysis. To the input layer neurons, these fundamental analysis data as well as information usable for technical analysis are input.

In this arrangement, the technical analysis data to be input to the input layer neurons can be obtained by the pattern sorting means. The pattern sorting means sorts the time series data, which is obtained by the trend removing means, into patterns. The trend removing means subtracts from the time series data indicating the objective economic phenomenon the moving-average values of any one period out of the moving averages obtained as mentioned above. The thus obtained data are time series data indicating the objective economic phenomenon and free of trends. Thus the patterns obtained by this sorting will become data indicating patterns of variations of the objective economic phenomenon. The pattern sorting means itself may be a neural network provided with learning by patterns that have frequently appeared in the past, so as to output the data indicating a corresponding pattern in response to the input of the trend-free time series data. A practical topology is a self-organizing map.

When the data indicating trends of various principal economic indices including the objective economic phenomenon and also the data indicating the pattern of variation of the objective economic phenomenon are input, the result of prediction of the objective economic phenomenon can be obtained, as signals, from the respective output layer neurons. This is true because the neural network is provided with the above-mentioned learning process. More specifically, this is because the weights of synaptic combinations of the input layer and hidden layer neurons and the weights of synaptic combinations of the hidden layer and output layer neurons are organized by learning so as to obtain an expected result of prediction when the data indicating variations of various principal economic indices and the data indicating the pattern of variation of the objective economic phenomenon are input to the input layer neurons.
According to a fourth aspect of the invention, there is provided an economic phenomenon predicting and analyzing system comprising:
(a) a neural network for predicting economic phenomena, including (a1) an input layer having a plurality of input layer neurons for inputting signals indicating respective economic phenomena, (a2) a predetermined number of hidden layers having a plurality of hidden layer neurons, respectively, each of said input neurons being adapted to make synaptic combinations with arbitrary ones of the hidden layer neurons, and (a3) an output layer having a predetermined number of output layer neurons, each of the hidden layer neurons being adapted to make synaptic combinations with arbitrary ones of the output layer neurons, each of said output layer neurons being adapted to output a signal, and (a4) wherein weights of the synaptic combinations between said input layer neurons and hidden layer neurons and weights of the synaptic combinations between the hidden layer neurons and the output layer neurons are organized by learning in such a manner that when data indicating variations of various principal economic phenomena including an economic phenomenon to be predicted and data indicating a variation pattern of the economic phenomenon to be predicted are input to the input layer neurons, the signals output from the output layer neurons represent a result of prediction of the economic phenomenon;
(b) moving-average-value arithmetic means for inputting time series data indicating various principal economic indices and for obtaining moving-average values for a plurality of recent predetermined periods, the moving-average-value arithmetic means being adapted to supply the obtained moving-average values, as part of the data indicating the variations of the various principal economic indices including the economic phenomenon to be predicted, to the input layer neurons;
(c) difference arithmetic means for obtaining differences between the moving-average values, which are obtained in the individual common period, for at least from the first to the n-th order differences, the difference arithmetic means being adapted to supply the obtained differences, as part of the data indicating the variations of the various principal economic indices including the economic phenomenon to be predicted, to the input layer neurons;
(d) principal component arithmetic means for obtaining principal components of the output of the hidden layer neurons by principal analysis; and
(e) correlation analyzing means for analyzing a correlation between variation of the economic phenomenon to be predicted and variation of the output of the output layer neurons by analyzing the obtained principal components.

In this system, like the foregoing system, the moving averages of time series data indicating various principal economic indices and their differences are obtained. These are input to the input layer neurons as data indicating variations of various principal economic indices including the objective economic phenomenon. In response to this input, the neural network outputs the result of prediction of the objective economic phenomenon as signals.
This is because, as described above, the weights of the synaptic combinations of the input layer and hidden layer neurons and the weights of the synaptic combinations of the hidden layer and output layer neurons are organized, by learning, in such a manner that when the data indicating variations of various principal economic indices and the data indicating a pattern of variation of the objective economic phenomenon are input to the input layer neurons, the output signals of the output layer neurons will be the result of prediction of the objective economic phenomenon.

Further, it is possible to perform a principal analysis based on information concerning the output of the hidden layer neurons. The principal component arithmetic means obtains principal components by performing a principal analysis over the output of the hidden layer neurons. The term "principal components" refers to the small number of variants used when a large number of variants is expressed using only a small number of variants. Intuitively, the large number of variants should be regarded as vectors, and the small number of variants should be regarded as unit vectors. Accordingly, the large number of variants can be grasped as a linear combination of unit vectors. In other words, each such unit vector (direction) can be regarded as a principal component. Generally, if there is any correlation among the large number of variants, it is possible to approximate the vector indicating the large number of variants by linear combinations of fewer variants (principal components). The principal component arithmetic means obtains principal components by, for example, computing the eigenvectors of the covariance matrix of the outputs of the hidden layer neurons according to the Jacobi method.

The correlation analyzing means analyzes the correlation between variations of the objective economic phenomenon and variations of the output of the output layer neurons by analyzing the obtained principal components. For example, the correlation analyzing means receives the inputs of the neural network as well as the outputs of the hidden layer neurons and calculates explanation variants based on these. The term "explanation variants" refers to variants which can be factors of the objective economic phenomenon. The explanation variants may be exemplified by the inputs of the neural network for prediction, the difference between moving averages of different periods for the time series data indicating the same principal economic index, and the difference between moving averages of the same period for the time series data indicating different principal economic indices. The explanation variants can be obtained by, for example, the principal component arithmetic means. In this case, the principal component arithmetic means receives the inputs of the neural network as well as the outputs of the hidden layer neurons and calculates each explanation variant based on these.

After the principal components for all learning data have been obtained by the principal component arithmetic means, the correlation analyzing means obtains, with respect to the obtained principal components, a frequency distribution of the principal component rankings, then the strength of the correlation between the rankings and each explanation variant from the obtained frequency distribution, and then the effect of the rankings on the objective economic phenomenon from the obtained frequency distribution.
At that time, the strength of the correlation between the rankings and each explanation variant may be obtained in the following manner. Firstly, from the obtained frequency distribution, a first section including only the maximum value, a second section to which an upper predetermined proportion of the learning data except the maximum value belongs, a third section to which a lower predetermined proportion of the learning data except the minimum value belongs, and a fourth section including only the minimum value are obtained. Then an average of the explanation variants and a standard deviation are obtained in each of the second and third sections. For an explanation variant in which the difference between the average of the second section and that of the third section is relatively large, and in which the proportion of the standard deviation to the absolute value of the difference is small, the correlation of that explanation variant with the rankings is regarded as relatively strong.

Further, for obtaining the effect of the rankings on the objective economic phenomenon, firstly, from the obtained frequency distribution, a first section including only the maximum value, a second section to which an upper predetermined proportion of the learning data except the maximum value belongs, a third section to which a lower predetermined proportion of the learning data except the minimum value belongs, and a fourth section including only the minimum value are obtained. Then the rankings are weighted by the weights of the synaptic combinations of the individual hidden layer and output layer neurons, the weighted rankings are summed in the respective sections, and the results of these weighted summations are multiplied by the range of the corresponding section and summed.

According to this system, it is possible to relax the restriction on the number of hidden layer neurons imposed to meet the requirements of the analysis system. Namely, since the number of hidden layer neurons is not limited by the number of clusters, etc., many hidden layer neurons can be used. This can be realized using a single neural network; in other words, it can be realized using the neural network for prediction also as a neural network for analysis. Further, since sorting into clusters does not take place, it is possible to determine the number of hidden layer neurons without using a trial-and-error experimental method. Furthermore, since moving averages are used for inputs to the neural network, it is possible to perform an analysis using information concerning the crossing points of moving-average lines, which is useful in the financial field.

When the number of hidden layers is set to one and the number of output layer neurons is set to one, the result of analysis will be simple. For example, the causal relationship between the rankings and the variation of indices of the objective economic phenomenon will be simple. Also, this system may be equipped with a post-process means for converting the output data from the output layer neurons of the neural network into a rate of change of the indices of the objective economic phenomenon to be predicted.
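A rough sketch of this analysis side (my illustration; the patent specifies the Jacobi method for the eigendecomposition, for which NumPy's symmetric eigensolver stands in here, and the section proportions are assumed):

import numpy as np

def principal_scores(hidden_outputs):
    """Principal components of the hidden-layer outputs.

    hidden_outputs: (num_learning_data, num_hidden_neurons) activations.
    Returns the projection of each learning datum onto the first
    principal direction (its "ranking" score, in the patent's terms).
    """
    centered = hidden_outputs - hidden_outputs.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # stand-in for Jacobi
    first_pc = eigvecs[:, np.argmax(eigvals)]
    return centered @ first_pc

rng = np.random.default_rng(2)
H = rng.random((300, 6))                          # stand-in activations
scores = principal_scores(H)

# Sections of the score distribution, as described above (proportions assumed):
upper = scores >= np.quantile(scores, 0.8)        # "upper predetermined proportion"
lower = scores <= np.quantile(scores, 0.2)        # "lower predetermined proportion"
explanation = H[:, 0]                             # one illustrative explanation variant
diff = explanation[upper].mean() - explanation[lower].mean()
print(diff, explanation[upper].std())             # large |diff|, small std -> strong correlation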
According to a fifth aspect of the invention, there is provided an economic phenomenon predicting method comprising the steps of:
(a) inputting time series data indicating various principal economic indices and obtaining moving-average values for a plurality of recent predetermined periods, the various principal economic indices including an economic phenomenon to be predicted;
(b) obtaining differences between the moving-average values, which are obtained in the individual common period, for at least from the first to the n-th order differences;
(c) removing a trend from time series data indicating the economic phenomenon to be predicted by inputting the time series data indicating the economic phenomenon to be predicted and by subtracting from the time series data the individual moving-average value of the economic phenomenon for any of the predetermined periods;
(d) sorting the time series data indicating the economic phenomenon to be predicted after step (c) into patterns; and
(e) predicting the economic phenomenon, based on the moving-average values, the hierarchical differences and the patterns obtained by sorting, using a neural network including (e1) an input layer having a plurality of input layer neurons for inputting the moving-average values and hierarchical differences as data indicating the trends of various principal economic indices including the economic phenomenon to be predicted and inputting the pattern obtained by said sorting as data indicating a variation pattern of the economic phenomenon to be predicted, (e2) a predetermined number of hidden layers having a plurality of hidden layer neurons, respectively, each of the input neurons being adapted to make synaptic combinations with arbitrary ones of the hidden layer neurons, and (e3) an output layer having a predetermined number of output layer neurons, each of the hidden layer neurons being adapted to make synaptic combinations with arbitrary ones of the output layer neurons, each of the output layer neurons being adapted to output a signal representing the result of prediction of the economic phenomenon, and (e4) wherein weights of the synaptic combinations between input layer neurons and hidden layer neurons and weights of the synaptic combinations between the hidden layer neurons and the output layer neurons are organized by learning in such a manner that when the data indicating the trends of the various principal economic indices and the data indicating the variation pattern of the economic phenomenon to be predicted are input to the input layer neurons, the signals output from the output layer neurons represent the result of prediction of the economic phenomenon.
According to a sixth aspect of the invention, there is provided an economic phenomenon predicting and analyzing method comprising the steps of:
(a) inputting time series data indicating various principal economic indices and obtaining moving-average values for a plurality of recent predetermined periods, the various principal economic indices including an economic phenomenon to be predicted;
(b) obtaining differences between the moving-average values, which are obtained in the individual common period, for at least from the first to the n-th order differences;
(c) predicting the economic phenomenon, based on the moving-average values and hierarchical differences, using a neural network including (c1) an input layer having a plurality of input layer neurons for inputting the moving-average values and differences as data indicating the trends of various principal economic indices including the economic phenomenon to be predicted, (c2) a predetermined number of hidden layers having a plurality of hidden layer neurons, respectively, the input neurons being adapted to make synaptic combinations with arbitrary ones of the hidden layer neurons, and (c3) an output layer having a predetermined number of output layer neurons, each of the hidden layer neurons being adapted to make synaptic combinations with arbitrary ones of the output layer neurons, each of the output layer neurons being adapted to output a signal representing the result of prediction of the economic phenomenon, and (c4) wherein weights of the synaptic combinations between the input layer neurons and hidden layer neurons and weights of the synaptic combinations between the hidden layer neurons and the output layer neurons are organized by learning in such a manner that when the data indicating the variations of the various principal economic indices and the data indicating the variation pattern of the economic phenomenon to be predicted are input to the input layer neurons, the signals output from the output layer neurons represent the result of prediction of the economic phenomenon to be predicted;
(d) obtaining principal components by a principal analysis for the outputs of the hidden layer neurons; and
(e) analyzing the correlation between the variation of the economic phenomenon to be predicted and the variations of the outputs of the output layer neurons by analyzing the obtained principal components.
According to a seventh aspect of the invention, there is provided an economic phenomenon predicting system comprising:
(a) a network organized so as to output a result of prediction of an economic phenomenon when data indicating variations of various principal economic indices including the economic phenomenon to be predicted and data indicating a variation pattern of the economic phenomenon to be predicted are input;
(b) moving-average-value arithmetic means for inputting time series data indicating various principal economic indices and for obtaining moving-average values for a plurality of recent predetermined periods, the moving-average-value arithmetic means being adapted to supply said obtained moving-average values, as part of the data indicating the variations of the various principal economic indices including the economic phenomenon to be predicted, to the network;
(c) difference arithmetic means for obtaining differences between the moving-average values, which are obtained in the individual common period, for at least from the first to the n-th order differences, the difference arithmetic means being adapted to supply the obtained differences, as part of the data indicating the trends of said various principal economic indices including the economic phenomenon to be predicted, to the network;
(d) trend removing means for removing trends from time series data indicating the economic phenomenon to be predicted by subtracting from the time series data the individual moving-average value of the economic phenomenon for any of the predetermined periods; and
(e) pattern-sorting means for sorting the time series data indicating the economic phenomenon to be predicted into patterns, the pattern-sorting means being adapted to input the patterns, which are obtained from the sorting, as data indicating a variation pattern of the economic phenomenon to be predicted, to the network.
According to an eighth aspect of the invention, there is provided an economic phenomenon predicting and analyzing system comprising:
(a) a neural network organized so as to generate a number of hidden layer outputs according to both data indicating variations of various principal economic indices including an economic phenomenon to be predicted and data indicating a variation pattern of the economic phenomenon to be predicted, and so as to output signals indicating a result of prediction of the economic phenomenon by combining the hidden layer outputs;
(b) moving-average-value arithmetic means for inputting time series data indicating various principal economic indices and for obtaining moving-average values for a plurality of recent predetermined periods, the moving-average-value arithmetic means being adapted to supply the obtained moving-average values, as part of the data indicating the trends of the various principal economic indices including the economic phenomenon to be predicted, to the network;
(c) difference arithmetic means for obtaining differences between the moving-average values, which are obtained in the individual common period, for at least from the first to the n-th order differences, the difference arithmetic means being adapted to supply the obtained differences, as part of the data indicating the variations of the various principal economic indices including the economic phenomenon to be predicted, to the network;
(d) principal component arithmetic means for obtaining principal components of the hidden layer outputs; and
(e) correlation analyzing means for analyzing a correlation between variation of the economic phenomenon to be predicted and variation of the output of the network by analyzing the obtained principal components.

FIG. 1 is a diagram showing an economic phenomenon predicting and analyzing system, using neural networks, according to one embodiment of this invention; and FIG. 2 is a diagram showing an economic phenomenon predicting and analyzing system, using neural networks, according to the conventional art.

Preferred embodiments of this invention will now be described with reference to the accompanying drawings. FIG. 1 shows an economic phenomenon predicting and analyzing system according to one embodiment of the invention. This system predicts and analyzes variations in TOPIX. This invention can be applied also to the prediction and analysis of economic time series data other than TOPIX, as any person skilled in the art may make such applications based on the following disclosure.

The system of FIG. 1 comprises a prediction system 14 and an analysis system 15 as subsystems. The prediction system 14 includes a preparation module 16, a plurality of preparation modules 17, a pattern sorting unit 18, and a predicting neural network 19. The analysis system 15 includes a principal analysis module 20 and a correlation analysis module 21.

The preparation module 16 inputs time series data 22 of TOPIX, which is to be predicted, and performs preprocessing over the input data. As a result, various differences 23 containing difference information of the time series data 22, and a variation pattern 24 from which trends have been removed, are obtained. Upon inputting the time series data 22, the preparation module 16 supplies the various differences 23 and the trend-free variation pattern 24 to the neural network 19 and the pattern sorting unit 18, respectively.
At the same time, each preparation module 17 inputs time series data 25-1 to 25-n of principal economic indices described below and performs a preprocess. As a result, various differences 26 containing difference information of these principal economic indices are obtained. Each preparation module 17 supplies the various differences 26 to the neural network 19.

Specifically, the preparation module 16 obtains a moving average of the time series data 22 over the most recent month, a moving average over the most recent three months, and a moving average over the most recent six months. The moving average is a value which is obtained by defining a time point in the past as the origin and assigning the variations of the time series data after this time point to an estimate equation, and which shows the drift of the variations in the time series data from the origin to the present time point. In the illustrated example, when making arithmetic operations for these three kinds of moving averages, the origins for the three values are determined as one month ago, three months ago and six months ago, respectively. In the illustrated example, the estimate equation is a weighted average arithmetic equation. Further, the time series data to be assigned in this estimate equation is the time series data 22 in the case of the preparation module 16.

In the data composing the time series data 22, assume that the data at time t is represented by x_0(t) and the weight by which the data at time t-i is to be multiplied is represented by φ_0i. If the moving average is over one month, it is represented by the subscript N=1; if over three months, by N=3; and if over six months, by N=6. Accordingly, the moving average a_0N(t) of TOPIX at time t is given by the following equation (3), where the sum runs over the data in the most recent N months:

a_0N(t) = Σ_i φ_0i · x_0(t-i)   (3)

The weights φ_0i are determined based on equations (4) and (5); equation (4) defines the individual weight values, and equation (5) is the normalization condition:

Σ_i φ_0i = 1   (5)

Thus the weights φ_0i are normalized in such a manner that their sum will be 1.

After obtaining the three kinds of moving averages a_01(t), a_03(t) and a_06(t) using equation (3), the preparation module 16 obtains the final values X^0_0N(t), the first order differences X^1_0N(t), and the second order differences X^2_0N(t) (N = 1, 3, 6) using the following equations (6) to (8). The preparation module 16 supplies these values, as the various differences 23, to the neural network 19.

X^0_0N(t) = a_0N(t)   (6)
X^1_0N(t) = a_0N(t) - a_0N(t-1)   (7)
X^2_0N(t) = a_0N(t) - 2·a_0N(t-1) + a_0N(t-2)   (8)

Meanwhile, the individual preparation modules 17 input the respective time series data 25-1 to 25-n. The time series data 25-1 to 25-n to be input are, for example, turnovers, a longest-period interest rate of national bonds, a three-month official interest rate, expenditures of private final consumption, received orders of building construction, a Dubai crude oil price, a yen-dollar rate, price indices of nationwide urban districts, current profits, a real wage index, an average balance of money supply, a shipping index of mining and manufacturing industries, a trade balance, a New York Dow-Jones average, a wholesale price index, etc. The preparation modules 17 have the same function as the preparation module 16; they perform a moving average process and a difference arithmetic process based on the input time series data 25-1 to 25-n. The preparation modules 17 output such processed data to the neural network 19.
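A compact sketch of equations (3) and (6)-(8) above (my illustration; since the weight profile of equation (4) is not recoverable from the source, uniform weights satisfying equation (5) are assumed):

import numpy as np

def prep_features(x, window):
    """Equations (3), (6)-(8): weighted moving average and its differences.

    x      : 1-D array, one value per week (e.g. TOPIX)
    window : number of samples in the moving average (1-, 3- or 6-month span)
    Uniform weights summing to 1 (equation (5)) are assumed here.
    """
    phi = np.ones(window) / window                       # eq. (5): sum = 1
    a = np.convolve(x, phi, mode="valid")                # eq. (3): a_N(t)
    final = a[2:]                                        # eq. (6): X^0_N(t)
    first = a[2:] - a[1:-1]                              # eq. (7): X^1_N(t)
    second = a[2:] - 2*a[1:-1] + a[:-2]                  # eq. (8): X^2_N(t)
    return final, first, second

x = np.cumsum(np.random.default_rng(3).normal(size=60)) + 100.0
for w in (4, 13, 26):                                    # roughly 1, 3, 6 months of weeks
    X0, X1, X2 = prep_features(x, w)
    print(w, X0[-1], X1[-1], X2[-1])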
The processes to be executed by the preparation modules 17 have the same formats as the above-mentioned equations (3) to (8), except that the data to be processed are the time series data 25-1 to 25-n rather than the time series data 22. Namely, the preparation modules 17 replace the subscript 0 in equations (3) to (8) by j (j=1, 2, ..., n), indicating any of the time series data 25-1 to 25-n.

More specifically, the preparation modules 17 obtain final values X[j]^0[N](t), first order differences X[j]^1[N](t) and second order differences X[j]^2[N](t) (N=1, 3, 6) for the data to be processed by themselves, i.e. the respective time series data 25-1 to 25-n, by performing processes expressed by the following equations (3') to (8'), which are equations (3) to (8) with the subscript 0 replaced by j:

a[jN](t) = Σ_i φ[ji] x[j](t-i)   (3')
[(4') and (5') define the weights as in (4) and (5).]
X[j]^0[N](t) = a[jN](t)   (6')
X[j]^1[N](t) = a[jN](t) - a[jN](t-1)   (7')
X[j]^2[N](t) = a[jN](t) - 2a[jN](t-1) + a[jN](t-2)   (8')

Each preparation module 17 supplies the various kinds of differences 26, i.e. the final values X[j]^0[N](t), first order differences X[j]^1[N](t) and second order differences X[j]^2[N](t) (N=1, 3, 6), to the neural network 19. Thus the information supplied to the neural network 19 can be used as a so-called fundamental analysis index.

As described above, the time series data to be processed by the preparation module 16 is data concerning TOPIX, i.e. one kind of principal economic index. Likewise the time series data 25-1 to 25-n to be processed by the respective preparation modules 17 are turnovers and the other principal economic indices. In the illustrated embodiment, the difference information of these indices is obtained in the following forms to be input to the neural network 19: the final values X[0]^0[N](t), first order differences X[0]^1[N](t) and second order differences X[0]^2[N](t); or the final values X[j]^0[N](t), first order differences X[j]^1[N](t) and second order differences X[j]^2[N](t).

The first order differences X[0]^1[N](t) and X[j]^1[N](t) represent rates of increase or decrease of the corresponding variants (the time series data 22 or 25-1 to 25-n), and the second order differences X[0]^2[N](t) and X[j]^2[N](t) represent changes of the increasing or decreasing rates of the corresponding variants. Therefore, in this embodiment, the data supplied as the various differences 23 to the neural network 19 are usable as fundamental analysis indices that indicate the economic realities (e.g., in Japan and the United States) surrounding the stock market, expressed in difference form. Although arithmetic operations are performed up to the second differences in this embodiment, this invention should by no means be limited to any order of arithmetic operation of differences. Practically, differences up to the second order, which can be grasped intuitively, are enough.

The preparation module 16 also obtains the data to be supplied to the pattern sorting unit 18, i.e. a variation pattern 24 after trends have been removed from the time series data 22. The preparation module 16 firstly subtracts the six-month moving average a[06](t), i.e. the final value X[0]^0[6](t), from the data x[0](t) at the time t, which constitutes the time series data 22, to remove trends. That is, the data p[0i](t) free of trends is obtained by carrying out an arithmetic operation based on the following equation (9):

p[0i](t) = x[0](t) - X[0]^0[6](t)   (9)

The preparation module 16 normalizes p[0i](t) over the past eight months.
That is, assuming that the subscript i of p[0i](t) represents data on the i-th month, the preparation module 16 obtains the i-th month pattern p[0i](t) free of trends by performing an arithmetic operation based on the following equation (10):

[Equation (10), the normalization formula, appears only as an image in the original.]

The pattern sorting unit 18 is realized by a neural network called a "self-organizing map". This neural network is provided in advance with learning over all learning data to determine the synaptic combination weights between the individual neurons constituting the neural network. The algorithm implementing the self-organizing map is discussed in, for example, "The Self-Organizing Map", by Teuvo Kohonen, in Proceedings of the IEEE, Vol. 78, No. 9, Sept. 1990. In the system of this embodiment, a topology in which 16 neurons are arranged in a pattern of a 4×4 mesh is used, and the following equation is used for the learning factor α(i):

α(i) = 0.5/(1 + (i-1)*0.1)   (11)

where i stands for the number of learning cycles and the initial value of i is 1. i increases by 1 every time all learning data is presented to the self-organizing map. For application to this embodiment, it is preferable to repeatedly present all learning data about 10,000 times to the self-organizing map, i.e. until i=10,000 (a short sketch of this training schedule follows below).

The outputs of the sixteen neurons constituting the self-organizing map are output from the pattern sorting unit 18 to the neural network 19. Since the learning of the pattern sorting unit 18 is performed according to variations in the time series data 22 concerning TOPIX, i.e. p[0i](t), which is information obtained from the price variation of TOPIX, the output of the pattern sorting unit 18 in actual use is information indicating the optimum pattern to simulate the actual price variations of TOPIX, selected from patterns that have appeared frequently in the past. This information can be used as an index for technical analysis. Thus, one feature of this embodiment is that indices for fundamental analysis and indices for technical analysis are both input to the neural network 19.

The neural network 19 comprises an input layer 27, a hidden layer 28 and an output layer. The input layer 27 and the hidden layer 28 are each composed of a predetermined number of neurons 35 and 36, respectively. The output layer of the neural network 19 includes a single output neuron 29. The neural network 19 is a hierarchical neural network. In this invention, however, the number of output neurons 29 and the number of hidden layers 28 should by no means be limited to one.

Each input layer neuron 35 receives any of the output 23 of the preparation module 16, the outputs 26 of the preparation modules 17-1 to 17-n and the output 34 of the pattern sorting unit 18. Each of the outputs of these input layer neurons 35 is combined with the input of the respective hidden layer neurons 36 in synaptic combination form. The weights of these combinations are organized when learning is performed. Furthermore, the outputs of the hidden layer neurons 36 are combined with the input of the output neuron 29. The synaptic combination weights concerning these combinations are self-organized during learning.
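Returning to the pattern sorting unit 18: a rough sketch of how it could be trained with the learning factor of equation (11) follows. This is our illustration only; the patent defers the algorithm to Kohonen's paper, so the distance metric and the winner-only update used here are simplifying assumptions.

import numpy as np

def train_som(patterns, cycles=10_000, neurons=16):
    """Self-organizing map with alpha(i) = 0.5/(1+(i-1)*0.1), equation (11).

    patterns: K x D array of trend-free variation patterns p_0i(t).
    Simplification (an assumption): only the winning neuron is updated;
    Kohonen's neighborhood update on the 4x4 mesh is omitted for brevity."""
    rng = np.random.default_rng(0)
    weights = rng.random((neurons, patterns.shape[1]))  # 4x4 mesh, flattened
    for i in range(1, cycles + 1):
        alpha = 0.5 / (1 + (i - 1) * 0.1)  # learning factor, equation (11)
        for p in patterns:                 # one cycle = all learning data
            winner = np.argmin(((weights - p) ** 2).sum(axis=1))
            weights[winner] += alpha * (p - weights[winner])
    return weights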
The learning of the neural network 19 takes place using learning data obtained at past time points. These learning data are the data used for prediction of TOPIX, together with the TOPIX data actually obtained, at those past time points.

The input signals of this learning data are the signals obtained by inputting the economic time series data 22 and 25-1 to 25-n, obtained for the month previous to the objective month of prediction, to the preparation modules 16 and 17-1 to 17-n and the pattern sorting unit 18. Specifically, the user supplies the economic time series data 22 of the previous month to the preparation module 16 to cause the preparation module 16 and the pattern sorting unit 18 to perform the above-mentioned processes, so that the various differences 23 of the moving averages of TOPIX and a pattern of trend-free variation are obtained. The pattern sorting unit 18, which is itself a neural network, sorts the thus-obtained trend-free pattern of variation into one of the patterns that have frequently appeared in the past. Then the pattern sorting unit 18 outputs the result of sorting. The individual preparation modules 17-1 to 17-n input the respective time series data 25-1 to 25-n of the principal economic indices and output the various differences 26 of their moving averages. The individual input layer neurons 35 input these data, i.e. the various differences 23 of the moving averages of TOPIX, the various differences 26 of the moving averages of the other principal indices, and the pattern of TOPIX variation which is the output 34 of the pattern sorting unit 18.

The teaching data out of the learning data of the neural network 19 may be the data obtained from the TOPIX values for the previous month t and the objective month t+1 in the following manner:

ln(x[0](t+1)/x[0](t))/2.0   (12)

The learning algorithm of the neural network 19 is exemplified by "Learning Internal Representations by Error Propagation", by D. E. Rumelhart, G. E. Hinton and R. J. Williams, in Parallel Distributed Processing, The MIT Press (1986). Preferably, the learning factor is 0.1 and the factor of the inertial term is 0.9. By performing this learning process, the neural network 19 is self-organized so as to output a TOPIX value for the objective month in response to the input of information for fundamental analysis and technical analysis.

Specifically, when the fundamental analysis indices obtained as the outputs 23, 26 of the preparation modules 16 and 17-1 to 17-n and the technical analysis indices obtained as the outputs 34, 37 from the pattern sorting unit 18 are input to the neural network 19 in actual use, the neural network 19 outputs ln(x[0](t+1)/x[0](t))/2.0, where t now represents the month of actual use rather than a past month of learning. The post-process module 30 converts ln(x[0](t+1)/x[0](t))/2.0 into a rate of change:

(x[0](t+1) - x[0](t))/x[0](t)   (13)

Namely, assuming that the input of the post-process module 30 is x and the output 31 thereof is y, the post-process module 30 performs the following arithmetic:

y = e^(2x) - 1   (14)
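The post-processing step (14) is just the inverse of the teaching-signal transformation (12); writing r = x[0](t+1)/x[0](t) for the month-over-month ratio (our worked check, not from the patent):

x = (1/2) ln r,  so  y = e^(2x) - 1 = r - 1 = (x[0](t+1) - x[0](t))/x[0](t),

which is exactly the rate of change (13).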
The analysis system 15, as described above, is composed of the principal analysis module 20 and the correlation analysis module 21. The principal analysis module 20 performs a principal analysis of the outputs 32 of the hidden layer 28. The term "principal analysis" is used in the field of multivariate analysis. This analysis is used for finding a small number of variants (principal components) explaining the change of a large number of variants. Intuitively, the really existing variants are not treated individually but are regarded as linear combinations of unit vectors, each of which indicates a variant that explains the change of these really existing variants. Each of the principal components can be intuitively grasped as one of these unit vectors.

The principal components include a first, a second, a third, etc. The first principal component is the direction in which the variance of the variant vectors, i.e. the really existing variants, is greatest. The second principal component is the direction which is perpendicular to the first principal component and in which the variance of the variant vectors is the second greatest. The third principal component is the direction which is perpendicular to the first and second principal components and in which the variance of the variant vectors is the third greatest. Thus the individual principal components are directions defined successively.

When the neural network 19 is learning, the principal analysis module 20 inputs the output values 32 of all neurons 36 of the hidden layer 28 for every learning datum. The principal analysis module 20 then obtains the principal components, based on the outputs 32 of the neurons 36 of the hidden layer 28, in the following manner. Firstly, the outputs 32 of the neurons 36 of the hidden layer 28 are represented by h[ik], where i=1, 2, ..., M and k=1, 2, ..., K. M is the number of neurons 36 of the hidden layer 28, and K is the number of learning data. Accordingly the component s[ij] of (s[ij]), which is the covariance matrix of h[ik], can be expressed by the following equation (15):

s[ij] = Σ_k (h[ik] - h[im]) × (h[jk] - h[jm])   (15)

where h[im] represents the average of h[ik] over k (k=1, 2, ..., K). The principal components can be obtained as the eigenvectors of the covariance matrix (s[ij]). The principal analysis module 20 obtains the eigenvectors of the covariance matrix by the Jacobi method.
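The principal analysis of equation (15) amounts to an eigendecomposition of the covariance matrix of the hidden-layer outputs. A minimal sketch of this step (ours; numpy's symmetric eigensolver is used here in place of the Jacobi method named in the text, and both yield the same eigenvectors):

import numpy as np

def principal_components(h):
    """h: M x K matrix of hidden-layer outputs 32
    (M neurons 36, K learning data)."""
    h_mean = h.mean(axis=1, keepdims=True)   # h_im: average over k
    centered = h - h_mean
    s = centered @ centered.T                # s_ij of equation (15)
    eigvals, eigvecs = np.linalg.eigh(s)     # patent names the Jacobi method
    order = np.argsort(eigvals)[::-1]        # first PC = greatest variance
    return eigvals[order], eigvecs[:, order]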
The correlation analysis module 21 obtains a correlation between the explanation variants calculated from the inputs 33 to the network 19, via the outputs 32 of the hidden layer 28 of the neural network 19, and the output of the neural network 19. The term "explanation variant" means a variant on which an objective variant depends when explaining the objective variant and its change. Thus an explanation variant is treated as having a causal relationship with the objective variant. In this embodiment, the objective variant is the economic data to be predicted, i.e. the time series data 22 of TOPIX and its change, and the explanation variants are a) the inputs 33; b) the differences between moving averages of different periods (i.e., moving averages of different N) out of the moving averages obtained for the same economic time series data (any of 22, 25-1 to 25-n); and c) the differences between moving averages of the same period out of the moving averages obtained for different kinds of economic time series data.

The moving average curve can be represented as a folded line indicating the moving averages with respect to the time t on the x coordinate. For example, if the folded line of the moving average of a short period (e.g. one month) climbs over the folded line of the moving average of a long period (e.g. six months) in a situation where the long-period moving average shows no marked variation or has turned upward from a decreasing tendency, technical analysis regards this as a time to buy. The second one of the foregoing explanation variants is a variant which enables this kind of analysis.

Assuming that the amount of imports is greater than the amount of exports over a somewhat long period, a change will appear in the economic situation and hence an effect will appear on stock prices over a long period. The third one of the foregoing explanation variants is a variant explaining such an effect. These explanation variants are variants effective for explaining variations in TOPIX in market prediction.

The correlation analysis module 21 obtains a frequency distribution of the principal component rankings over all learning data, based on the principal components calculated by the principal analysis module 20, and also obtains a) the maximum value, b) the minimum value, c) the section containing the lower 10% of the learning data excluding the minimum value, and d) the section containing the upper 10% of the data excluding the maximum value. Taking the vector composed of the averages of the various variants as the origin, the principal component rankings are the projections of the variate vector in the directions of the principal components. The analysis module 21 then considers the group of learning data contained in the lower 10% of the data for each principal component and the group of data contained in the upper 10% of the same, and obtains the average values and standard deviations of the explanation variants in the respective groups. If the difference in average value between the upper 10% data and the lower 10% data is relatively large, and the standard deviation is relatively small with respect to the absolute value of this difference, the correlation between the explanation variants and the principal component rankings is regarded as relatively strong.

On the other hand, the effect E[i] of the principal component ranking P[i] on the rate of change of the economic time series data to be predicted is obtained by the following equation:

E[i] = exp((max[i] - min[i]) Σ_j P[ij] W[j]) - 1   (16)

where min[i] and max[i] are the minimum and maximum values of the i-th principal component ranking, P[ij] is the j-th component of the i-th principal component, and W[j] is the synaptic combination weight 37 from the j-th hidden neuron to the output neuron 29. This way of obtaining the correlation is based on the assumption that the relationship between the explanation variants and the principal component rankings is monotonic. Therefore, although it is not necessarily statistically correct, it is known that this method practically suffices in many cases. Further, it is guaranteed that the relation between the principal component rankings and the variations of the economic time series data is monotonic, since there is a single output neuron and only a single hidden layer 28 is connected to the output neuron 29. The analysis module 21 outputs the result of this analysis.
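Equation (16) can be evaluated directly once the principal components and the hidden-to-output weights are known. A short sketch under the same naming as above (ours, not the patent's code):

import numpy as np

def effect(P_i, W, min_i, max_i):
    """E_i = exp((max_i - min_i) * sum_j P_ij * W_j) - 1, equation (16).

    P_i: the i-th principal component (length M);
    W: synaptic combination weights 37 from hidden neurons to neuron 29;
    min_i, max_i: extremes of the i-th principal component ranking."""
    return np.exp((max_i - min_i) * np.dot(P_i, W)) - 1.0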
In the prior art, as described above, since redundant neurons would be provided if the hidden layer contained many neurons, easy analysis could not be achieved. In this invention, even if the hidden layer contains many neurons, it is possible to analyze the causes of variations in the economic time series data to be predicted. Consequently, whereas in the conventional method the neural network used for actual prediction and that used for analysis must be different from each other, in this invention the same neural network can be used.

Further, in the prior art, since the individual economic time series data are simply logarithmically converted and normalized before being input to the neural network, it is possible to analyze only the relation between the magnitudes of these input values and the variations of the economic time series data to be predicted. In the economic time series data analyzing system according to this invention, by contrast, since the input data includes difference data indicating the variations of the various economic time series data, it is possible to analyze the relationship between the variations (e.g., increase, decrease, maximum, minimum, etc.) of the input economic time series data and the variations of the objective economic time series data to be predicted. Furthermore, since the input data includes the result of pattern sorting for the objective economic time series data, it is possible to obtain the causal relationship between this pattern and the objective economic time series data, namely, a technical-analysis explanation.

Conventionally, in analysis of the monetary market, the point at which graphs of moving averages over different periods cross one another can mean something analytically significant on many occasions. On other occasions, the relation in magnitude between some kinds of economic time series data can mean something analytically significant. In this invention, since the difference between moving averages over different periods for the same economic time series data, and also the difference between moving averages over the same period for different economic time series data, are defined as explanation variants, it is possible to perform an analysis that takes that information into account.

The pattern sorting unit 18 in the form of a self-organizing map and the neural network 19 may be implemented either in hardware logic or in software.
{"url":"http://www.freepatentsonline.com/5444819.html","timestamp":"2014-04-20T08:46:43Z","content_type":null,"content_length":"121222","record_id":"<urn:uuid:b2201d5b-8d22-470a-8b2a-3db7b9646ad8>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
We already mentioned that slicing a solid with known cross-section is like slicing a loaf of cinnamon raisin bread. We could also think of it like we are slicing carrots for pot roast or potato slices for potatoes au gratin. Hungry for more?

When asked to find the volume of one of these solids, we are given a few things to start. We'll be told what the base of the solid is, which way it's being sliced, and what the slices look like.

Sample Problem

We can call this one the stick of butter example. Let R be the region enclosed by the x-axis, the graph y = x^2, and the line x = 4. Write an integral expression for the volume of the solid whose base is R and whose slices perpendicular to the x-axis are squares.

Let's use our attack plan.

1) First we have to understand what the solid is. The region R looks like this: [graph omitted] Let's turn R on its side to make it easier to think of as the base of the solid. We're told that if the solid is sliced perpendicular to the x-axis the slices are squares. Since the base of the solid stretches from the x-axis up to the graph y = x^2, the side-length of the slice at position x is y = x^2. This means if we take slices near x = 0, they'll be tiny. If we take slices near x = 4, they'll be much bigger.

2) Now that we understand what the solid looks like, we need to slice it and find the approximate volume of a slice. We've already sliced it, and we know that each slice is a square with side-length y = x^2. Each square has a tiny little bit of thickness Δx. To find the volume of the slice we multiply the area of the square by the thickness of the slice to get (x^2)^2 Δx.

3) The variable x goes from 0 to 4 within this solid. When we add the volumes of all the slices and take the limit as the number of slices approaches ∞, we find the volume of the solid is

∫ from 0 to 4 of (x^2)^2 dx, that is, ∫ from 0 to 4 of x^4 dx.

Be Careful: We recommend drawing pictures. Lots of them. Don't be stingy. You'll use less paper and time drawing a couple extra images than you would getting the wrong answer and having to start all over. At minimum, three of them:

1) the region that forms the base of the solid
2) the region with at least one slice sitting on it
3) a slice all by itself

The best way to get better at these is to practice. Feel free to have your favorite 3-D sweet treat while you go through these exercises.

Let R be the region enclosed by the x-axis, the graph y = x^2, and the line x = 4. Write an integral expression for the volume of the solid whose base is R and whose slices perpendicular to the x-axis are semi-circles.

Let R be the region enclosed by the x-axis, the graph y = x^2, and the line x = 4. Write an integral expression for the volume of the solid whose base is R and whose slices perpendicular to the y-axis are squares.

Let R be the region enclosed by the x-axis, the graph y = x^2, and the line x = 4. Write an integral expression for the volume of the solid whose base is R and whose slices perpendicular to the y-axis are equilateral triangles.

Let R be the region bounded by y = x and y = x^2. Write an integral expression for the volume of the solid with base R whose slices perpendicular to the y-axis are semi-circles.

Let R be the region bounded by y = x and y = x^2. Write an integral expression for the volume of the solid with base R whose slices perpendicular to the y-axis are squares.

Let R be the region bounded by y = x and y = x^2. Write an integral expression for the volume of the solid with base R whose slices perpendicular to the x-axis are equilateral triangles.

Let R be the region bounded by x^2 + y^2 = 1.
Write an integral expression for the volume of the solid with base R whose slices perpendicular to the y-axis are semi-circles.

Let R be the region bounded by x^2 + y^2 = 1. Write an integral expression for the volume of the solid with base R whose slices perpendicular to the x-axis are equilateral triangles.

Let R be the region bounded by x^2 + y^2 = 1. Write an integral expression for the volume of the solid with base R whose slices perpendicular to the x-axis are squares.
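For a quick sanity check on the stick of butter example (our worked evaluation, not part of the original page):

∫ from 0 to 4 of x^4 dx = [x^5/5] evaluated from 0 to 4 = 4^5/5 = 1024/5 = 204.8.

So the stick of butter has volume 1024/5 cubic units.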
{"url":"http://www.shmoop.com/area-volume-arc-length/solid-volume-help.html","timestamp":"2014-04-20T13:24:35Z","content_type":null,"content_length":"53101","record_id":"<urn:uuid:d5532c9b-95ef-4d33-9be0-85471a3f4ead>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project

Deformation of Noncylindrical Extension Springs

This Demonstration models the deformation of noncylindrical springs under an applied tension load. An extension spring is designed to get longer when a load is applied to it. The spring coils can be cylindrical, with constant outer diameter of the windings, or noncylindrical, with variable outer diameter. Popular shapes for noncylindrical springs are hyperboloidal (hourglass/barrel) springs or conical springs. When a load is applied, a cylindrical spring has constant deformation of the windings over its entire length. Noncylindrical springs have variable deformation of the windings, depending on their diameter. Larger diameter windings deform more than smaller ones.

The spring constant of a helical spring with constant pitch and winding diameter is taken from WolframAlpha["spring constant",…] and adapted for steel wire. [The formula appears only as an image in the original; the standard helical-spring rate is k = G d^4/(8 D^3 n), with the shear modulus G of the steel wire folded into the numeric constant, giving k in N/cm.] D is the coil diameter, d is the wire diameter, and n is the number of windings. A shaped spring is taken as a stack of cylindrical springs with varying outer diameters. The effective spring constant of a set of springs in series is then given by 1/k_total = Σ_i 1/k_i. The extension for each partial winding is taken from Hooke's law: x_i = F/k_i.

Snapshot 1: all windings of a cylindrical spring deform equally

Snapshots 2–5: shaped springs deform more in the larger diameter windings and less in those with smaller diameter
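A minimal sketch of the series-stack computation described above (ours, not the Demonstration's source; the spring-rate formula and the shear modulus G ≈ 7.9e6 N/cm^2 for steel are the assumptions flagged in the text):

def winding_rate(D, d, n=1.0, G=7.9e6):
    """Helical spring rate k = G d^4 / (8 D^3 n), in N/cm,
    for diameters in cm and G in N/cm^2."""
    return G * d**4 / (8.0 * D**3 * n)

def shaped_spring(coil_diameters, d, force):
    """Treat a shaped spring as single-turn cylindrical springs in series:
    1/k_total = sum(1/k_i); each winding extends by x_i = F/k_i."""
    rates = [winding_rate(D, d) for D in coil_diameters]
    k_total = 1.0 / sum(1.0 / k for k in rates)
    extensions = [force / k for k in rates]   # larger D -> larger extension
    return k_total, extensions

# conical spring: coil diameter tapers from 4 cm to 2.25 cm over 8 windings
k, x = shaped_spring([4 - 0.25 * i for i in range(8)], d=0.2, force=10.0)

The per-winding extensions x reproduce the qualitative behavior of the snapshots: the wide windings stretch the most.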
{"url":"http://demonstrations.wolfram.com/DeformationOfNoncylindricalExtensionSprings/","timestamp":"2014-04-16T21:53:37Z","content_type":null,"content_length":"44169","record_id":"<urn:uuid:f42a6968-2e91-404b-96c0-40ec6bdb4a30>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
A100670 - OEIS

A100670 Number of two-card Baccarat hands of point n.

210, 128, 132, 128, 132, 128, 132, 128, 132, 128

OFFSET 0,1

COMMENTS Baccarat is a family of gambling card games (including Chemin-de-fer) usually played with three to six decks of standard playing cards. The value or point of a hand consists of the last digit of the sum of the individual card values, which are 0 for each face card or ten, 1 for each ace and the number shown for each other card; i.e., the point is the sum modulo 10. The object of the game is to come closer to the number 9 than the opponent(s), but by drawing only up to one additional card. A Table of Play determines when a player must receive a third card, must not receive a third card, or has the option - based upon the player's two-card hand's point; therefore there's little room for strategy. Finally, a two-card hand with a point of 8 or 9 is called a natural and wins immediately, except that a natural 9 beats a natural 8 and there can be ties. The sequence is different for a casual game played with a single deck of cards: 190, 128, 124, 128, 124, 128, 124, 128, 124, 128 (although the terms are identical for odd points).

REFERENCES P. Arnold, The Book of Card Games, Barnes and Noble, Inc., 1988, pp. 7-12. A. H. Morehead and G. Mott-Smith, editors, Hoyle's Rules of Games, The New American Library, Inc., New York, 1963, pp. 178-180.

LINKS Table of n, a(n) for n=0..9.

EXAMPLE a(2) = 132 because, taking into account that there are three types of face cards, four suits, and that both cards may be identical (by coming from different original decks), there are 132 distinct two-card hands where the point of the pair is 2: 0+2 (64 combinations), 1+1 (10), 3+9 (16), 4+8 (16), 5+7 (16) and 6+6 (10).

CROSSREFS Sequence in context: A125549 A104876 A050516 * A220441 A025392 A025383. Adjacent sequences: A100667 A100668 A100669 * A100671 A100672 A100673.

KEYWORD fini,full,nonn

AUTHOR Rick L. Shepherd, Dec 05 2004

STATUS approved
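The terms can be verified by brute force over unordered pairs of the 52 distinct cards, with repetition allowed since the two cards may be identical when they come from different decks (our quick check, not part of the OEIS entry):

from collections import Counter

# card values: A=1, 2..9 at face value, 10/J/Q/K count 0; four suits of each
values = [v for v in list(range(1, 10)) + [0, 0, 0, 0] for _ in range(4)]

counts = Counter()
for i in range(52):
    for j in range(i, 52):               # unordered pair, repetition allowed
        counts[(values[i] + values[j]) % 10] += 1

print([counts[n] for n in range(10)])
# -> [210, 128, 132, 128, 132, 128, 132, 128, 132, 128]

Changing the inner loop to range(i + 1, 52), i.e. forbidding identical cards, reproduces the single-deck sequence 190, 128, 124, ... quoted in the comments.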
{"url":"https://oeis.org/A100670","timestamp":"2014-04-21T12:58:15Z","content_type":null,"content_length":"15195","record_id":"<urn:uuid:2d2fa733-5379-4e77-b66e-23eeb367bde9>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Physics and Quantum Field Theory
Electron. J. Diff. Eqns., Conf. 04, 2000, pp. 175-195.

Modular theory and Eyvind Wichmann's contributions to modern particle physics theory

Bert Schroer

Abstract: Some of the consequences of Eyvind Wichmann's contributions to modular theory and the QFT phase-space structure are presented. In order to show the power of those ideas in contemporary problems, I selected the issue of algebraic holography as well as a new nonperturbative constructive approach (based on the modular structure of wedge-localized algebras and modular inclusions) and show that these ideas are recent consequences of the pathbreaking work which Wichmann, together with his collaborator Bisognano, initiated in the mid Seventies.

Published July 12, 2000.
Mathematics Subject Classifications: 46L05, 81T05, 47L90.
Key words: Local Quantum Physics, modular theory, algebraic holography, constructive quantum field theory.

Show me the PDF file (196K), TEX file, and other files for this article.

Bert Schroer
Institut für Theoretische Physik der FU-Berlin and CBPF, Rua Dr. Xavier Sigaud, 150
22290-180 Rio de Janeiro, RJ, Brazil
e-mail: schroer@cbpfsu1.cat.cbpf.br

Return to the Proceedings of Conferences: Electr. J. Diff. Eqns.
{"url":"http://www.maths.soton.ac.uk/EMIS/journals/EJDE/conf-proc/04/s1/abstr.html","timestamp":"2014-04-17T03:48:37Z","content_type":null,"content_length":"1807","record_id":"<urn:uuid:9c66f4a0-514b-481d-b43d-9bde8fb2b400>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Science Archive | October 06, 2008 | Chegg.com

Computer Science Archive: Questions from October 06, 2008

• Anonymous asked (2 answers)
• Anonymous asked (1 answer)
• RottenPanther7864 asked (1 answer):
____ 1. Given the function prototype: double testAlpha(int u, char v, double t); which of the following statements is legal?
a. cout<<testAlpha(5, 'A', 2);
b. cout<<testAlpha( int 5, char 'A', int 2);
c. cout<<testAlpha('5.0', 'A', 2.0);
d. cout<<testAlpha(5.0, 65, 2.0);
____ 2. Which of the following function prototypes is valid?
a. int funcTest(int x, int y, float z){}
b. funcTest(int x, int y, float){};
c. int funcTest(int, int y, float z)
d. int funcTest(int, int, float);
____ 3. Which of the following function prototypes is valid?
a. int funcExp(int x, float v);
b. funcExp(int x, float v){};
c. funcExp(void);
d. int funcExp(void);
____ 4. The standard header file for the abs(x) function is _____.
a. <cmath>
b. <ioinput>
c. <cctype>
d. <cstdlib>
• RottenPanther7864 asked (1 answer):
string str1 = "ABCDEFGHIJKML"; string str2
27. Consider the statements above. After the statement str2 = str1.substr(1,4); executes, the value of str2 is "____".
a. ABCD
b. BCDE
c. BCD
d. CDE
string str1 = "Gone with the Wind"; string str2
28. Consider the statements above. After the statement str2 = str1.substr(5,8); executes, the value of str2 is "_____".
a. Gone
b. with
c. the
d. wind
string str = "ABCDEFD"; string::size_type position;
29. Consider the statements above. After the statement position = str.find('D'); executes, the value of position is _____.
string str1 = "goofy"; string newstr = " "; for (int j = 0; j < str.length( ); j++) newstr = str[j] + newstr; cout << newstr << endl
30. Consider the C++ code above. The output is _____.
a. goofy
b. ygoof
c. foogy
d. Yfoog
31. In C++, _____ is called the scope resolution operator.
32. The scope of a namespace member is local to the _____.
a. function
b. block
c. file
d. namespace
• Anonymous asked (1 answer)
• RottenPanther7864 asked (1 answer)
• Archit asked (2 answers)
• Anonymous asked (1 answer)
• Farree asked (1 answer):
I have written the code, but I can't seem to replace 'X' in the specific element of the array that I wanted to. Can anyone please correct it?
#define ROW_SIZE 7
#define COL_SIZE 4
void printSeat(int seatNum[], char seatAlp[][COL_SIZE]);
void chooseSeat(int seatNum[], char seatAlp[][COL_SIZE]);
void main()
int seatNum[ROW_SIZE] = {1,2,3,4,5,6,7};
char seatAlp[ROW_SIZE][COL_SIZE];
void printSeat(int seatNum[], char seatAlp[][COL_SIZE])
int row;
for (row=0;row<ROW_SIZE;row++){
void chooseSeat(int seatNum[], char seatAlp[][COL_SIZE])
int row,col;
printf("Choose row (1-7) and seats (1-A, 2-B, 3-C, 4-D)\n");
scanf("%d %d", &row,&col);
seatAlp[row][col]= 'X';
• Anonymous asked (2 answers)
• Anonymous asked (0 answers)
• Anonymous asked (0 answers)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (0 answers)
• Anonymous asked (3 answers)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer):
Use the pumping lemma to show that the language {a^n b^n a^m where n = 0, 1, 2, . . . and m = 0, 1, 2, . . .} = { Λ, a, aa, ab, aaa, aba, . . . } is nonregular. This is problem 1v.
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (0 answers):
PLEASE EMAIL ME THIS PROGRAM WHEN FINISHED DOING IT! IT'S SO CONFUSING! I'D APPRECIATE IT!
Create a new project to store your files in, and create a program to perform the following:
1. Please use the names that I have indicated in red text for the names of your controls and indicators. (3 POINTS)
2. Front panel items required (you may have others if you like) (5 POINTS): a. Numeric controls labeled the following: i. Number of Samples (an integer) ii. Offset (a floating-point) iii. Coefficient a (an integer) iv. Coefficient b (an integer) v. Coefficient c (an integer) vi. Coefficient l (an integer) vii. Coefficient m (an integer) viii. Coefficient n (an integer) ix. Array Subset Start (an integer) b. Numeric indicator labeled Elapsed time (ms) (an integer). c. Boolean buttons labeled the following: i. Simulate Data ii. Stop d. Two text rings labeled the following: i. Array Equation to Use ii. Cluster Equation to Use e. A 5 row by 4 column array of numerics (floating-point) labeled Array Subset f. A waveform chart labeled Simulated Signal Chart g. A x-y graph labeled Simulated Data Graph
3. Once running, the program should continue running until the Stop boolean button is pressed. (1 POINT)
4. Until the user presses the Simulate Data boolean button, the program should not be doing anything but allowing the user to change values of the controls. (1 POINT)
5. When the user presses the Simulate Data boolean button, an eight-frame sequence should be started. You may build the code as a flat or a stacked sequence. (1 POINT)
6. In Frame 0, a for-loop should be used to create a two-dimensional array which should be passed to subsequent frames of the sequence. The user's Number of Samples control should be used to control the number of times the loop runs. The first column of the array should contain numbers from 0 and increasing in increments of 0.01. This simulates data acquisition at 100 samples per second, or a sample every 1/100 of a second. Use the "Simulate Signal Express VI" to obtain the data for column 2 of the array. Configure the express VI for a "DC" signal type, with Uniform White Noise, with a Noise Amplitude of 1, and 100 samples per second Timing. The user should be able to use the Offset numeric control to modify the offset. The output of the "Simulate Signal Express VI" should be displayed on the Simulated Signal Chart. Be sure to perform error handling on the "Simulate Signal Express VI" such that the user is notified via dialog box if an error has occurred within the Express VI. (10 POINTS)
7. The purpose of sequence Frame 1 is to obtain the third column of data for the array which should be passed to subsequent frames of the sequence. To obtain each value for the third column of data, use a case structure which uses the Array Equation to Use text ring wired to the selector terminal. The text ring should have at least three equations in it, with all of them containing a "x" variable. The following ones work, but you may use others if you like: [equation images not reproduced] You may have to edit the form of the equations to match the appropriate syntax of the formula node. Inside the case structure, use formula nodes with the Coefficient a, Coefficient b, and Coefficient c, and the values of x taken from the first column of the array (the 0.00, 0.01, 0.02, etc. values) to calculate each value of the new column. (10 POINTS)
8. In Frame 2, append the third column of data calculated in the above step to the data array. The new data array should be passed to subsequent frames of the sequence. (10 POINTS)
9. Create a subVI in Frame 3 to obtain the fourth column of data for the array which should be passed to subsequent frames of the sequence. The subVI should only have one input and one output, and will basically do the same thing as step 7 above, but it will use the Cluster Equation to Use text ring and the Coefficient l, Coefficient m, and Coefficient n. The text ring should have at least three equations in it, with all of them containing a "x" variable. The following ones work, but you may use others if you like: [equation images not reproduced] You may have to edit the form of the equations to match the appropriate syntax of the formula node. Inside the case structure, use formula nodes with the Coefficient l, Coefficient m, and Coefficient n, and the values of x taken from the first column of the array (the 0.00, 0.01, 0.02, etc. values) to calculate each value of the new column. (10 POINTS)
10. In Frame 4, append the fourth column of data calculated in the above step to the data array. The new data array should be passed to subsequent frames of the sequence. (10 POINTS)
11. In Frame 5, show the user five consecutive rows of the final array created in the above step. The starting row number of the subset should be the row number that the user entered in the Array Subset Start control. (10 POINTS)
12. In Frame 6, use the Simulated Data Graph to graph the following on the same graph (10 POINTS): Column 2 vs. Column 1, Column 3 vs. Column 1, Column 4 vs. Column 1. Remember to graph "y versus x" and not vice versa.
13. In Frame 7, alert the user that the simulation is complete by causing the computer to beep. Also, calculate the time (in ms) that has elapsed from the time the user hit the Simulate Data button to the time the sequence ended. (10 POINTS)
Other requirements:
1. Group the Number of Samples, Offset, Simulate Data, Elapsed Time (ms), and Stop controls/indicators together with a colored decoration. (1 POINT)
2. Group the Array Equation to Use, Coefficient a, Coefficient b, and Coefficient c controls together with a colored decoration. (1 POINT)
3. Group the Cluster Equation to Use, Coefficient l, Coefficient m, and Coefficient n controls together with a colored decoration. (1 POINT)
4. Group the Array Subset Start and Array Subset controls/indicators together with a colored decoration. (1 POINT)
5. Simulated Signal Chart and Simulated Data Graph changes (5 POINTS): a. Display the legends for the chart and the graph, and edit the text to aid in understanding what each line represents. b. Make the Graph Palette, Cursor Legend, and Scale Legend visible and investigate how the different buttons work. c. Change the Line Style of one of the lines in the Simulated Data Graph. d. Change the Point Style on one of the other lines in the Simulated Data Graph. e. Change the Line Width on the last unchanged line in the Simulated Data Graph. f. Put a Graph Annotation on the Simulated Data Graph that says "Simulated Data Graph".
Helpful Hints:
1. Don't forget about auto-indexing on for-loops, and when it should be used (and when it shouldn't).
2. Remember you can bundle together a bunch of different data types with a cluster.
3. If you don't want to see certain items on the front panel, you can hide some of them or move them off the visible screen.
4. Beep.vi makes the computer beep.
5. The Wait (ms) VI can be used to slow program execution down.
6. Right-click on a ring, select Properties, and then select the Edit Items tab to enter items in a ring and see what numerical value corresponds to each item.
7. Arrays and loop iterations are zero-based.
8. Remember Context Help and all of the debugging tools (execution highlighting, probes, etc.).
• Anonymous asked (0 answers)
• Anonymous asked (1 answer)
• Anonymous asked (2 answers)
• Anonymous asked (1 answer):
I'm having some problems with these pointers. Not really sure where I am going wrong, because nothing is happening with my programs: no errors popping up, just nothing happening with the program.
a. Write a C program that has a declaration in main() to store the following numbers into an array named channels: 2, 4, 5, 7, 9, 11, 13. There should be a function call to display() that accepts the channels as an argument named channels and then displays the numbers using the pointer notation *(channels + i).
b. Modify this display() function to alter the address in channels. Always use the expression *channels rather than *(channels + i) to retrieve the correct elements.
int main()
int channels[7]={2,4,5,7,9,11,13};
int display(*channels);
int display(int *channels)
int i=0;

int main()
int display(*channels);
int display(int *channels)
int i=0;
• Anonymous asked (2 answers):
Can anyone please help me to answer those programming exercises:
To make profit, a local store marks up the prices of its items by a certain percentage. Write a Java program that reads the original price of the item sold, the percentage of the marked-up price, and the sales tax rate. The program then outputs the original price of the item, the marked-up percentage of the item, and the store's selling price of the item. (The final price of the item is the selling price plus the sales tax.)
A milk carton can hold 3.78 liters of milk. Each morning, a dairy farm ships cartons of milk to a local grocery store. The cost of producing one liter of milk is $0.38, and the profit of each carton of milk is $0.27. Write a program that does the following:
a. Prompts the user to enter the total amount of milk produced in the morning
b. Outputs the number of milk cartons needed to hold the milk (round your answer to the nearest integer)
c. Outputs the cost of producing the milk
d. Outputs the profit for producing the milk
And tell me if my answer for this question is right: write a Java program that prompts the user to input the elapsed time for an event in seconds. The program then outputs the elapsed time in hours, minutes, and seconds. (For example, if the elapsed time is 9630 seconds, then the output is 2:40:30.)
import java.util.*;
public class EX11
static Scanner console=new Scanner(System.in);
public static void main(String[]args)
int Time;
int H,M,S,R;
System.out.println(H + ":" +M+ ":"+S);
The book: Java Programming: From Problem Analysis to Program Design by D.S. Malik.
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• Anonymous asked (1 answer)
• ARod222 asked (1 answer)
• Anonymous asked (1 answer)
• jeneen asked (1 answer):
Every card has an integer value from... Show more The game of Cards is played using a deck of 52 differentcards. Every card has an integer value from 1 to 13, and belongs toa suit which can be one of the following: Heart: © Diamond: ¨ Club: § Spade: ª A card with value 1 is called an Ace, a card with value 11is called a Jack, a card with value 12 is called a Queen, and acard with value 13 is called a King. The other cards have valuesfrom 2 to a. Create a class called Card that represents a card. You needto provide the “.h” and “.cpp” files. Theclass should have the data members: value (int) and suit (string).The class must also implement the following functions: · A constructor and a destructor · Member functions that modify and return the value and thesuit · A member function that prints information about thecard b. Create a program that contains the main function and thattests all member functions of the class Card. You must create atleast two objects of Card. One object is created statically, whilethe other must be created dynamically. c. Create another class called Hand that represents a hand ofpoker, which contains 5 cards. The class Hand must use objects ofthe class Card. In addition, the class Hand mustimplement the following member functions: ● A constructor and a destructor ● A function that sets the hand of cards, i.e., specifiesthe 5 cards ● A function that prints the hand of cards ● A function that returns true if the hand of card if a“Four of a Kind” hand, which consists of four cards ofthe same rank like four aces or four kings, etc. ● A function that returns true if the hand is a “FullHouse”, which consists of three of a kind and a pair, such asK-K-K-2-2, or 5-5-5-2-2. The function returns falseotherwise. ● A function that returns true if the hand is a“Flush”, which consists of a hand where all of thecards are the same suit, such as A-J-9-7-5, all ofDiamonds. ● A function that returns true if the hand is a“Straight”, which consists of five cards in rank order,but not of the same suit (it can be any combination of the foursuits). An example of a straight is 2-3-4-5-6. The Ace caneither be high or low card, either A-2-3-4-5 or10-J-Q-K-A. ● A function that compares the hand (i.e., object of Hand)on which this function is called to another hand. Comparing twocards is performed according to the following ranking: ○ Four of a kind ○ Full house ○ Flush ○ Straight ○ Any other combination of cards. In other words, a”four of a kind” hand isbetter than a “full house” hand, a “fullhouse” hand is better than a “flush”,etc. Create a cpp program that tests all member functions of theclass Hand. You must create at least two objects of Hand. Oneobject is created statically, while the other must be createddynamically • Show less 1 answer • jeneen asked The game of Cards is played using a deck of 52 differentcards. Every card has an integer value from... Show more The game of Cards is played using a deck of 52 differentcards. Every card has an integer value from 1 to 13, and belongs toa suit which can be one of the following: Heart: © Diamond: ¨ Club: § Spade: ª A card with value 1 is called an Ace, a card with value 11is called a Jack, a card with value 12 is called a Queen, and acard with value 13 is called a King. The other cards have valuesfrom 2 to a. Create a class called Card that represents a card. You needto provide the “.h” and “.cpp” files. 
Theclass should have the data members: value (int) and suit (string).The class must also implement the following functions: · A constructor and a destructor · Member functions that modify and return the value and thesuit · A member function that prints information about thecard b. Create a program that contains the main function and thattests all member functions of the class Card. You must create atleast two objects of Card. One object is created statically, whilethe other must be created dynamically. c. Create another class called Hand that represents a hand ofpoker, which contains 5 cards. The class Hand must use objects ofthe class Card. In addition, the class Hand mustimplement the following member functions: ● A constructor and a destructor ● A function that sets the hand of cards, i.e., specifiesthe 5 cards ● A function that prints the hand of cards ● A function that returns true if the hand of card if a“Four of a Kind” hand, which consists of four cards ofthe same rank like four aces or four kings, etc. ● A function that returns true if the hand is a “FullHouse”, which consists of three of a kind and a pair, such asK-K-K-2-2, or 5-5-5-2-2. The function returns falseotherwise. ● A function that returns true if the hand is a“Flush”, which consists of a hand where all of thecards are the same suit, such as A-J-9-7-5, all ofDiamonds. ● A function that returns true if the hand is a“Straight”, which consists of five cards in rank order,but not of the same suit (it can be any combination of the foursuits). An example of a straight is 2-3-4-5-6. The Ace caneither be high or low card, either A-2-3-4-5 or10-J-Q-K-A. ● A function that compares the hand (i.e., object of Hand)on which this function is called to another hand. Comparing twocards is performed according to the following ranking: ○ Four of a kind ○ Full house ○ Flush ○ Straight ○ Any other combination of cards. In other words, a”four of a kind” hand isbetter than a “full house” hand, a “fullhouse” hand is better than a “flush”,etc. Create a cpp program that tests all member functions of theclass Hand. You must create at least two objects of Hand. Oneobject is created statically, while the other must be createddynamically • Show less 1 answer • Anonymous asked 2 answers • Anonymous asked In this assignment we are going to define a class Cloud thatwill let us define collection of integer... Show more In this assignment we are going to define a class Cloud thatwill let us define collection of integer values. This cloudrepresents a fuzzy data value. For example, we could create a cloud with the values 1, 1, 1,3. It would represent a situation where the value was mostlikely to be 1, but it could be 3 as well. Another example, we could create a cloud with the values 1, 2,3, 4, and 5. This would represent a range of values from1 to 5. Specifications for Cloud: Your Cloud class must implement the following: 1) You must dynamically allocate the space needed to hold thecollection of integer values. 2) You must keep wasted space at a minimum. 3) A constructor that will take min and max as arguments and willcreate a cloud containing all the integer values frommin to max. 4) A copy constructor. 5) A destructor. 6) A method min() that returns the minimum integer in the cloud. 7) A method max() that returns the maximum integer in the cloud. 8) A method average() that returns the average of the values in thecloud. 9) A method insert(int value) adds a single value into the cloud. 
10) A method getFrequency(int a) that returns the number of times that a is in the cloud.
11) A method scale(int a) that returns a new scaled Cloud. We will get a value in the new Cloud that is the product of a with a value in the cloud. See the example.
12) A method shift(int a) that returns a new shifted Cloud. We will get a value in the new Cloud that is the sum of a with a value in the cloud. See the example.
13) A method add(Cloud other) that returns a new Cloud that represents a sum. We will get a value in the new Cloud that is the sum of a pair of values (one from each cloud). See the example.
14) A method multiply(Cloud other) that returns a new Cloud that represents a product. We will get a value in the new Cloud that is the product of a pair of values (one from each cloud). See the example.
15) A method bounds(int a) that returns true if a is in the bounds of the cloud.
16) operator* that operates on two Clouds. f*g will return f.multiply(g).
17) operator+ that operates on two Clouds. f+g will return f.add(g).
18) operator+ that operates on a Cloud and an integer. It will insert the integer into the cloud.
19) *=, +=, += operators for each of the 3 operators listed above.
20) operator= that implements assignment.
21) operator[ ] that will get the frequency of the value.
22) operator> that operates on two Clouds. f>g will return true if all the values in f are larger than the values in g.
23) operator< that operates on two Clouds. f<g will return true if all the values in f are less than the values in g.
24) operator== that determines if two clouds overlap.
operator>> and operator<< that implement the stream operators. The format that you use to output must be readable using the input operator. You will use { and } as end markers for the format of the cloud.

You will write two clients for the Cloud class.
1) The first client, named test.cpp, will exercise all the operations that the Cloud class has implemented.
2) The second client, named distributor.cpp, will read and compute distributions of values. You will read a distribution from a file. You may assume that the values are in the file "data.txt". The first value in the file will be an integer count of the values. The remaining values will be the integer values in the cloud. Once you have read in the distribution from the file, read in an integer value n from the keyboard. This will be the number of times you will add the distribution together. So if f is the cloud and 3 is n, you would compute f+f+f. Print the resulting distribution.
• 0 answers • Anonymous asked
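A partial sketch of the Cloud class covering items 1-9 and 13 above; member names follow the assignment text, but the plain dynamic array used here is one possible storage choice, not the required one.

    // Cloud.h - partial sketch (items 1-9 and 13).
    class Cloud {
    public:
        // Item 3: a cloud holding every integer from lo to hi.
        Cloud(int lo, int hi) : size_(hi - lo + 1), data_(new int[size_]) {
            for (int i = 0; i < size_; ++i) data_[i] = lo + i;
        }
        // Item 4: deep copy, since we own the dynamic array.
        Cloud(const Cloud& o) : size_(o.size_), data_(new int[size_]) {
            for (int i = 0; i < size_; ++i) data_[i] = o.data_[i];
        }
        // Item 5: release the dynamically allocated storage (item 1).
        ~Cloud() { delete[] data_; }

        // Item 6: smallest value in the cloud.
        int min() const {
            int m = data_[0];
            for (int i = 1; i < size_; ++i) if (data_[i] < m) m = data_[i];
            return m;
        }
        // Item 8: arithmetic mean of the values.
        double average() const {
            double s = 0.0;
            for (int i = 0; i < size_; ++i) s += data_[i];
            return s / size_;
        }
        // Item 13: the sum cloud holds x + y for every pair of values,
        // one from each cloud.
        Cloud add(const Cloud& other) const {
            Cloud r(0, 0);                 // one-element placeholder
            delete[] r.data_;
            r.size_ = size_ * other.size_;
            r.data_ = new int[r.size_];
            int k = 0;
            for (int i = 0; i < size_; ++i)
                for (int j = 0; j < other.size_; ++j)
                    r.data_[k++] = data_[i] + other.data_[j];
            return r;
        }

    private:
        int  size_;   // number of stored values
        int* data_;   // dynamically allocated values (item 1)
    };

Note that add() grows multiplicatively, so to honor item 2 (minimal wasted space) a refinement storing (value, frequency) pairs instead of raw values would be the natural design.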
{"url":"http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2008-october-06","timestamp":"2014-04-18T10:59:44Z","content_type":null,"content_length":"134665","record_id":"<urn:uuid:0828702e-8780-44cc-a69f-ef00a62ac8eb>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
REU 1989: The Red Blood Cell Shape (M. Peterson)
The REU project for 1989 directed by Mark Peterson investigated the equilibrium shapes of fluid vesicles, that is, membrane-bounded fluid cells. The red blood cell is a naturally occurring vesicle, and artificial vesicles can be made to form spontaneously in lipid-water mixtures. Their shapes are believed to minimize curvature energy at fixed volume V and area A. As first pointed out by Helfrich, this leads to the problem of minimizing, by choice of membrane shape M,

    F = (kc/2) * Integral_M (2H - c0)^2 dA + lambda*A + mu*V,

where lambda and mu are Lagrange multipliers, H is the mean curvature, kc is the bending modulus, c0 the spontaneous curvature (biasing H), and the integral goes over M, the membrane surface. c0 can also be interpreted as a Lagrange multiplier, fixing the average of H over the surface M. If M is a surface of revolution, the Euler-Lagrange equation for this problem is a system of ODE's. Solutions include shapes which actually occur for red blood cells: the "normal" discocytic shape, and stomatocyte shapes (i.e., "cup" shapes). Three such equilibrium shapes are shown below (the rotational symmetry axis is horizontal). The left shape is the normal shape, the other two are stomatocytes.
We computed families of equilibrium shapes, which lie along hypersurfaces in the space of parameters (Lagrange multipliers). There are bifurcations. We also checked the infinitesimal stability of the shapes we found (using the method described by Peterson in J. Appl. Phys 57 (1985), 1739). Below is a typical picture, corresponding to c0*rho = -2, A = 4*pi*rho^2 (rho arbitrary). The "phase diagram" for this problem turns out to be surprisingly complex. The paper by Udo Seifert, K. Berndl and R. Lipowsky, "Shape transformations of vesicles: Phase diagrams for spontaneous-curvature and bilayer-coupling models," Phys Rev A44 (1991), 1182-1202, which appeared soon after our work, largely superseded ours.
The student participants were:
• Meredith Goldsmith, Yale '90
• Ted Levin, University of Massachusetts '90
• Julia Long, Harvey Mudd College '90
• Sally Relova, Mount Holyoke College '91
• Sebastian Schreiber, Boston University, '89
• Latha Venkataraman, Mount Holyoke College -> MIT '91
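As a quick sanity check on the functional above, note that a sphere of radius R has H = 1/R, A = 4*pi*R^2, and V = 4*pi*R^3/3, so F can be evaluated in closed form. The snippet below only illustrates evaluating F on this one-parameter family with arbitrary parameter values; it is not the axisymmetric ODE (shooting) machinery the project actually used.

    #include <cmath>
    #include <cstdio>

    // Helfrich-type energy of a sphere of radius R:
    //   F(R) = (kc/2) * (2/R - c0)^2 * 4*pi*R^2 + lambda*A + mu*V
    double sphereEnergy(double R, double kc, double c0,
                        double lambda, double mu) {
        const double pi = 3.14159265358979323846;
        double A = 4.0 * pi * R * R;            // surface area
        double V = 4.0 * pi * R * R * R / 3.0;  // enclosed volume
        double H = 1.0 / R;                     // mean curvature of a sphere
        return 0.5 * kc * (2.0 * H - c0) * (2.0 * H - c0) * A
               + lambda * A + mu * V;
    }

    int main() {
        // Scan radii for some illustrative (not physical) parameter values.
        for (double R = 0.5; R <= 3.0; R += 0.5)
            std::printf("R = %.2f  F = %.4f\n",
                        R, sphereEnergy(R, 1.0, -2.0, 0.1, 0.1));
    }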
{"url":"https://www.mtholyoke.edu/~mpeterso/reu/89/reu89.html","timestamp":"2014-04-17T05:53:53Z","content_type":null,"content_length":"3325","record_id":"<urn:uuid:11ebd952-37cd-4a68-9bfb-94ef95d8a24d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
The Transformation y = f(bx) - Concept
There are different types of math transformation, one of which is the type y = f(bx). This type of math transformation is a horizontal compression when b is greater than one. We can graph this math transformation by using tables to transform the original elementary function. Other important transformations include vertical shifts, horizontal shifts, and reflections.
Let's investigate another transformation. I want to know: what does the transformation y = f(bx) do? So let's start with the function f(x) = √(4 − x). I'm going to graph that function, and then I want to graph f(4x). So let's make this our parent function; it's actually pretty easy to come up with values for it, and you know what the shape is going to be; it's a radical function. So I'll make a table with columns u and √(4 − u); let's pick values to plug in that'll give us nice perfect squares inside the radical. So for example u = 0 is a good choice because it gives you 4 − 0 = 4, and the square root of 4 is 2. If I want to get 1 inside here I'd pick u = 3; so I'll pick 3, I get 1 inside, and the square root of that is 1. And if I want to get 0 I'll pick u = 4. So 4 − 4 is 0, and the square root of 0 is 0. So these are the three points I'll use to graph it.
And let me graph it right now: (0, 2), (3, 1), (4, 0), and you can see that it's a radical function. This is its endpoint, so the graph is going to open to the left, and it'll look something like this. Now what does the transformation do? Well, let me make a substitution. I want to graph f(4x), so first of all note that this is y = √(4 − 4x), right? f(4x) is replacing the x by 4x, so I'll substitute u for 4x. Now if u = 4x, that means x = (1/4)u. That's why I take these u values and I multiply them by a quarter. And I get 0 times a quarter is 0, 3 times a quarter is three quarters, and 4 times a quarter is 1. These are my x values, and then here I'll have √(4 − 4x), and this would be exactly the same as before, because 4x is u; so 4 − u gives exactly these values: 2, 1, 0.
Alright, so I'm going to plot (0, 2), (3/4, 1), and (1, 0). So here is (0, 2), (3/4, 1) is here, and (1, 0) is here. Now (1, 0) is the transformation of (4, 0), the old endpoint. So the old endpoint, which was way out here, has moved in to here, and this is what my new graph looks like. This is a horizontal compression; the graph has been squeezed in toward the y axis, and it's a compression by a factor of one quarter, right, 1 to 4. So just remember: when you see the transformation f(4x), the number 4 is greater than 1, so you might expect this to be a horizontal stretch, but it's actually a horizontal compression. So when you describe the transformation y = f(bx), if the b value is bigger than 1, you get a horizontal compression of the original graph by a factor of one over b. Just like we saw here: this is compression by a factor of one quarter. And those are our two graphs.
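A worked check consistent with the lesson: the parent y = √(4 − x) has endpoint (4, 0). For y = f(4x) = √(4 − 4x), the radicand is 0 exactly when 4 − 4x = 0, i.e., x = 1, so the endpoint moves to (1, 0). Every x-coordinate has been multiplied by 1/4, confirming a horizontal compression by a factor of 1/4 (in general, by 1/b for y = f(bx) with b > 1).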
{"url":"https://www.brightstorm.com/math/precalculus/introduction-to-functions/the-transformation-y-equals-fbx/","timestamp":"2014-04-19T07:31:48Z","content_type":null,"content_length":"61449","record_id":"<urn:uuid:ced2a466-d537-4f46-808c-b8b467dc9354>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Stable stem enabled Shannon entropies distinguish non-coding RNAs from random backgrounds
BMC Bioinformatics. 2012; 13(Suppl 5): S1.

The computational identification of RNAs in genomic sequences requires the identification of signals of RNA sequences. Shannon base pairing entropy is an indicator for RNA secondary structure fold certainty in detection of structural, non-coding RNAs (ncRNAs). Under the Boltzmann ensemble of secondary structures, the probability of a base pair is estimated from its frequency across all the alternative equilibrium structures. However, such an entropy has yet to deliver the desired performance for distinguishing ncRNAs from random sequences. Developing novel methods to improve the entropy measure performance may result in more effective ncRNA gene finding based on structure detection. This paper shows that the measuring performance of base pairing entropy can be significantly improved with a constrained secondary structure ensemble in which only canonical base pairs are assumed to occur in energetically stable stems in a fold. This constraint actually reduces the space of the secondary structure and may lower the probabilities of base pairs unfavorable to the native fold. Indeed, base pairing entropies computed with this constrained model demonstrate substantially narrowed gaps of Z-scores between ncRNAs, as well as drastic increases in the Z-score for all 13 tested ncRNA sets, compared to shuffled sequences. These results suggest the viability of developing effective structure-based ncRNA gene finding methods by investigating secondary structure ensembles of ncRNAs.

Background
Statistical signals in primary sequences for non-coding RNA (ncRNA) genes have been evasive [1-3]. Because single strand RNA folds into a structure, the most exploitable feature for structural ncRNA gene finding has been the secondary structure [4-6]. The possibility that folded secondary structure may lead to successful ab initio ncRNA gene prediction methods has energized leading groups to independently develop structure-based ncRNA gene finding methods [7,8]. The core of such a program is its secondary structure prediction mechanism, for example, based on computing the minimum free energy for the query sequence under some thermodynamic energy model [9-12]. The hypothesis is that the ncRNAs' secondary structure is thermodynamically stable. Nonetheless, stability measures have not performed as well as one might hope [13]; there is evidence that the measures may not be effective on all categories of ncRNAs [14]. A predicted secondary structure can be characterized for its fold certainty, using the Shannon base pairing entropy [15,16]. The entropy $-\sum_{i<j} p_{i,j} \log p_{i,j}$ of base pairings between all bases i and j can be calculated based on the partition function for the Boltzmann secondary structure ensemble, which is the space of all alternative secondary structures of a given sequence; the probability $p_{i,j}$ is calculated as the total of Boltzmann factors over all equilibrium alternative structures that contain the base pair (i, j) [17]. As an uncertainty measure, the base pairing Shannon entropy is maximized when base pairing probabilities are uniformly distributed.
A structural RNA sequence is assumed to have a low base pairing Shannon entropy, since the distribution of its base pairing probabilities is far from uniform. The entropy measure has been scrutinized with real ncRNA data, revealing a strong correlation between entropy and free energy [18,19]. However, there has been mixed success in discerning structural ncRNAs from their randomly shuffled counterparts. Both measures perform impressively on precursor miRNAs but not as well on tRNAs and some rRNAs [14,18]. The diverse results of the entropy measure on different ncRNAs suggest that the canonical RNA secondary structure ensemble has yet to capture all ncRNA structural characteristics. For example, a Boltzmann ensemble enhanced with weighted equilibrium alternative structures has also resulted in higher accuracy in secondary structure prediction [19]. There is strong evidence that the thermodynamic energy model can improve its structure prediction accuracy by considering energy contributions in addition to those from the canonical free energy model [20,21]. Therefore, developing ncRNA structure models that can more effectively account for critical structural characteristics may become necessary for accurate measurement of RNA fold certainty.

In this paper, we present work that computes Shannon base pairing entropies based on a constrained secondary structure model. The results show substantial improvements in the Z-score of base pairing Shannon entropies on 13 ncRNA datasets [18] over the Z-score of entropies computed by existing software (e.g., NUPACK [23] and RNAfold [12,29]) with the canonical (Boltzmann) secondary structure ensemble and the associated partition function [22]. Our limited constraint to the secondary structure space is to require only canonical base pairs to occur in stable stems. The constrained secondary structure model is defined with a stochastic context-free grammar (SCFG) and entropies are computed with the Inside and Outside algorithms. Our results suggest that incorporating more constraints may further improve the effectiveness of the fold certainty measure, offering improved ab initio ncRNA gene finding.

Results
We implemented the algorithm for Shannon base pairing entropy calculation into a program named TRIPLE. We tested it on ncRNA datasets and compared its performance on these ncRNAs with the performance achieved by the software NUPACK [23] and RNAfold [12,29], developed under the Boltzmann standard secondary structure ensemble [17,22].

Data preparation
We downloaded the 13 ncRNA datasets previously investigated in Table 1 of [18]. They are of diverse functions, including precursor microRNAs, group I and II introns, RNase P and MRP, bacterial and eukaryotic signal recognition particle (SRP), ribosomal RNAs, small nuclear spliceosomal RNAs, riboswitches, tmRNAs, regulatory RNAs, tRNAs, telomerase RNAs, small nucleolar RNAs, and Hammerhead ribozymes. The results from using these datasets were analyzed with 6 different types of measures, including Z-score and p-value of minimal free energy (MFE), and Shannon base pairing entropy [18], in comparisons with random sequences. The six measures correlate to varying degrees, hence using MFE Z-score and Shannon base pairing entropy may be sufficient to cover the other measures. However, these two measures, as the respective indicators for the fold stability and fold certainty of ncRNA secondary structure, have varying performances on the 13 ncRNA datasets. For our tests, we also generated random sequences as control data.
For every ncRNA sequence, we randomly shuffled it to produce two sets of 100 random sequences each; one set was based upon single nucleotide shuffling, the other was based upon di-nucleotide shuffling. In addition, all ncRNA sequences containing nucleotides other than A, C, G, T, and U were removed, for the reason that NUPACK [23] doesn't accept sequences containing wildcard symbols.

Shannon entropy distribution of random sequences
Two energy-model-based software packages, NUPACK (with the pseudoknot function turned off) and RNAfold, and our program TRIPLE computed base pairing probabilities on ncRNA sequences and on random sequences. In particular, for every ncRNA sequence x and its associated randomly shuffled sequence set $S_x$, the Shannon entropies of these sequences were computed. A Kolmogorov-Smirnov test (KS test) [24] was applied to verify the normality of the entropy distributions from all randomly shuffled sequence sets. The results show that for 99% of the sequence sets we fail to reject the hypothesis that entropies are normally distributed at the 95% confidence level. This indicates that we may use a Z-score to measure performance.

Z-scores and comparisons
For each ncRNA, the average and standard deviation of Shannon entropies of the randomly shuffled sequences were estimated. The Z-score of the Shannon entropy Q(x) of ncRNA sequence x is defined as

$Z(x) = \frac{Q(x) - \mu(Q(S_x))}{\sigma(Q(S_x))}$,   (1)

where $\mu(Q(S_x))$ and $\sigma(Q(S_x))$ respectively denote the average and standard deviation of the Shannon entropies of the random sequences in set $S_x$. The Z-score measures how well entropies may distinguish the real ncRNA sequence x from its corresponding randomly shuffled sequences in $S_x$.

Figure 1 compares the averages of the Z-scores of Shannon base pairing entropies computed by NUPACK, RNAfold, and TRIPLE on each of the 13 ncRNA datasets. It shows that TRIPLE significantly improved the Z-scores over NUPACK and RNAfold across all the 13 datasets.

Figure 1. Comparisons of averaged Z-scores of Shannon base pairing entropies computed by NUPACK, RNAfold, and TRIPLE for each of the 13 ncRNA datasets downloaded from [18].

To examine how the Z-scores might have been improved by TRIPLE, we designated four thresholds for Z-scores, which are 2, 1.5, 1, and 0.5. The percentages of sequences of each dataset with Z-score greater than or equal to the thresholds were computed. Table 1 shows details of the Z-score improvements over NUPACK when di-nucleotide shuffling was used. With a threshold of 2 or 1.5, our method performed better than NUPACK in all datasets. With the thresholds 1 and 0.5, our method improved upon NUPACK in 12 and 10 datasets, respectively. The results of TRIPLE and NUPACK using single nucleotide random shuffling are given in Table 2, which shows that our method also performs better than NUPACK in the majority of datasets. In particular, TRIPLE performed better than NUPACK in all datasets with threshold of 2; with threshold equal to 1.5 or 1, our method had better results than NUPACK in 12 datasets, and in 9 datasets with threshold equal to 0.5.

Table 1. Comparisons of TRIPLE and NUPACK by the percentages of sequences falling in each category of a Z-score range (di-nucleotide shuffling).
Table 2. Comparisons of TRIPLE and NUPACK by the percentages of sequences falling in each category of a Z-score range (single nucleotide shuffling).
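The Z-score in formula (1) is a plain standardization against the shuffled background; for reference, a direct transcription (illustrative only, with the mean and standard deviation taken as sample estimates over the shuffled set):

    #include <cmath>
    #include <vector>

    // Z-score of an ncRNA's entropy q against the entropies of its
    // shuffled counterparts, per formula (1).
    double zScore(double q, const std::vector<double>& shuffled) {
        double mu = 0.0;
        for (double e : shuffled) mu += e;
        mu /= shuffled.size();
        double var = 0.0;
        for (double e : shuffled) var += (e - mu) * (e - mu);
        double sigma = std::sqrt(var / shuffled.size());
        return (q - mu) / sigma;
    }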
The results of RNAfold using the default setting are given in Tables 3 and 4. Table 3 shows results on di-nucleotide shuffling datasets. TRIPLE works better in the majority of datasets. It outperforms RNAfold in all datasets with thresholds of 2 and 1.5. With thresholds of 1 and 0.5, TRIPLE wins 12 (tie 1) and 8 (tie 1) datasets, respectively. In Table 4, TRIPLE shows similar performance on single nucleotide shuffling datasets. It has better scores than RNAfold in 13, 13, 11, and 7 (tie 1) datasets with thresholds of 2, 1.5, 1, and 0.5, respectively. In addition, RNAfold was tested with the available program options (tables not shown). With option "noLP" on RNAfold, TRIPLE performs better in 13, 13, 11 (tie 1), and 9 di-nucleotide shuffling datasets with thresholds of 2, 1.5, 1, and 0.5, respectively. On single nucleotide shuffling datasets, TRIPLE wins 13, 13, 12, and 8 datasets with thresholds of 2, 1.5, 1, and 0.5, respectively.

Table 3. Comparisons of TRIPLE and RNAfold by the percentages of sequences falling in each category of a Z-score range (di-nucleotide shuffling).
Table 4. Comparisons of TRIPLE and RNAfold by the percentages of sequences falling in each category of a Z-score range (single nucleotide shuffling).

When we specify "noLP" and "noCloseGU" on RNAfold, TRIPLE beats RNAfold in 13, 13, 12, and 11 di-nucleotide shuffling datasets, and 13, 13, 13, and 11 single nucleotide shuffling datasets, with thresholds 2, 1.5, 1, and 0.5, respectively. If we specify "noLP" and "noGU" on RNAfold, our method performs better on all di-nucleotide shuffling and single nucleotide shuffling datasets at all four thresholds.

We also compared TRIPLE, NUPACK, and RNAfold on some real genome background tests. Several genome sequences from bacteria, archaea, and eukaryotes were retrieved from the NCBI database. Using these genome sequences, we created genome backgrounds for the 13 ncRNA datasets. In particular, for each RNA sequence from the 13 ncRNA datasets, 100 sequence segments of the same length were sampled from each genome sequence and used to test against the RNA sequence to calculate base pairing entropies and Z-scores. With such genome backgrounds, the overall performance of TRIPLE on the 13 ncRNA datasets is mixed and is close to that of NUPACK and RNAfold (data not shown). This performance of TRIPLE on real genomes indicates that there is still a gap between the ability of our method and successful ncRNA gene finding. Nevertheless, the test results reveal that the constrained "triple base pairs" model is necessary but not yet sufficient. This suggests that incorporating further structural constraints will improve the effectiveness of ncRNA search on real genomes.

To roughly evaluate the speed of the three tools, the running time for 101 sequences, including 1 real miRNA sequence and its 100 single nucleotide shuffled sequences, was measured on a Linux machine with an Intel dual-core CPU (E7500 2.93 GHz). Each sequence has 100 nucleotides. TRIPLE, NUPACK, and RNAfold spent 20.7 seconds, 36.2 seconds, and 3.4 seconds, respectively. We point out that TRIPLE has the potential to be optimized for each specific grammar to improve its efficiency.

Discussion
This work introduced a modified ensemble of ncRNA secondary structures with the constraint that only canonical base pairs occur and that stems must be energetically stable in all the alternative structures. The comparisons of performance between our program TRIPLE and the energy-model-based software (NUPACK and RNAfold) implemented on the canonical structure ensemble have demonstrated a significant improvement in the entropy measure for ncRNA fold certainty by our model.
In particular, an improvement of the entropy Z-scores was shown across almost all 13 tested ncRNA datasets previously used to test various ncRNA measures [18]. We note that there is only one exceptional case observed from Tables 1, 2, 3, and 4: SRP, whose entropy Z-score performance was not improved (as much as the other ncRNAs) when Z < 1.5. The problem might have been caused by the implementation technique rather than the methodology. Most of the tested SRP RNA sequences (eukaryotic and archaeal 7S RNAs) are of length around 300 and contain about a dozen stems. In many of them, consecutive base pairs are broken by internal loops into small stem pieces, some having only two consecutive canonical pairs, whereas in our SCFG implementation we simply required at least three consecutive base pairs in a stem, possibly missing the secondary structure of many of these sequences. This issue with the SCFG can be easily fixed, e.g., by replacing the SCFG with one that better represents the constrained Boltzmann ensemble in which stems are all energetically stable.

To ensure that the performance difference between TRIPLE and the energy-model-based software (NUPACK and RNAfold) was not due to the difference between the thermodynamic energy model (Boltzmann ensemble) and the simple statistical model (SCFG with stacking rules), we also constructed two additional SCFG models, one for unconstrained base pairs and another requiring at least two consecutive canonical base pairs in stems. Tests on these two models over the 13 ncRNA datasets resulted in entropy Z-scores (data not shown) comparable to those obtained by NUPACK and RNAfold but inferior to the performance of TRIPLE. We attribute the impressive performance of TRIPLE to the constraint of "triple base pairs", which is satisfied by real ncRNA sequences but hard to achieve for random sequences.

Since the entropy Z-score improvement by our method was not uniform across the 13 ncRNAs, one may want to look into other factors that might have contributed to the under-performance of certain ncRNAs. For example, the averaged GC contents are different in these 13 datasets, with SRP RNAs having 58% GC and a standard deviation of 10.4%. A sequence with a high GC content is more likely to produce more spurious alternative structures, possibly resulting in a higher base pairing entropy. However, since randomly shuffled sequences would also have the same GC content, it becomes very difficult to determine if the entropies of these sequences have been considerably affected by the GC bias. Indeed, previous investigations [25] have revealed that, while the base composition of a ncRNA is related to the phylogenetic branches on which the specific ncRNA may be placed, it may not fully explain the diverse performances of structure measures on various ncRNAs. Notably, it has been discovered that base compositions are distinct in different parts of rRNA secondary structure (stems, loops, bulges, and junctions) [26], suggesting that an averaged base composition may not suitably represent the global structural behavior of an ncRNA sequence.

Technically, the TRIPLE program was implemented with an SCFG that assumes stems to have at least three consecutive canonical base pairs. Yet, as we pointed out earlier, the performance results should hold for a constrained Boltzmann ensemble in which stems are required to be energetically stable. This constraint of stable stems was intended to capture the energetic stability of helical structures in the native tertiary fold [27,28].
Since the ultimate distinction between a ncRNA and a random sequence lies in its function (thus tertiary structure), additional, critical tertiary characteristics may be incorporated into the structure ensemble to further improve the fold certainty measure. In our testing of stem stability (see section "Energetically stable stems"), ncRNA sequences from the 51 datasets demonstrated certain sequential properties that may characterize tertiary interactions, e.g., coaxial stacking of helices. However, to computationally model tertiary interactions, a model beyond a context-free system would be necessary; thus it would be difficult to use an SCFG or a Boltzmann ensemble for this purpose. We need to develop methods to identify tertiary contributions critical to the Shannon base pairing entropy measure and to model such contributions. Although this method and technique have been developed with reference to non-coding RNAs, it is possible that protein-coding mRNAs would display similar properties, when sufficient structural information about them has been gathered.

Conclusions
We present work developing structure measures that can effectively distinguish ncRNAs from random sequences. We compute Shannon base pairing entropies based on a constrained secondary structure model that favors tertiary folding. Experimental results indicate that our approach significantly improves the Z-score of base pairing Shannon entropies on 13 ncRNA datasets [18] in comparison to that computed by NUPACK [23] and RNAfold [12,29]. These results show that investigating secondary structure ensembles of ncRNAs is helpful for developing effective structure-based ncRNA gene finding.

Method and model
Our method to distinguish ncRNAs from random sequences is based on measuring the base pairing Shannon entropy [15,16] under a new RNA secondary structure model. The building blocks of this model are stems arranged in parallel and nested patterns connected by unpaired strand segments, similar to those permitted by a standard ensemble [11,17,29]. The new model is constrained, however, to contain a smaller space of equilibrium alternative structures, requiring that only energetically stable stems (e.g., of free energy levels under a threshold) occur in the structures. The constraint is basically to consider the effect of energetically stable stems on tertiary folding and to remove spurious structures that may not correspond to a tertiary fold. According to the RNA folding pathway theory and the hierarchical folding model [27,28,30], building-block helices are first stabilized by canonical base pairings before being arranged to interact with each other or with unpaired strands through tertiary motifs (non-canonical nucleotide interactions). A typical example is the multi-loop junction, in which one or more pairs of coaxially stacked helices bring three or more regions together, further stabilized by the tertiary motifs at the junctions [31,32]. The helices involved are stable before the junction is formed or any possible nucleotide interaction modifications are made to the helical base pairs at the junction [33].

Energetically stable stems
A stem is the atomic, structural unit of the new secondary structure space. To identify the energy levels of stems suitable to be included in this model, we conducted a survey on the 51 sets of ncRNA seed alignments, representatives of the ncRNAs in Rfam [34], which had been used with the software Infernal [35] as benchmarks.
From each ncRNA seed structural alignment, we computed the thermodynamic free energy of every instance of a stem in the alignment data using various functions of the Vienna Package [12,29], as follows. RNAduplex was first applied to the two strands of the stem marked by the annotation to predict the optimal base pairings within the stem; then the minimum free energy of the predicted stem structure, with overhangs removed, was computed with RNAeval. Figures 2 and 3 respectively show plots of the percentages and cumulative percentages of free energy levels of stems in these 51 ncRNA seed alignments.

Figure 2. Percentages of free energy of stems from 51 Rfam datasets (percentages of stems with free energy less than -12 are not given in this figure).

Figure 3. Cumulative percentages of free energy of stems from 51 Rfam datasets (cumulative percentages of stems with free energy less than -12 are not given in this figure). Note the step at -3.4.

The peaks (with relatively high percentages) on the percentage curve of Figure 2 indicate concentrations of certain types of stems at energy levels around -4.5, -3.3, and -2.4 kcal/mol. Since a G-U pair is counted weakly towards the free energy contribution (by the Vienna package), we identified the peak value -4.5 kcal/mol to be the free energy of stems of three base pairs, with two G-C pairs and one A-U in the middle, or two A-U pairs and one G-C in the middle. The value -3.3 kcal/mol is the free energy of stems containing exactly two G-C pairs, or stems with one G-C pair followed by two A-U pairs. Values around -2.4 kcal/mol are stems containing one G-C and an A-U pair, or simply four A-U pairs. Based on this survey, we were able to identify two energy thresholds: -3.4 and -4.6 kcal/mol for semi-stable stems and stable stems, respectively. Both require at least three base pairs, of which at least one is a G-C pair.

We further observed the difference between these two categories of stems on the 51 ncRNA datasets. In general, although levels of energy appear to be somewhat uniformly distributed (see Figure 3), an overwhelmingly large percentage of stems in both categories are located in the vicinity of other stems. In particular, 79.6% of stable stems (with a free energy of -4.6 kcal/mol or lower) have distance 0 (in nucleotides) from their closest neighbor stem, and 16.5% of stable stems have distance 1 from their closest neighbors. For semi-stable stems, the group having zero distance to other stems is 85.6% of the total, while the group having distance 1 is 10.6%. Since zero distance between two stems may reflect a contiguous strand connecting two coaxially stacked helices in tertiary structure, our survey suggests a semi-stable stem interacts with another stem to maintain even its own local stability. In the rest of this work, we do not distinguish between stable and semi-stable stems.

In conducting this survey, we did not directly use the stem structures annotated in the seed alignments to compute their energies. Due to evolution, substantial structural variation may occur across species; one stem may be present in one sequence and absent in another, but a structural alignment algorithm may try to align all sequences to the consensus stem, giving rise to "misalignments" which we have observed [36].
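The survey's two thresholds translate directly into a stem filter. A small sketch of that classification follows; the stem representation is an assumption made for illustration, while the numeric criteria (at least three base pairs, at least one G-C pair, and the -3.4 / -4.6 kcal/mol cutoffs) come from the text:

    #include <string>
    #include <vector>

    // A stem represented as a run of consecutive base pairs, e.g. {"GC","AU","GC"}.
    enum StemClass { UNSTABLE, SEMI_STABLE, STABLE };

    StemClass classifyStem(const std::vector<std::string>& pairs,
                           double freeEnergyKcalPerMol) {
        bool hasGC = false;
        for (const std::string& p : pairs)
            if (p == "GC" || p == "CG") hasGC = true;
        if (pairs.size() < 3 || !hasGC) return UNSTABLE;
        if (freeEnergyKcalPerMol <= -4.6) return STABLE;      // stable stem
        if (freeEnergyKcalPerMol <= -3.4) return SEMI_STABLE; // semi-stable stem
        return UNSTABLE;
    }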
Most such "malformed stems" mistakenly aligned to the consensus often contain bulges or internal loops and have free energies higher than the threshold -3.4 kcal/mol.

The RNA secondary structure model
In the present study, a secondary structure model is defined with a Stochastic Context Free Grammar (SCFG) [37]. Our model requires that there are at least three consecutive base pairs in every stem; the constraint is described with the following seven generic production rules:

(1) X → a
(2) X → aX
(3) X → aHb
(4) X → aHbX
(5) H → aHb
(6) H → aYb
(7) Y → aXb

where capital letters are non-terminal symbols that define substructures and lower-case letters are terminals, each being one of the four nucleotides A, C, G, and U. The starting non-terminal, X, can generate an unpaired nucleotide or a base pair with the first three rules. The fourth rule generates two parallel substructures. Non-terminal H is used to generate consecutive base pairs, with non-terminal Y generating the closing base pair. Essentially, the process of generating a stem needs to recursively call production rules with the left-hand-side non-terminals X, H and Y each at least once. This constraint guarantees that every stem has at least three consecutive base pairs, as required by our secondary structure model.

Probability parameter calculation
There are two sets of probability parameters associated with the induced SCFG. First, we used a simple scheme of probability settings for the unpaired bases and base pairs, with a uniform 0.25 probability for every base. The probability distribution {0.25, 0.25, 0.17, 0.17, 0.08, 0.08} is given to the six canonical base pairs G-C, C-G, A-U, U-A, G-U, and U-G; a probability of zero is given to all non-canonical base pairs. Alternatively, probabilities for unpaired bases and base pairs may be estimated from available RNA datasets with known secondary structures [34], as has been done in some of the previous work with SCFGs [38,39].

Second, we computed the probabilities for the production rules of the model as follows. To allow our method to be applicable to all structural ncRNAs, we did not estimate the probabilities based on a training data set. In fact, we believe that the probability parameter setting of an SCFG for the fold certainty measure should be different from that for the fold stability measure (i.e., folding). Based on the principle of maximum entropy, we developed the following approach to calculate the probabilities for the rules in our SCFG model.

Let $p_i$ be the probability associated with production rule i, for i = 1, 2, ..., 7, respectively. Since the summation of probabilities of rules with the same non-terminal on the left-hand side is required to be 1, we can establish the following equations:

$p_1 + p_2 + p_3 + p_4 = 1$,  $p_5 + p_6 = 1$,  $p_7 = 1$.

Let $\bar{q} = (q_1 q_2 \cdots q_6)^{1/6}$ be the geometric average of the six base pair probabilities $q_1, \ldots, q_6$. According to the principle of maximum entropy, given we have no prior knowledge of a probability distribution, the assumption of a distribution with the maximum entropy is the best choice, since it will take the smallest risk [40]. If we apply this principle to our problem, the probability contribution from a base pair should be close to the contribution from unpaired bases.
Rule probabilities can then be estimated by equating these contributions; together with the normalization constraints above, the values of the $p_i$ follow.

Computing base pairing Shannon entropy
Based on the new RNA secondary structure model, we can compute the fold certainty of any given RNA sequence, which is defined as the Shannon entropy measured on base pairings formed by the sequence over the specified secondary structure space Ω. Specifically, let the sequence be $x = x_1 x_2 \ldots x_n$ of n nucleotides. For indexes i < j, the probability $P_{i,j}(x)$ of base pairing between bases $x_i$ and $x_j$ is computed with

$P_{i,j}(x) = \sum_{s \in \Omega} p(s, x)\, \delta^{s}_{i,j}(x)$,   (2)

where p(s, x) is the probability of x being folded into the structure s in the space Ω and $\delta^{s}_{i,j}(x)$ is a binary-valued indicator for the occurrence of the base pair $(x_i, x_j)$ in structure s. The Shannon entropy of $P_{i,j}(x)$ is computed as [15,16]

$Q(x) = -\frac{1}{n} \sum_{i<j} P_{i,j}(x) \log P_{i,j}(x)$.   (3)

To compute the expected frequency of the base pairing, $P_{i,j}(x)$, with formula (2), we take advantage of the Inside and Outside algorithms developed for SCFGs [37]. Given any nonterminal symbol S in the grammar, the inside probability is defined as

$\alpha(S, i, j, x) = \Pr[S \Rightarrow^{*} x_i x_{i+1} \ldots x_j]$,

i.e., the total probability for the sequence segment $x_i x_{i+1} \ldots x_j$ to adopt alternative substructures specified by S. Assume $S_0$ to be the initial nonterminal symbol for the SCFG model. Then $\alpha(S_0, 1, n, x)$ is the total probability of the sequence x's folding under the model. The outside probability is defined as

$\beta(S, i, j, x) = \Pr[S_0 \Rightarrow^{*} x_1 \ldots x_{i-1}\, S\, x_{j+1} \ldots x_n]$,

i.e., the total probability for the whole sequence $x_1 \ldots x_n$ to adopt all alternative substructures that allow the sequence segment from position i to position j to adopt any substructure specified by S (see Figure 4 for illustration).

Figure 4. Illustration of the application of the generic production rule S → aRbT that produces a base pair between positions i and j for the query sequence x, provided that the start non-terminal ...

$P_{i,j}(x)$ then can be computed as the normalized probability of the base pair $(x_i, x_j)$ occurring in all valid alternative secondary structures of x, using

$\gamma(R, S, T, i, j, x) = \sum_{j < k \le n} \alpha(R, i+1, j-1, x) \times \beta(S, i, k, x) \times \alpha(T, j+1, k, x)$,

in which the variables S, R, T stand for non-terminals and the production S → aRbT represents rules (3)-(7), which involve base pair generations. For rules where T is empty, the summation and the term $\alpha(T, j+1, k, x)$ do not exist, and k is fixed as j. The efficiency of computing $P_{i,j}(x)$ mostly depends on computing the Inside and Outside probabilities, which can be accomplished with dynamic programming and has time complexity $O(mn^3)$ for a model of m nonterminals and rules and sequence length n.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
YW contributed to grammar design, algorithm development, program implementation, data acquisition, tests, result analysis, and manuscript drafting. AM contributed to algorithm design and program implementation. PS and TIS contributed to data acquisition and tests. YWL participated in model discussion. RLM contributed to the supervision, data acquisition, results analyses, biological insights, and manuscript drafting. LC conceived the overall model and algorithm and drafted the manuscript. All authors read and approved the manuscript.
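Once the base pairing probabilities $P_{i,j}(x)$ are available from the Inside-Outside computation, formula (3) is a direct summation. A minimal sketch of that final step, with the probability matrix assumed to be precomputed:

    #include <cmath>
    #include <vector>

    // Base pairing Shannon entropy, formula (3):
    //   Q(x) = -(1/n) * sum over i<j of P[i][j] * log(P[i][j])
    // P is the n x n matrix of base pairing probabilities P_{i,j}(x),
    // assumed already computed with the Inside and Outside algorithms.
    double pairingEntropy(const std::vector<std::vector<double>>& P) {
        const std::size_t n = P.size();
        double q = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = i + 1; j < n; ++j)
                if (P[i][j] > 0.0)           // 0 * log(0) taken as 0
                    q += P[i][j] * std::log(P[i][j]);
        return -q / n;
    }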
Acknowledgements
This research project was supported in part by NSF MRI 0821263, NIH BISTI R01GM072080-01A1 grant, NIH ARRA Administrative Supplement to NIH BISTI R01GM072080-01A1, and NSF IIS grant of award No: This article has been published as part of BMC Bioinformatics Volume 13 Supplement 5, 2012: Selected articles from the First IEEE International Conference on Computational Advances in Bio and medical Sciences (ICCABS 2011): Bioinformatics. The full contents of the supplement are available online at http://www.biomedcentral.com/bmcbioinformatics/supplements/13/S5.

References
• Eddy S. Non-coding RNA genes and the modern RNA world. Nature Reviews Genetics. 2001;2(12):919–929. doi: 10.1038/35103511.
• Schattner P. In: Noncoding RNAs: Molecular Biology and Molecular Medicine. Barciszewski, Erdmann, editor. Springer; 2003. Computational gene-finding for noncoding RNAs; pp. 33–48.
• Griffiths-Jones S. Annotating noncoding RNA genes. Annu Rev Genomics Hum Genet. 2007;8:279–298. doi: 10.1146/annurev.genom.8.080706.092419.
• Eddy S. Computational genomics of noncoding RNA genes. Cell. 2002;109(2):137–140. doi: 10.1016/S0092-8674(02)00727-4.
• Uzilov AV, Keegan JM, Mathews DH. Detection of non-coding RNAs on the basis of predicted secondary structure formation free energy change. BMC Bioinformatics. 2006;7:173. doi: 10.1186/1471-2105-7-173.
• Machado-Lima A, del Portillo HA, Durham AM. Computational methods in noncoding RNA research. Journal of Mathematical Biology. 2008;56(1-2):15–49.
• Washietl S, Hofacker IL, Stadler PF. Fast and reliable prediction of noncoding RNAs. Proc Natl Acad Sci USA. 2005;102(7):2454–2459. doi: 10.1073/pnas.0409169102.
• Pedersen J, Bejerano G, Siepel A, Rosenbloom K, Lindblad-Toh K, Lander E, Rogers J, Kent J, Miller W, Haussler D. Identification and classification of conserved RNA secondary structures in the human genome. PLoS Computational Biology. 2006;2(4):e33. doi: 10.1371/journal.pcbi.0020033.
• Turner D, Sugimoto N, Kierzek R, Dreiker S. Free energy increments for hydrogen bonds in nucleic acid base pairs. Journal of Am Chem Soc. 1987;109:3783–3785. doi: 10.1021/ja00246a047.
• Turner DH, Sugimoto N, Freier SM. RNA structure prediction. Annu Rev Biophys Biophys Chem. 1988;17:167–192. doi: 10.1146/annurev.bb.17.060188.001123.
• Zuker M, Steigler P. Optimal computer folding of larger RNA sequences using thermodynamics and auxiliary information. Nucleic Acids Res. 1981;9:133–148. doi: 10.1093/nar/9.1.133.
• Hofacker I, Fontana W, Stadler P, Bonhoeffer L, Tacker M, Schuster P. Fast folding and comparison of RNA sequence structures. Monatsh Chem. 1994;125:167–168. doi: 10.1007/BF00818163.
• Moulton V. Tracking down noncoding RNAs. Proc Natl Acad Sci USA. 2005;102(7):2269–2270. doi: 10.1073/pnas.0500129102.
• Bonnet E, Wuyts J, Rouzé P, Van de Peer Y. Evidence that microRNA precursors, unlike other non-coding RNAs, have lower folding free energies than random sequences. Bioinformatics. 2004;20(17):2911–2917. doi: 10.1093/bioinformatics/bth374.
• Mathews D. Using an RNA secondary structure partition function to determine confidence in base pairs predicted by free energy minimization. RNA. 2004;10(8):1178–1190. doi: 10.1261/rna.7650904.
• Huynen M, Gutell R, Konings D. Assessing the reliability of RNA folding using statistical mechanics. Journal of Molecular Biology. 1997;267:1104–1112. doi: 10.1006/jmbi.1997.0889.
• McCaskill J. The equilibrium partition function and base pair probabilities for RNA secondary structure. Biopolymers. 1990;29(6-7):1105–1119. doi: 10.1002/bip.360290621.
• Freyhult E, Gardner P, Moulton V. A comparison of RNA folding measures. BMC Bioinformatics. 2005;6(241).
• Ding Y, Lawrence C. A statistical sampling algorithm for RNA secondary structure prediction. Nucl Acids Res. 2003;31(24):7280–7301. doi: 10.1093/nar/gkg938.
• Walter A, Turner D, Kim J, Matthew H, Muller P, Mathews D, Zuker M. Coaxial stacking of helices enhances binding of oligoribonucleotides and improves predictions of RNA folding. Proc Natl Acad Sci USA. 1994;91(20):9218–9222. doi: 10.1073/pnas.91.20.9218.
• Tyagi R, Mathews D. Predicting helical coaxial stacking in RNA multibranch loops. RNA. 2007;13(7):939–951. doi: 10.1261/rna.305307.
• Dirks R, Pierce N. An algorithm for computing nucleic acid base-pairing probabilities including pseudoknots. J Comput Chem. 2004;25:1295–1304. doi: 10.1002/jcc.20057.
• Dirks R, Bois J, Schaeffer J, Winfree E, Pierce N. Thermodynamic analysis of interacting nucleic acid strands. SIAM Rev. 2007;49:65–88. doi: 10.1137/060651100.
• Kolmogorov A. Sulla determinazione empirica di una legge di distribuzione. G Inst Ital Attuari. 1933;4:83–91.
• Schultes E, Hraber P, LaBean T. Estimating the contributions of selection and self-organization in secondary structure. Journal of Molecular Evolution. 1999;49:76–83. doi: 10.1007/PL00006536.
• Smit S, Yarus M, Knight B. Natural selection is not required to explain universal compositional patterns in rRNA secondary structure categories. RNA. 2006;12:1–14. doi: 10.1261/rna.2183806.
• Masquida B, Westhof E. In: The RNA World. 3. Gesteland RF, Cech TR, Atkins JF, editor. Cold Spring Harbor Laboratory Press; 2006. A modular and hierarchical approach for all-atom RNA modeling; pp. 659–681.
• Tinoco I, Bustamante C. How RNA folds. Journal of Molecular Biology. 1999;293(2):271–281. doi: 10.1006/jmbi.1999.3001.
• Hofacker IL. Vienna RNA secondary structure server. Nucleic Acids Research. 2003;31(13):3429–3431. doi: 10.1093/nar/gkg599.
• Batey RT, Rambo RP, Doudna JA. Tertiary motifs in RNA structure and folding. Angew Chem Int Ed Engl. 1999;38:2326–2343. doi: 10.1002/(SICI)1521-3773(19990816)38:16<2326::AID-ANIE2326>3.0.CO;2-3.
• Lescoute A, Westhof E. Topology of three-way junctions in folded RNAs. RNA. 2006;12:83–93. doi: 10.1261/rna.2208106.
• Laing C, Schlick T. Analysis of four-way junctions in RNA structures. J Mol Biol. 2009;390(3):547–559. doi: 10.1016/j.jmb.2009.04.084.
• Thirumalai D. Native secondary structure formation in RNA may be a slave to tertiary folding. Proc Natl Acad Sci USA. 1998;95(20):11506–11508. doi: 10.1073/pnas.95.20.11506.
• Griffiths-Jones S, Moxon S, Marshall M, Khanna A, Eddy S, Bateman A.
Rfam: Annotating Non-Coding RNAs in Complete Genomes. Nucleic Acids Research. 2005;33:D121–D141.
• Nawrocki E, Kolbe D, Eddy S. Infernal 1.0: inference of RNA alignments. Bioinformatics. 2009;25(10):1335–1337. doi: 10.1093/bioinformatics/btp157.
• Huang Z, M M, Malmberg R, Cai L. RNAv: Non-coding RNA secondary structure variation search via graph homomorphism. Proceedings of Computational Systems Bioinformatics; Stanford. 2010. pp. 56–68.
• Durbin R, Eddy S, Krogh A, Mitchison GJ. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge, UK: Cambridge University Press; 1998.
• Klein RJ, Eddy SR. RSEARCH: finding homologs of single structured RNA sequences. BMC Bioinformatics. 2003;4:44. doi: 10.1186/1471-2105-4-44.
• Knudsen B, Hein J. Pfold: RNA secondary structure prediction using stochastic context-free grammars. Nucl Acids Res. 2003;31(13):3423–3428. doi: 10.1093/nar/gkg614.
• Jaynes ET. Prior probabilities. IEEE Transactions on Systems Science and Cybernetics. 1968;4(3):227–241.
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3358654/?tool=pubmed","timestamp":"2014-04-17T01:36:10Z","content_type":null,"content_length":"140560","record_id":"<urn:uuid:e2645f2f-6bac-443b-8c38-b22dc2ca2276>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Probability using tree tables and outcome tables.
October 13th 2008, 01:15 PM #1
Junior Member
Jun 2007

A test has four true/false questions.

A) Draw a tree diagram showing all possible true/false combinations for the answers to the test.
1) T → T/F; F → T/F
2) T → T/F; F → T/F
3) T → T/F; F → T/F
4) T → T/F; F → T/F
16 different combos.

b) Determine the probability that a student will get four correct.
1/2 chance of getting each correct, so 1/2 x 1/2 x 1/2 x 1/2 (4 questions) = 1/16.

c) Determine the probability that a student will get three questions correct by guessing.
This one I kept getting 1/8. I might be solving it the wrong way, but I followed the same pattern as part b.
{"url":"http://mathhelpforum.com/statistics/53492-finding-probability-using-tree-tables-outcome-tables.html","timestamp":"2014-04-16T08:35:04Z","content_type":null,"content_length":"34955","record_id":"<urn:uuid:0d36c508-823d-4ed6-9a75-6dd5a717dbdd>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: If cos theta = -5/13 and sin theta > 0, then tan theta is ? • one year ago • one year ago Best Response You've already chosen the best response. From sin theta> 0, we can draw out a graph to see where it might be. Sin theta is y/r where r is the radius. The r (radius) must be positive because distance is never negative. If sin theta>0, that means y cannot be negative or else sin theta will be negative. Therefore, our "triangle" is either in quadrant I or II|dw:1360627130554:dw| Best Response You've already chosen the best response. Then, we know that cos theta is x/r, where r is the radius. Then, we can determine the length and xcor of the outreaching point|dw:1360627258061:dw| Also, we can determine the remaining side by using Pythagorean theorum. Now, you input the values for tan theta to get your answer. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/511983c3e4b0e554778c8451","timestamp":"2014-04-19T02:27:56Z","content_type":null,"content_length":"36943","record_id":"<urn:uuid:ba3b98c5-ba0e-474b-b716-fc307598c046>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of multilinear form multilinear algebra , a multilinear form is a of the type $f: V^N to K$, where V is a vector space over the field K, that is separately linear in each its N variables. As the word "form" usually denotes a mapping from a vector space into its underlying field, the more general term "multilinear map" is used, when talking about a general map that is linear in all its For N = 2, i.e. only two variables, one calls f a bilinear form. An important type of multilinear forms are alternating multilinear forms which have the additional property of changing their sign under exchange of two arguments. When K has characteristic other than 2, this is equivalent to saying that i.e. the form vanishes if supplied the same argument twice. (The exceptional case of characteristic 2 requires more care.) Special cases of these are determinant forms and differential forms. See also
{"url":"http://www.reference.com/browse/multilinear+form","timestamp":"2014-04-24T03:49:06Z","content_type":null,"content_length":"80520","record_id":"<urn:uuid:9897ca2d-6adb-40c1-84e0-9e5760221083>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
lass Reference Main Page Class Hierarchy Alphabetical List Compound List File List Compound Members Related Pages sglUnProject Class Reference #include <sglUnProject.hpp> Inheritance diagram for sglUnProject:: List of all members. Public Methods sglUnProject (bool fixedPixelScale=false) virtual ~sglUnProject () void setFixedPixelScale (bool on) bool getFixedPixelScale () const virtual sglBound::IntersectResultEnum intersect (sglIntersectf &isector) const virtual sglBound::IntersectResultEnum intersect (sglIntersectd &isector) const virtual void pick (sglPickf &pick_state, unsigned int cull_flags) const virtual void pick (sglPickd &pick_state, unsigned int cull_flags) const virtual sglNode* clone (unsigned int mode) const virtual void printInfo (ostream &ostrm, const char *indent_string) const Protected Methods virtual void cull (sglCull< float > &trav_state, unsigned int cull_flags) const virtual void cull (sglCull< double > &trav_state, unsigned int cull_flags) const void copyTo (sglUnProject *dst, unsigned int mode) const Detailed Description This node modifies the model view matrix maintained in the various traversals such that the child of this node are transformed to appear on the near clip plane. For example, this node could be used to create target indicators on a game HUD (heads up display). There are two modes for this node: with and without fixed pixel scale: 1. with fixed pixel scale, the following coordinate system results for the children of this node: x - left to right, a unit length of one = one pixel y - bottom to top, a unit length of one = one pixel z - 0 (on the near clip plane) ... (negatives extending into the scene) 2. without fixed pixel scale: x - left to right, viewport width = 1.0 * aspect ratio \ y - bottom to top, viewport height = 1.0 \ z - 0 (on the near clip plane) ... (negatives extending into the scene) Note: Due to the way culling is performed (with pre-computed bounding volumes), this node may exhibit non-intuitive culling behavior. The dynamically generate bounding-volume for this node will reflect the size of its children in world-space, not the transformed screen space. Thus, by default, the bounding volume is likely to be much smaller than the drawn object. It is recommended that you set a static bound of appropriate size for this sub-tree. sglGeode *geode = new sglGeode; sglUnProject *unproject_node = new sglUnProject(true); // fixed pixel scale Implement pick and intersect functions. Definition at line 75 of file sglUnProject.hpp. Constructor & Destructor Documentation sglUnProject::sglUnProject ( bool fixedPixelScale = false ) fixedPixelScale boolean flag that sets whether fixed pixel scale mode is to be used or not (defaults to false). sglUnProject::~sglUnProject ( ) [virtual] Member Function Documentation void sglUnProject::setFixedPixelScale ( bool on ) [inline] Set whether geometry is scaled in units of pixels. on If true then fixed pixel scale is to be used. Definition at line 90 of file sglUnProject.hpp. bool sglUnProject::getFixedPixelScale ( ) const [inline] Query for fixed pixel scale mode. true if fixed pixel scale mode is being used; otherwise, false. Definition at line 95 of file sglUnProject.hpp. virtual sglBound::IntersectResultEnum sglUnProject::intersect ( sglIntersectf & isector ) const [virtual] The single precision intersection traversal function which returns the closest object (bounding volume and/or triangle) that intersects with the given intersect segment. 
Note: Due to the way culling is performed (with pre-computed bounding
volumes), this node may exhibit non-intuitive culling behavior. The
dynamically generated bounding volume for this node reflects the size of its
children in world space, not in the transformed screen space. Thus, by
default, the bounding volume is likely to be much smaller than the drawn
object. It is recommended that you set a static bound of appropriate size
for this sub-tree.

    sglGeode *geode = new sglGeode;
    sglUnProject *unproject_node = new sglUnProject(true); // fixed pixel scale
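The example breaks off at this point in the source; a hedged continuation is
sketched below. The addChild() call is an assumption about sglGroup's usual
scene-graph interface and is not documented on this page, and the exact call
for setting a static bound is left as a comment rather than invented:

    // Assumed sglGroup-style child management; verify against the SGL headers.
    unproject_node->addChild(geode);

    // Per the note above, give this subtree a static bound large enough to
    // cover the on-screen geometry.  The exact SGL call for setting a static
    // bound is not shown in this reference, so it is left as a placeholder.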
Todo:
    Implement pick and intersect functions.

Definition at line 75 of file sglUnProject.hpp.

Constructor & Destructor Documentation

sglUnProject::sglUnProject ( bool fixedPixelScale = false )

    Parameters:
        fixedPixelScale   Boolean flag that sets whether fixed pixel scale
                          mode is to be used (defaults to false).

sglUnProject::~sglUnProject ( ) [virtual]

Member Function Documentation

void sglUnProject::setFixedPixelScale ( bool on ) [inline]

    Set whether geometry is scaled in units of pixels.

    Parameters:
        on   If true, then fixed pixel scale is to be used.

    Definition at line 90 of file sglUnProject.hpp.

bool sglUnProject::getFixedPixelScale ( ) const [inline]

    Query for fixed pixel scale mode.

    Returns:
        true if fixed pixel scale mode is being used; otherwise, false.

    Definition at line 95 of file sglUnProject.hpp.

virtual sglBound::IntersectResultEnum sglUnProject::intersect ( sglIntersectf &isector ) const [virtual]

    The single precision intersection traversal function, which returns the
    closest object (bounding volume and/or triangle) that intersects with
    the given intersection segment.

    Parameters:
        isector   A reference to a single precision intersection state.

    Reimplemented from sglGroup.

virtual sglBound::IntersectResultEnum sglUnProject::intersect ( sglIntersectd &isector ) const [virtual]

    The double precision intersection traversal function, which returns the
    closest object (bounding volume and/or triangle) that intersects with
    the given intersection segment.

    Parameters:
        isector   A reference to a double precision intersection state.

    Reimplemented from sglGroup.

virtual void sglUnProject::pick ( sglPickf &pick_state, unsigned int cull_flags ) const [virtual]

    The single precision pick traversal function, which returns all objects
    that fall within the pick frustum.

    Parameters:
        pick_state   A reference to a single precision pick state.
        cull_flags   Bit flags that indicate which planes of the pick
                     frustum (polytope) still need to be tested to determine
                     whether bounding spheres and boxes lie within the
                     frustum. Cull-free picking can be accomplished by
                     setting this to 0.

    Reimplemented from sglGroup.

virtual void sglUnProject::pick ( sglPickd &pick_state, unsigned int cull_flags ) const [virtual]

    The double precision pick traversal function, which returns all objects
    that fall within the pick frustum.

    Parameters:
        pick_state   A reference to a double precision pick state.
        cull_flags   Bit flags that indicate which planes of the pick
                     frustum (polytope) still need to be tested to determine
                     whether bounding spheres and boxes lie within the
                     frustum. Cull-free picking can be accomplished by
                     setting this to 0.

    Reimplemented from sglGroup.

virtual sglNode* sglUnProject::clone ( unsigned int mode ) const [virtual]

    Make a copy of the scene graph rooted at this node.

    Parameters:
        mode   Bit masks that control the behaviour of the clone. These are
               OR-ed together from the mode values in
               sglObject::CloneModeEnum.

    Returns:
        Pointer to the root of the cloned scene graph.

    Reimplemented from sglGroup.

virtual void sglUnProject::printInfo ( ostream &ostrm, const char *indent_string ) const [virtual]

    Output the state of this node to the specified ostream.

    Parameters:
        ostrm           The ostream to which the output is sent.
        indent_string   The string (usually spaces) that is output at the
                        beginning of every line of output.

    Reimplemented from sglGroup.

virtual void sglUnProject::cull ( sglCull< float > &trav_state, unsigned int cull_flags ) const [protected, virtual]

    The single precision cull traversal function, which culls out subgraphs
    that do not lie in the view frustum (stored in the sglCull parameter).
    Subclasses must implement this function. The entry point for
    user-friendly culling is in the sglScene class.

    Parameters:
        trav_state   The single precision traversal state that collects all
                     the state and geometry information that passes the cull.
        cull_flags   Bit flags that indicate which planes of the view
                     frustum (polytope) still need to be tested to determine
                     whether bounding spheres and boxes lie within the
                     frustum. Cull-free drawing can be accomplished with
                     cull_flags = 0.

    Reimplemented from sglGroup.

virtual void sglUnProject::cull ( sglCull< double > &trav_state, unsigned int cull_flags ) const [protected, virtual]

    The double precision cull traversal function, which culls out subgraphs
    that do not lie in the view frustum (stored in the sglCull parameter).
    Subclasses must implement this function. The entry point for
    user-friendly culling is in the sglScene class.

    Parameters:
        trav_state   The double precision traversal state that collects all
                     the state and geometry information that passes the cull.
        cull_flags   Bit flags that indicate which planes of the view
                     frustum (polytope) still need to be tested to determine
                     whether bounding spheres and boxes lie within the
                     frustum. Cull-free drawing can be accomplished with
                     cull_flags = 0.

    Reimplemented from sglGroup.

The documentation for this class was generated from the following file:
sglUnProject.hpp
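A closing usage note on cull-free picking: the pick documentation above
states that passing 0 for cull_flags skips the frustum plane tests. A
minimal sketch, assuming only the pick() signature documented on this page
(how the pick state is constructed and configured is outside this
reference):

    // Minimal sketch: cull-free picking through an sglUnProject subtree.
    // Only the pick(sglPickf&, unsigned int) signature above is assumed;
    // configuring pick_state is not covered by this reference.
    void pickWithoutCulling(const sglUnProject &node, sglPickf &pick_state)
    {
        // cull_flags == 0 disables the frustum plane tests, so the whole
        // subtree is considered regardless of its (possibly too-small)
        // dynamic bounds -- useful given the culling caveat above.
        node.pick(pick_state, 0);
    }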
{"url":"http://sgl.sourceforge.net/doxygen/html/class_sglUnProject.html","timestamp":"2014-04-21T04:38:31Z","content_type":null,"content_length":"21652","record_id":"<urn:uuid:1412a5e7-9edb-437b-8b54-7e4b20cbfb21>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00388-ip-10-147-4-33.ec2.internal.warc.gz"}