Dataset columns: content (string, lengths 86 to 994k) and meta (string, lengths 288 to 619).
Theory Seminar

Friday April 25: Swastik Kopparty (Rutgers), WWH 1314. TBA
Friday April 18: Nir Ailon (Technion), WWH 805. Lower Bound for Fourier Transform in the Well-Conditioned Linear Model (Or: If You Want a Faster Fourier Transform You'd Need to Sacrifice Accuracy)
Friday April 11: Andris Ambainis (University of Latvia and IAS), WWH 1314. On the power of exact quantum algorithms
Friday April 4: CCI meeting, all day, Princeton. Arithmetic circuits
Thursday Jan. 30: Shubhangi Saraf (Rutgers), WWH 1314. Incidence geometry over the reals and locally correctable codes
{"url":"http://cs.nyu.edu/web/Calendar/colloquium/cs_theory_seminar/","timestamp":"2014-04-17T18:50:15Z","content_type":null,"content_length":"16304","record_id":"<urn:uuid:9c102cd7-688c-4624-9040-41e898f3f52b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with homework please 10-09-2003 #1 Registered User Join Date Oct 2003

Help with homework please
I need help with the problem below. Can somebody help me please? Thank you

The Problem
Your program will prompt the user for a single positive integer greater than 1. If the user does not enter a valid integer, then your program should continue to reprompt the user for a value until a valid integer is entered. Once the integer is read in, your program will print out the following information about the integer: 1) A list of each of the positive factors of the given integer. 2) The number of factors the given integer has. 3) The sum of the factors of the integer. 4) The product of the factors of the integer. The key idea which will help solve this problem is attempting to divide the given integer by each integer in between 1 and itself. If a particular division works "perfectly," then the value you have tried is a factor of the given integer. (Note: This description is intentionally vague so that you have to determine the specifics of the method of solution.) Name the file you create and turn in numbers.c. Although you may use other compilers, your program must compile and run using gcc. If you use your olympus account to work on this assignment, please follow the steps shown in class to create, compile, and test your program. Your program should include a header comment with the following information: your name, course number, section number, assignment title, and date. You should also include comments throughout your code, when appropriate. If you have any questions about this, please see a TA.

Chance for Extra Credit
There is a relationship between the integer entered by the user, the number of factors that integer has and the product of those factors. If you can determine this relationship, you will be eligible for some extra credit for this assignment. Please put your answer in a comment right after your header comment in your program. (Note: If you need to specify a power in the text of this comment, use the ^ symbol. For example, (x+y)^z stands for the quantity of x plus y raised to the z power. If you have any questions about the extra credit, please ask your instructor or TA.)

Input Specification
The value the user enters will be an integer. It will be your job to check to see whether this integer is greater than or equal to two or not. If it is, your program should proceed. If it is not, your program should reprompt the user for a value until one greater than or equal to two is entered. You are guaranteed that the integer entered by the user will be such that the product of the factors of the integer will NOT cause an overflow problem for the int data type. In particular, none of the values your program will have to print out will be more than 2^31 - 1.

Output Specification
Your output should follow the specification below:
Here is a list of the positive factors of X: A B C D
The number of positive factors of X is Y.
The sum of the positive factors of X is Z.
The product of the positive factors of X is W.
A single source file named numbers.c turned in through WebCT.

Output Samples
Here are three sample outputs of running the program. Note that this set of tests is NOT a comprehensive test. You should test your program with different data than is shown here based on the specifications given. The user input is given in italics while the program output is in bold.

Output Sample #1
Enter a positive integer greater than one. Sorry, that input is not valid.
Enter a positive integer greater than one. Sorry, that input is not valid. Enter a positive integer greater than one. Here is a list of the positive factors of 9: The number of positive factors of 9 is 3. The sum of the positive factors of 9 is 13. The product of the positive factors of 9 is 27. Output Sample #2 Enter a positive integer greater than one. Here is a list of the positive factors of 12: The number of positive factors of 12 is 6. The sum of the positive factors of 12 is 28. The product of the positive factors of 12 is 1728. Output Sample #3 Enter a positive integer greater than one. Here is a list of the positive factors of 100: The number of positive factors of 100 is 9. The sum of the positive factors of 100 is 217. The product of the positive factors of 100 is 1000000000. What have you tried so far? The information given in this message is known to work on FreeBSD 4.8 STABLE. *The above statement is false if I was too lazy to test it.* Please take note that I am not a technical writer, nor do I care to become one. If someone finds a mistake, gleaming error or typo, do me a favor...bite me. Don't assume that I'm ever entirely serious or entirely joking. to twm: I can ask the user to input a positive integer greater than 1 but this is as far as I could get. I have no idea how to set up the loops or anything like that. Are you two in the same class? Either way, you have to try on your own first, then post your code here when you get stuck and someone will guide you. When all else fails, read the instructions. If you're posting code, use code tags: [code] /* insert code here */ [/code] >I have no idea how to set up the loops or anything like that. There are relics of the past that can help you in this area. They're called 'Books' and they act like a repository of information. They seem to be categorized by topic (such as Programming, History, etc.), then further categorized by arbitrary names referred to as 'Titles' and then even further by the one that collected the information called an 'Author'. I like to collect these 'Books' and glean information from them like how to use loops and things like that. Though they're hard to get used to, you have to actually grasp a thin rectangle of paper (how primitive!) and manually move your hand to the other side, taking the sheet of paper with you so that you can view the other side and the page that the sheet was covering. It takes some effort to figure out how they work, but I rather like them. They're easier to transport than even the smallest laptop and most of them don't even weight as much. What won't they think of next, eh? Warning: 'Books' are hard to grep, and cut/paste doesn't work nearly as well as you would hope. The information given in this message is known to work on FreeBSD 4.8 STABLE. *The above statement is false if I was too lazy to test it.* Please take note that I am not a technical writer, nor do I care to become one. If someone finds a mistake, gleaming error or typo, do me a favor...bite me. Don't assume that I'm ever entirely serious or entirely joking. Two of the main loop styles for your level: do{} while() - do the code then check to see if condition is true while(){} - check the condition and if true do the code Originally posted by twm Warning: 'Books' are hard to grep, and cut/paste doesn't work nearly as well as you would hope. Hey, I know the feeling of frustration from not being able to grep a book... □ Using The GNU GDB Debugger: Tutorial with examples and exercises. 
Help with home work You guys are right by saying, " show us what you've done so far?" but at the same time, i'd implore you to stop insulting the intelligence of most of us, the newbies. We are here to learn from you, the better programmers. Kindly point out what need to be done. E.g "Well you need to try something first and then we'll see where things are going wrong" rather than " Go and read some Thanks to all of you for your wonderful support ad patience. Thank you dalguy. You really understood my message. I need some guidance in solving this project not somebody to do it for me. >>i'd implore you to stop insulting the intelligence of most of us Insulting isn't my intent, but it's difficult to answer questions that are along the lines of "please help me", with no detail of what help is actually required. >>Your program will prompt the user for a single positive integer greater than 1.<< printf() a message, then get a number from the user. Stick that section of code in a loop, and repeat until you've got a number you consider to be valid. When you've done that, use more loops, and basic maths operators to work out those sums. What more help are you looking for at this stage? When all else fails, read the instructions. If you're posting code, use code tags: [code] /* insert code here */ [/code] Help me please! Your intention is quite clear, you are very willing to help and you are ready to see that the person who need help make enough effort. Vleonte, i hope with the simple instructions given you'd accomplish the task, you might wanna look into any book that you are reading for related examples. Basically the algorithm is the most important, take time to form the algorithm for what you want to do, then transformig that into codes won't be too difficult. Good luck. Thank you dalguy2004 & hammer . I will try to put it together somehow and I will post the result so you can see what I did. Help pleeeeaaaaase The Problem Your program will prompt the user for a single positive integer greater than 1. If the user does not enter a valid integer, then your program should continue to reprompt the user for a value until a valid integer is entered. Once the integer is read in, your program will print out the following information about the integer: 1) A list of each of the positive factors of the given integer. 2) The number of factors the given integer has. 3) The sum of the factors of the integer. 4) The product of the factors of the integer. The key idea which will help solve this problem is attempting to divide the given integer by each integer in between 1 and itself. If a particular division works "perfectly," then the value you have tried is a factor of the given integer. (Note: This description is intentionally vague so that you have to determine the specifics of the method of solution.) Name the file you create and turn in numbers.c. Although you may use other compilers, your program must compile and run using gcc. If you use your olympus account to work on this assignment, please follow the steps shown in class to create, compile, and test your program. Your program should include a header comment with the following information: your name, course number, section number, assignment title, and date. You should also include comments throughout your code, when appropriate. Chance for Extra Credit There is a relationship between the integer entered by the user, the number of factors that integer has and the product of those factors. 
If you can determine this relationship, you will be eligible for some extra credit for this assignment. Please put your answer in a comment right after your header comment in your program. (Note: If you need to specify a power in the text of this comment, use the ^ symbol. For example, (x+y)^z stands for the quantity of x plus y raised to the z power. If you have any questions about the extra credit, please ask your instructor or TA.) Input Specification The value the user enters will be an integer. It will be your job to check to see whether this integer is greater than or equal to two or not. If it is, your program should proceed. If it is not, your program should reprompt the user for a value until one greater than or equal to two is entered. You are guaranteed that the integer entered by the user will be such that the product of the factors of the integer will NOT cause an overflow problem for the int data type. In particular, none of the values your program will have to print out will be more than 231-1. Output Specification Your output should follow the specification below: Here is a list of the positive factors of X: A B C D The number of positive factors of X is Y. The sum of the positive factors of X is Z. The product of the positive factors of X is W. A single source file named numbers.c turned in through WebCT. Output Samples Here are three sample outputs of running the program. Note that this set of tests is NOT a comprensive test. You should test your program with different data than is shown here based on the specifications given. The user input is given in italics while the program output is in bold. Output Sample #1 Enter a positive integer greater than one. Sorry, that input is not valid. Enter a positive integer greater than one. Sorry, that input is not valid. Enter a positive integer greater than one. Here is a list of the positive factors of 9: The number of positive factors of 9 is 3. The sum of the positive factors of 9 is 13. The product of the positive factors of 9 is 27. Output Sample #2 Enter a positive integer greater than one. Here is a list of the positive factors of 12: The number of positive factors of 12 is 6. The sum of the positive factors of 12 is 28. The product of the positive factors of 12 is 1728. Output Sample #3 Enter a positive integer greater than one. Here is a list of the positive factors of 100: The number of positive factors of 100 is 9. The sum of the positive factors of 100 is 217. The product of the positive factors of 100 is 1000000000. This is what I wrote so far. I do not know how to set up to count the number of factors. Can somebody look over my code and tell me if I have any mistakes please. The code is attached. Thank you First off you must remember indentation. It is so important. #include < stdio.h > int main () int N, a, f, sum, prod; printf (" Enter a positive integer greater than one:\n"); scanf ("%d", &N); if ( N < 2) printf (" Please enter a positive integer greater than one:\n"); for ( a >= 1 && a =< N; N % a == 0; a++) f= N % a; printf (" Here is the list of the positive factors of %d : %d ", N, f); for ( sum = 0; sum = sum + f, sum++ ) printf ("The sum of the positive factors of %d is %d .\n", N, sum); prod = 1; prod = prod * f; }while ( prod <= (2^31-1)); printf ( "The product of the positive factors of %d is %d.\n", N, prod); return 0; Ok now to tear it apart #include < stdio.h > my compiler yelled at me for the spaces. 
needs to be <stdio.h>

printf (" Enter a positive integer greater than one:\n"); scanf ("%d", &N); if ( N < 2) printf (" Please enter a positive integer greater than one:\n"); for ( a >= 1 && a =< N; N % a == 0; a++) f= N % a; printf (" Here is the list of the positive factors of %d : %d ", N, f);
Put the scanf and the if statement in a do while loop. That way you can force the user to input a valid number. Take the stuff in the else and put it after the loop since you will then know the number is valid.

for ( a >= 1 && a =< N; N % a == 0; a++)
This is horribly wrong. The first part of the for loop is for assigning a value; it is not a conditional statement.

for ( a >= 1 && a =< N; N % a == 0; a++) f= N % a;
The for loop is assigning the remainder of the integer division to f, but it does nothing with it, why?

for ( sum = 0; sum = sum + f, sum++ )
While this might be syntactically correct, it will give you an infinite loop. How can sum ever equal itself plus another number? Remember, it reevaluates the left and right value of the conditional statement on every iteration.

prod = 1; prod = prod * f; }while ( prod <= (2^31-1));
Why are you making the program do unnecessary math? Since (2^31)-1 is a constant, why not just put the value in yourself? (2147483647)

One thing I forgot to add: Try compiling your program and running it, lots of answers can be found that way.
Last edited by Thantos; 10-12-2003 at 11:13 AM.

But how do I make the program count? Can somebody outline exactly how to change it. Thank you
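For reference, here is a minimal sketch of the sort of program the thread is converging on, assuming the corrections suggested above (a do-while loop for the input validation, one loop from 1 to N testing N % a == 0, and separate count/sum/product accumulators). It is only an illustration, not the original poster's numbers.c:

[code]
#include <stdio.h>

int main(void)
{
    int n, a;
    int count = 0;   /* number of factors  */
    int sum = 0;     /* sum of factors     */
    int product = 1; /* product of factors */

    /* Reprompt until a valid integer greater than one is entered. */
    do {
        printf("Enter a positive integer greater than one.\n");
        if (scanf("%d", &n) != 1)
            return 1;  /* non-numeric input: give up */
        if (n < 2)
            printf("Sorry, that input is not valid.\n");
    } while (n < 2);

    printf("Here is a list of the positive factors of %d:\n", n);
    for (a = 1; a <= n; a++) {
        if (n % a == 0) {   /* a divides n "perfectly" */
            printf("%d ", a);
            count++;
            sum += a;
            product *= a;
        }
    }
    printf("\n");

    printf("The number of positive factors of %d is %d.\n", n, count);
    printf("The sum of the positive factors of %d is %d.\n", n, sum);
    printf("The product of the positive factors of %d is %d.\n", n, product);
    return 0;
}
[/code]

For the inputs used in the samples above (9, 12 and 100), this prints 3/13/27, 6/28/1728 and 9/217/1000000000 for the count, sum and product, respectively.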
{"url":"http://cboard.cprogramming.com/c-programming/45735-help-homework-please.html","timestamp":"2014-04-17T21:57:06Z","content_type":null,"content_length":"108811","record_id":"<urn:uuid:e3f1cf47-dd6b-4928-9b59-919624f02630>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Everett, WA Trigonometry Tutor Find an Everett, WA Trigonometry Tutor ...Let a professional Math Coach help! I have a Master's in Math Education and would love to demonstrate my ability to help you improve your skills and confidence. I currently teach at a College in Everett, but live in Oak Harbor. 19 Subjects: including trigonometry, calculus, differential equations, public speaking ...The subjects I tutor are algebra, geometry, trigonometry, and pre-calculus. My experience so far has mostly been tutoring family and friends. Students are involved in solving problems rather than just told or shown, for better understanding and retention of the material. 8 Subjects: including trigonometry, geometry, algebra 2, SAT math ...I'd studied Chinese for over ten years, before I moved to the U.S. I graduated from University of Washington with a B.A. degree in business. So I'm fluent in English as well. 13 Subjects: including trigonometry, geometry, Chinese, algebra 1 ...I have worked with students K-12 in the Seattle Public Schools, Chicago Public Schools, and Seattle area independent schools. I have also worked with students at the University level. I believe in the importance of differentiated learning or tailoring lessons for each particular student so that... 27 Subjects: including trigonometry, chemistry, reading, writing ...I have been tutoring high school and college students since 2008 in math (pre-Algebra through Calculus), and Science (chemistry, physics, biology and biochemistry). I enjoy tutoring as it allows me to work one on one with students and tailor a learning program to their learning style and needs. ... 17 Subjects: including trigonometry, chemistry, calculus, physics Related Everett, WA Tutors Everett, WA Accounting Tutors Everett, WA ACT Tutors Everett, WA Algebra Tutors Everett, WA Algebra 2 Tutors Everett, WA Calculus Tutors Everett, WA Geometry Tutors Everett, WA Math Tutors Everett, WA Prealgebra Tutors Everett, WA Precalculus Tutors Everett, WA SAT Tutors Everett, WA SAT Math Tutors Everett, WA Science Tutors Everett, WA Statistics Tutors Everett, WA Trigonometry Tutors
{"url":"http://www.purplemath.com/everett_wa_trigonometry_tutors.php","timestamp":"2014-04-21T05:12:56Z","content_type":null,"content_length":"23979","record_id":"<urn:uuid:c02cdd97-436d-4bfb-a3c0-7322bb763cac>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Time-varying Systems Next: Afterword Up: Future Directions Previous: Finite Arithmetic Testing Time-varying Systems Time-varying distributed systems have not been examined in any detail in the scattering simulation literature, though time-varying WDFs [177] and DWNs [166] have both been proposed, with a focus on vocal tract modelling. Though it is true that time-variations in material parameters generally render a system non-passive, we will show here how passive network representations may be developed for an important class of systems. Consider a system of the form which is a simple generalization of the D symmetric hyperbolic form (3.1) to the case where and depend on both the spatial coordinates and time ; is assumed to be positive definite for all values of these coordinates and smoothly-varying. The matrices are again assumed to be constant and symmetric, and is not required to have any particular structure. It is easy to show that in this form, it is not possible to arrive immediately at an energy condition such as (3.5). In order to put system (6.8) into more useful form, note that we can factor as = where is some left matrix square root of . We can then rewrite (6.8) as Now introduce a new dependent variable defined by , where and is assumed differentiable. Then, in terms of the new variable , we have with . Assuming that this source term is zero, we can then take the inner product of this expression with to get If is positive semi-definite, then integrating over gives the energy condition which is identical to the condition derived in §3.2, under the replacement of with . As long as and the time derivative of are bounded, it is always possible to make a choice of such that is positive semi-definite. For instance, we can choose , with where signifies ``minimum eigenvalue of.'' Here, we essentially have a passivity condition in an exponentially-weighted norm. Consider a generalization of the source-free (1+1)D transmission line system, where , , and , are all smooth positive functions of and . Introducing the variables where is a positive constant as well as the scaled time variable , and transformed coordinates as per (3.18), we can rewrite this system as Under the choices where now we have then and are non-negative, and the terms involving them can be interpreted as voltages across passive inductors, if power-normalized waves are employed (see §3.5.1 for more information on this definition of inductors). If we also choose then and are non-negative and can be interpreted as passive resistances. The resulting MDKC is shown in Figure 6.4; an MDWD network can be immediately obtained through the methods discussed in Chapter 3, or network manipulations and alternative integration rules may be employed to get a DWN. A balanced form (see §3.12) is also possible, and gives a less strict bound on , but the bound on remains unchanged. Figure 6.4: MDKC for time-varying (1+1)D transmission line system (6.9). The exponential weighting of the current variables can be viewed (formally) as a time-varying transformer coupling. A direct application of this MDKC to an important music synthesis problem would be the simulation of acoustic wave propagation in the vocal tract, under time-varying conditions. Such a system of PDEs is mentioned in [145], and has the exact form of (6.9), with , and under the replacements where is the air density, is the speed of sound, is the surface area of the tube, is the volume velocity and is the pressure variation. 
The condition (6.10) then reduces to If the time variation in is slow, then will be close to zero, and the exponential weighting will not be overly severe. The problems, for real-time synthesis applications, are that we will need to have an a priori estimate of the maximal time variation of the vocal tract area, and that we will apply an exponential weighting to the signal output from the scattering simulation. This exponential weighting may be viewed as a passive operation involving time-varying transformers (as shown in Figure 6.4).
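The mechanism at work in this construction can be summarized by a scalar caricature (an editorial illustration, not part of the original derivation). Suppose the unweighted energy can grow at most exponentially, $\frac{d}{dt}\|u\|^2 \le 2\epsilon_0 \|u\|^2$, for some constant $\epsilon_0 \ge 0$ fixed by a bound on the rate of time variation of the medium. Then the weighted variable $w = e^{-\epsilon_0 t} u$ satisfies
\[
\frac{d}{dt}\|w\|^2 = e^{-2\epsilon_0 t}\left(\frac{d}{dt}\|u\|^2 - 2\epsilon_0 \|u\|^2\right) \le 0,
\]
so passivity is recovered in the exponentially weighted norm, at the cost of an output that must be rescaled by $e^{\epsilon_0 t}$ afterwards. This is the trade-off noted above: the weighting requires an a priori bound on the time variation, and it acts on the simulation output.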
{"url":"https://ccrma.stanford.edu/~bilbao/master/node197.html","timestamp":"2014-04-17T14:33:09Z","content_type":null,"content_length":"26162","record_id":"<urn:uuid:32ff8105-d2b5-42c9-9157-4d5a414f307d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Showing two subspaces are a direct sum

July 14th 2011, 01:58 PM, #1 (Join Date: Nov 2009)
Consider the subspaces S1 = {(r,0,t): r,t ∈ R} and S2 = {(s,s,0): s ∈ R} of R^3. Prove that S1 ⊕ S2 = R^3. I am not sure how to show this at all; my teacher didn't give us any examples of this. Would really appreciate some help please.

July 14th 2011, 02:46 PM, #2
Re: Showing two subspaces are a direct sum
Prove that every vector $(a,b,c)\in\mathbb{R}^3$ can be written as a sum of a vector from $S_1$ and a vector from $S_2$ in a unique way. And it's clear that $S_1\cap S_2=\{0\}$. These two things will give you $S_1\oplus S_2 =\mathbb{R}^3$.
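For concreteness, here is one explicit decomposition (an editorial addition, not part of either post): given an arbitrary $(a,b,c)\in\mathbb{R}^3$, take $s=b$, $r=a-b$ and $t=c$, so that
$$(a,b,c) = (a-b,\,0,\,c) + (b,\,b,\,0),$$
with the first summand in $S_1$ and the second in $S_2$. The decomposition is unique because the second coordinate forces $s=b$, which then determines $r=a-b$ and $t=c$. Equivalently, any vector in $S_1\cap S_2$ has the form $(s,s,0)$ with middle coordinate $0$, so $s=0$ and $S_1\cap S_2=\{0\}$.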
{"url":"http://mathhelpforum.com/advanced-algebra/184587-showing-two-subspaces-direct-sum.html","timestamp":"2014-04-17T13:56:13Z","content_type":null,"content_length":"34030","record_id":"<urn:uuid:3072a659-7bcf-4244-88c6-9c31fbec548a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Delila Program: normal

Documentation for the normal program is below, with links to related programs in the "see also" section.

{version = 3.17; (* of normal.p 1994 sep 5}
(* begin module describe.normal *)

name
normal: generate normally distributed random numbers

synopsis
normal(normalp: in, data: out, output: out)

files
normalp: parameter file controlling the program. Two numbers, one per line:
seed: random seed to start the process
total: the number of numbers to generate
data: This is a set of numbers which should have a Gaussian distribution if the random number generator is a reasonable one. It will be N(0,1), a normal distribution with mean 0 and standard deviation 1.
genhisp: control file for the genhis histogram plotting program.
output: messages to the user

description
Test of a random number generator by creating a Gaussian distribution of numbers for plotting by genhis.

Method: if U is a member of the set [0..1] and Un and Un+1 are two members, then define
theta = Un * 2 * pi
r = sqrt(-2 * ln(Un+1))
When these polar coordinates are converted to Cartesian coordinates, one gets two independent normally distributed numbers, with mean 0 and standard deviation 1. To get other standard deviations multiply by a constant, and to get other means, add a constant. The proof was from a friend; I only have sketch notes at the moment. I'm sure it is available in standard texts. However, it works, as shown by the example.

example
seed := 0.5;
total := 10000;
The mean was 0.00 (to two places) and the standard deviation was 1.01.

see also
gentst.p, tstrnd.p, genhis.p

author
Tom Schneider
National Cancer Institute
Laboratory of Mathematical Biology
Frederick, Maryland

bugs
none known

(* end module describe.normal *)
{This manual page was created by makman 1.44}
{created by htmlink 1.55}
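The method described above is the polar (Box-Muller) transform. As an illustration only (the original normal.p is Pascal; this sketch is in C and uses the standard library rand() as a stand-in uniform generator), the core of the method looks like this:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Uniform deviate in (0,1]: the +1 offsets avoid log(0) below. */
static double uniform01(void)
{
    return (rand() + 1.0) / ((double)RAND_MAX + 1.0);
}

int main(void)
{
    const double pi = 3.14159265358979323846;
    int i;

    srand(1);                                /* plays the role of "seed"  */
    for (i = 0; i < 5; i++) {                /* "total" pairs to generate */
        double u1 = uniform01();
        double u2 = uniform01();
        double theta = 2.0 * pi * u1;        /* theta = Un * 2 * pi       */
        double r = sqrt(-2.0 * log(u2));     /* r = sqrt(-2 ln(Un+1))     */
        /* Polar to Cartesian: two independent N(0,1) deviates. */
        printf("%f %f\n", r * cos(theta), r * sin(theta));
    }
    return 0;
}

Each pair (r cos theta, r sin theta) is N(0,1); multiplying by a constant changes the standard deviation and adding a constant changes the mean, exactly as stated in the description.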
{"url":"http://schneider.ncifcrf.gov/delila/normal.html","timestamp":"2014-04-19T14:58:30Z","content_type":null,"content_length":"4154","record_id":"<urn:uuid:78567bad-bf5b-4ee5-8d24-659611e1d1e9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Loopy Solution Brings Infinite Relief

Twenty-three centuries after Alexander the Great slashed through the Gordian knot, mathematicians have finally made their first stab at figuring out how long it takes to untie one key class of knots. The unheroic answer is "not forever"--and even that comes with a huge string attached. Still, knot researchers are delighted.

As far back as the 1920s, mathematicians had figured out that untangling a knot and putting it in a standard, recognizable form requires only three types of motion--the so-called Reidemeister moves. But there is no easy way to tell how many Reidemeister moves it takes to untangle any loop of string and render it recognizable--even one as simple as a mere loop of string, called an unknot, that has been tangled up a bit to disguise it.

Now two knot theorists have unravelled the problem. Jeffrey Lagarias of AT&T Labs in Florham Park, New Jersey, and Joel Hass of the University of California, Davis, considered an unknot as the boundary of a crumpled and distorted disk, rather than as a twisted-up loop of string. They performed Reidemeister-equivalent operations on the disk and translated it back into knot form. Their conclusion: A finite number of Reidemeister moves will untangle any given twisted-up unknot, they report in the current issue of the Journal of the American Mathematical Society.

Not that the solution is especially practical--if the string in a knot crosses itself n times, they guarantee that you can untangle it in 2^(100,000,000,000n) Reidemeister moves. In other words, if every atom in the universe were doing a googol googol googol Reidemeister moves a second from the beginning of the universe to the end of the universe, that wouldn't even approach the number you need to guarantee unknotting a singly twisted rubber band. "The [bound] is, of course, enormous and hopeless," Lagarias admits. But he says that just putting a cap on it may inspire future researchers to whittle it down to a reasonable size. Indeed, the question of whether a limit even existed at all was "a very big problem," says Joan Birman, a knot theorist at Barnard College in New York City.

Related sites: Lagarias's home page; An introduction to knot theory.
{"url":"http://news.sciencemag.org/2001/02/loopy-solution-brings-infinite-relief?mobile_switch=mobile","timestamp":"2014-04-18T19:13:25Z","content_type":null,"content_length":"44145","record_id":"<urn:uuid:01f28b49-9918-41ee-b712-0631763197b9>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Greeley, CO Algebra 2 Tutor Find a Greeley, CO Algebra 2 Tutor I graduated from CSU engineering school in 1989 and have been practicing engineering for over 20 years. I am currently looking for appropriate employment on the front range which gives me the time and opportunity to tutor others. I also started, built and then sold my own electronics manufacturing company in the early 2000's and worked for Kodak in Windsor. 14 Subjects: including algebra 2, reading, algebra 1, grammar ...While there I was the supervisor and event coordinator for 10 campers 6 days of the week. This last summer I was the Program Coordinator for the Seed 2 Seed Summer Youth Program in Denver for ages 14-19. Over the course of 8 weeks we went on several field trips and learned about topics such as ... 41 Subjects: including algebra 2, reading, English, Spanish I graduated with an economics degree and loved every class and everything about economics. I am currently working towards my MBA. I also work full time as an accountant in the agriculture 15 Subjects: including algebra 2, accounting, algebra 1, finance ...I have over 100 hours of training with this agency. The training taught me study skills for all types of subjects, tests, and learning styles. I taught the Biology section of the MCAT for a national test prep agency for over 1 year. 26 Subjects: including algebra 2, reading, geometry, biology ...I have taught regular and honors level geometry for over ten years in public school classrooms and have been a tutor in geometry for eight years. I am happy to help with any topics in Geometry and Honors Geometry which include:properties of polygons and polyhedra, transformations, properties of ... 14 Subjects: including algebra 2, reading, geometry, algebra 1 Related Greeley, CO Tutors Greeley, CO Accounting Tutors Greeley, CO ACT Tutors Greeley, CO Algebra Tutors Greeley, CO Algebra 2 Tutors Greeley, CO Calculus Tutors Greeley, CO Geometry Tutors Greeley, CO Math Tutors Greeley, CO Prealgebra Tutors Greeley, CO Precalculus Tutors Greeley, CO SAT Tutors Greeley, CO SAT Math Tutors Greeley, CO Science Tutors Greeley, CO Statistics Tutors Greeley, CO Trigonometry Tutors Nearby Cities With algebra 2 Tutor Boulder, CO algebra 2 Tutors Brighton, CO algebra 2 Tutors Broomfield algebra 2 Tutors Evans, CO algebra 2 Tutors Fort Collins algebra 2 Tutors Garden City, CO algebra 2 Tutors Johnstown, CO algebra 2 Tutors Longmont algebra 2 Tutors Loveland, CO algebra 2 Tutors Milliken algebra 2 Tutors Northglenn, CO algebra 2 Tutors Severance, CO algebra 2 Tutors Thornton, CO algebra 2 Tutors Westminster, CO algebra 2 Tutors Windsor, CO algebra 2 Tutors
{"url":"http://www.purplemath.com/greeley_co_algebra_2_tutors.php","timestamp":"2014-04-18T05:32:47Z","content_type":null,"content_length":"23968","record_id":"<urn:uuid:d6e34891-aa5d-4d03-b9e8-9d78c2e021f7>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Maspeth Calculus Tutor Find a Maspeth Calculus Tutor ...I have a strong background in mathematics, including statistics and have applied this knowledge to the statistical study of economics: econometrics. I recently tutored a Brown undergraduate in the subject and helped him better understand the mathematical underpinnings of the material. I have al... 40 Subjects: including calculus, chemistry, English, reading ...I have taught Elementary Math, Algebra, Finite Mathematics, PreCalculus, Introductory and Intermediate Statistics both online and offline at other CUNY and non-CUNY colleges, including The City College of CUNY, La Guardia Community College of CUNY, St. Francis College and Berkeley College, overa... 21 Subjects: including calculus, physics, statistics, geometry ...What makes tutoring an ideal environment in which to learn is that I can help you see what ways of thinking you have that can be a great help in grasping new subjects. I have made a lifetime of learning. Having earned three master's degrees and working on a doctoral degree, all in different fie... 50 Subjects: including calculus, chemistry, physics, geometry ...The focus of his research was in the role logic plays in the foundations of modern physics. The two most relevant courses he took were one entitled "Logic, Computability and Undecidability" and another independent study with Dr. Achille Varzi in first order logic with an emphasis on recursive function theory. 4 Subjects: including calculus, differential equations, logic, linear algebra ...I participated in a chess team in school that became finalists for NYC in a competition against other schools. Prior to attending college for music technology, I was in a rock band for 4 years and was one of the primary co-composers on all the material. As of late I have started to compose slightly within the jazz idiom, and I have also composed for some short film clips. 33 Subjects: including calculus, physics, geometry, GRE
{"url":"http://www.purplemath.com/maspeth_calculus_tutors.php","timestamp":"2014-04-20T15:58:00Z","content_type":null,"content_length":"24017","record_id":"<urn:uuid:cd6c6b66-d438-414b-a7e7-d92a053a6e5b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Revista Brasileira de Economia, Print version ISSN 0034-7140, Rev. Bras. Econ. vol.56 no.1, Rio de Janeiro, Mar. 2002

Estimating and Interpreting a Common Stochastic Component for the Brazilian Industrial Production Index*
Paulo Picchetti** Celso Toledo***

Summary: 1. Introduction; 2. The data; 3. The common stochastic component model; 4. Interpretation of results and conclusions.

Key words: time series econometrics; Brazilian industrial production index; business fluctuations. JEL codes: C32 and E32.

This paper employs a state-space formulation to model a common stochastic component in four different series that constitute the aggregate index of industrial production in Brazil. This estimated common component is then interpreted as a measurement of the behavior of fundamentals in the Brazilian economy and compared to the actual aggregate index.

A partir de uma formulação em espaço de estado, modelamos um componente estocástico comum para quatro séries distintas que compõem o índice agregado de produção industrial calculado pelo IBGE para o Brasil. Esse componente estocástico comum estimado é então interpretado como uma medida do comportamento de fundamentos da economia brasileira, e comparado com o índice agregado efetivo.

1. Introduction

The Brazilian Institute for Geography and Statistics (IBGE) collects survey data from different sectors of nationwide industrial activity, in order to produce a monthly index. This series is considerably robust in methodology and measurement, providing a source for Brazilian national accounts. The aggregate index for industrial production is built from a weight structure applied to four basic components: capital goods, intermediate goods, durable consumer goods, and non-durable consumer goods. The idea of this paper is to apply Stock and Watson's (1991) methodology to extract a common stochastic component from these four factors, and then compare it with the aggregate index. The purpose of this exercise is to obtain a measurement of the underlying fundamentals of production in the industrial sector, especially considering the period of the sample, during which the Brazilian economy experienced large shocks from different sources. Starting from this analysis, we propose to address some issues concerning the Brazilian economy. The estimated stochastic common component can be interpreted as an indicator of the state of the economy in relation to the business cycle, and can potentially be forecasted through other models incorporating leading variables. As shown below, when converted to the lower frequency at which the GNP is available in Brazil during the whole sample, the estimated common stochastic component is highly correlated with the GNP (more so than the aggregate production index). Applied researchers estimating models in which aggregate product plays a fundamental role are usually constrained by Brazilian data for this series being currently available only on a quarterly frequency. Our estimated common factor for the industrial production index seems to capture relatively well the fundamentals driving aggregate production and can in principle be considered as a potential proxy available on a monthly frequency inside the sample.

2. The Data

The monthly industry survey carried out by IBGE provides a source for the national accounts.
Figure 1 depicts the monthly industrial output, which is a weighted average of 19 industrial genres of the manufacturing industry and the mineral extraction class, for the period between January 1975 and February 2000. The weights are based on a comprehensive industrial survey conducted by IBGE in 1991. In the first growth boom promoted by the Real Plan stabilization, before the liquidity shock of March 1995, capital goods and durable consumer goods were mainly responsible for the economic growth. While figure 1 depicts the behavior of the raw series during our sample, figure 2 shows the series seasonally adjusted and free of stochastic shocks, which makes the aforementioned structural breaks more evident. In what follows we comment very briefly on the stylized driving forces of those movements. The so-called "miracle" years were characterized by heavy public investments, the setting up of multinational companies and the high absorption of external savings. The shift in the international conditions in the beginning of the 1980s made the model unsustainable. The 1980s were marked by several unsuccessful attempts to stabilize the economy. The chronic inflation process produced some bad consequences: widespread indexation; income concentration; short horizons and uncertainty. The Real Plan has been considered a successful stabilization attempt so far. The reduction of inflation and the return of some consumer credit lines that were virtually nonexistent during the high inflation years stimulated the (repressed) demand for durable goods. The credit squeezes due to external problems and the saturation of the market reversed this process after two years. Our concern here is to try to get some "fundamental" measure out of such a non-homogeneous growth performance that has had to come along under different and somewhat unfavorable conditions: first because of inflation and then because of the current account deficit. Figure 3 shows the raw behavior of the four basic components of the aggregate index.

3. The Common Stochastic Component Model

The basic question goes as follows: is there a way to define this "fundamental" measure of economic activity that is less affected by sector-specific shocks and, thus, better reflects the underlying economic environment? Of course the answer to this question is far from trivial. Our strategy here is to characterize this essential feature of the economic activity as a common component extracted from the different series exhibited in figure 3. We assume that there is a mutual unobserved element sustaining the comovements in the several economic sectors that are individually affected in different ways by economic shocks. This "macroeconomic building block" would, in principle, better reproduce the real "state" of the economy than a simple fixed-base average of the various series. For instance, an economic expansion led by consumption, allowed by a brief (and unsustainable) stabilization, certainly would give, on the one hand, a good feeling to the analyst who only cares about the GDP numbers. It would, on the other hand, appear unpleasant for someone who can see the precarious foundations of such a performance. If the GDP is based on an average of, say, consumer and investment goods, the good performance of consumer goods will, given the investment decisions, push the GDP average upwards.
Instead, a measure of the common component out of these different sectors would "recognize" that the factors affecting consumption would not be affecting investment and, therefore, could not be considered "fundamental". Visual inspection of the four components of the aggregate index in figure 3 provides evidence of the presence of trends in all components but capital goods. In standard analysis under this context, unit-root tests should be able to determine whether these observed trends are stochastic or not. Under various lag specifications, ADF tests reject the hypothesis of unit roots in all series representing these components. These results are strengthened by the fact that these series were subject to large shocks during the sample period, resulting in structural breaks both in the intercepts and in the rates of growth, especially for the intermediate and durable goods components. Accordingly, to search for a cointegrating relation between these variables does not seem like the best option. Instead, we proceed by estimating a dynamic factor model in first differences of the four components, following Stock and Watson's (1991) methodology for constructing a coincident indicator for the US economy. The model is based on the relationship between each series and a common component: Index t points to each period in the sample, whereas index i here selects each of the four components of the aggregate industrial production index, whose differences are represented as DY. The common (unobserved) component of each of these series is represented in first difference by DC, and is related to each of the four series via a specific weight given by g[i], which will be estimated here along with the other parameters. In addition, the behavior of each of the four series is determined by an individual component given by D[i]+e[it], more of which below. Equation (1) will be directly interpreted as the measurement equation in the state-space formulation of the model. The stochastic terms of the individual components can be formulated so as to incorporate a dynamic effect from shocks as: where e[it] ~ NID(0,s[i]^2); i = 1,2,3,4. The transition equation for the state-space formulation can be represented as: where u[t] ~ NID(0,s[u]^2). In order for the parameters of (1)-(3) to be estimated, we set the transition equation as a Markovian process, so that we can apply the Kalman filter in conjunction with maximum likelihood to account for the unobserved components. In selecting a particular specification, we followed the Schwarz information criterion, which penalizes the likelihood for the inclusion of unnecessary parameters. The final specification chose p = q = 2 for the four equations, written in deviations from means. Therefore, in matrix form we can represent the measurement equation as: or simply Dy[t] = Ha[t]. The transition equation was estimated as: or simply a[t] = Ta[t-1] + v[t]. The parameters of the above state-space formulation were estimated by maximum likelihood, using the prediction-error decomposition proposed by Harvey (1989). The estimated coefficients and respective standard errors are reported in table 1. It can be seen, using the standard asymptotic distribution from maximum likelihood for these parameters, that the chosen specification produced very significant estimates for the parameters. The resulting estimated common component can be more easily seen when compared visually with the actual IBGE production index in figure 4.
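The displayed equations (1)-(3) and the stacked matrix forms did not survive the text extraction. As a point of reference only, the standard Stock and Watson single-index specification that the surrounding text describes, with AR(2) dynamics (p = q = 2), can be written as below; the exact notation and parameterization of the original paper may differ (in particular, the text writes the individual component as D[i]+e[it], while here the autoregressive idiosyncratic part and its innovation are denoted D_it and e_it):

$$\Delta Y_{it} = \gamma_i \,\Delta C_t + D_{it}, \qquad i = 1,\dots,4 \qquad (1)$$
$$D_{it} = \psi_{i1} D_{i,t-1} + \psi_{i2} D_{i,t-2} + e_{it}, \qquad e_{it} \sim NID(0,\sigma_i^2) \qquad (2)$$
$$\Delta C_t = \phi_1 \,\Delta C_{t-1} + \phi_2 \,\Delta C_{t-2} + u_t, \qquad u_t \sim NID(0,\sigma_u^2) \qquad (3)$$

with all variables written as deviations from their means. Stacking $\Delta C_t$, $\Delta C_{t-1}$ and the four $D_{it}$ (each with one lag) into a state vector $\alpha_t$ yields the measurement equation $\Delta y_t = H\alpha_t$ and the transition equation $\alpha_t = T\alpha_{t-1} + v_t$ quoted in the text, to which the Kalman filter and the prediction-error-decomposition likelihood are applied.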
It is also interesting to look graphically at the rate of variation of the above indexes (figure 5). As can be expected, at first glance the estimated stochastic common component is considerably smoother than the original index, fueling the idea of a measure related to the fundamentals of the economy, which should not be as directly affected by short-term shocks as is the computed industrial production index. In what follows, we pursue further interpretations and uses of this estimated common component.

4. Interpretation of Results and Conclusions

Both the volatile environment that has characterized the Brazilian economy since the beginning of the 1980s and the monetary policy after 1991 had at least three important and readily visible implications for growth. Figure 6 illustrates the Brazilian GDP performance along with its smoothed trend since 1970. First, note that the trend turns downwards in 1980 and starts to show a minor recovery with the Real Plan (1994), despite the monetary restraint illustrated before. Second, the behavior of the economy gets more irregular after the recessions of the beginning of the 1980s. Third, it is remarkable to see that, even with eight years of a 22% annual real interest rate, the economy grew, according to the data above, 2.9% per year during this period. What kind of growth is this? Observe in figures 4 and 5 two different illustrations of the official industrial production index and the common component extracted from the different segments. The following remarks emerge immediately: a) the two series are quite similar; b) the common component is less volatile, as it should be, since it is known that the ups and downs of the Brazilian economy originated from different shocks affecting the sectors in distinct ways; c) the common component is rarely above the industrial output, which means that, according to the interpretation that we give to C[t], the actual output series tends to inflate the real "state" of the economy. See this feature in figure 7, where the regression line linking the series is flatter than the 45-degree line. Let's provide an intuitive explanation for this. Starting with the first expansion movement in the period 1975-81, we note that, approximately until 1978, the "fundamentals" and the measured industrial output were moving side by side. When the international conditions started to change, first with the second oil shock (1979) and then with the change in the US monetary policy (1980), the internal decision was to keep the economy growing, despite the swing in the environment. At this point, the "fundamentals" began to grow less than the measured industrial production. Similar events occurred in 1986, with the Cruzado stabilization attempt, which was complemented by a populist wage policy, and also in 1994, after the Real Plan. On both occasions, the demand expansion was not followed by the "fundamentals". It is also interesting to note that the common component series has been very flat in the Real years, at a level equivalent to the one reached temporarily with the Cruzado Plan. A free interpretation of this could be the following. This is the activity level associated with an inflation-free economy. In order to go beyond that level, it would be necessary to "break" other restrictions. This activity level would be the one tolerable given the "external" restriction, or the capital inflows necessary to maintain price stability.
This restriction is linked, in the short run, with the country's export performance and, in the long run, with Brazil's ability to deal with its structural problems. It turns out that, given the opening exposition, this restriction was magnified in 1994-98 by the overvalued exchange rate. Hopefully, the successful devaluation carried out in January 1999 will help to open the road to sustained growth again. Table 2 shows the industrial output growth rates compared with the "fundamentals" performance, along with some comments. We now turn to the examination of some features of the common component series. First, it is interesting to note that the "fundamentals" series is more coherent with the GDP series than the industrial production is (figure 8). The GDP is a broader macroeconomic indicator than the industrial production. It includes the agricultural sector, civil construction and all the services. So, the incidence of specific shocks tends to be diluted or averaged out when compared to the single industrial production indicator and, in principle, the GDP is closer to the "fundamentals" than the industrial production. The all-important task of linking our measure of the common stochastic component of the industrial production index with the other fundamental driving variables of the economy requires both the formulation of a structural theoretical model and the application of filters similar to the one used here to other key variables, such as the price index and the interest rate. This is the subject of further research. The interesting and somewhat pretentious interpretations given above deserve, to some extent, a small dose of self-criticism. Our main concern is that the analysis above depends heavily on the assumption that the proposed model is capable of extracting from some arbitrarily selected macroeconomic series what would be the unobserved "fundamentals" of the Brazilian economy. Given the enormous implications of such a task, it would be necessary to confront this exercise with other ones to assess the robustness of the particular C[t] series obtained. In other words, it is desirable to continue the investigation in complementary directions. The first and most obvious is the discussion of whether the notion of the proposed "core" really exists. Second, provided the previous answer can (hopefully) be considered affirmative, it is necessary that this "core" of the economy be relatively independent of the chosen set of macroeconomic series. Third, and more subtly, there should be some desirable properties of this "fundamentals" measure of economic activity. For instance, it should be more appropriate for long-run forecasts, since it reflects the underlying economic fundamentals to which, in the long run, the measured indicators must supposedly converge. Finally, from what can be seen in figure 9, the comparison between the common stochastic component and each of the four individual components of the aggregate index seems to indicate that some very important structural changes occurred in the Brazilian economy during the sample period, affecting each of these components differently. It would be highly desirable to be able to represent these regime shifts as endogenously determined in a correctly specified statistical model. One possible step in this direction is the estimation of a state-space system similar to the one presented here, but subject to Markov-switching time-varying parameters, in the spirit of Kim (1994).

References
Harvey, A. C. Forecasting, structural time series models and the Kalman filter. Cambridge: Cambridge University Press, 1989.
Stock, J. H. & Watson, M. W. A probability model of the coincident economic indicators. In: Lahiri, K. & Moore, G. H. (eds.). Leading economic indicators: new approaches and forecasting records. Cambridge: Cambridge University Press, 1991. p. 63-90.
Kim, Chang-Jin. Dynamic linear models with Markov-switching. Journal of Econometrics, 60:1-22, 1994.

* This paper was received in Nov. 2000 and approved in Sept. 2001.
** Department of Economics/USP.
*** MB Associados and Doctoral Program in Economics at USP.
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0034-71402002000100004&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-17T02:05:15Z","content_type":null,"content_length":"41560","record_id":"<urn:uuid:c27dd4f8-5198-4e9c-b9ef-35856de5e7ae>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Manchester, WA Statistics Tutor Find a Manchester, WA Statistics Tutor ...As a member of NHS, I scheduled tutoring sessions before and after school in math and French to help fellow high school students who were struggling in those subjects. In Writers' Circle, I worked with high school students who wanted to learn more about creative writing. I also often tutored in... 35 Subjects: including statistics, English, reading, writing ...At the end of it there was a multiplication problem. I said 'take the first number. Draw that many circles. 17 Subjects: including statistics, calculus, geometry, algebra 2 ...I studied both Micro and Macro economics relatively recently. I got A's in both classes. In addition, I successfully helped other students in class to get a better understanding of the 20 Subjects: including statistics, reading, GED, algebra 1 ...Tutoring: I currently tutor 9 students on a regular basis in a variety of subjects including but not limited to: chemistry, physics, mathematics (algebra through calculus), and statistics. As a tutor my goal is to help the students achieve academic success through mutual goal-setting with my stu... 17 Subjects: including statistics, chemistry, calculus, physics ...I taught high school for one year waiting for my wife to graduate. We moved to Seattle where I was employed as a chemist for 8 years. After that I was employed as a database administrator. 8 Subjects: including statistics, chemistry, geometry, algebra 1 Related Manchester, WA Tutors Manchester, WA Accounting Tutors Manchester, WA ACT Tutors Manchester, WA Algebra Tutors Manchester, WA Algebra 2 Tutors Manchester, WA Calculus Tutors Manchester, WA Geometry Tutors Manchester, WA Math Tutors Manchester, WA Prealgebra Tutors Manchester, WA Precalculus Tutors Manchester, WA SAT Tutors Manchester, WA SAT Math Tutors Manchester, WA Science Tutors Manchester, WA Statistics Tutors Manchester, WA Trigonometry Tutors
{"url":"http://www.purplemath.com/manchester_wa_statistics_tutors.php","timestamp":"2014-04-21T14:50:33Z","content_type":null,"content_length":"23854","record_id":"<urn:uuid:54915bd0-504c-4626-8f5b-0aa86985000f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Surfaces Problem : What must be true of a surface in order for it to be a simple closed surface? The surface must divide space into three distinct regions: the surface itself, the interior of the surface, and the exterior of the surface. Problem : If a line is perpendicular to a plane, is that line perpendicular to every line in the plane? No. The line is only perpendicular to every line in the plane that contains the intersection point of the first line and the plane. Problem : If a polyhedron has 6 faces, how many edges does it have? There is not enough information to know this. The answer depends on how many sides each face has. Problem : Is a surface two-dimensional or three-dimensional? A surface itself is two-dimensional: it has no thickness. A surface can, however, span three dimensions. A polyhedron does not exist is a single plane--it spans three dimensions, but the surface itself is still two-dimensional. Problem : Is it possible for a surface to be contained in a single curve? Generally speaking, no. Surfaces are two-dimensional and curves are one-dimensional, so this is impossible. Consider the following situation, though: Curve One is a line segment of length 10. Curve Two is a line segment of length 3. Curve two moves only within the line that contains it. Thus, the surface that traces the motion of curve two is actually a line segment. Its length depends on how far Curve two moves. It is possible for the surface of the motion of Curve two to be contained in Curve one, whose length is greater than that of Curve two. So in this sense, yes, it is possible. But such a surface isn't really a surface. It is like a curve that is actually a point because the curve traces the motion of a motionless point. The situation is rather obscure and useless. Still, these ideas are interesting to ponder.
{"url":"http://www.sparknotes.com/math/geometry1/geometricsurfaces/problems.html","timestamp":"2014-04-17T01:08:55Z","content_type":null,"content_length":"51137","record_id":"<urn:uuid:e9bd29b4-2f12-41cb-862a-2eaa10cc554e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Single Variable Calculus : Early Transcendentals Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/bk-detail.asp?isbn=9780534355630","timestamp":"2014-04-20T21:33:38Z","content_type":null,"content_length":"52356","record_id":"<urn:uuid:cfb30a48-89fe-4d2c-9338-4dd9f32a6ed9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Matlab textbook necessary? I'll be doing a programming subject where the first half will be spent on Matlab then the second half on C. It has a prescribed textbook: which solely focuses on Matlab programming. With all the resources available on the internet, is it really necessary to have such a textbook? Then again it would be easier to have something condensed all in one book, rather than having to filter through web pages, tutorials, etc. Plus I could use it in later courses as a reference...but I'm still trying to justify the cost.
{"url":"http://www.physicsforums.com/showthread.php?p=4271229","timestamp":"2014-04-17T15:29:38Z","content_type":null,"content_length":"39023","record_id":"<urn:uuid:1fff72cd-5911-444e-8cd2-448c0c6a7766>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Weequahic, NJ Math Tutor
Find a Weequahic, NJ Math Tutor
...I also understand that students aiming to continue into more advanced coursework in physics and engineering need to master algebra at a level beyond imitating problem solutions. I can help students use detailed, color-coded writing to develop this kind of critical thinking. From working with bi...
13 Subjects: including algebra 1, algebra 2, calculus, differential equations
...I also have 1 year experience teaching high school chemistry. I have worked as a private tutor for almost 2 years, tutoring students in high school through college mathematics (pre-algebra to calculus) and science (physical science, chemistry, and physics). I graduated from MIT with a bachelor'...
8 Subjects: including algebra 1, algebra 2, chemistry, geometry
...More than any other SAT section, the SAT Writing Section is the most learnable and the easiest to improve. I have a Master's in Psychology, which provides me with a thorough understanding of many of the unique challenges of special needs students as well as ways to work with the strengths of the...
14 Subjects: including algebra 1, writing, statistics, SAT math
...My approach to mathematics tutoring is creative and problem-oriented. I focus on proofs, derivations and puzzles, and the natural progression from one math problem to another. My problem-solving skills were honed while training for the 40th International Mathematical Olympiad in Bucharest, Romania, at which I won a Bronze Medal.
9 Subjects: including discrete math, algebra 1, algebra 2, calculus
...Dickinson High School in Jersey City, NJ. I have also taught Algebra 1, Algebra 2, Geometry and SAT math. I have been a teacher for 8 years.
10 Subjects: including trigonometry, discrete math, logic, algebra 1
{"url":"http://www.purplemath.com/Weequahic_NJ_Math_tutors.php","timestamp":"2014-04-18T11:22:18Z","content_type":null,"content_length":"23867","record_id":"<urn:uuid:e1e2e4fb-d09e-485d-b4e4-ab0800fde376>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
y = x^2, vertex lies at (2,-3), opens in negative direction
April 26th 2009, 11:55 PM #1
Beginning with the function y = x^2, which of the following shows the changes you would make so that the vertex lies at (2, -3) and the parabola opens in a negative direction? My answer is y = -(x-2)^2 + 3. Am I correct?
April 27th 2009, 12:54 AM #2
For any quadratic of the form $y = a(x - h)^2 + k$, its vertex lies at $(h, k)$. In other words, the x co-ordinate of the vertex appears with the opposite sign inside the bracket, but the y co-ordinate does not change sign. So your answer should be $y = -(x - 2)^2 \color{red}{-} 3$.
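(A quick numerical check of the corrected form, added here and not part of the original thread; the helper name is made up for illustration.)

# Check that y = -(x - 2)^2 - 3 peaks at the vertex (2, -3), i.e. opens downward.
def f(x):
    return -(x - 2) ** 2 - 3

xs = [2 + 0.1 * k for k in range(-20, 21)]       # sample points around x = 2
ys = [f(x) for x in xs]
i_max = max(range(len(xs)), key=lambda i: ys[i])
print(xs[i_max], ys[i_max])                      # 2.0 -3.0: the maximum sits at the vertex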
{"url":"http://mathhelpforum.com/algebra/85922-y-x-2-vertex-lies-2-3-open-negative-direction.html","timestamp":"2014-04-18T18:29:57Z","content_type":null,"content_length":"35056","record_id":"<urn:uuid:1d61532f-08f8-41f2-b4b5-4d535b263cf9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
what is angle TAK? (pic in comments)
arc TA is 110
Is there an image you can attach?
no so i drew it up there...love ur pic btw
Thank you, love yours as well. Well, I just did something like this a couple weeks ago. This is an angle equation right?
Ok well. What are you trying to figure out exactly? There seems to be an insignificant amount of information.
oh , what is angle TAK
Hm :/ I don't exactly understand what that means. Is there another angle to compare it to? Or like some multiple choice questions? Because right now the angle TAK could be a circle. It could be equal to angle KAT even. Any information other than that?
no :/
i put T, A , and K in the picture, and the arc TA=110
So all the question is, is "What is angle TAK?" That's it?
Hm :/ it must have somethin to do with TA=110 but I don't exactly know what that means :/ I'm sorry
Oh, what is the degree of the angle TAK? if tht helps
ooohhhhhhhhhhhhh yes that does help let me see if i remember how to do that xD
XD ok, ty
Yeah I'm sorry :/ I cant remeber how to do that part xD
lol its ok thnx tho!
{"url":"http://openstudy.com/updates/4fc6b90ee4b022f1e12e4758","timestamp":"2014-04-17T12:44:54Z","content_type":null,"content_length":"72802","record_id":"<urn:uuid:f8c5da60-aefd-449b-8cd0-a24b5fb744a6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
September 18, 2013
Description and Instructions
The Powerball numbers analysis presents data both on a table and a chart. The chart output can be viewed only after selecting a specific data item and clicking on the "Show Bar Chart" button. Keep in mind that the analysis is performed on 9/18/2013 and may not be the latest depending on when you are looking at it. Analysis for any other draw day from January 2010 can be selected on the calendar at the top right of this page.

The data are initially arranged by the numbers, in descending order. You may click on one of the column headings (FA, RA, FC, etc.) to arrange them by the values of that column. For example, clicking on "FA" arranges all the data by the "Frequency for All Draws", that is, the number of times each Powerball number is drawn in all 2235 draws so far. This will place the least frequently drawn Powerball numbers at the top, and the most frequently drawn at the bottom. The 'brief summary' on the right lists the first 5 and the last 5 as 'Low End' and 'High End' values, respectively. Once you have selected a heading, you can also see the Powerball data on a bar chart by clicking on the button at the bottom of the table. Graphical representations give you a visual idea of how close or far apart the values are. The bar charts can be arranged by number or by value.

Now, let's describe the significance of the data. Examining the Powerball frequency for all draws will reveal that higher Powerball numbers are drawn significantly fewer times than lower numbers. This is because of subsequent matrix changes in Powerball, which started as a 5/45 + 1/45 game and is now (today, 9/18/2013) a 5/59 + 1/35 game. The "Relative Frequency for All Draws" (RA) takes care of this discrepancy by taking into consideration the life span of a number in addition to the number of times drawn. It is also an indication of how far a number is drawn below or above its expectations. A Relative Frequency value of 100 or close to 100 implies that the number is drawn the expected number of times. Thus, higher and lower values indicate the most frequently drawn and least frequently drawn Powerball numbers, respectively. On the bar chart we have shown in black those numbers within 5% of the expected draws; in green are those with higher values, and in red those with lower values. In addition, we have listed both the total frequency and the Relative Frequency for the current Powerball matrix (5/59 + 1/35), denoted by FC and RC. There are 492 draws of the current matrix so far. Note that for analysis purposes, we will consider the last two matrices (5/59 + 1/35) and (5/59 + 1/39) as the current matrix since the difference is only in the PB numbers.

The next three columns deal with skips. The "Skipping Now" (SN) values indicate how many draws a Powerball number has not been seen (until this draw). It may also be called "Last Seen" since the value is exactly how many draws have elapsed since a number's last appearance. The Powerball numbers drawn today have an SN value of 0, while the highest values belong to long-time-no-see Powerball numbers. In the bar chart, yellow indicates late Powerball numbers, red very late, and green nearly as expected. The "Skipped before its Last Draw" (SL) is the number of draws that a Powerball number skipped when it was drawn last. A value of 0 indicates that the number was drawn back-to-back on its last appearance.
If "Skipped before its Last Draw" (SL) and "Skipping Now" (SN) are both 0, it means that the number was drawn back-to-back on this latest draw (on 9/18/2013). "Record Skip" (SR) is just that: the longest stretch of draws a Powerball number has ever gone unseen. If both SR and SN have the same value, it means that the number is breaking its record right now (on 9/18/2013). Finally, "Back-to-back Draws" (BB) is how many times a Powerball number has been drawn back-to-back throughout its lifetime up to this draw.
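To make the column definitions above concrete, here is a small illustrative sketch, not taken from the site itself; the toy draw history and the variable names are invented for the example, and a real analysis would use the full archive of draws.

from collections import Counter

# One Powerball number per draw, oldest first (toy data for illustration only).
draws = [3, 17, 3, 25, 9, 17, 17, 3, 30, 9]

freq = Counter(draws)                        # FA: times each number was drawn
expected = len(draws) / 35                   # expected draws per number in a 1/35 matrix
rel_freq = {n: 100 * c / expected for n, c in freq.items()}   # RA: 100 means "as expected"

def skip_stats(number, history):
    """Return (skipping_now, skipped_before_last_draw, record_skip, back_to_back)."""
    gaps, gap, back_to_back, prev_hit = [], 0, 0, False
    for drawn in history:
        if drawn == number:
            gaps.append(gap)                 # draws skipped before this appearance
            if prev_hit:
                back_to_back += 1            # BB: drawn in consecutive draws
            gap, prev_hit = 0, True
        else:
            gap, prev_hit = gap + 1, False
    skipping_now = gap                                   # SN: draws since last seen
    skipped_before_last = gaps[-1] if gaps else None     # SL
    record_skip = max(gaps + [gap]) if gaps else gap     # SR
    return skipping_now, skipped_before_last, record_skip, back_to_back

# With only 10 toy draws every number that appears looks far "above expectation".
print(freq[17], round(rel_freq[17]), skip_stats(17, draws))   # 3 1050 (3, 0, 3, 1)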
{"url":"http://www.powerball.us/results/9-18-2013.php","timestamp":"2014-04-18T05:30:52Z","content_type":null,"content_length":"25926","record_id":"<urn:uuid:fa99851a-fede-4a4a-9ba8-9154e4532d3f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Dunn Loring Prealgebra Tutor I love tutoring! I have been tutoring since the 11th grade, and have tutored throughout college. I have worked with elementary school children on Math and English. 40 Subjects: including prealgebra, English, reading, chemistry ...I've worked with students to set goals, timelines, or even simple routines to aid them in achieving their study and homework goals. I also incorporate these skills into my own life - that's what has enabled me to complete research for my master's degree in engineering, and continues to aide me i... 17 Subjects: including prealgebra, reading, writing, geometry ...GRE & SAT: I have taken these exams several times in the last twelve years, and I have taken the updated computer version of the GRE. As for athletics, I am skilled in the following areas: Swimming: I was a competitive swimmer for three years and enjoy teaching stroke mechanics and customizing... 13 Subjects: including prealgebra, calculus, GRE, writing ...I studied abroad for 4 months in Madrid, Spain. While there I volunteered with Helenski Espana, a human rights group. We taught lessons to school-aged children (in Spanish) on human rights and their basic rights as a citizen of Spain. 17 Subjects: including prealgebra, Spanish, geometry, physics ...I deeply care about the success of each one of my students. I know what works and what does not for each student. I know how to motivate students by integrating each concept of a lesson into everyday life. 10 Subjects: including prealgebra, physics, algebra 2, geometry
{"url":"http://www.purplemath.com/Dunn_Loring_prealgebra_tutors.php","timestamp":"2014-04-20T02:17:14Z","content_type":null,"content_length":"23959","record_id":"<urn:uuid:2c6c2ab4-ad40-4da2-9867-6e1544ba6343>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Norwalk, CT Algebra 2 Tutor
Find a Norwalk, CT Algebra 2 Tutor
I offer tutoring for a wide range of subjects, SAT, ACT, and AP prep, and college application counseling throughout Connecticut! I was accepted to 12 universities, including Yale University, the University of Pennsylvania, New York University, Fordham University, Boston College, Boston University, ...
44 Subjects: including algebra 2, English, reading, geometry
...I'm detail-oriented and great at proofreading papers. I also love to study languages and to travel! I spent 3 years studying Chinese (through Level 4 at MIT), earning top grades. My teachers were all native speakers.
36 Subjects: including algebra 2, reading, English, Spanish
...I am not a clock watcher. My main focus is guiding students through the process of developing their mathematical abilities by helping them experience a pattern of successful learning that builds self-confidence. I use a variety of instructional techniques that provides students with the opportuni...
16 Subjects: including algebra 2, calculus, geometry, statistics
...I am able to make connections where needed between algebra, geometry, and trigonometry. As a classroom teacher, I have seen how important literacy is in all subjects. I have experience separating decoding issues from struggles with vocabulary and comprehension.
10 Subjects: including algebra 2, reading, Spanish, calculus
...For the past ten years I have devoted myself to my love of mathematics and education by teaching full-time and tutoring part-time in the local area. I have taught all high school math subjects and levels from basic pre-algebra up through and including AP Calculus BC. I also have extensive tutor...
21 Subjects: including algebra 2, calculus, geometry, statistics
{"url":"http://www.purplemath.com/Norwalk_CT_algebra_2_tutors.php","timestamp":"2014-04-19T23:31:51Z","content_type":null,"content_length":"24030","record_id":"<urn:uuid:ed07d4fd-8526-44b2-8a21-6851c78c77ff>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Gases #1
October 4th 2009, 10:47 AM
Can someone please help me with this problem? Thanks a lot! One mole of nitrogen and one mole of neon are combined in a closed container at STP. How big is the container? I did PV=nRT and V = nRT/P, so V = ?(0.08206)(273)/1 atm... so I need to find n in moles but I don't know how. Thanks again!
October 4th 2009, 10:59 AM
What you need to do is multiply by n, which is 2. Assuming both are ideal gases is a valid assumption, so pV=nRT is the way to go. But because the gases are at STP we can use the fact that one mole of any gas at STP occupies 22.4 L. (You can derive this from the ideal gas law.) As there are two moles of gas, the container has a volume of 44.8 L.
This bit is unnecessary for the question at hand, but it shows how we arrive at 22.4 L:
$P_0 = 100\,kPa = 10^5\,Pa$
$T_0 = 273.15\,K$
$R = 8.314\,J\,mol^{-1}\,K^{-1}$
$n = 1$
$V = \frac{nRT_0}{P_0} = \frac{(1)(8.314)(273.15)}{10^5} = 0.0224\, m^3 = 22.4\,L$
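(A quick numerical restatement of the reply above, added here and not part of the original thread. It uses 1 atm for STP, which is the convention behind the classic 22.4 L/mol figure; the 100 kPa values quoted in the reply actually give roughly 22.7 L/mol.)

# Ideal gas law V = nRT/P for 1 mol N2 + 1 mol Ne at STP.
R = 8.314          # J / (mol K)
T = 273.15         # K
P = 101325.0       # Pa (1 atm)
n = 2.0            # total moles of gas (1 mol N2 + 1 mol Ne)

V = n * R * T / P                          # volume in m^3
print(round(V, 4), round(V * 1000, 1))     # 0.0448 m^3  ->  44.8 L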
{"url":"http://mathhelpforum.com/math-topics/106022-gases-1-a-print.html","timestamp":"2014-04-19T05:12:52Z","content_type":null,"content_length":"5947","record_id":"<urn:uuid:4ec321d7-249e-4673-98e9-090395551d04>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
of Computable Knowledge

20,000 BC: Counting abstract objects. The invention of arithmetic provides a way to abstractly compute numbers of objects.
3500 BC: Written Language. A systematic way to record knowledge: a central event in the emergence of civilization, written language provides a systematic way to record and transmit knowledge.
Representing events by pictures: The Lascaux cave paintings record the first known narrative stories.
3000 BC: Registering Land Ownership. Babylonian stone boundary markers begin to include inscriptions that record ownership of land.
2500 BC
2500 BC: Sumerian Calendar. Organizing time: the first known calendar system is established, rounding the lunar month to 30 days to create a 360-day year.
1790 BC: Code of Hammurabi. Codifying civil laws: Hammurabi writes down 281 laws prescribing civil behavior in the kingdom of Babylon.
Symbols for destiny: The 64 possible hexagrams of the Chinese I Ching are taken to enumerate possible features of life and destiny.
1700 BC: Babylonian Mathematical Tables. Babylonians make tables of multiplication, reciprocals, squares, cubes, and square and cube roots.
2150 BC: Akkadian Measures. Making a standard for measurement: the Akkadian Empire adopts a single unified standard for measuring volume, based on the royal gur-cube.
1250 BC: Library at Thebes. A building to store knowledge: the Library at Thebes is the first known effort to gather and make many sources of knowledge available in one place.
1800 BC: Babylonian Census. Taking stock of a kingdom: the Babylonian census begins the practice of systematically counting and recording people and commodities for taxation and other purposes.
Recording geographic knowledge: The Turin Papyrus is the first known topographic map.
1000 BC
600 BC: Lydian Coinage. Coins to represent value: Lydia (in modern Turkey) introduces gold and silver coins to represent monetary value.
325 BC: Library of Alexandria. Collecting the world's knowledge: the Library of Alexandria collects perhaps half a million scrolls with works covering all areas of knowledge.
500 BC: Babylonian Astronomy. Using arithmetic to predict the heavens: the Babylonians introduce mathematical calculation as a way to track the behavior of planets and a few other systems in nature.
Organizing mathematical truth: Euclid writes his Elements, systematically presenting theorems of geometry and arithmetic.
Numbers are the key to nature: The Pythagoreans promote the idea that numbers can be used to systematically understand and compute aspects of nature, music, and the world.
Computing as a basis for technology: Archimedes uses mathematics to create and understand technological devices and possibly builds gear-based, mechanical astronomical calculators.
A system of diseases: Hippocrates identifies definite classes of human diseases.
Labeling the Earth: Eratosthenes creates the system of longitude and latitude and uses it to create a scaled map of the known world.
Finding the rules of human language: Panini creates a grammar for Sanskrit, forming the basis for systematic linguistics.
100 BC: Antikythera Mechanism. A machine for computing: a gear-based device that survives today is created for calendrical computation.
387 BC: Plato's Academy. Teaching knowledge systematically: Plato founds his "Academy", which operates in Athens for nine centuries.
Standardizing the months: Julius Caesar institutes the Julian calendar, establishing the lengths of the twelve months.
Classifying the world and introducing logic: Aristotle tries to systematize knowledge, first, by classifying objects in the world, and second, by inventing the idea of logic as a way to formalize human reasoning.
{"url":"http://www.wolframalpha.com/docs/timeline/","timestamp":"2014-04-17T09:42:09Z","content_type":null,"content_length":"44121","record_id":"<urn:uuid:95bf646f-bc27-495c-ac66-4d26ad25bed6>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
increpare games
Structuralism, The Canonical Formula, and Computer Games
Thursday, October 30, 2008
A copied/pasted selection of a thread from tigsource. There is a certain amount of extra discussion there, but all the actual analyses are copied below. It's a little messy (especially the opening description, which might anger some specialists immensely, should they be unfortunate enough to stumble across this page), for which I apologise.
After having played about a little bit today with things relating to structuralism, I thought it might be fun to try to apply Levi-Strauss's canonical formula of mythology to some games. (the closest I could find to a discussion of this nature on the web was this rather elementary discussion on gamedev.net). The canonical formula looks like f_x(a) / f_y(b) -> f_x(b) / f_(a^-1)(y). It's supposed to depict some sort of transformation, with the fraction on the left representing some sort of relationship between the numerator and the denominator, the arrow in the middle representing the transformation, and the fraction on the right a relationship between the permuted contents of its numerator and denominator. Basically, you can fill it out however you want. a^-1 is supposed to be some sort of opposite of a. Also, generally either a and b represent characters, and x and y represent some properties, or vice versa. [DEL: And generally f doesn't mean anything.:DEL] (I take that back. f indicates that there's a functional relationship between its two arguments. 'functional relationship' means that one of its arguments is a property of the other, or is an action performed on/by the other. Basically by 'meaningless' I mean 'not a variable'). It all is a bit arbitrary, but there's certainly a knack to describing things using it.
Actual analyses follow below the fold
So this means I get to relax about it now. Chill, you know? And if anybody should wish for any explication I would be Only Too Happy to provide it. Also tagged music, music theory | Comments (0) Serial: Chapter 3: C – Columbus Saturday, February 24, 2007 Isaac, dearest, I was leaving through the recently published memoirs of Christopher Columbus (curiously written in the third person; surely such an odd style cannot have been introduced by the translator, whatever other factual errors he makes), last year, and I felt you might be amused with the following story; at least insofar as it contrasts with your recent proof of the finiteness of the the geometrical space in this universe. Ah, no point faffing about; I will just quote the passage from near the end of the book you now Columbus sat in his study, overlooking Cádiz, with a Bible on his desk. He stared across to the horizon, and thought. “How terrible it must be,t o be a highly situated savage, to see the world about you to the horizon and think there nothing more, to feel one’s self trapped on so small a disk of earth”. Though, then he fancied that maybe a normal man might be able to live feeling so encaged, but not an explorer such as he. “God has set forth, he had been told, a bigger cage for us in which to live, many orders of magnitude larger than the disk to our horizon, yet finite nonetheless. More than enough to go on exploring for generations”, he reassured himself. And yet, he felt he could not accept this: How could God imbue the greatest of his men with the spirit of exploration, if there might be some age, maybe in a few centuries, when there will be no to explore, to discover, to find, on this planet. He would have verify this himself, to see the edge of the world with his own eyes, or alternatively spend the rest of his life travelling farther and farther out, boundless. With this monumental task in mind, he petitioned the king for enough gold to sail for five years continuously west and, after five years petitioning, he was granted this sum. One year of eager preparations passed before he was ready to set sail to see if the Atlantic actually had an edge. Six months at sea and land ahoy! What joys he experienced, what vindication: some islands, then a whole new expanse of land – the Vatican said that there was nothing west of Europe: they were wrong, he now knew! This meant for him one of two things: We live on a much, much larger world than the Vatican claim, or alternatively, and his personal theological views led him to regard this as being much more likely: the world is infinite in extent, that one can travel forever in any direction without reaching an edge – this would be the only possible world he could see god having created – otherwise what a cruel fate would await the noble pursuit of exploration! Okay, so here there’s a big chapter about America, which I’m not going to quote for you in full, but essentially what happens is that I decide that I must move south along the coast, because, even if it does go on forever in both directions, the only alternative is to give up and go back and there’s no way I’m gonna do that. So we travel down around, and west for a few more months, until hitting land again. Bang. And they anchor. And, lo, what’s this? A VILLAGE. Sweet. 
Then: Shortly after entering the village, he found that he himself did find the people look quite familiar, some words of their language, their customs, from his travels east of India; had these people, from the other side of Europe, travelled here first, made these vast journeys? The very notion seemed impossible; the Indians and Chinese hated sailing, he knew. And yet, as his crew went further inland, he began to find more and more similarities until there was no doubt left: they were in China. This world repeated itself then, he realized – it was not Rome’s disk, nor was it an explorer’s world – “What a cruel trick to play”, he thought “To think, if you had a powerful enough telescope you might catch sight of yourself, looking away, that the world was, despite all of it’s illusions, really finite in extent; that one might still sail west forever, but never come across anything new, that the world was finite – there were no barriers to travel, but, nonetheless, there was only a finite amount to discover in it – how cruel a discovery for an explorer to have make! Of course, *everyone* knows that Galileo believed the earth to be shaped like an aubergine. So this casts everything, really, in to the most terrible of doubts, don’t you think? But then again, it is a charming story, neh? Anyway, I hope to see you in the new year. My favourite calculation: Combination tones Wednesday, February 21, 2007 Hmm, so in the interest of subscribing to this mathematical carnival what’s doing the rounds now, I’m writing something specifically mathematical in nature, my favourite elementary derivation. I’m trying to make it understandable and brief – if anyone is having trouble following, I can help in the comments. Okay, so lets say we’re hearing a signal given by f(t); let’s assume it’s periodic. Now, to monitor what we hear, we have to view this as a sum of sine waves f=a*sin(t)+b*sin(2t)+c*sin(3t)+ … so a,b,c represent the frequencies we hear of frequency 1,2,3, etc, and the bigger the coefficient the bigger the amplitude. Lets look at two really simple sounds, pure sine waves of different frequencies sin(at), and sin(bt). SO, they each have exactly one frequency present. Now, so what if there were some non-linearity of our hearing system. That is what if, when someone plays f=sin(at)+sin(bt), we don’t actually hear this, but rather something more complicated. So, the simplest way of such a thing being non-trivial is to introduce a quadratic non-linearity (ignoring coefficients…we’re thinking that because “all” functions can be taylor expanded as f(t)+f(t) ^2/2+f(t)^3/3!+…, the next best thing to having just f(t) is to having the first two terms). So, anyway, now when someone plays a signal f(t), we don’t hear f(t), but rather f(t)+f(t)^2 So, what results from this? Well, we have to break down f(t)+f(t)^2 to being a sum of sine waves first. f(t)+f(t)^2=sin(at)+sin(bt)+ (sin(at)+sin(bt))^2 looking up trig tables and decomposing further we get (ignoring coefficients) Ah, so look at this. We might be led to deduce from this that, if non-linearities were present, when someone plays two frequencies at the time, we will also perceive sounds playing at double either frequency, their sum, and their difference. Now, the multiples of a and b can be reasonably expected to be masked by overtones (though it is possible to bring them out), but the difference (and, to a lesser extent, the sum), on the other hand, can be controlled very easily, just by bringing the two source sounds closer together or further apart. 
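(A quick numerical check of the derivation above, added here and not part of the original post: pushing two pure tones through the quadratic non-linearity and taking a Fourier transform shows extra components at 2a, 2b, a+b and a-b, exactly as predicted. The sample rate and the choice of 440 Hz and 550 Hz are arbitrary.)

import numpy as np

fs = 44100                        # sample rate (Hz)
t = np.arange(fs) / fs            # exactly one second of samples
a, b = 440.0, 550.0               # the two pure tones being played

played = np.sin(2 * np.pi * a * t) + np.sin(2 * np.pi * b * t)
heard = played + played ** 2      # quadratic non-linearity, as in the derivation

spectrum = np.abs(np.fft.rfft(heard))
freqs = np.fft.rfftfreq(len(heard), 1.0 / fs)

# Frequencies carrying significant energy: besides a and b we get a DC term,
# the doubled tones 2a and 2b, the sum a+b and the difference a-b.
present = freqs[spectrum > 0.05 * spectrum.max()]
print(sorted(int(f) for f in present))   # [0, 110, 440, 550, 880, 990, 1100]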
And, indeed, we can quite easily hear them.* Which is darnedly neat. I like this calculation so very, very much because it’s surprisingly fruitful; whenever I feel like I’m loosing my faith in the power of Taylor expansions, I go through this derivation again. The phenomenon was first noted by Tartini, the derivation was by Helmholtz, and these extra tones are sometimes called Tartini tones or, more commonly combination tones. What’s also interesting, by the by, is that these effects are actually quadratic: I remember, the first time I heard an example (they can be found on the interweb quite easily, here for instance), I was listening to them with earphones, and the strength of the combination tone totally overwhelmed the two base ones. But, when I played it on speakers, it was much weaker relative to these tones, and if I went too far away, I couldn’t hear it at all. *okay; this is a lie, actually the third order terms are the easiest to hear, corresponding to things like 2a-b…and there’s an explanation for this (check out Dave Benson’s notes if you want to know Also tagged music, music theory | Comments (0) Serial: Chapter 2:C/A – Blackness Tuesday, February 20, 2007 I was alone, and in darkness. It stretched out forever, and yet it was not a vertigo I felt at the feeling of this expanse, but rather found myself suffocating at the immediacy and intimacy of the Serial: Chapter 2: A – Riemannian Sunday, February 18, 2007 In a dream again, I found myself in a labyrinth. This was not a labyrinth of walls, though, it was a labyrinth of space and, instead of feeling claustrophobic confinement, I felt the oppression of a space too rich, too rich and dense, to meaningfully comprehend. This was a desert, with occasional rocks, trees, allowing me some orientation with respect to my world and, yet, I couldn’t shake off a feeling of general disorientation. I set my sight on a particular tree, which was standing next to a sharply jutting piece of sandstone several paces away. I made my way towards it, or rather, tried to – I found after one or two paces these to be on the periphery of my vision, though a little closer – I turned to face them again, and stepped forward: walking towards this tree was like keeping one’s balance on a bicycle – it took constant corrections. I picked up a small stone, some black, rough stone, from the sand, and tossed it forwards a short distance; I found that I was getting the hang of this desert, and was able, with some small exertion of effort, to walk to it and pick it up again, though I could only move but slowly. I picked up two stones, and made an effort to toss them both t the same point a few metres away. Indeed, they looked to land rather closely together. And, yet, as I walked towards one, I found the other getting further and further away, until, by the time I had reached my destination, the other one was almost as far from me as I was now from the point at which I had thrown it. The geometry of this place was…thick; it seemed that even the slightest error in orientation, on a walk of but a few paces, could result in you being further away than when you started. I then knew, as one does in dreams, that my destination – the place where I needed to go, was many miles away, that I could spend a lifetime of lifetimes trying to find it and, even if someone could point me in its rough direction from where I was, I would have little chance of ever finding it. 
Serial: Chapter 2: C – Cube Friday, February 16, 2007 I fell asleep to find myself standing on an interior wall of a cube. There was no gravity it seemed, though I was, it seemed, bound to walk on it’s surface. This was not too small a chamber; it stretched almost a hundred metres in each direction, and the isolation or artifice of this situation did not in and of itself worry me. But, after a few minutes of exploration had passed, I noticed that this cube was not as spacious as it had seemed to me at first glance. I stood at one corner and asked myself “Would it matter me if I was to stand at any other corner?” With this thought I could feel the space I had perceived contract about me. I thought “At this corner, is there any difference between facing one other corner, or any other one?” And with this, and all it’s variations and implications, I felt my freedom disappear, to be replaced with a noose, tightening about my neck. Serial: Chapter 1: C/A – High Dim. Friday, January 26, 2007 The following night I had another dream: I awoke inside the dream, to find myself breathing deeply, calmly, in some spacious, geometrical environment. I looked about, trying to take in in my surroundings, only a fear began to grab at me: the space was uninhabited by anything else, it just went on forever in all directions, and yet there was more: as it dawned upon me that I was in a space of dimensions many more than three, that space was going to infinity not merely in three directions, but twenty – this frightened me, I found myself unable to move, praying for some relief to the immense extent of this world. My prayers were suddenly answered: I found myself trapped in a cramped prison cell; trapped between forty different walls on forty different sides. Serial: Chapter 1: C – Quotient Wednesday, January 24, 2007 It took me some time to get to sleep that night. When I did, my fears blossomed into a nightmare: I was, alone, in a dining room, with a heavy mahogany table and the walls were covered with a rich velvety wallpaper. Silver service laid on lace tablecloth, adorned about with plates of food. In contrast to the evening’s fears of expanses, I now began to feel uncomfortably contained in my body, in this oppressive room; my apprehensions grew with every breath, and then I noticed some changes that restricted my enclosure upon me in a way I could not have expected: There were no straightforward physical barriers developing, nor did the walls begin to close in on me; rather I began to perceive things differently – or rather, the reality of the situation changed. I had been looking at the contents of my plate, glancing fearfully away from it up to the ceiling every so often, lest it come crashing down upon me. But then the changes began: suddenly, the lampshade on the ceiling and the plate on the table were not separate things, but rather one thing; still the lampshade and the place, but now thoughts of separating them were impossible. I pulled my eyes away from this transfiguration, allowing my sight to escape to the door, only the door was now not only the door, but simultaneously the lower shelf of the bookcase on the opposite end of the room, and then the bookcase as a whole became the bookcase and the table; this room was folding in on itself, but in no straightforward way; I tried then to look down, and found that the floor had become the floor and the ceiling, and then, all at once, up was combined with down. 
I suddenly awoke, with the thought “What must be becoming of me?” Also tagged geometry, stories | Comments (3)
{"url":"http://www.increpare.com/tag/maths/","timestamp":"2014-04-18T01:39:24Z","content_type":null,"content_length":"43698","record_id":"<urn:uuid:c33c3eb5-d6a5-47ab-8345-f7c409b0a3c3>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
ANALYTIC COMBINATORICS. This is a book by Flajolet and Sedgewick that has appeared in January 2009, published by Cambridge University Press: see the dedicated book page for details, contents, errata, and availability. We'll be glad to hear from you for technical comments, if you find major errors, and also if you're using this for teaching or research. • ANALYTIC COMBINATORICS (free download link). 810p.+xiv. Electronic edition of June 26, 2009 (identical to the print version). Analytic Combinatorics proposes a unified treatment of analytic methods in combinatorics. We develop in about 800 pages the basics of asymptotic enumeration and the analysis of random combinatorial structures through an approach that revolves around generating functions and complex analysis. A symbolic framework (Chapters I-III) first provides systematically a large number of exact description of combinatorial models in terms of generating functions. Major properties of generating functions that are of interest in this book are singularities. The text then presents (in Chapters IV-VIII) the core of the theory with two chapters on complex analytic methods focusing on rational and meromorphic functions as well as two chapters on fundamentals of singularity analysis and combinatorial consequences, followed by a chapter on the saddle point method. The last section (Chapters IX) covers multivariate asymptotics and limit laws in random structures. Many examples are given that relate to words, integer compositions and partitions, paths and walks, graphs, mappings and allocations, lattice paths, permutations, trees, and planar maps. An Introduction to the Analysis of Algorithms by Sedgewick and Flajolet is published by Addison Wesley (1996) and it has 512 pages (ISBN 0-201-4009-X). Introduction a l'analyse des algorithmes by Sedgewick and Flajolet. This French translation of the English original by Cyril Chabaud is published by International Thomson Publishing France (1996) and it has 421 pages. (ISBN 2-84180-957-9). There is also an authorized Chinese edition, dated 2006. Books edited (or co-edited) Mathematics and Computer Science III: Algorithms, Trees, Combinatorics and Probabilities. Series: Trends in Mathematics (Mathematics, Computer Science). Edited by Drmota, M.; Flajolet, P.; Gardy, D.; Gittenberger, B. 2004, XV, 554 p., Hardcover ISBN: 3-7643-7128-5 A Birkhäuser book. From Birkhäuser-Springer: "This book contains invited and contributed papers on combinatorics, random graphs and networks, algorithms analysis and trees, branching processes, constituting the Proceedings of the 3rd International Colloquium on Mathematics and Computer Science that held in Vienna in September 2004. It addresses a large public in applied mathematics, discrete mathematics and computer science, including researchers, teachers, graduate students and engineers. They will find here current questions in Computer Science and the related modern and powerful mathematical methods. The range of applications is very wide and goes beyond Computer Science." Mathematics and Computer Science II: Algorithms, Trees, Combinatorics and Probabilities. Brigitte, Chauvin, Philippe Flajolet, Danièle Gardy, A. Mokkadem (Editors). ISBN 3-7643-6933-7 (Published by Birkhäuser Verlag, Basel, 2002. Series: Trends in Mathematics. 560 pages. Hardcover). 
This represents the Proceedings of the Colloquium held in Versailles, September 2002, under the same name as the book.
From Birkhäuser: "This is the second volume in a series of innovative proceedings entirely devoted to the connections between mathematics and computer science. Here mathematics and computer science are directly confronted and joined to tackle intricate problems in computer science with deep and innovative mathematical approaches. The book serves as an outstanding tool and a main information source for a large public in applied mathematics, discrete mathematics and computer science, including researchers, teachers, graduate students and engineers. It provides an overview of the current questions in computer science and the related modern and powerful mathematical methods. The range of applications is very wide and reaches beyond computer science."
Return to Philippe Flajolet's Home Page
{"url":"http://algo.inria.fr/flajolet/Publications/books.html","timestamp":"2014-04-18T20:42:48Z","content_type":null,"content_length":"10303","record_id":"<urn:uuid:a535f028-ad96-4f34-9cb3-a9bee5c53899>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
February 19th 2010, 10:42 PM
The vertices of a triangle are at A(-3,-2), B(2,1) and C(0,6). What is the equation of the two lines? Please help me and show the solution... Thanks!
February 19th 2010, 11:45 PM
Prove It
February 20th 2010, 06:34 AM
I mean, what is the equation of the altitude from vertex B... Sorry for the wrong question... Thanks!
February 20th 2010, 11:30 AM
Plot the three points and draw in the sides of the triangle. Write the equation of line AC. Take the negative reciprocal of this line's slope and, using the point-slope formula, write the equation of line BK. Solve the two equations to find the coordinates of K. Draw the slope diagram for BK and determine the length of BK using the distance formula. Can you handle this? If not, ask for some extra help.
February 20th 2010, 03:31 PM
I don't know how to do it... I tried but I cannot answer it... Please help me
February 20th 2010, 03:39 PM
The first step is to find the equation of the line that contains AC. Use the point-slope formula $y - y_0 = m(x - x_0)$ where $(x_0, y_0)$ is point A, $(x_1, y_1)$ is point C, and the slope is $m = \frac{y_1 - y_0}{x_1 - x_0}$.
February 21st 2010, 05:10 AM
Plot the three points and label them as given. Draw a horizontal line through A meeting the y-axis at (0,-2); label that point D. ACD is now the slope diagram for line AC: the rise is 8, the run is 3, so the slope is 8/3. The distance between A and C is the square root of 73 (from d^2 = 3^2 + 8^2). Draw a perpendicular from B to AC, meeting AC at K. The slope of BK is the negative reciprocal of the slope of AC, namely -3/8. The equation of line AC, from y = mx + b, is y = (8/3)x + 6. For the equation of BK, use the point-slope formula with the point (2,1). Solve the two equations simultaneously (if a = b and a = c, then b = c) to get x = -1.4 and y = 2.3. Using these coordinates, the slope diagram for BK has rise 1.3 and run 3.4, from whence the length of BK = 3.64 by the distance formula. You can now calculate the area of ABC. This is more than you needed for an answer, but I believe you needed
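(A numerical check of the figures in the last reply, added here and not part of the original thread.)

import math

A, B, C = (-3.0, -2.0), (2.0, 1.0), (0.0, 6.0)

m_AC = (C[1] - A[1]) / (C[0] - A[0])          # slope of AC = 8/3
m_BK = -1.0 / m_AC                            # perpendicular slope = -3/8

# Line AC: y = m_AC*x + 6; altitude through B: y = m_BK*(x - 2) + 1.
# Setting them equal gives the foot of the altitude, K.
x_K = (B[1] - m_BK * B[0] - C[1]) / (m_AC - m_BK)
y_K = m_AC * x_K + C[1]

BK = math.hypot(B[0] - x_K, B[1] - y_K)
# -1.4 2.27 3.63  (the reply's 3.64 comes from rounding K to one decimal first)
print(round(x_K, 2), round(y_K, 2), round(BK, 2))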
{"url":"http://mathhelpforum.com/geometry/129705-altitude-print.html","timestamp":"2014-04-18T05:44:10Z","content_type":null,"content_length":"8836","record_id":"<urn:uuid:32230d6b-71d5-4ff9-9812-348abc382e15>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
A homogenization result for planar, polygonal networks , 2004 "... A collection of resistors with two possible resistivities is considered. This paper investigates the overall or macroscopic behavior of a square two–dimensional lattice of such resistors when each type is present in fixed proportion in the lattice. The macroscopic behavior is that of an anisotropic ..." Cited by 5 (4 self) Add to MetaCart A collection of resistors with two possible resistivities is considered. This paper investigates the overall or macroscopic behavior of a square two–dimensional lattice of such resistors when each type is present in fixed proportion in the lattice. The macroscopic behavior is that of an anisotropic conductor at the continuum level and the goal of the paper is to describe the set of all possible such conductors. This is thus a problem of bounds in the footstep of an abundant literature on the topic in the continuum case. The originality of the paper is that the investigation focusses on the interplay between homogenization and the passage from a discrete network to a continuum. A set of bounds is proposed and its optimality is shown when the proportion of each resistor on the discrete lattice is 1 2. We conjecture that the derived bounds are optimal for all proportions. Keywords: Γ–convergence, lattice, resistor network, homogenization, bounds, optimality. 1 , 2010 "... Abstract. This paper is concerned with the approximation of effective coefficients in homogenization of linear elliptic equations. One common drawback among numerical homogenization methods is the presence of the so-called resonance error, which roughly speaking is a function of the ratio ε/η, where ..." Cited by 5 (2 self) Add to MetaCart Abstract. This paper is concerned with the approximation of effective coefficients in homogenization of linear elliptic equations. One common drawback among numerical homogenization methods is the presence of the so-called resonance error, which roughly speaking is a function of the ratio ε/η, where η is a typical macroscopic lengthscale and ε is the typical size of the heterogeneities. In the present work, we propose an alternative for the computation of homogenized coefficients (or more generally a modified cell-problem), which is a first brick in the design of effective numerical homogenization methods. We show that this approach drastically reduces the resonance error in some standard cases. "... Abstract. We fully characterize the small-parameter limit for a class of lattice models with twoparticle long or short range interactions with no \exchange energy. " One of the problems we consider is that of characterizing the continuum limit of the classical magnetostatic energy of a sequence of m ..." Cited by 1 (0 self) Add to MetaCart Abstract. We fully characterize the small-parameter limit for a class of lattice models with twoparticle long or short range interactions with no \exchange energy. " One of the problems we consider is that of characterizing the continuum limit of the classical magnetostatic energy of a sequence of magnetic dipoles on a Bravais lattice, (letting the lattice parameter tend to zero). In order to describe the small-parameter limit, we use discrete Wigner transforms to transform the stored-energy which is given by the double convolution of a sequence of (dipole) functions on a Bravais lattice with a kernel, homogeneous of degree with N with the cancellation property, as the lattice parameter tends to zero. 
By rescaling and using Fourier methods, discrete Wigner transforms in particular, to transform the problem to one on the torus, we are able to characterize the small-parameter limit of the energy depending on whether the dipoles oscillate on the scale of the lattice, oscillate on a much longer lengthscale, or converge strongly. In the case where> N, the result is simple and can be characterized by anintegral with respect to the Wigner measure limit on the torus. In the case where = N, oscillations essentially on the scale of the lattice must be separated from oscillations essentially onamuch longer lengthscale in order to characterize the energy in terms of the Wigner measure limit on the torus, an H-measure limit, and the limiting magnetization. We show that the classical , 2013 "... Sujet: Qualitative and quantitative results in stochastic homogenization Soutenance le 24 février 2012 devant le jury composé de: ..." Add to MetaCart Sujet: Qualitative and quantitative results in stochastic homogenization Soutenance le 24 février 2012 devant le jury composé de: "... Abstract. These notes give a state of the art of numerical homogenization methods for linear elliptic equations. The guideline of these notes is analysis. Most of the numerical homogenization methods can be seen as (more or less different) discretizations of the same family of continuous approximate ..." Add to MetaCart Abstract. These notes give a state of the art of numerical homogenization methods for linear elliptic equations. The guideline of these notes is analysis. Most of the numerical homogenization methods can be seen as (more or less different) discretizations of the same family of continuous approximate problems, which H-converges to the homogenized problem. Likewise numerical correctors may also be interpreted as approximations of Tartar’s correctors. Hence the convergence analysis of these methods relies on the H-convergence theory. When one is interested in convergence rates, the story is different. In particular one first needs to make additional structure assumptions on the heterogeneities (say periodicity for instance). In that case, a crucial tool is the spectral interpretation of the corrector equation by Papanicolaou and Varadhan. Spectral analysis does not only allow to obtain convergence rates, but also to devise efficient new approximation methods. For both qualitative and quantitative properties, the development and the analysis of numerical homogenization methods rely on seminal concepts of the homogenization theory. These notes contain some new results. Résumé. Ces notes de cours dressent un état de l’art des méthodes d’homogénéisation numérique pour les équations elliptiques linéaires. Le fil conducteur choisi est l’analyse. La plupart des méthodes d’homogénéisation numérique s’interprète comme des discrétisations (plus ou moins différentes) d’une
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=12009334","timestamp":"2014-04-20T12:22:56Z","content_type":null,"content_length":"24849","record_id":"<urn:uuid:11e83232-7440-49db-a4e0-238ed84fb193>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Pure Math, Pure Joy 1422555 story Posted by from the msri-loves-company dept. e271828 writes "The New York Times is carrying a nice little piece entitled Pure Math, Pure Joy about the beauty and applicability of pure math as carried out at the Mathematical Sciences Research Institute. There is an accompanying slideshow of pictures of mathematicians in action; I particularly loved the picture titled Waging Mental Battle with a Proof." This discussion has been archived. No new comments can be posted. Pure Math, Pure Joy Comments Filter: • That's the nice thing about math (Score:2, Insightful) by wmspringer (569211) on Sunday June 29, 2003 @02:41PM (#6325892) Homepage Journal It doesn't actually have to be useful for anything now; in the academic setting you can research from obscure branch of mathematics just because you find it interesting. • by Manhigh (148034) on Sunday June 29, 2003 @02:45PM (#6325921) I think that Mathematicians largely arent the philanthropists that scientists are. However, seeing as how every science consists largely of mathematical models, the ends justify the means, so to speak. In other words, while a mathematician isnt looking for a way to make a longer lasting lightbulb, his or her ideas eventually work their way into science and engineering applications, even if it takes decades to happen. • by Jaalin (562843) on Sunday June 29, 2003 @02:46PM (#6325929) Homepage Mathematicians do it for the beauty. Society funds them because what is beautiful to a mathematician often turns out to be useful in many other ways. The NSF is paying me to do math research this summer, and honestly I don't care if what I'm doing has any relevance to anything -- I'm just doing it because what I'm studying is really cool and beautiful. But it may turn out that something I find is useful for something else that I never even thought of. This is what happened in large part with number theory -- many of the underlying results were discovered i nthe 1800's and early 1900's, and only later turned out to be useful in cryptography. You can't predict what will be useful and what won't. • terrible journalism (Score:3, Insightful) by andy666 (666062) on Sunday June 29, 2003 @02:46PM (#6325931) could someone please explain the point of this article ? like most nytimes science article it seems to have zero content. it would be nice if for a change they explained something about • by Ella the Cat (133841) on Sunday June 29, 2003 @02:50PM (#6325940) Homepage Journal If mathematicans aren't really interested in helping understand the world, why should society fund them? Because they're able to create beauty, like artists and writers and musicians do. Not all human activity should be measured with money, even if money is needed to make it happen • by foonf (447461) on Sunday June 29, 2003 @02:52PM (#6325953) Homepage If mathematicans aren't really interested in helping understand the world, why should society fund them? These are two separate things. Many people are attracted to the natural sciences, and even engineering disciplines, not because of a desire to improve the world, but because they find pleasure and abstract beauty in those fields. Yet undeniably work in those areas can lead to benefits for "society", and therefore people doing research in those areas are funded, even if their personal reasons for doing the work have nothing to do with those benefits. 
Likewise with mathematics, many ideas thought of as purely abstract and disconnected from practical application have turned out, later on, to be useful tools in understanding various real-world phenomena. It is totally unscientific and ultimately counter-productive to close off areas of inquiry because at the time they are undertaken no one can know exactly what the consequences will be. And ultimately the motivations of the people involved are irrelevant; we know based on history that there could turn out to be uses for it in the future, even if neither "we" (the society making the decision to support the research), nor those doing the research, can see any at this time, and this potentiality alone should justify providing support. • by k98sven (324383) on Sunday June 29, 2003 @02:53PM (#6325957) Journal I sure hope this isn't really true. If mathematicans aren't really interested in helping understand the world, why should society fund them? I certainly know that a major motivation for my career in science is that understanding the world through science will help people, cure diseases, etc. Guess what? It gets worse.. it's not only the mathematicians, but just about anyone and everyone involved in fundamental research. I know I am.. I do theoretical chemistry.. and although I'd love to see something useful come out of what I do, I cannot see any immediate uses for my work. The point is: It's the foundation research, the fundamentals, that lead to the big, *big* innovations. Although it might not seem useful at the time, it may (or may not) turn out to be very very important in the future. However, by it's nature, we can't know which research is going to pay off in practical terms. Einsteins work on stimulated emission probably didn't look very useful back in 1910 either, but it lead to the devlopment of the laser, which noone could've predicted at that time. That's why we need to fund this stuff. • by Sprunkys (237361) on Sunday June 29, 2003 @02:54PM (#6325960) For the sheer beauty of it. Asking why you should fund mathematics is asking why you should fund art. Who ever got cured by art? I certainly know that a major motivation for my career in science is the beauty of it. It's like the sunset outside my window, it's like Dido's new single emerging from my speakers. Today I spent studying for my thermodynamics exam and even the simple mathematics used therein is beautiful. Wednesday is my Quantum Mechanics exam and if it weren't for the beauty of the mathematics of the Schrödinger equation it would be a whole lot less intruiging. I make that exam for the joy and beauty I find in the mathematics and physics, not because it makes your cd player work. Beauty. That is why you should fund mathematics. The fact that it helps society is a secondary concern. But hey, that's just my opinion. And that of the Pythagoreans, to name a few. Beauty can be found in more things than a painting or Natalie Portman. It's in logic, in mathematics, hell, it's even in code. It's in patterns, it's in reason, it's in deduction as much as it's in nature, an individual or a thought. • Slahsdot reproduces NYT in it's entirety. (Score:4, Insightful) by igbrown (79452) <spam AT hccp DOT org> on Sunday June 29, 2003 @02:57PM (#6325973) Homepage Journal OK, not in it's entirety, and not it is a serious problem, but it would be nice if the editors could make sure that each Sunday, we don't see so many postings from a single news source. 
Maybe some sort of summary each Sunday on interesting stories in the NYT Sunday Edition. Pure Math, Pure Joy [slashdot.org] Does Google = God? [slashdot.org] Harry Potter and the Entertainment Industry [slashdot.org] • a recent experience with matrices (Score:4, Insightful) by somethinsfishy (225774) on Sunday June 29, 2003 @03:02PM (#6325993) I'd never studied linear algebra until recently when I had to learn just enough to work through the inverse kinematics of a robot arm. Actually, I never really got along with Mathematics very well anyway. But looking at how matrices can solve all kinds of problems just by drawing zig-zags through rows and columns of numbers made me wonder whether the problems they model or the problems themselves came first. As I was learning the little bit of this math that I did, it started to seem to me that the Math has an independent existence, and a somewhat mysterious set of relationships of correlations and causalities connected to but not dependant on physical nature. • If it's in the NYT.. (Score:0, Insightful) by Anonymous Coward on Sunday June 29, 2003 @03:03PM (#6325997) How do we know that this "math" thing they write about even exists? • by Zork the Almighty (599344) on Sunday June 29, 2003 @03:08PM (#6326019) Journal For the most part, we're in it because we want to know. Maybe you think that's a selfish reason, and maybe it is, but when we discover something we immediately share it with the world. The enduring gifts of mathematics are that it extends the boundaries of what is possible with current technology, while presenting us with direction for the future. • by Roelof (5340) on Sunday June 29, 2003 @03:09PM (#6326025) Homepage I think that Mathematicians largely arent the philanthropists that scientists are. Thus mathematicians aren't scientists. • To put it another way (Score:4, Insightful) by xant (99438) on Sunday June 29, 2003 @03:10PM (#6326031) Homepage "Being interested in helping the world" is not the same thing as "helping the world". An ox is not interested in helping plow the farmer's field, but the farmer still feeds it. • Re:It's not that obvious (Score:5, Insightful) by KDan (90353) on Sunday June 29, 2003 @03:19PM (#6326081) Homepage Very large prime numbers are the basis of the RSA asymmetric encryption algorithms which you trust your credit card numbers and other private information to. Anyway, I'm almost thinking you're trolling because the rest of your post demonstrates some sort of keen-ness for over-simplification. Maybe you're just not out of secondary school yet, but for your information, trig, calculus and the rest are useful for a lot more stuff than what you mention. All the different areas of maths often intermingle in any physical subject. For the interesting tidbit of information, there has yet to be a mathematical discovery which has not found practical applications. Even group theory, which at first was thought to have nothing to do with physics or any engineering sciences, was found to be very applicable to some extremely interesting problems of fundamental physics (describing the symmetries of fundamental particles). • Dumb question to "test" someone. (Score:5, Insightful) by GoofyBoy (44399) on Sunday June 29, 2003 @03:42PM (#6326193) Journal How arbitrary is that? How is e) (prime) less valid than the solution? How about g) (The only number greater than 29)? How about a) because its the "bad luck" number in Chinese culture (Too bad you missed out on that one, "white devil")? 
How about j) (Because today is Sunday and I feel like its the correct answer)? • Re:Visualizing the solution... (Score:5, Insightful) by TheRaven64 (641858) on Sunday June 29, 2003 @03:44PM (#6326202) Journal How about this one: What is the next in the sequence of: My answer was . The sequence is the largest number of separate enclosed areas it is possible to make by adding a single straight line to a circle. (i.e. 1 for no lines, 2 for one line, 4 for two I hate this kind of question, because it is possible to design a sequence such that any number comes next, so any test which includes the possibility of incorrect answers is just plain wrong. Of course you should have to justify your answer, but since the IQ tests are multiple choice... • Mensa is right based on Ockhams razor (Score:3, Insightful) by f97tosc (578893) on Sunday June 29, 2003 @03:46PM (#6326206) Which is the odd one out: (a) 4 (b) 15 (c) 9 (d) 12 (e) 5 (f) 8 (g) 30 (h) 18 (i) 24 (j) 10 Well, anyone who knows a prime from a hole in the ground would choose (e), but the correct answer was (f), 8. And why? Because it is the only "symmetrical" number, as printed on the page! Well, according to Ockhams razor I would argue that Mensa is right. The concept of symmetry is much simpler than the concept of prime numbers. • Re:Waging mental battle with a proof (Score:3, Insightful) by BWJones (18351) on Sunday June 29, 2003 @03:46PM (#6326210) Homepage Journal So, this is the deal with science and making it attractive to folks, so they see the importance of it. How do you impart the feeling of accomplishment and how efforts of pure thought impact the I thought this photo essay did an admirable job of conveying what thinking for a living is like, yet how does one make this approachable to the general population? I had a conversation with a film director once sitting in an airport (forget his name), but he was asking me what it was like to be a scientist and how one would impart that feeling in film. I responded that he would probably be best by following a scientist for a couple of weeks and shooting lots of time with rather tired looking individuals who had much passion for what they do but who spend lots of time thinking, applying for grants, staring through microscopes, writing code, writing papers, giving talks and talking with colleagues and above all, no matter what they are doing (eating, running, showering etc...), they are thinking. How do you impart that on film? I had some ideas, but he was probably thinking of an action movie. All told however, this article with the accompanying photo essay was well worth the time spent, it would have been nicer to have a more in depth article however. • by samhalliday (653858) on Sunday June 29, 2003 @03:57PM (#6326266) Homepage Journal If mathematicans aren't really interested in helping understand the world, why should society fund them? i am a PhD student in maths... and obviously i will disagree with you. but i have a reason... we may not WANT to change/understand the world; but it happens!!! surprise surprise, but the maths we create is used by physicists (about a 50->100 year time lag), which in turn is applied and picked up by engineers/chemists/biologists (another 10->50 year lag) which ends up being some new device or revolution for society to play with. you kill off maths, you kill off science as a whole. perfect examples involve ANY piece of electrical equipment, communications, medical care and transport. parent is a troll and is very VERY short sighted (see his home page ;-)). 
• Re:Mensa is right based on Ockhams razor (Score:4, Insightful) by f97tosc (578893) on Sunday June 29, 2003 @04:33PM (#6326428) Can you point us to the authoritative "hierarchy of simplicity? No. I think the best way is to imagine that you have to explain both alternatives to somebody who is completely clueless, and see which is quicker and easier to explain. Of course this method does not always work, but I think that in this case most would agree that the symmetry alternative is simpler. "See if, you turn the paper, the 8 still looks the same. It is the same if you look at it from either direction. If you put a mirror in the middle it does not change. If you look at the other numbers, this does not happen; look!" "See, the 5 is a prime number. That means that it can only be divided evenly by itself, and one. Division means that...[lengthy explanation]. Even division means that [lengthier explanation]. The reason that one is not included in the definition is that [....]. Now we can look at all the other numbers in turn and see that they are not prime numbers [lengthy calculations, or even lengthier explanations on how they can be indentifed quickly]. Etc. Etc." • Re:Visualizing the solution... (Score:2, Insightful) by backdoorstudent (663553) on Sunday June 29, 2003 @04:41PM (#6326456) It is correct that any number can come next in that sequence or any other. This is called the Matiyasevich-Robinson theorem. • by drooling-dog (189103) on Sunday June 29, 2003 @04:45PM (#6326476) Well, according to Ockhams razor I would argue that Mensa is right. The concept of symmetry is much simpler than the concept of prime numbers. Oh, I wouldn't argue that they were wrong; in fact I think that they set up the question this way deliberately to smack mathematically literate people who see numbers and assume that it's about number theory. They're measuring some function of intelligence minus education. • Pure Math (Score:3, Insightful) by MimsyBoro (613203) on Sunday June 29, 2003 @04:54PM (#6326511) Journal I'm a second year college student of pure math. I just wanted to tell all you non-believers taht its true. There is something amazingly beautiful in pure math. And in the way it is almost "above" reality. Math is applied philosophy. And if you've ever tried tackling a hard philosophical problem you know what it's like trying to understand a prinicipal in math... • Re:Mensa is right based on Ockhams razor (Score:4, Insightful) by Wavicle (181176) on Sunday June 29, 2003 @05:28PM (#6326671) If they are deliberately creating questions that have a "correct but not the answer we were looking for" solution, then they are knowingly creating poor tests of intelligence. What they are really looking for then is "people who think like we do" not "very intelligent people". It's sort of like the old biased college aptitude tests and the cup/saucer question where kids from well off white families would know that cup and saucer go together, but poor minority kids had probably never encountered a saucer in their life. • by Anonymous Coward on Sunday June 29, 2003 @05:48PM (#6326763) "Einsteins work on stimulated emission probably didn't look very useful back in 1910 either, but it lead to the devlopment of the laser, which noone could've predicted at that time. That's why we need to fund this stuff." 
Its a good point; even if you believe that mathematics needs to yield real world applications in order to be justified, it would be short cited to restrict research to topics with anticipated However, I think research in mathematics should be encouraged for more idealogical reasons. We enrich our culture whenever we add to our knowledge of anything. This is why we support the study of fine arts, literature, history, anthropology etc. We do not demand applications from these subjects; the payback is less tangible than that. Pure mathematics gives us beautiful truths that are valuable in themselves even if they don't penetrate into the popular culture. The fact that pure mathematics provides a rich resevoir of knowledge that is heavily exploited by all fields of science and engineering should not be construed as its sole justification. Anyway, when it comes to funding, you'll find it much easier to get support for research under the banner of applied mathematics or engineering than for research in pure math. The money available for the latter is probably more akin to that of the humanities than it is to that of the applied sciences. And that is fine, but there is no cause to whine about money being wasted on research in pure mathematics. • Euclid alone has looked on beauty bare (Score:4, Insightful) by dpbsmith (263124) on Sunday June 29, 2003 @06:42PM (#6327050) Homepage Euclid alone has looked on Beauty bare. Let all who prate of Beauty hold their peace, And lay them prone upon the earth and cease To ponder on themselves, the while they stare At nothing, intricately drawn nowhere In shapes of shifting lineage; let geese Gabble and hiss, but heroes seek release From dusty bondage into luminous air. O blinding hour, O holy, terrible day, When first the shaft into his vision shone Of light anatomized! Euclid alone Has looked on Beauty bare. Fortunate they Who, though once only and then but far away, Have heard her massive sandal set on stone. --Edna St. Vincent Millay • by f97tosc (578893) on Sunday June 29, 2003 @07:00PM (#6327137) If you a) Write the number in binary it is not symmetric. Mind you, it is:) OK. Scratch that. b) If you use an OCR front it is not (the top part of the glyph is skew and smaller). c) If you do not write down the number but represent it in, for instance, a binary set of charges in capacitors ina dynamic RAM device I am not sure that the concept of symmetry applies at all. d) If you write it as a Maya numeral (Which would be 1 line and 3 dot on top of it) it would only be symmetrical in one axis, but so would some of the other numbers. e) Put your computer in a font which displays numbers with different glyphs and wham, no more symmetry. Try Adobe WoobBlock or something weird. So symmetry is NOT a property of the number itself. Primeness is though. Yes, but the whole issue here was whether the symbol should be just a character or treated as an abstraction for a numerical quantity. All these points assume that we have decided that it is an abstraction for a numerical quantity (and that the symmetric property should hold for other ways of writing the same numerical quantity). If the figure 8 is just a meaningless character, then you write it as 8, with the same font, in Maya as well. You cannot asume the mathematical-abstraction interpretation to prove itself. • Re:Pure Math (Score:1, Insightful) by Anonymous Coward on Sunday June 29, 2003 @07:12PM (#6327183) Ah, yes, but /everything/ but math is applied math. 
I'm a physicist; I'm only a notch lower than the mathematicians on the totem pole. Everything but math and physics is applied physics. :) • Re:You can trust the NYT (Score:3, Insightful) by hobit (253905) on Sunday June 29, 2003 @08:39PM (#6327596) I work in the maths department of a University, and yes.. it's very much like this. We sit around all day in small groups, staring at blackboards, "battling with proofs". Just like in that wonderful movie with the violent australian, "A Beautiful Mind". I'm a computer scientist who does a bit of theory. By far the very best, most enjoyable and most rewarding thing I've done as a graduate student is work on proofs. Usually in small groups, often on a blackboard (although I prefer having colors so a white board is much prefered). There is a fair amount of reading involved but it can be fun... Nowdays I teach, which I enjoy, but occasionally do some math where all I do is sit around and think. Now if I could just find someone to do the write-ups (which I hate). I don't do anything horribly insightful (although some of it has been published) but it is fun! • Re:a recent experience with matrices (Score:1, Insightful) by Anonymous Coward on Sunday June 29, 2003 @09:36PM (#6327854) made me wonder whether the problems they model or the problems themselves came first Generally, most classical mathematics was inspired by real-world problems. Geometry, for instance (literally "earth measure") came about as a way to mark off crop boundaries that got washed away after the river periodically flooded. But I'd say that since the golden age of mathematics (about 18th century), new mathematics has been created primarily for its own sake. Often the only "applications" are in proving theorems in other areas of mathematics as opposed to real-world problems. • by Wavicle (181176) on Monday June 30, 2003 @12:59AM (#6328602) Well, I am probably extrapolating it beyond what he would ever have done; but I am not the first to realize it's applicability to this type of problem. So you are saying because numerical symbols are simpler to explain as shapes than as a field of philosophy, that any problem involving numbers should first consider their shape since any solution involving that would be simpler to explain? No, you haven't realized a valid use of Ockham's razor. You are simply using the validity given to it, and twisting its meaning to make your argument seem more valid. Ockham's razor, as it applies to philosophy, eliminates one of two theories trying to explain the same thing. For example, why do planets in the sky move in such a peculiar way? One theory says "the sun is at the center and we and the other 8 are going around it" the other theory spends a few pages of explanation about the earth being at the center and the planets going around it, and on another sub orbital on their major orbit... all kinds of craziness. Clearly one requires less multiplications than the other. If you want to apply Ockham's razor here, you must have two theories explaining the same thing. But they don't. One theory says "8", the other says "5". By your logic, 1 + 1 = X, because you can make an "X" by crossing the two shapes and it is much easier to explain two shapes overlapping than elementary arithmetic. Just because there is an easier explanation to get a different answer doesn't mean the easier explanation is right, or that Ockham's razor is in any way involved. This is a circular argument. 
The whole point with the other solution is that "8" can be analyzed by just the properties of the symbol itself, and not by the properties of the mathematical abstraction. You assume it is a mathematical abstraction, and then use that assumption to prove itself. Please quote me proving that it is a mathematical abstraction. I assume that they are numbers and not shapes and then using that assumption evaluate that one and only one is prime. But that doesn't prove that they are abstractions, merely that there is a valid answer if they are. • by njj (133128) on Monday June 30, 2003 @06:24AM (#6329379) If mathematicans aren't really interested in helping understand the world, why should society fund them? This is an important question, and in my opinion has two particularly valid answers. The first of these is the one that usually gets advanced - that (as with other pure scientific disciplines) we just don't know what `useless' knowledge might turn out to be useful or vital in fifty year's time. This is all well and good, and a perfectly decent reason to study something. The other one, which I've come to believe more strongly over the past few years, is that which is often advanced in support of arts funding - that it benefits a society greatly (often in intangible and undefinable ways) to study and research things whether or not they have any practical use. This is a point which, in the UK at least, a succession of education ministers have either missed or fundamentally disagreed with over the past few decades. Last month, Charles Clarke, the current Secretary of State for Education made some very disturbing comments about how he didn't see the point in spending taxpayers' money on maintaining a group of ``mediaeval seekers after truth''. He was initially misquoted as saying he didn't see the point in the study of mediaeval history, which rightly got a lot of historians angry, but a later statement clarified that he actually didn't see the point in studying any subject which didn't have a direct positive contribution to UK industrial or economic interests. Which I find even more disturbing - it's understandable (even ok) for the Chancellor of the Exchequer to have such a viewpoint, but I like to think that the Secretary for Education should at least see some worth in all of the education system. A friend of mine [google.com] (an eminent evolutionary and reproductive biologist who's also helped design aliens for people like Anne McCaffrey, Larry Niven and Jerry Pournelle, and co-written a couple of books with Terry Pratchett) once said ``Most people think that the end-product of a PhD is a neatly-typeset hardback thesis. It's not - the end product of the PhD is the person who's done the PhD'' which I rather agree with. Studying or researching any subject changes the way you look at the world - often for the better. It teaches you new or variant modes of thought which you can then apply (often unconsciously) to other areas of interest. For example: A former office-mate of mine now works for the NHS Breast Cancer Screening Service. The topic of her thesis (permutation group theory) is irrelevant to what she does now. But I find it tremendously reassuring to know that there are people that well-educated, and who have been trained to such a high level in thinking clearly and carefully, involved in something that important and worthwhile. • by Pig Bodine (195211) on Monday June 30, 2003 @06:53AM (#6329441) I sure hope this isn't really true. 
If mathematicans aren't really interested in helping understand the world, why should society fund them? I certainly know that a major motivation for my career in science is that understanding the world through science will help people, cure diseases, etc. In most cases society doesn't fund them to do mathematical research. Research grants among pure mathematicians are not so prevalent. They earn their keep teaching math to (mostly) scientists and engineers and then prove theorems in whatever time that leaves open. Even aside from the argument that mathematics is intrinsically beautiful like music, art or literature, it doesn't make practical sense to expect everyone to have an eye on applications of their work. People have to specialize if they hope to learn enough to accomplish anything these days and a mathematician who also becomes enough of an expert in curing diseases to let that guide new mathematical research probably won't have time to prove new theorems. Letting mathematicians do math so that everyone can pull out what theorems they might apply in their own field has been pretty effective historically. Related Links Top of the: day, week, month.
{"url":"http://science.slashdot.org/story/03/06/29/183258/pure-math-pure-joy/insightful-comments","timestamp":"2014-04-19T07:17:11Z","content_type":null,"content_length":"193103","record_id":"<urn:uuid:3c35770d-439e-46d1-bb20-a546bb5d08b8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Orange County, CA Algebra Tutor Find an Orange County, CA Algebra Tutor ...Another study tool I use with students is graphic organizers for prewrites. I teach students how to look for key terms when reading a history or science text and also how to answer questions at the end of the chapter. All of these tools help to make the student more successful in school. 37 Subjects: including algebra 1, English, reading, writing ...I am confident I will be of great assistance in mastering Calculus, whether it be preparing for an upcoming midterm or the AP Calculus exam. Geometry is a crucial subject in high school mathematical progression which is often overlooked. Most students will do surprisingly well in the course by simply memorizing the formulas assigned to evaluate various geometrical shapes. 16 Subjects: including algebra 1, algebra 2, calculus, statistics ...Steve PBS UC, Irvine Double major Math and Bio Science. Selected to teach Algebra in federally sponsored Upward Bound program in which students live on the UC campus for the summer. Makes use of Hands-On Algebra tools to give students visual, aural and manual input to manipulating equations and the laws of algebra. 16 Subjects: including algebra 1, algebra 2, physics, chemistry ...Since 2007, I have worked as a chemistry tutor, supplemental instructor, laboratory instructor, and teaching associate at Mt. San Antonio College and CSU Fullerton. I am also a CRLA (College Reading & Learning Association)certified tutor since 2007. 11 Subjects: including algebra 1, algebra 2, chemistry, calculus ...I am also certified in after school education through Santa Ana College. Educating children has always been a passion of mine and I believe that a positive role model can encourage a child to pursue their dreams and achieve to their fullest potential.I am a Child Development major and I have had... 12 Subjects: including algebra 1, reading, English, Spanish Related Orange County, CA Tutors Orange County, CA Accounting Tutors Orange County, CA ACT Tutors Orange County, CA Algebra Tutors Orange County, CA Algebra 2 Tutors Orange County, CA Calculus Tutors Orange County, CA Geometry Tutors Orange County, CA Math Tutors Orange County, CA Prealgebra Tutors Orange County, CA Precalculus Tutors Orange County, CA SAT Tutors Orange County, CA SAT Math Tutors Orange County, CA Science Tutors Orange County, CA Statistics Tutors Orange County, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/orange_county_ca_algebra_tutors.php","timestamp":"2014-04-21T04:33:20Z","content_type":null,"content_length":"24282","record_id":"<urn:uuid:23b6af14-d74e-493a-8987-18bfd551d708>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Israel Nathan Herstein Born: 28 March 1923 in Lublin, Poland Died: 9 February 1988 in Chicago, Illinois, USA Yitz Herstein was named Yitzchak but known as 'Yitz' by his friends. This seems to contradict the name we have given above, namely Israel Nathan Herstein, and the fact that he appears as I N Herstein on all his publications. Yitzchak is the Hebrew version of Isaac but when he was in primary school his teachers called him Israel and he was known by that name ever since. His father, Jacob Herstein, was working in his native Poland as a leather cutter in a factory at the time when Yitz was born. However, he was a scholarly man who wanted a good education for his sons and, in 1926 he emigrated to Canada where he settled in Winnipeg. Yitz's mother was Mindel Lichstein and he had a brother Chaim who was nine years his elder. In fact, like Yitz, Chaim was later not known by that name but called Harvey. Mindel and the two sons remained in Poland for two years after Jacob had emigrated, then the family were reunited in Winnipeg. Yitz had a younger sister, born in 1930, but sadly she died aged
Jacob had emigrated to be able to give his family a better life, and most of all a better education. However, things were not easy for them in Canada and he grew up in the years of the Great Depression which began in 1929. In fact Canada suffered badly through ten years of the Depression and Yitz grew up in a poor rough area of Winnipeg [3]:-
For example he worked on conditions of the type x^n = x first studied by Nathan Jacobson in 1945. His first paper on this topic was A generalization of a theorem of Jacobson (1951) in which he proved the following theorem: If R is a ring with centre C, and if x^n - x is in C for all x in R, n a fixed integer larger than 1, then R is commutative. His appointment as Assistant Professor at Chicago was in Mathematics and Economics and Herstein published some papers relating to economics while holding the post. From example Comments on Solow's "Structure of linear models" (1952) and Some mathematical methods and techniques in economics (1953). Kuhn, reviewing this 1953 paper, writes:- This paper performs the useful service of presenting some aspects of pure mathematics being applied currently to problems in economics. Among the methods and problems discussed in some detail are a derivation of the Slutsky equation via the calculus, a problem in Welfare Economics treated by the theory of convex sets, matrix theory as applied to international trade, and a game-theoretical approach to the personnel assignment problem. Many other subjects are touched lightly and cited in an interesting bibliography. He was appointed to the University of Pennsylvania in 1953 [3]:- ... at that time grants were scarce, so with two of his colleagues he organised a team to appear on a radio quiz show. Each win brought in $25 and a long string of wins enabled them to build up a fund to pay for seminar speakers. He left Pennsylvania in 1957 when appointed to Cornell where he was promoted to Professor in 1958. Awarded a Guggenheim Fellowship in 1960, he spent a year in Rome. He returned to Chicago in 1962 and remained there for the rest of his life. A A Albert was chairman of the Mathematics Department at Chicago and had been desperate to bring Herstein back there. Nancy Albert writes [2]:- In June 1961 [Albert] sought to secure the outstanding algebraist I N Herstein to fill the vacancy of Otto Schilling, who was leaving for an appointment at Purdue University. Adrian wrote a letter to "Dear Colleagues". In it, he described two letters that he was preparing to send the Dean. "In one I have recorded the vote for Herstein's appointment as unanimous and recommending his appointment ... The other is my letter of resignation as Chainman of the Department. ... It is up to you to decide which letter I will send". Adrian's strategy succeeded. The faculty approved Herstein's appointment. In addition to work on rings and algebras Herstein also worked on groups and fields. In particular he examined finite subgroups of a division ring. In [3] 115 publications on these topics are listed. Herstein is perhaps best known for his beautifully written algebra texts, especially the undergraduate text Topics in algebra (1964). Other algebra books included a more advanced ring theory book Noncommutative rings (1968) and Topics in ring theory (1969). At a more elementary level he published Matters mathematical with Irving Kaplansky in 1978. A book which he worked on in the last two years of his life was Abstract algebra (1986). Much of his own research is put into context in his book Rings with involution (1976). Let us now record some comments on these famous texts. Allow me [EFR] to make a personal comment on Topics in algebra. I purchased the book in the year in which it was published. It was such a joy to read the book: the ideas are so clearly laid out, and the author's enthusiasm coming through throughout. 
Frederick Hoffman writes twenty years after the book was first published:- Topics in algebra is a beautiful book, which captured a large market, and became the text for almost everyone's ideal undergraduate course, in addition to making its author's name an adjective for graduate qualifying-exam-level algebra at many institutions. If one approached Topics in algebra in a manner consistent with its author's approach to the undergraduate course, that is, that the precise material covered, and the amount of it, was not as important as the way it was covered, then the book could be used almost anywhere, with the number of pages covered and percent of problems done a function of the audience and the instructor. The charm of the writing, and the wonderful exercises help it retain its position with many of us. Here is Herstein's Preface to Topics in algebra. Noncommutative rings appeared in The Carus Mathematical Monographs series published by The Mathematical Association of America. W S Martindale, III, reviewing the book, first explains the background to the book:- This colourful and informative book on noncommutative ring theory is based on a series of expository lectures given by the author in the summer of 1965 at Bowdoin College before an audience of teachers from colleges and small universities. These lectures were in turn based to a large extent on the author's 1961 and 1965 University of Chicago notes on ring theory. He ends his review by writing:- The spirit of the Carus Monograph series is clearly embodied in this moving and excellently written account of important aspects of classical and modern ring theory. The book will undoubtedly be a popular one for a wide class of mathematicians and students. Topics in Ring Theory was based on lectures Herstein gave at the University of Chicago and first published in the University of Chicago Mathematics Lecture Notes series. The book largely concerns Herstein's work on Lie and Jordan structure of simple associative rings which he published in various papers in the early 1950s. Matters mathematical by Herstein and Kaplansky is an interesting book, based on a course designed to introduce students who were not specialising in mathematics. However, they expanded the material so as to produce a book designed as an introduction to mathematics students, especially for future teachers. B Artmann writes that:- ... the reviewer wishes to stress that he thinks that the authors have succeeded very well in creating an adequate picture of mathematics for their audience, and that the topics chosen are optimal for their intentions. Herstein supervised 30 research students. One said of him:- He was someone of great warmth who took an intense personal interest in his students and had a knack of getting them to believe in themselves. In [1], Faith makes some interesting comments about Herstein:- Israel Nathan Herstein preferred to be called Yitz. While at Perdue I met Yitz at the frequent mathematical meetings that took place at the University of Chicago. Often Yitz would take us to Mama Luigi's for a late dinner, where we talked for hours. He had a love for Italian food and Italy - we always managed to end up Italian: Dimaggio's in San Francisco, Valerio's in Cincinnati, Otello in Rome, Sardis or Manganaro's in New York, the "Annex" in Princeton. (Yitz kept an automobile in storage in Rome for his frequent visits there - but he was equally at home in all of the world's great cities.) ... 
Like many, many, mathematicians, I fell under the spell of his brilliance ...
Article by: J J O'Connor and E F Robertson
JOC/EFR © December 2008, School of Mathematics and Statistics, University of St Andrews, Scotland
{"url":"http://www-history.mcs.st-and.ac.uk/Biographies/Herstein.html","timestamp":"2014-04-18T13:13:31Z","content_type":null,"content_length":"21910","record_id":"<urn:uuid:54a1cbda-f8ac-48ad-bb65-cdd8ceee3721>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
FIG. 1. (a) Surface SEM image of an uncapped Ga0.55In0.45As dot sample. The surface is tilted by 70° to enhance the height contrast. Reproduced with permission from A. Loffler, J.-P. Reithmaier, and A. Forchel, J. Cryst. Growth. 286, 6 (2006). Copyright 2012 Elsevier. (b) Schematic diagram of the geometrical shape of dots used in the calculation.
FIG. 2. Schematic diagram of the elastic scattering process. In the x-y plane, β is the angle between the initial wave vector k and the x axis; the remaining angles in the diagram relate k, the scattered wave vector k′, the momentum transfer q, and the position vector ρ.
FIG. 3. Calculated results plotted as a function of the 2DEG density n_2D for different incidence angles of the initial wave vector k, where k = k_F, l = 10 nm, r = 5 nm, h = 1.5 nm, and the position of the dots plane z_dot is located at the well center, i.e., L/2.
FIG. 4. Calculated anisotropic mobility plotted as a function of the 2DEG density n_2D. In this calculation, we choose l = 10 nm, r = 5 nm, h = 1.5 nm, and the dots plane position z_dot = L/2, where L is the QW width.
FIG. 5. Calculated anisotropic mobility plotted as a function of the position of the dots plane z_dot. In this calculation, we have chosen l = 10 nm, r = 5 nm, h = 1.5 nm, and a fixed 2DEG density n_2D.
FIG. 6. Anisotropic mobility ratio plotted as a function of the 2DEG density n_2D for different values of the length l and width r of the ellipsoid QDs. In this calculation, we have chosen h = 1.5 nm, and the dots plane position z_dot = L/2, where L is the QW width.
Table I. Parameters used in the theoretical calculation.
{"url":"http://scitation.aip.org/content/aip/journal/jap/113/3/10.1063/1.4775790","timestamp":"2014-04-16T05:02:22Z","content_type":null,"content_length":"83401","record_id":"<urn:uuid:3fa09b81-81e6-4492-8aa1-e69665f850c8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimization with Dense Structured Hessian, Linear Equalities
Hessian Multiply Function for Lower Memory
The fmincon interior-point and trust-region-reflective algorithms, and the fminunc trust-region algorithm can solve problems where the Hessian is dense but structured. For these problems, fmincon and fminunc do not compute H*Y with the Hessian H directly, because forming H would be memory-intensive. Instead, you must provide fmincon or fminunc with a function that, given a matrix Y and information about H, computes W = H*Y.
In this example, the objective function is nonlinear and linear equalities exist, so fmincon is used. The description applies to the trust-region reflective algorithm; the fminunc trust-region algorithm is similar. For the interior-point algorithm, see the 'HessMult' option in Hessian. The objective function has the structure
f(x) = fhat(x) - (1/2)*x'*V*V'*x
where V is a 1000-by-2 matrix. The Hessian of f is dense, but the Hessian of fhat is sparse. If the Hessian of fhat is Hhat, then H, the Hessian of f, is
H = Hhat - V*V'
To avoid excessive memory usage that could happen by working with H directly, the example provides a Hessian multiply function, hmfleq1. This function, when passed a matrix Y, uses sparse matrices Hinfo, which corresponds to Hhat, and V to compute the Hessian matrix product
W = H*Y = (Hinfo - V*V')*Y
In this example, the Hessian multiply function needs Hhat and V to compute the Hessian matrix product. V is a constant, so you can capture V in a function handle to an anonymous function. However, Hhat is not a constant and must be computed at the current x. You can do this by computing Hhat in the objective function and returning it as Hinfo in the third output argument. By using optimoptions to set the 'Hessian' options to 'on', fmincon knows to get the Hinfo value from the objective function and pass it to the Hessian multiply function hmfleq1.
Step 1: Write a file brownvv.m that computes the objective function, the gradient, and the sparse part of the Hessian.
The example passes brownvv to fmincon as the objective function. The brownvv.m file is long and is not included here. You can view the code with the command
type brownvv
Because brownvv computes the gradient and part of the Hessian as well as the objective function, the example (Step 3) uses optimoptions to set the GradObj and Hessian options to 'on'.
Step 2: Write a function to compute Hessian-matrix products for H given a matrix Y.
Now, define a function hmfleq1 that uses Hinfo, which is computed in brownvv, and V, which you can capture in a function handle to an anonymous function, to compute the Hessian matrix product W where W = H*Y = (Hinfo - V*V')*Y. This function must have the form
W = hmfleq1(Hinfo,Y)
The first argument must be the same as the third argument returned by the objective function brownvv. The second argument to the Hessian multiply function is the matrix Y (of W = H*Y).
Because fmincon expects the second argument Y to be used to form the Hessian matrix product, Y is always a matrix with n rows where n is the number of dimensions in the problem. The number of columns in Y can vary. Finally, you can use a function handle to an anonymous function to capture V, so V can be the third argument to 'hmfleq1'.
function W = hmfleq1(Hinfo,Y,V);
%HMFLEQ1 Hessian-matrix product function for BROWNVV objective.
%   W = hmfleq1(Hinfo,Y,V) computes W = (Hinfo-V*V')*Y
%   where Hinfo is a sparse matrix computed by BROWNVV
%   and V is a 2 column matrix.
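%   Note: the statement below never forms the dense matrix (Hinfo - V*V').
%   Hinfo*Y is a sparse matrix times a matrix with few columns, and the
%   rank-2 term is applied as V*(V'*Y), i.e. a 2-by-size(Y,2) product
%   followed by an n-by-2 times 2-by-size(Y,2) product.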
W = Hinfo*Y - V*(V'*Y); Step 3: Call a nonlinear minimization routine with a starting point and linear equality constraints. Load the problem parameter, V, and the sparse equality constraint matrices, Aeq and beq, from fleq1.mat, which is available in the optimdemos folder. Use optimoptions to set the GradObj and Hessian options to 'on' and to set the HessMult option to a function handle that points to hmfleq1. Call fmincon with objective function brownvv and with V as an additional parameter: function [fval, exitflag, output, x] = runfleq1 % RUNFLEQ1 demonstrates 'HessMult' option for FMINCON with linear % equalities. problem = load('fleq1'); % Get V, Aeq, beq V = problem.V; Aeq = problem.Aeq; beq = problem.beq; n = 1000; % problem dimension xstart = -ones(n,1); xstart(2:2:n,1) = ones(length(2:2:n),1); % starting point options = optimoptions(@fmincon,'Algorithm','trust-region-reflective','GradObj','on', ... 'Hessian','user-supplied','HessMult',@(Hinfo,Y)hmfleq1(Hinfo,Y,V),'Display','iter', ... [x,fval,exitflag,output] = fmincon(@(x)brownvv(x,V),xstart,[],[],Aeq,beq,[],[], ... To run the preceding code, enter [fval,exitflag,output,x] = runfleq1; Because the iterative display was set using optimoptions, this command generates the following iterative display: Norm of First-order Iteration f(x) step optimality CG-iterations 0 1997.07 916 1 1072.57 6.31716 465 1 2 480.247 8.19711 201 2 3 136.982 10.3039 78.1 2 4 44.416 9.04685 16.7 2 5 44.416 100 16.7 2 6 44.416 25 16.7 0 7 -9.05631 6.25 52.9 0 8 -317.437 12.5 91.7 1 9 -405.381 12.5 1.11e+003 1 10 -451.161 3.125 327 4 11 -482.688 0.78125 303 5 12 -547.427 1.5625 187 5 13 -610.42 1.5625 251 7 14 -711.522 1.5625 143 3 15 -802.98 3.125 165 3 16 -820.431 1.13329 32.9 3 17 -822.996 0.492813 7.61 2 18 -823.236 0.223154 1.68 3 19 -823.245 0.056205 0.529 3 20 -823.246 0.0150139 0.0342 5 21 -823.246 0.00479085 0.0152 7 22 -823.246 0.00353697 0.00828 9 23 -823.246 0.000884242 0.005 9 24 -823.246 0.0012715 0.00125 9 25 -823.246 0.000317876 0.0025 9 Local minimum possible. fmincon stopped because the final change in function value relative to its initial value is less than the selected value of the function tolerance. Convergence is rapid for a problem of this size with the PCG iteration cost increasing modestly as the optimization progresses. Feasibility of the equality constraints is maintained at the solution. problem = load('fleq1'); % Get V, Aeq, beq V = problem.V; Aeq = problem.Aeq; beq = problem.beq; ans = In this example, fmincon cannot use H to compute a preconditioner because H only exists implicitly. Instead of H, fmincon uses Hinfo, the third argument returned by brownvv, to compute a preconditioner. Hinfo is a good choice because it is the same size as H and approximates H to some degree. If Hinfo were not the same size as H, fmincon would compute a preconditioner based on some diagonal scaling matrices determined from the algorithm. Typically, this would not perform as well.
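As a quick check of the identity the multiply function relies on, the following sketch compares hmfleq1 against multiplication by the explicitly formed matrix on a small random instance. This snippet is not part of the original example; the sizes and the variable names Wmult and Wfull are chosen here just for illustration:
n = 50;                          % small size, so the dense matrix is cheap to form
Hinfo = sprandsym(n,0.1);        % random sparse stand-in for the Hessian of fhat
V = randn(n,2);                  % the rank-2 factor
Y = randn(n,3);                  % a few columns, as fmincon might pass
Wmult = hmfleq1(Hinfo,Y,V);      % product computed by the multiply function
Wfull = (Hinfo - V*V')*Y;        % product via the explicitly formed Hessian
disp(norm(Wmult - Wfull,'fro'))  % should be at round-off level
For the actual 1000-dimensional problem, forming H explicitly would mean storing a 1000-by-1000 dense matrix (about 8 MB per copy), while the multiply function only ever touches the sparse Hinfo, the 1000-by-2 matrix V, and the skinny matrices V'*Y and Y, which is the point of the HessMult mechanism.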
{"url":"http://www.mathworks.se/help/optim/ug/minimization-with-dense-structured-hessian-equality-constraints.html?s_tid=gn_loc_drop&nocookie=true","timestamp":"2014-04-25T03:24:09Z","content_type":null,"content_length":"47838","record_id":"<urn:uuid:7996d048-5db0-4f91-ab12-f0d4db173ced>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Team Umizoomi Preschool Math Kit Giveaway! This giveaway is now over. The winner is Steph. Congrats! With summer half over already, I like to keep the brain drain from happening (even with Will even though he's only in preschool). That is why I was happy to try out the Team UmiZoomi Preschool Math Kit! These are offered exclusively at Toys R Us stores this month (and at Nickelodeon's site online) and they offer a hands-on way for preschoolers to experience math. Since math isn't my strong suit, I need all the help I can get (even if it's only preschool math!). These kits cover the nine areas of math preschoolers need to know before starting kindergarten: numbers, counting, patterns, shapes, measuring, positioning, sorting, classification, and reasoning. There are also three episodes that are included that help reinforce what the kids are doing: Playground Heroes, Carnival, and Aquarium Fix it. The Team Umizoomi Math Kits include: • 48 page workbook that provides for practice and repetition • 24 page storybook that invites kids on an Umizoomi math adventure • 24 Mighty Math Mission cards - they are basically flashcards with math based challenges on them • 48 page activity book that reinforces math skills through simple games and activities • Team Umizoomi episode on DVD • Umizoomi pencils and eraser (Will's favorite!) My thoughts were that this is perfectly geared for the preschooler. There are so many different things that you can do with this kit that your kids will never find it boring. I liked how we could pick and choose what to do and everything was fun. Will loved it! The mission cards are great for the car. There is also no set way to get through this kit, so take your time and enjoy. Since Will loved this kit so much, I've arranged for one lucky reader to get one of their own! Here's what you need to do: Tell me what part of the Umizoomi Preschool Math Kit your little one will enjoy the most. Is it the DVD? The Mission Cards? The Activity book? 1. Twitter about this contest. You can do this daily - just leave me a comment letting me know. 2. Stumble this post and/or add this post to other social media sites like Digg, Kirtsy, Blog Engage, Reddit, Propeller, Etc. (1 entry for each social network). 3. Comment on any other post on this blog. (1 entry for each post). Let me know which posts you commented on. 4. Share this post on Facebook. 5. Add this to any forums you belong to. Please give me the URL, so I can verify. 6. Add this to any linky contest lists. 7. "Like" Lisa Reviews on Facebook. Each of these gives you 5 additional entries: 1. Blog about this on your blog. Please give me the URL, so I can verify. 2. Subscribe to this blog using either my email or RSS feed 3. Add my badge to your sidebar (each blog gets you 5 additional entries): Each of these will give you 10 additional entries: 1. Sign up for my free newsletter! 2. Subscribe to my new Oak Lawn Deals site! 3. Subscribe to my new Lisa's Mom Deals site! This contest will end August 4th at 11:59 pm CST! Good Luck! 1. Stephanie V. : seriously all of the kit! Big Team Umizoomi fans in this house! 2. karen M : The workbook, the more they can pratice the easier it will be to do. 3. karen M : shared on facebook -http://www.facebook.com/permalink.php?story_fbid=194899043902405&id=100002394242718 4. karen M : "Like" Lisa Reviews on Facebook. Gumma Medlin 5. karen M : 2 "Like" Lisa Reviews on Facebook. Gumma Medlin 6. karen M : 3 "Like" Lisa Reviews on Facebook. Gumma Medlin 7.
karen M : 4 “Like” Lisa Reviews on Facebook. Gumma Medlin 8. karen M : 5 “Like” Lisa Reviews on Facebook. Gumma Medlin 9. karen M : karenmed409 at comcast dot net 10. karen M : 2 subscriber karenmed409 at comcast dot net 11. karen M : 3 subscriber karenmed409 at comcast dot net 12. karen M : 4 subscriber karenmed409 at comcast dot net 13. karen M : 5 subscriber karenmed409 at comcast dot net 14. karen M : Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 15. karen M : 2 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 16. karen M : 3 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 17. karen M : 4 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 18. karen M : 5 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 19. karen M : 6 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 20. karen M : 7 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 21. karen M : 8 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 22. karen M : 9 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 23. karen M : 10 Subscribe to the new Oak Lawn Deals site! karenmed409 at comcast dot net 24. karen M : Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 25. karen M : 2 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 26. karen M : 3 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 27. karen M : 4 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 28. karen M : 5 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 29. karen M : 6 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 30. karen M : 7 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 31. karen M : 8 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 32. karen M : 9 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 33. karen M : 10 Subscribe to the new Lisa’s Mom Deals site! karenmed409 at comcast dot net 34. karen M : rt -http://twitter.com/gummasplace/status/94255245764468736 35. karen M : rt -http://twitter.com/gummasplace/status/94384651900166144 36. Steph : I think my daughter would like the Mighty Math Mission cards 37. Steph : 38. Steph : Daily Tweet I think that my daughter will like the activity book the most because she loves puzzles and games. Stumbled post- MRSMEIER5627 Shared on Facebook: +1'd this post as Kimberley Meier(kimberleymeier@rocketmail.com) “Like” Lisa Reviews on Facebook- Kimberley Rose Meier 1) Added your button to my blog: 2) Added your button to my blog: 3) Added your button to my blog: 4) Added your button to my blog: 5) Added your button to my blog: 50. Steph : Daily Tweet 51. Jenny : Just tweeted it again and like it on Facebook 52. Jenny : I posted comments on the paper jamz, the coupon site and t-shirt.com 53. Steph : Daily Tweet 54. Steph : Daily Tweet 55. Jenny : I did another tweet 56. Jenny : I left a comment on the Conan and the mom needs mars 57. Jenny : I left a comment on Speekee tv 58. Ellen C. : I like the mission cards. Thanks for the chance. This looks great! Probably the activity book but I think the combination of everything makes this an amazing kit! Thanks for the giveaway! 60. Steph : Daily Tweet 61. Steph : Daily Tweet 62. Jenny : 63. Samantha P : My son would like the storybook the best. 
He has us reading books to him 24/7! marilynbmonroe (at) aol (dot) com 64. Samantha P : I like Lisa Reviews on Facebook as Samantha N Vernon Piper marilynbmonroe (at) aol (dot) com 65. Steph : 66. Rebecca Peters : I think they would like the dvd best 67. Melissa : My daughter would probably enjoy the DVD the most. 68. Paula Hafner : My son would love the dvd the most! 69. Paula Hafner : I Tweeted- https://twitter.com/ksh123/status/989814229493391… 70. Paula Hafner : I Stumbled- ksh123 71. Paula Hafner : email subscriber #1 72. Paula Hafner : email subscriber #2 73. Paula Hafner : email subscriber #3 74. Paula Hafner : email subscriber #4 75. Paula Hafner : email subscriber #5 76. Paula Hafner : newsletter subscriber #1 77. Paula Hafner : newsletter subscriber #2 78. Paula Hafner : newsletter subscriber #3 79. Paula Hafner : newsletter subscriber #4 80. Paula Hafner : newsletter subscriber #5 81. Paula Hafner : newsletter subscriber #6 82. Paula Hafner : newsletter subscriber #7 83. Paula Hafner : newsletter subscriber #8 84. Paula Hafner : newsletter subscriber #9 85. Paula Hafner : newsletter subscriber #10 I have a 3 yr old and a 7 yr old that are huge Umi fans… I think between the 2 they will enjoy all of it and hopefully learn while play! 87. Steph : Daily Tweet My daughter loves activity books. I like Lisa Reviews on Facebook. 90. Jenny : 91. susan smoaks : our daughter will love the activity book 92. Tiffany : My daughter would love the activity book! My daughter just asked me for this yesterday for her birthday! 93. Tiffany : Tweeted (@Mtlgrl4evr) 94. Tiffany : Like Lisa Reviews on Facebook Tiffany Hearn 95. Tiffany : Subscribe to this blog using RSS feed (Google) Entry #1 96. Tiffany : Subscribe to this blog using RSS feed (Google) Entry #2 97. Tiffany : Subscribe to this blog using RSS feed (Google) Entry #3 98. Tiffany : Subscribe to this blog using RSS feed (Google) Entry #4 99. Tiffany : Subscribe to this blog using RSS feed (Google) Entry #5 100. Tiffany : Signed up for your free newsletter Entry #1 101. Tiffany : Signed up for your free newsletter Entry #2 102. Tiffany : Signed up for your free newsletter Entry #3 103. Tiffany : Signed up for your free newsletter Entry #4 104. Tiffany : Signed up for your free newsletter Entry #5 105. Tiffany : Signed up for your free newsletter Entry #6 106. Tiffany : Signed up for your free newsletter Entry #7 107. Tiffany : Signed up for your free newsletter Entry #8 108. Tiffany : Signed up for your free newsletter Entry #9 109. Tiffany : Signed up for your free newsletter Entry #10 110. melanie : The activity book looks fun! 111. Leanne Hill : I think my son would like the mini games the most! 112. DanV : We'd like the activity book 113. DanV : I Subscribe to this blog using RSS feed 1 114. DanV : I Subscribe to this blog using RSS feed 2 115. DanV : I Subscribe to this blog using RSS feed 3 116. DanV : I Subscribe to this blog using RSS feed 4 117. DanV : I Subscribe to this blog using RSS feed 5 118. Gianna : The Activity book 119. Gianna : I'm an email subscriber. 120. Gianna : I'm an email subscriber. 2 121. Gianna : I'm an email subscriber. 3 122. Gianna : I'm an email subscriber. 4 123. Gianna : I'm an email subscriber. 5
{"url":"http://mythoughtsideasandramblings.com/2011/07/14/team-umizoomi-preschool-math-kit/","timestamp":"2014-04-20T06:28:50Z","content_type":null,"content_length":"149285","record_id":"<urn:uuid:0dcc11f6-6868-4f26-b0b8-15809adbe8b5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Wayland, MA Math Tutor Find a Wayland, MA Math Tutor ...I can meet near Alewife, Harvard or MIT and at your house by previous arrangement.I have presented before groups as large as 4,500, led courses, seminars and projects. I have thought a lot about the mechanism of developing a concept, presentation, talk and then how to deliver so everyone in the ... 63 Subjects: including algebra 2, GRE, SAT math, writing ...This lack of challenge can result in the student becoming bored and result in acting out behaviors. I use my knowledge and experience to provide you, the parent, recommendations on how to proceed in getting appropriate services for your child. Having been Director of a child care program with a... 45 Subjects: including algebra 2, precalculus, discrete math, SAT math I'm a very experienced and patient Math Tutor with a wide math background and a Ph.D. in Math from West Virginia University. I teach high school through college students and can teach in person or, if convenient, via Skype. I don't want to take your tests or quizzes, so I may need to verify in some way that I'm not doing that! 14 Subjects: including differential equations, logic, algebra 1, algebra 2 ...When the opportunity came to study at Harvard in the summer after my junior year, I independently prepared for the AP exam and went on to take organic chemistry I and II. I majored in chemistry at Boston University, and worked in organic, inorganic, and materials labs. During my college years, I also grew a fondness for tutoring. 15 Subjects: including precalculus, nutrition, fitness, algebra 1 ...I am passionate about history, and whether it is a paper you need to write, information and clarification or just piece of mind about your work, I am ready to attend to your needs in history. If you need a tutor for beginner to intermediate Italian, I am your man. As well as being a Spanish teacher, I am moderately fluent in Italian and do well tutoring beginners. 21 Subjects: including algebra 1, American history, vocabulary, grammar
{"url":"http://www.purplemath.com/Wayland_MA_Math_tutors.php","timestamp":"2014-04-18T23:28:54Z","content_type":null,"content_length":"24003","record_id":"<urn:uuid:64d64b58-f4ab-4e5a-aa8e-f60294e4c8fc>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Kannapinn in a Nutshell =?Windows-1252?Q?S=F6nke_Kannapinn?= <soenke.kannapinn@wincor-nixdorf.com> 6 May 2003 01:02:23 -0400 From comp.compilers Related articles Kannapinn in a Nutshell joachim_d@gmx.de (Joachim Durchholz) (2003-04-20) Re: Kannapinn in a Nutshell cdc@maxnet.co.nz (Carl Cerecke) (2003-04-27) Re: Kannapinn in a Nutshell joachim_d@gmx.de (Joachim Durchholz) (2003-04-27) Re: Kannapinn in a Nutshell soenke.kannapinn@wincor-nixdorf.com (=?Windows-1252?Q?S=F6nke_Kannapinn?=) (2003-05-06) Re: Kannapinn in a Nutshell cfc@world.std.com (Chris F Clark) (2003-05-06) Re: Kannapinn in a Nutshell cdc@maxnet.co.nz (Carl Cerecke) (2003-05-13) Re: Kannapinn in a Nutshell soenke.kannapinn@wincor-nixdorf.com (=?Windows-1252?Q?S=F6nke_Kannapinn?=) (2003-05-14) Re: Kannapinn in a Nutshell soenke.kannapinn@wincor-nixdorf.com (=?Windows-1252?Q?S=F6nke_Kannapinn?=) (2003-05-16) Re: Kannapinn in a Nutshell joachim.durchholz@web.de (Joachim Durchholz) (2003-06-20) | List of all articles for this month | From: =?Windows-1252?Q?S=F6nke_Kannapinn?= <soenke.kannapinn@wincor-nixdorf.com> Newsgroups: comp.compilers Date: 6 May 2003 01:02:23 -0400 Organization: Siemens Business Services References: 03-04-075 Keywords: parse Posted-Date: 06 May 2003 01:02:23 EDT > I found Kannapinn's thesis extremely well-written and easily > understandable. I have never done any serious parsing theory beyond > the Dragon book (and the Johnstone/??? paper on Tomita-style parsing), > and I found even the theoretic parts of the thesis readable. It was > even easy to trace which pieces of theory were relevant for which > piece of practical advice, something that's often woefully absent in > theoretic papers (if they dwell on practice at all). Thank you for the compliments and for trying to present a "nutshell version" of my thesis. While gives an impression of what readers will find in the paper, I'm not completely happy with your selection of the most important points regarding your sections 1--4 because you left out results that I find even more important than the ones you Especially, the thesis proves that, for a given grammar, there exists a lattice structure of infinitely many LR(k) parsers which all "behave like" the well known Knuth-style canonical LR(k) parser. In the English abstract of my thesis (which I encourage comp.compilers participants with a "parsing affinity" to read; see the link below) I call all these parsers "general LR(k) parsers". (Note that Tomita's GLR parsers are a completely different sort or generalization.) The "minimal LR(k) parser" in this lattice can effectively be computed by applying a minimization algorithm from automata theory. The canonical LR(k) parser and the (what you called) CLR(k) parser are merely special cases having important special properties that can be (and for the canonical case: are) used to simplify the implementation of the parser's actions. Notably, the amount of information available in the stacked parser states differs as follows: * _all_ general LR(k) parsers (if deterministic) have enough information available at their states such that the stacked states and grammar symbols "touched" by the parser actions always allow to determine which (shift or reduce) action is to be applied. (Reduce actions "touch" _all_ topmost stack states and grammar symbols that correspond to the handle plus the k-lookahead; shift actions "touch" only the topmost state plus the k-lookahead.) 
* _canonical_ LR(k) parsers, as a special case, know what action to perform from inspecting merely the topmost stack state, which makes implementation easy. (In other words, the formal canonical LR(k) parser - as opposed to it's usual implementation - contains a lot of redundant information in its states.) * CLR(k) parsers know, from looking at the topmost stack state, _whether_ a shift, or a reduce action is applicable, but still have to inspect more stack context to find out _which_ reduce action is to be applied. I would like to point the reader's attention to the didactical worth of the LR-related results in the thesis: It clearly distills what parts of the classical Knuth-style item set construction is actually needed in order to contruct formally correct LR parsers, and which parts are serving implementation purposes. I have more comments on your "nutshell version", but do not have more time to elaborate. In a recent post to comp.compilers ("Re: parsing, was: .NET compiler for interactive fiction"), Ralph P. Boland wrote w.r.t. my thesis: > > I downloaded his thesis from the Internet; it's currently available > > at http://edocs.tu-berlin.de/diss/2001/kannapinn_soenke.htm . > Thanks for the link. I think though in retrospect that I have heard > of his thesis before. I have asked him if he would be translating his > thesis into English or publishing any papers. Unfortunately is answer > is not soon. He works in industry now and get papers published is not > a high priority. :-( There is important material in his thesis that I > need to know but I can't count to one in German. I hope to get first work for a journal publication of my thesis done this summer, results concerning "general LR(k) parsers" being the first on the agenda. But this is work I'm not payed for. I encourage those of you who can't wait and don't speak German to look into the thesis anyway: * it has an English abstract, * its formal style is very close to the style of Sippu/Soisalon- Soininen's two-volume "Parsing Theory" book, which is in English and should be available at your home university's library. (If it's not, complain loudly! This is today's * it contains a series of diagrams that, together with the examples from the text, should still be useful for readers who know Sippu/Soisalon-Soininen's book, especially because the main example I used to illustrate the theoretic results is exactly the one used in Sippu/Soisalon-Soininen's chapter on LR parsing. * you can improve your German ;-) > 6. LR parsing for extended CFGs (ECFG) > [...] > Ironically, a detail in this part > of the thesis seems to be in error... the difference being that the > error is inconsequential for the statements made in the thesis ;-) Joachim, can you point me to what you believe to be erroneous? (Post it to comp.compilers if you feel that others should profit.) Dr. Sönke Kannapinn String.concat[ "soenke", ".", "kannapinn", "@", "wincor-nixdorf", ".", "com" ] Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
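To make the contrast above concrete, here is a minimal table-driven LR driver in Python for the toy grammar S -> a S | b. The grammar, the hand-built SLR(1) tables and the function name are assumptions of this sketch, not anything taken from the thesis; it only illustrates the canonical-style implementation mentioned in the post, in which the next action is chosen from the topmost stack state plus the lookahead alone (the line computing act), whereas a general LR(k) driver in the sense discussed above would be allowed to consult more of the stack before deciding between reduce actions.

# ACTION[state][lookahead] -> ("shift", next_state) | ("reduce", lhs, rhs_len) | ("accept",)
ACTION = {
    0: {"a": ("shift", 2), "b": ("shift", 3)},
    1: {"$": ("accept",)},
    2: {"a": ("shift", 2), "b": ("shift", 3)},
    3: {"$": ("reduce", "S", 1)},   # S -> b
    4: {"$": ("reduce", "S", 2)},   # S -> a S
}
GOTO = {0: {"S": 1}, 2: {"S": 4}}

def parse(tokens):
    tokens = list(tokens) + ["$"]
    stack = [0]                       # a stack of states is all this driver keeps
    pos = 0
    while True:
        state, lookahead = stack[-1], tokens[pos]
        act = ACTION.get(state, {}).get(lookahead)   # topmost state + lookahead decide everything
        if act is None:
            return False              # syntax error
        if act[0] == "accept":
            return True
        if act[0] == "shift":
            stack.append(act[1])
            pos += 1
        else:                         # reduce: pop |rhs| states, then GOTO on the left-hand side
            _, lhs, rhs_len = act
            del stack[len(stack) - rhs_len:]
            stack.append(GOTO[stack[-1]][lhs])

for s in ("b", "ab", "aab", "aa"):
    print(s, parse(s))

Running it prints True for b, ab and aab, and False for aa.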
{"url":"http://compilers.iecc.com/comparch/article/03-05-014","timestamp":"2014-04-19T02:07:34Z","content_type":null,"content_length":"11729","record_id":"<urn:uuid:d2ec3f51-13e1-4ba0-9ab6-bd90967cd7a5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
The Norm Operator

Syntax: | value |
See Also: norm function, abs, max

Returns the norm of an integer, real, double-precision, or complex value. An integer or real value returns a real scalar, and a double-precision or complex value returns a double-precision scalar. If the value is not complex, the norm is the square root of the sum of the squares of its elements. If the value is complex, the norm is the square root of the sum of the squares of the real and imaginary parts of its elements.

Scalar Norm
If the value is a scalar, the result is its absolute value.
print |-1|
O-Matrix will respond
1

Vector Norm
If the value is a vector, the result is the corresponding distance. For example
print |[3, 4]|
will result in
5
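For readers without O-Matrix at hand, the same three cases can be checked with a short NumPy sketch; this is only an analogue for verifying the arithmetic, not part of the O-Matrix documentation.

import numpy as np

print(abs(-1))                    # scalar norm: the absolute value, prints 1
print(np.linalg.norm([3, 4]))     # vector norm: sqrt(3^2 + 4^2), prints 5.0
print(np.linalg.norm([3 + 4j]))   # complex value: sqrt(3^2 + 4^2), prints 5.0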
{"url":"http://www.omatrix.com/manual/norm.htm","timestamp":"2014-04-21T12:18:51Z","content_type":null,"content_length":"4416","record_id":"<urn:uuid:0cb088c0-850d-4bcb-b842-9fcc4705aa52>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Newtown Square Statistics Tutor

...I am conversational in Spanish. As an Army Lieutenant with the 8th Special Forces I completed a 3-month Spanish language training, and was assigned to Ecuador for 8 months. I experienced and know the stress of learning a new language and living in a foreign culture.
35 Subjects: including statistics, English, reading, chemistry

...I've enjoyed tutoring students in elementary and middle school for the past 10 years. I'm patient, caring, and I'd love to help you improve in school! I'm certified in elementary education and middle school math and language arts.
21 Subjects: including statistics, reading, algebra 1, SAT math

...Finally, I also have a firm grasp of the Microsoft Office Suite. I prefer a hands-on approach to teaching and tutoring, an approach developed and polished during office hours as a TA and adjunct mathematics faculty. I find one-to-one tutoring most effective and personally rewarding. I have been studying and playing guitar for 15+ years.
26 Subjects: including statistics, geometry, GRE, algebra 1

...I have over 20 years of professional writing experience in business and scientific settings. I also managed a team of consultants for 15 years, which included reviewing and training them on report writing, so while I am new to tutoring, I do have extensive experience in improving the writing skil...
28 Subjects: including statistics, writing, geometry, biology

...I spearheaded a literacy program that focused on vocabulary development and reading comprehension for elementary boys, using comic books and short stories about super heroes to get them engaged and keep them motivated while learning. I have a Master's Degree in Speech and Language and am thorou...
51 Subjects: including statistics, English, reading, geometry
{"url":"http://www.purplemath.com/newtown_square_statistics_tutors.php","timestamp":"2014-04-17T19:19:41Z","content_type":null,"content_length":"24209","record_id":"<urn:uuid:8c256d6f-f4b9-47df-8ee5-42ae99512f3b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
help...integrals of trigonometric functions Instead of looking at the answer, try just writing everything in terms of t using the t = sin x substitution. Tell us what you get when you do that and if your answer comes out wrong, write down the steps you used. I'm guessing you'll figure it out for yourself if you go through this exercise.
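Since the integral in question is not quoted in the thread, the following is only a representative example of how the suggested substitution works out: with $t = \sin x$ and $dt = \cos x \, dx$, $\int \sin^2 x \cos x \, dx = \int t^2 \, dt = t^3/3 + C = \sin^3 x / 3 + C$. Whatever the actual integrand is, the same bookkeeping applies: rewrite every trigonometric factor in terms of t, convert dx through dt, and only then integrate.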
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=31817","timestamp":"2014-04-21T15:09:56Z","content_type":null,"content_length":"13054","record_id":"<urn:uuid:cec77f03-bf31-46ef-8ea5-c72b6b771cab>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
CFI Forums | The two envelopes problem 4 of 132 « First Prev 2 3 4 5 6 Next Last » The two envelopes problem kkwan Posted: 10 January 2012 07:09 AM [ Ignore ] [ # 46 ] Sr. Member It’s irrational to take that into account. Total Posts: 1823 That’s the solution. Joined 2007-10-28 Why is it irrational? GdB Posted: 10 January 2012 08:41 AM [ Ignore ] [ # 47 ] Sr. Member It’s irrational to take that into account. Total Posts: 4375 That’s the solution. Joined 2007-08-31 Why is it irrational? Why not? There are 2 envelopes, envelope 1 with amount A and envelope 2 with amount 2A. There are 4 possible situations: a. I took envelope 1 and do not switch: I get A. Gain -A b. I took envelope 2 and do not switch: I get 2A. Gain: A c. I took envelope 1 and switch: I get 2A. Gain: A d. I took envelope 2 and switch: I get A. Gain: -A So chances of gaining A when switching 2/4 = 1/2 (i.e. 50%) Chances of losing A: 2/4 = 1/2, also 50%. Are you just trolling, or do you really believe that you are right? I assume I can prove to you that 1 equals 2: a = b who are non-zero Multiply both sides by a: a^2= ab Subtract b^2: a^2 - b^2 = ab - b^2 (a - b)(a + b) = b(a - b) Now we can take (a - b) away on both sides: a + b = b Now we said that a = b: b + b = b Put them together: 2b = b Divide by b: 2 = 1 kkwan Posted: 10 January 2012 10:50 AM [ Ignore ] [ # 48 ] Sr. Member You’re viewing it as similar to this: Total Posts: I have £10 On the toss of a coin I’m offered double or half so I should do it. 2007-10-28 In the this case double or half is two different amounts £5 or £ 10 so I can gain more than I can lose. Not so wrt the two envelopes. Initially, you have nothing. You selected an envelope but you don’t know whether the envelope you selected has £10 or £20 or whether its amount is larger or smaller than the other envelope. You selected an envelope with £10 in it. On switching you have £20 in the other envelope You selected an envelope with £20 in it. On switching you have £10 in the other envelope. You end up with either £20 or £10. Now, if you selected the £10 envelope and did not switch, you have missed the opportunity to get another £10. (100% gain) OTOH, if you selected the £20 envelope and did not switch, you have avoided the loss of £10. (50% loss) You end up with either £10 or £20. However, the order of £10 and £20 is reversed wrt switching and not switching. So, why switch if the final outcomes are £20 or £10, £10 or £20? That might be the actuality for one instance, but what about multiple instances? Because you don’t know what is in both envelopes and the odds are even, the guide to switch or not to switch is the expected value: Where A is the amount in the selected envelope. Whether A is £10 or £20, the expected value is more than A ....... you gain on average by switching. If there were multiple instances of choosing the two envelopes, you gain on average by switching which means that there will be more £20 verses £10 outcomes and vice versa for not Paradoxically, this is recommended ad infinitum in steps 9, 10 and 11. StephenLawrence Posted: 10 January 2012 11:44 AM [ Ignore ] [ # 49 ] Sr. Member Now, if you selected the £10 envelope and did not switch, you have missed the opportunity to get another £10. (100% gain) Total Posts: 100% of £10 OTOH, if you selected the £20 envelope and did not switch, you have avoided the loss of £10. (50% loss) 2006-12-20 50% of £20 You end up with either £10 or £20. However, the order of £10 and £20 is reversed wrt switching and not switching. 
So, why switch if the final outcomes are £20 or £10, £10 or £20? That might be the actuality for one instance, but what about multiple instances? Because you don’t know what is in both envelopes and the odds are even, the guide to switch or not to switch is the expected value: Where A is the amount in the selected envelope. Whether A is £10 or £20, the expected value is more than A ....... you gain on average by switching. If there were multiple instances of choosing the two envelopes, you gain on average by switching which means that there will be more £20 verses £10 outcomes and vice versa for not Paradoxically, this is recommended ad infinitum in steps 9, 10 and 11. What is happening is your sum is taking acoount of the 100% gain or 50% loss but what is not included in your sum is the cancelling out effect of the fact that the 100% gain is of half the amount and the 50% loss is of double the amount. Occam. Posted: 10 January 2012 11:54 AM [ Ignore ] [ # 50 ] Moderator It certainly is not irrational. All of the numbers you are dealing with are rational. Total Posts: 5184 Occam Joined 2010-06-16 kkwan Posted: 10 January 2012 12:50 PM [ Ignore ] [ # 51 ] Sr. Member There are 2 envelopes, envelope 1 with amount A and envelope 2 with amount 2A. There are 4 possible situations: Total Posts: 1823 a. I took envelope 1 and do not switch: I get A. Gain -A b. I took envelope 2 and do not switch: I get 2A. Gain: A Joined 2007-10-28 c. I took envelope 1 and switch: I get 2A. Gain: A d. I took envelope 2 and switch: I get A. Gain: -A So chances of gaining A when switching 2/4 = 1/2 (i.e. 50%) Chances of losing A: 2/4 = 1/2, also 50%. Chance of gaining A when switching is 1/4. In b. there is no switching. In a. and b. if the contents are known and are compared to the amounts in the other envelopes, then gain is -A and A respectively. But, it is not known which envelope has A or 2A. Randomly selecting one gives you no information that it is either A or 2A. Thus, by not choosing the option to switch, the loss/gain are not actualized, therefore: a. Gain/loss = unknown b. Gain/loss = unknown c. Gain = A d. Loss = A Chance of gain A when switching is 1/4 (25%) Chance of loss A when switching is 1/4 (25%) Chance of unknown gain/loss by not switching is 2/4 = 1/2 (50%) Write4U Posted: 10 January 2012 03:14 PM [ Ignore ] [ # 52 ] Sr. Member I agree with kkwan (a dubious endorsement, I know) Total Posts: 5976 All this switching does not influence the outcome one bit. Every equation that has been presented favoring one or the other can also be presented in reverse order. Joined 2009-02-26 This is how I see it a gain of +10 has a certainty of 100% a gain of +20 has a probabuility of 50% regardless which envelope you chose first or how many times you may switch. at no time do you actually posess +20, but only a 50% “chance” at +20 Any regrets of ‘not being rewarded’ with +20 and viewing it as a ‘loss’ is a psychological illusion. It remains a matter of luck of the draw, which only materializes when opening the envelope of choice. One might even invoke “vagueness”......... [ Edited: 10 January 2012 03:52 PM by Write4U ] Write4U Posted: 10 January 2012 04:12 PM [ Ignore ] [ # 53 ] Sr. Member let me reduce the problem to its most simple form. Total Posts: Someone places two envelopes on the table in front of you and says. “One envelope contains more money than the other. Take your pick.” How can you possibly ascertain which envelope has the larger amount? 
And if you are never told the actual amounts, how would you even know if you picked the envelope with the larger Joined or the lesser amount? This is an exercise in futility. The introduction of amounts is used only as a false “known”, which causes a psychological expectation of gain or loss. edited to add: The fallacy lies in the term ‘switching’. In reality you can only pick one or the other. [ Edited: 10 January 2012 07:37 PM by Write4U ] TimB Posted: 10 January 2012 08:31 PM [ Ignore ] [ # 54 ] Sr. Member What am I? Stephen Hawkings? Give me an envelope. That’s my envelope and I’m stickin to it. Total Posts: 2788 Joined 2011-11-04 Write4U Posted: 10 January 2012 08:40 PM [ Ignore ] [ # 55 ] Sr. Member What am I? Stephen Hawkings? Give me an envelope. That’s my envelope and I’m stickin to it. Total Posts: 5976 There you go….. Joined 2009-02-26 kkwan Posted: 10 January 2012 09:18 PM [ Ignore ] [ # 56 ] Sr. Member 100% of £10 Total Posts: Yes, by fixing A= £10 as a known amount, but we don’t know the value of A…....is it the lower or higher amount and what if you switch? You will never know because you selected an 1823 envelope randomly and did not switch. Joined 50% of £20 Again, only if A is a fixed known value but it is not known. What is happening is your sum is taking acoount of the 100% gain or 50% loss but what is not included in your sum is the cancelling out effect of the fact that the 100% gain is of half the amount and the 50% loss is of double the amount. What is this “cancelling out effect” and what/how/why does it cancel wrt the expected value? Please explain. kkwan Posted: 10 January 2012 09:21 PM [ Ignore ] [ # 57 ] Sr. Member One might even invoke “vagueness”......... Total Posts: 1823 And potential…............ Joined 2007-10-28 kkwan Posted: 10 January 2012 09:26 PM [ Ignore ] [ # 58 ] Sr. Member What am I? Stephen Hawkings? Give me an envelope. That’s my envelope and I’m stickin to it. Total Posts: 1823 Good strategy, but are you not interested to get double the amount by switching? Joined 2007-10-28 kkwan Posted: 10 January 2012 09:42 PM [ Ignore ] [ # 59 ] Sr. Member How can you possibly ascertain which envelope has the larger amount? And if you are never told the actual amounts, how would you even know if you picked the envelope with the larger or the lesser amount? Total Posts: 1823 Exactly, unless you have x-ray vision, some fantastic premonition or you peeked. Joined This is an exercise in futility. The introduction of amounts is used only as a false “known”, which causes a psychological expectation of gain or loss. Quite so, but the expected value can be calculated and it is more than A (whatever its value). The fallacy lies in the term ‘switching’. In reality you can only pick one or the other. But, you are offered a switch without prejudice after you have selected the envelope. If a casino offered this game, all who played could walk out richer (whether if they switched or not) and those who are forever switching get richer ad infinitum whereas the casino will go bust. [ Edited: 10 January 2012 09:52 PM by kkwan ] Write4U Posted: 10 January 2012 10:13 PM [ Ignore ] [ # 60 ] Sr. Member kkwan But, you are offered a switch without prejudice after you have selected the envelope Total Posts: 5976 Yes, you can switch a million times and twenty years, but you don’t get to open the envelope until you have made your FINAL pick, i.e. Joined 2009-02-26 TimB - 10 January 2012 08:31 PM What am I? Stephen Hawkings? Give me an envelope. 
That’s my envelope and I’m stickin to it. [ Edited: 10 January 2012 10:19 PM by Write4U ] 4 of 132 « First Prev 2 3 4 5 6 Next Last »
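The expected-value expression kkwan appeals to is presumably the standard switching figure $E = \frac{1}{2}(2A) + \frac{1}{2}(A/2) = \frac{5}{4}A > A$. A quick Monte Carlo check for one fixed pair of amounts {10, 20} illustrates Stephen's cancelling-out point: once the pair is fixed, keeping and switching average out to the same value. The amounts and trial count below are arbitrary, and the sketch is not meant to settle the thread, only to show what happens for a concrete pair.

import random

def average_payout(switch, a=10, trials=200_000):
    total = 0
    for _ in range(trials):
        pair = [a, 2 * a]          # one envelope holds a, the other 2a
        random.shuffle(pair)
        picked, other = pair
        total += other if switch else picked
    return total / trials

print("keep:  ", average_payout(switch=False))   # about 15
print("switch:", average_payout(switch=True))    # about 15

Both strategies average (10 + 20)/2 = 15. One common way of putting it is that the 5A/4 computation uses the symbol A for two different quantities: the smaller amount in the branch where switching gains, and the larger amount in the branch where switching loses.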
{"url":"http://www.centerforinquiry.net/forums/viewthread/12459/P45/","timestamp":"2014-04-21T02:02:04Z","content_type":null,"content_length":"101205","record_id":"<urn:uuid:54b2687e-a9f8-4b9d-959f-5df60a641f84>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: STATE ESTIMATING APPARATUS AND STATE ESTIMATING PROGRAM Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A state estimating apparatus permits efficient, highly accurate estimation of the state of an object. A particle in a state variable space defined by a second state variable preferentially remains or increases as the likelihood thereof relative to a current measured value of a first state variable is higher, while a particle is preferentially extinguished as the likelihood thereof is lower. A particle which transitions in the state variable space according to a state transition model with a high probability of being followed by an object (a high-likelihood model) as a next model tends to increase. On the other hand, although in a small quantity, there are particles having models (low-likelihood models) which are different from the high-likelihood model as their unique models. A state estimating apparatus comprising:a first element which measures a value of a first state variable indicative of a state of an object, disperses at least some particles among a plurality of particles around previous positions in a state variable space defined by a second state variable, transits each of the plurality of particles to a current position according to a current model, said current model being unique to each particle and fluidly changing, among a plurality of models, assesses a likelihood of each particle relative to a current measured value of the first state variable, and estimates a current state of the object on the basis of the likelihood; anda second element which preferentially allows the particle, which has been assessed by the first element and the likelihood of which is high, to remain or increase, and preferentially extinguishes the particle, which has been assessed by the first element and the likelihood of which is low, and then determines a next model unique to each particle on the basis of the current model unique to each particle. The state estimating apparatus according to claim 1, whereinthe second element determines a current model unique to the particle according to one region to which a current value of a model variable, which is unique to each particle and which fluidly changes, belongs among a plurality of regions defined in correspondence with each of the plurality of models, respectively. The state estimating apparatus according to claim 2, whereinthe second element changes a next value such that a perturbation amount from a current value to a next value of the model variable falls within a permissible range, and determines a next model unique to the particle according to a region to which the next value belongs. The state estimating apparatus according to claim 1, whereinthe first element estimates at least one of a current model unique to a particle, the likelihood of which becomes a maximum, and a current model common to particles, the total sum of the likelihoods of which becomes a maximum, as a previous state transition model of the object. The state estimating apparatus according to claim 1, whereinthe first element estimates a weighted mean value, which uses the likelihood of the current position of each particle as the weight, as a current value of the second state variable of the object. 
A state estimating program which causes a computer to function as a state estimating apparatus, the state estimating apparatus comprising:a first element which measures a value of a first state variable indicative of a state of an object, disperses at least some particles among a plurality of particles around previous positions in a state variable space defined by a second state variable, transits each of the plurality of particles to a current position according to a current model, said current model being unique to each particle and fluidly changing, among a plurality of models, assesses a likelihood of each particle relative to a current measured value of the first state variable, and estimates a current state of the object on the basis of the likelihood; anda second element which preferentially allows the particle, which has been assessed by the first element and the likelihood of which is high, to remain or increase, and preferentially extinguishes the particle, which has been assessed by the first element and the likelihood of which is low, and then determines a next model unique to each particle on the basis of the current model unique to each BACKGROUND OF THE INVENTION [0001] 1. Field of the Invention The present invention relates mainly to an apparatus which estimates the state of an object by using a particle filter. 2. Description of the Related Art In order to improve the accuracy of tracking an object by using a particle filter, there has been proposed a technique in which a plurality of motion models, each of which matches each of a plurality of motion characteristics of the object, is prepared for an entire set of particles, and these plural motion models are adaptively changed (refer to Japanese Patent Application Laid-Open No. However, each particle is displaced according to each of the plurality of models corresponding thereto at each particular time, and then the likelihood relative to an observation result is calculated to select one model owned by a group of particles having a higher likelihood among the plurality of models, thereby tracking the object. In other words, the same number of likelihoods relative to the observation results as the number obtained by multiplying the number of particles by the number of models at each particular time is calculated. This is inefficient, because the arithmetic processing volume required for tracking the object inevitably becomes enormous. On the other hand, if the number of particles is decreased to reduce the arithmetic processing volume, then the object tracking accuracy is likely to deteriorate. SUMMARY OF THE INVENTION [0006] The present invention, therefore, has been made with a view toward solving the above problem, and it is an object thereof to provide mainly an apparatus capable of efficiently estimating the state of an object with high accuracy. 
To this end, a state estimating apparatus according to a first aspect of the invention includes a first element which measures the value of a first state variable indicative of the state of an object, disperses at least some particles among a plurality of particles around previous positions in a state variable space defined by a second state variable, transits each of the plurality of particles to a current position according to a current model, which is unique to each particle and which fluidly changes, among a plurality of models, assesses the likelihood of the each particle relative to a current measured value of the first state variable, and estimates the current state of the object on the basis of the likelihood; and a second element which preferentially allows the particle which has been assessed by the first element and the likelihood of which is high to exist or increase, whereas preferentially extinguishes the particle which has been assessed by the first element and the likelihood of which is low, then determines a next model unique to the each particle on the basis of the current model unique to the each particle. According to the state estimating apparatus in accordance with the first aspect of the invention, in a state variable space defined by a second state variable, the state of an object is estimated by (1) transferring each particle to a current position according to a current model unique thereto, and then (2) assessing the likelihood of each particle relative to a current measured value of a first state variable. This reduces the arithmetic processing load on the apparatus required to estimate an object, as compared with a case where (1') each particle transitions according to each of a plurality of models, and (2') the likelihood of each particle relative to a measured value of the first state variable is assessed. The first state variable and the second state variable may be the same or different. The second state variable may be different from the first state variable and may be a hidden variable, which is not measured. Further, a particle with higher likelihood relative to a current measured value of the first state variable preferentially exists or increases, while a particle with lower likelihood preferentially extinguishes. Hence, particles transferred in the state variable space according to state transition models with high probability that the object is following (hereinafter referred to as "models with high likelihood," as appropriate) tend to gradually increase. Further, a next model is determined on the basis of a current model unique to a particle. Hence, a particle transferred in the state variable space having a high likelihood model as the next model tends to increase. On the other hand, models fluidly change, thus leaving an allowance for a different model from a current model to be determined as the next model, while the current model remaining as the basis. Thus, although in a small quantity, there are particles using models which are different from high-likelihood models (hereinafter referred to as "low-likelihood models," as appropriate) as the models unique thereto in order to provide a brake against the increase of the particles having high-likelihood models as the models unique thereto. 
With this arrangement, even if the object starts state transition according to a model which is different from a previous model, the state of the object can be estimated with high accuracy by flexibly responding to a change of the model by changing a past low-likelihood model to a future high-likelihood model. This means that the state of the object can be efficiently estimated with high accuracy. According to a state estimating apparatus in accordance with a second aspect of the invention, in the state estimating apparatus according to the first aspect of the invention, the second element determines a current model unique to the particle according to a region to which a current value of a model variable, which is unique to the each particle and which fluidly changes, belongs among a plurality of regions, defined in correspondence with the plurality of models, respectively. The state estimating apparatus according to the second aspect of the invention makes it possible to fluidly change the model unique to each particle by fluidly changing the value of a model variable unique to each particle. Thus, as described above, even if the object starts state transition according to a model which is different from a previous model, the state of the object can be efficiently estimated with high accuracy by flexibly responding to the change of the model by changing a past low-likelihood model to a future high-likelihood model. According to a state estimating apparatus in accordance with a third aspect of the invention, in the state estimating apparatus according to the second aspect of the invention, the second element changes the next value such that the perturbation amount from a current value to a next value of the model variable falls within a permissible range, and determines a next model unique to the particle on the basis of a region to which the next value belongs. According to the state estimating apparatus in accordance with the third aspect of the invention, perturbatiog the value of a model variable allows a current model to be determined so that a previous model unique to a particle that has remained by controlling the perturbation amount of the value to a permissible range is inherited to a certain extent, while leaving an allowance for a different model from the previous model to be determined as the current model. This arrangement makes it possible to estimate the state of the object with high accuracy while respecting the situation in which the probability of the IC object changing the state thereof following a high-likelihood model. According to a state estimating apparatus in accordance with a fourth aspect of the invention, in the state estimating apparatus according to the first aspect of the invention, the first element estimates a current model unique to a particle, the likelihood of which becomes a maximum, or a current model common to particles, the total sum of the likelihoods of which becomes a maximum, as a previous state transition model of the object. According to the state estimating apparatus in accordance with the fourth aspect of the invention, the current model unique to a particle whose likelihood becomes a maximum is estimated as the previous state transition model of the object. Alternatively, the current model common to particles, the total of likelihoods of which becomes a maximum, is estimated as the previous state transition model of the object, taking into account that a particle tends to increase as the likelihood thereof is higher, as described above. 
According to a state estimating apparatus in accordance with a fifth aspect of the invention, in the state estimating apparatus according to the first aspect of the invention, the first element estimates a weighted mean value, which uses the likelihood of the current position of the each particle as the weight, as the current value of the second state variable of the object. According to the state estimating apparatus in accordance with the fifth aspect of the invention, the weighted mean value of the current position of a particle, which uses the likelihood as the weight, is estimated as the current value of a second state variable. Thus, it is estimated that the object is in a state wherein the object is defined by an estimated value of the second state A state estimating program in accordance with a sixth aspect of the invention causes a computer to function as the state estimating apparatus in accordance with the first aspect of the invention. According to the state estimating program in accordance with the sixth aspect of the invention, a computer can be functioned as an apparatus for efficiently estimating the state of an object with high accuracy. BRIEF DESCRIPTION OF THE DRAWINGS [0021] FIG. 1 is a graphical illustration related to the construction of a state estimating apparatus in accordance with the present invention; [0022]FIG. 2 is a flowchart illustrating functions of the state estimating apparatus in accordance with the present invention; [0023]FIG. 3 is a graphical illustration related to the observation of an object; [0024]FIG. 4 is a first graphical illustration related to a process for producing a particle filter; FIG. 5 is a second graphical illustration related to the process for producing a particle filter; and [0026]FIG. 6 is a graphical illustration related to model variables. DESCRIPTION OF THE PREFERRED EMBODIMENTS [0027] An embodiment of the state estimating apparatus in accordance with the present invention will be described with reference to the accompanying drawings. First, the construction of the state estimating apparatus in accordance with the present invention will be described. A state estimating apparatus 1 illustrated in FIG. 1 is constituted of a computer, which primarily includes a CPU, a ROM, a RAM, I/O, and other electronic circuits. A memory of the computer stores a state estimating program for the computer to function as the state estimating apparatus 1. The state estimating program is read by the CPU from the memory and arithmetic processing is carried out according to the program. The state estimating apparatus 1 has a first element 11 and a second element 12. Each of the first element 11 and the second element 12 is physically constituted of a memory and an arithmetic processor (CPU) which reads a program from the memory and carries out arithmetic processing for which it is responsible. The first element 11 measures a first state variable value indicative of the state of an object. The first element 11 disperses at least some particles among a plurality of particles around previous positions in a state variable space defined by a second state variable. The first element 11 transits each of the plurality of particles to a current position according to a current model which is unique thereto and which fluidly changes among a plurality of models. The first element 11 assesses the likelihood of each particle relative to a current measured value of a first state variable to estimate the current state of the object on the basis of the likelihood. 
The second element 12 allows a particle whose likelihood assessed by the first element 11 is high to preferentially remain or increase, while allowing a particle whose likelihood assessed by the first element 11 is low to preferentially extinguish. Thereafter, the second element 12 determines the next model unique to each particle on the basis of a current model unique to each particle. The function of the state estimating apparatus 1 having the aforesaid construction will be described. A case will be considered where the position of a ball Q (hereinafter referred to as "the object position") is estimated as the value of a second state variable in a situation wherein the ball (object) Q in a hand of a human being is moved as the human being moves at time k as illustrated in FIG. 3 (a), the ball Q is about to leave the hand at time k+1 as illustrated in FIG. 3 (b), and the ball Q has Left the hand at time k+2 as illustrated in FIG. 3 (c), "k" denoting the number of repeated cycles of the arithmetic processing carried out by the state estimating apparatus 1. A model in which the object position changes in a state wherein no gravity is acting on the ball Q is defined as a first model while a model in which the object position changes in a state wherein gravity is acting on the ball Q is defined as a second model. Hereinafter, a particle to which the first model has been assigned as its unique model will be referred to as a first-class particle, while a particle to which the second model has been assigned as its unique model will be referred to as a second-class particle. First, an index k denoting an arithmetic processing cycle or time is reset (S001 in FIG. 2 ), an image taken by an imaging device, such as a CCD camera, is input to the state estimating apparatus 1, and an object position x (k) at time k is measured as a first state variable by the first element 11 on the basis of the input image (S002 in FIG. 2 Further, a plurality of particles y (k) (i=1, 2, . . . ) is dispersed and arranged in a state variable space by the second element 12 (S004 in FIG. 2 ). Except for an initial state (k=0), some of the plurality of particles y (k) are dispersed around a previous position in the state variable space according to normal distribution or Gaussian distribution. Thus, as conceptually illustrated in FIG. 4 , the first-class particles denoted by black circles and the second-class particles denoted by white circles are arranged in the state variable space. Further, as illustrated in FIG. 5, the first-class particles denoted by upward arrows and the second-class particles denoted by downward arrows are dispersed and arranged around measured values denoted by black dots of state variables. The state variable space is a space defined by a second state variable. If the second state variable is scalar, then the state variable space is defined as a one-dimensional space, and if the second state variable is an n-dimensional vector (n=2, 3, . . . ), then the state variable space is defined as an n-dimensional space. The object position x (k) as the second state variable, which is the object to be estimated, is scalar; therefore, the state variable space is defined as a one-dimensional space. Each particle y (k) has a model variable α (k), the value of which fluidly changes, as illustrated in FIG. 6 . As will be discussed later, the changing mode of the model variable α (k) is adjusted by the second element 12. 
If the value of the model variable α (k) belongs to a first definition area, then the first model is assigned as the unique model to the particle y (k). If the value of the model variable α (k) belongs to a second definition area, then the second model is assigned as the unique model to the particle y Incidentally, the type of a selectable model may be different for each particle y (k). For instance, one of the first model and the second model may be selected as the unique model for some particles, while one of the first model and a third model, which is different from the second model, may be selected as the unique model for the remaining particles. Further, each particle y (k) is transferred in the state variable space by the first element 11 according to the unique model thereof (S006 in FIG. 2 ). Thus, the first-class particles indicated by black circles and the second-class particles indicated by white circles are transferred and arranged in the state variable space, as illustrated in FIG. 4 Further, likelihood p (k)=p (y (k)|x(k)) of each particle y (k) relative to the object position x(k) measured by the first element 11 is assessed (S008 in FIG. 2 ). Thus, the likelihoods p (k) are calculated, the levels of which are represented by the magnitudes of the diameters of the particles in FIG. 4 Then, the first element 11 estimates, as the object L position x(k), a weighted mean value Σ (k) of each particle y (k) using the likelihood p (k) as the weight (S010 in FIG. 2 ). Alternatively, a particle y (k) whose likelihood p (k) becomes a maximum or the mean value of particles y (k) or a weighed means value or the like using the likelihood p(k as the weight whose likelihoods p (k) are within predetermined high ranks may be estimated as the object position x(k). Further, based on the likelihood (or the probability density distribution) p (k) of each particle discretely expressed relative to the object position x(k) at time k, the second element 12 determines whether each particle y (k) should be allowed to remain or should be extinguished or split up (S012 in FIG. 2 ). Thus, particles with higher likelihood p (k) preferentially remain or increase, while particles with lower likelihood p (k) are preferentially extinguished. Thereafter, based on the unique model, i.e., the current model, assigned to each remaining particle y (k), the second element 12 determines a new unique model, i.e., the next model (S014 in FIG. 2 To be specific, a current model variable value α (k) unique to each particle y (k) is increased or decreased by a perturbation amount δ (k) thereby to determine the next model variable value α (k+1). If the next model variable value α (k+1) belongs to the first definition area, then the first model will be determined as the next unique model. Similarly, if the next model variable value α (k+1) belongs to the second definition area, then the second model will be determined as the next unique model. The perturbation amount δ (k) is adjusted so as to fall within a permissible range. Subsequently, it is determined whether the processing of estimating the object position x(k) has been terminated (S016 in FIG. 2 ). If it is determined that the estimation processing has not been terminated (NO in S016 in FIG. 2 ), then the index k is incremented by 1 (S017 in FIG. 2 ), and then the aforesaid series of processing, such as the measurement of the object position x(k) and the assessment of the likelihood p (k), is repeated (refer to S002 to S016 in FIG. 
2 According to the state estimating apparatus 1, which exhibits the functions described above, to estimate the state of an object, (1) each particle transitions to the current position according to a current model unique thereto and (2) the likelihood p (k) of each particle relative to the current measured value x(k) of the first state variable is assessed in the state variable space defined by the second state variable. Hence, the arithmetic processing load on the state estimating apparatus 1 required for the estimation of an object is reduced, as compared with a case where (1') each particle transitions according to each of the plurality of models, and (2') the likelihood p (k) of each particle relative to the measured value x(k) of the first state variable is assessed. Further, particles with lower likelihood p (k) are preferentially extinguished, while particles with higher likelihood p (k) preferentially remain or increase (refer to S012 in FIG. 2 ). Therefore, the particles which have transitioned in the state variable space according to a state transition model having high probability of following an object (high-likelihood model) tend to gradually increase. For example, as illustrated in FIG. 4 and FIG. 5, respectively, the particles denoted by black circles or upward arrows having the first model gradually increase from time k+1 to time k+2 as the high-likelihood model transition from time k to time k+1. Further, the next models are determined on the basis of the current models unique to individual particles (refer to S014 in FIG. 2 ). Hence, particles which transition in the state variable space by using the high-likelihood models as their next models tend to increase (refer to FIG. 5). On the other hand, since the models fluidly change, there is an allowance for a model which is different from the current model to be determined as the next model, while the current model remaining the basic model (refer to FIG. 6 Thus, although in a small quantity, there are particles using models which are different from the high-likelihood models (the low-likelihood models) as the models unique thereto in order to provide a brake against the increase of the particles having high-likelihood models as the models unique thereto. For instance, as illustrated in FIG. 4 and FIG. 5, respectively, the particles having the first model, which is the high-likelihood model (denoted by the black circles or the upward arrows), gradually increase during transition from time k to time k+1, and time k+1 to time k+2, while at the same time, the particles having the second model, which is the low-likelihood model (denoted by the white circles and the downward arrows), also exist. Thus, even if the object starts state transition according to a model which is different from a previous model, the state that is represented by the estimated value of a state variable of the object can be estimated with high accuracy by flexibly responding to the change of the model by changing a past low-likelihood model to a future high-likelihocd model. This means that the state of the object can be efficiently estimated with high accuracy. Further, a next value is adjusted such that the perturbation amount δ (k) from the current value α (k) of a model variable to the next value α (k+1) falls within a permissible range, and the next model unique to a particle is determined according to a region to which the next value α (k+1) belongs (refer to S014 in FIG. 2 FIG. 6 ). 
Perturbating the model variable value α (k) allows a current model to be determined so that a previous model unique to a particle that has remained by controlling the perturbation amount δ (k) thereof to a permissible range is inherited to a certain extent, while leaving an allowance for a different model from the previous model to be determined as the current model. This arrangement makes it possible to estimate the state of the object with high accuracy while respecting the situation in which the probability of the object changing the state thereof by following the high-likelihood model. As the first state variable and the second state variable, a velocity, acceleration, or the combination of a velocity and acceleration may be estimated in place of the position of an object. Further, the first state variable and the second state variable may differ from each other. In addition, a variety of models may be adopted as the first model and the second model. For example, a model which defines a behavior mode of an object in a state wherein a frictional force or a binding force is acting on the object may be adopted as the first model, while a model which defines the behavior mode of the object in a state wherein neither frictional force nor binding force is acting or the object may be adopted as the second model. Moreover, the first element 11 may estimate, as a previous state transition model of an object, a current model unique to a particle whose likelihood p (k) becomes a maximum or a current model common to particles whose total sum Σ (k) of the likelihood p (k) becomes a maximum. Patent applications by Tadaaki Hasegawa, Wako-Shi JP Patent applications by HONDA MOTOR CO., LTD. Patent applications in class Mechanical Patent applications in all subclasses Mechanical User Contributions: Comment about this patent or add new information about this topic:
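As a rough illustration of the loop described above, the Python sketch below runs a one-dimensional particle filter in which every particle carries its own model variable alpha, and the sign of alpha selects between two toy transition models, roughly "held in the hand" versus "falling under gravity". All constants, noise levels and the two toy models are assumptions of this sketch and are not taken from the patent text; the dispersion of particles (S004) is folded into the process noise of the transition step.

import numpy as np

rng = np.random.default_rng(0)
N, DT, G = 500, 0.2, 9.8
MEAS_STD, PROC_STD, PERTURB = 0.3, 0.05, 0.2

y = np.zeros(N)                    # particle positions (second state variable)
alpha = rng.uniform(-1.0, 1.0, N)  # per-particle model variable

def transition(y, alpha):
    # S006: move each particle under its own current model.
    model2 = alpha >= 0.0                          # region of alpha -> model assignment
    drift = np.where(model2, -G * DT * DT, 0.0)    # toy model 2: gravity pulls the particle down
    return y + drift + rng.normal(0.0, PROC_STD, y.size)

def filter_step(y, alpha, x_meas):
    y = transition(y, alpha)
    # S008: likelihood of each particle w.r.t. the measured first state variable.
    lik = np.exp(-0.5 * ((y - x_meas) / MEAS_STD) ** 2) + 1e-300
    w = lik / lik.sum()
    estimate = np.sum(w * y)                       # S010: likelihood-weighted mean
    # S012: resample, so high-likelihood particles remain or multiply.
    idx = rng.choice(N, size=N, p=w)
    y, alpha = y[idx], alpha[idx]
    # S014: perturb the model variable within a bounded range, so the next model
    # mostly inherits the current one but can still flip.
    alpha = np.clip(alpha + rng.uniform(-PERTURB, PERTURB, N), -1.0, 1.0)
    return y, alpha, estimate

true_y = 0.0
for k in range(30):
    falling = k >= 15                              # the ball leaves the hand halfway through
    true_y += -G * DT * DT if falling else 0.0
    x_meas = true_y + rng.normal(0.0, MEAS_STD)    # stand-in for the measurement of S002
    y, alpha, est = filter_step(y, alpha, x_meas)
    print(f"k={k:2d}  estimate={est:7.3f}  true={true_y:7.3f}  "
          f"share of model-2 particles={np.mean(alpha >= 0.0):.2f}")

With these particular constants the printed share of model-2 particles should stay low while the ball is held and then grow after the release at k = 15, while the bounded perturbation of alpha keeps a small population of minority-model particles alive throughout, which is the brake described above.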
{"url":"http://www.faqs.org/patents/app/20090306949","timestamp":"2014-04-17T10:32:24Z","content_type":null,"content_length":"58933","record_id":"<urn:uuid:66447880-6d1e-40a9-aabc-02bfec6011df>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Time series analysis

In statistics, signal processing, and econometrics, a time series is a sequence of data points, measured typically at successive times, spaced at (often uniform) time intervals. Time series analysis comprises methods that attempt to understand such time series, often either to understand the underlying theory of the data points (where did they come from? what generated them?), or to make forecasts (predictions). Time series prediction is the use of a model to predict future events based on known past events: to predict future data points before they are measured. The standard example is the opening price of a share of stock based on its past performance.

As shown by Box and Jenkins in their book, models for time series data can have many forms and represent different stochastic processes. When modeling the mean of a process, three broad classes of practical importance are the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models (the MA process is related to, but not to be confused with, the concept of a moving average). These three classes depend linearly on previous data points and are treated in more detail in the articles on autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Non-linear dependence on previous data points is of interest because of the possibility of producing a chaotic time series. Among non-linear time series there are models that represent the changes of variance over time (heteroskedasticity). These models are called autoregressive conditional heteroskedasticity (ARCH) models, and the collection comprises a wide variety of representations (GARCH, TARCH, EGARCH, FIGARCH, CGARCH, etc.). Recently, wavelet transform based methods (for example locally stationary wavelets and wavelet decomposed neural networks) have gained favour. Multiscale (often referred to as multiresolution) techniques decompose a given time series, attempting to illustrate time dependence at multiple scales.

A number of different notations are in use for time-series analysis: $X = \{X_1, X_2, \dots\}$ is a common notation which specifies a time series X which is indexed by the natural numbers. We are also accustomed to $Y = \{Y_t : t \in T\}$. There are only two assumptions from which the theory is built:

The general representation of an autoregressive model, well known as AR(p), is
$Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \alpha_2 Y_{t-2} + \cdots + \alpha_p Y_{t-p} + \varepsilon_t$
where the term $\varepsilon_t$ is the source of randomness and is called white noise. It is assumed to have the following characteristics:
1. $E[\varepsilon_t] = 0$
2. $E[\varepsilon_t^2] = \sigma^2$
3. $E[\varepsilon_t \varepsilon_s] = 0 \quad \forall t \neq s$
If it also has a normal distribution, it is called normal white noise: $\{\varepsilon_t\}_{(t \in T)} : \mbox{Normal-WN}$.

Related tools
Tools for investigating time-series data include:

Applied time series
Time series analysis is exercised in numerous applied fields, from astrophysics to geology. Model selection is often based on the underlying assumption about the data generating process. Take, for example, traffic flow: here we would fully expect periodic behaviour (with bursts at peak travel times). In such a situation one may consider applying Dynamic Harmonic Regression (this is highly similar to airline data, which is frequently analysed in the statistics literature).
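As a concrete check of the AR(p) model written above, the sketch below simulates an AR(2) process driven by Gaussian white noise and recovers the coefficients by ordinary least squares on the lagged values; the coefficient values and sample size are arbitrary choices for the illustration.

import numpy as np

rng = np.random.default_rng(1)
a0, a1, a2, sigma, n = 0.5, 0.6, -0.3, 1.0, 5000

eps = rng.normal(0.0, sigma, n)   # white noise: mean 0, variance sigma^2, uncorrelated
y = np.zeros(n)
for t in range(2, n):
    y[t] = a0 + a1 * y[t - 1] + a2 * y[t - 2] + eps[t]

# Regress y_t on (1, y_{t-1}, y_{t-2}) to recover (a0, a1, a2).
X = np.column_stack([np.ones(n - 2), y[1:n - 1], y[0:n - 2]])
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print("estimated (a0, a1, a2):", np.round(coef, 3))   # should come out close to (0.5, 0.6, -0.3)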
More recently there has been increased use of time series methods in geophysics (the analysis of rainfall and climate change, for example). Within industry, almost every sector performs time series analysis in some way; in retail, for example, it is used for tracking and predicting sales. Analysts will typically load their data into a statistics package (R and S-Plus are examples of such programs). The most important step is the review of the autocorrelation function (ACF), which indicates the number of lagged observations to be included in any time series model (one should always analyse the partial autocorrelation function as well). Financial series often require non-linear models (such as ARCH), since the application of autoregressive models often results in a model suggesting that tomorrow's value (let us say a share price) depends almost entirely on yesterday's value:
$Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \varepsilon_t$
(where $\alpha_1$ is close to 1). Robert Engle recognised the importance of including lagged values of the series' variance.

In general, time series can be considered in the time domain and/or the frequency domain. This duality has led to many of the recent developments in time series analysis. Wavelet-based methods are an attempt to model series in both domains. Wavelets are compactly supported "small waves" which, when convolved with the series itself (scaled and dilated), give a scale-by-scale analysis of the temporal dependence of a series. Such wavelet-based methods are frequently applied to the study of climate change.

One other (and less researched) area of time series analysis considers the "mining" of series to retrospectively extract knowledge. In the literature this is referred to as time series data mining (TSDM). Techniques in this area often depend on "feature detection": in essence, an attempt to find the "characteristic" behaviour of the series and use it to find areas of the series that do not adhere to this behaviour. Current efforts are led by the computer science department at the University of California (Riverside).
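Since the article leans on the autocorrelation function for model identification, here is a small self-contained sketch of computing a sample ACF directly (plain NumPy; the AR(1) coefficient and series length used for the quick check are arbitrary illustrative choices).

    import numpy as np

    def sample_acf(x, max_lag=20):
        """Return the sample autocorrelations of x at lags 0..max_lag."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        denom = np.dot(x, x)
        return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                         for k in range(max_lag + 1)])

    # Quick check on a simulated AR(1) with coefficient 0.8: the sample ACF
    # at lag k should decay roughly like 0.8**k.
    rng = np.random.default_rng(1)
    y = np.zeros(2000)
    for t in range(1, 2000):
        y[t] = 0.8 * y[t - 1] + rng.normal()
    print(np.round(sample_acf(y, max_lag=5), 2))
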
{"url":"http://psychology.wikia.com/wiki/Time_series_analysis?direction=prev&oldid=126259","timestamp":"2014-04-18T14:49:30Z","content_type":null,"content_length":"66929","record_id":"<urn:uuid:1e28e5e6-94df-419e-aba2-dc293096d6b6>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Cylinders

Some curved surfaces may be defined as being generated by straight lines that move and change direction continuously. Suppose that a line AB cuts a fixed curve BC that does not lie in the same plane as the line. If AB is allowed to move so that it always remains parallel to its first position and always intersects the fixed curve BC, it generates a curved surface, called a cylindrical surface. The moving line is called the generator. The axis of the cylinder is the line joining the centers of the bases: if O and O' are the centers of the top and the base, then the line joining them is the axis of the cylinder. Cylindrical objects occur very frequently, in tanks and other containers, in pillars, and in various parts of machines.

A cylinder is a solid figure generated by a straight line that moves parallel to its original position while its end describes a closed figure in a plane. It is thus a limiting case of a prism whose ends are closed curves such as circles or ellipses. If the generating line is perpendicular to the base, the cylinder is known as a right cylinder; otherwise it is oblique.
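As a quick numerical companion to the definitions above, here is a small sketch computing the volume and surface areas of a right circular cylinder; the radius and height below are arbitrary example values.

    import math

    def right_circular_cylinder(radius, height):
        """Return (volume, lateral_area, total_area) of a right circular cylinder."""
        volume = math.pi * radius ** 2 * height          # base area times height
        lateral_area = 2 * math.pi * radius * height     # the curved surface, unrolled into a rectangle
        total_area = lateral_area + 2 * math.pi * radius ** 2
        return volume, lateral_area, total_area

    print(right_circular_cylinder(radius=3.0, height=5.0))
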
{"url":"http://www.emathzone.com/tutorials/geometry/introduction-to-cylinders.html","timestamp":"2014-04-19T09:24:30Z","content_type":null,"content_length":"18296","record_id":"<urn:uuid:edd83848-7392-410e-ab83-0729bd15d8fb>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
MCS-274 Lab 3: Intermediate SQL (Spring 2009)
Due April 14, 2009

In this lab, you will continue working with your election data tables from lab 2. If your tables are not usable, let me know and I can provide replacements. For each of the following, just turn in your SQL statement.

1. Assuming you have a table of results, or one of state house results, and you have a table of precincts, add the appropriate constraint so that each result needs to correspond to a precinct--assuming you don't already have such a constraint. If any of these assumptions are violated (because you don't have the relevant tables or already have the constraint), let me know and I will give you an alternative assignment.

2. Create a view that is based on your data tables and is equivalent to the SOS's candidates table. If you already did essentially this, but in the form of a table, let me know and I will give you an alternative assignment.

3. Create an index that could be used to speed up queries that refer to a specific candidate name (like the queries from lab 1 concerning 'TERRY MORROW'). You should be prepared for the possibility that two candidates (hopefully for different offices) might have the same name.
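Purely as an illustration of the general shape of such statements, and not as a solution keyed to the actual lab tables, here is a sketch using Python's sqlite3 module with hypothetical table and column names (results, precincts, precinct_id, cand_name). Note that SQLite only accepts foreign-key constraints at table-creation time; in an engine that supports it, the constraint in task 1 would instead be added with ALTER TABLE.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")

    # Hypothetical schema: the constraint tying each result to a precinct is
    # declared at creation time (SQLite cannot add it later via ALTER TABLE).
    conn.executescript("""
    CREATE TABLE precincts (precinct_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE results (
        result_id    INTEGER PRIMARY KEY,
        precinct_id  INTEGER NOT NULL REFERENCES precincts(precinct_id),
        office       TEXT,
        cand_name    TEXT,
        votes        INTEGER
    );

    -- A view playing the role of a derived candidates table.
    CREATE VIEW candidates AS
        SELECT DISTINCT office, cand_name FROM results;

    -- An index to speed up lookups by candidate name; office is included
    -- because a name alone need not be unique.
    CREATE INDEX idx_results_cand ON results (cand_name, office);
    """)
    print(conn.execute("SELECT name FROM sqlite_master ORDER BY name").fetchall())
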
{"url":"https://gustavus.edu/+max/courses/S2009/MCS-274/labs/lab3/","timestamp":"2014-04-16T05:05:30Z","content_type":null,"content_length":"1951","record_id":"<urn:uuid:23c8344a-069d-41ad-ae5c-511d37f9fde3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
KIPP On Trickin’ — looking at the raw data I’ve written before about KIPP attrition in response to reports that had been released studying it. When reports conclude that KIPP does not have high attrition, they tout it on their websites. When reports concluded that they do have high attrition, KIPP responds with a rebuttal. The problem with most of these reports is that the data they give us has already been analyzed and then turned into percentages, which are only relative measures. This is why I finally got around to navigating the New York START data system to find the actual raw data for myself which I could then compare to KIPPs annual report card that they release. The reason I’m so committed to uncovering stories of exaggerated success is that these stories have become battle cries for ‘reformers’ like Michelle Rhee. She, and others, have been influencing politicians to create flawed education policies based on misleading success stories. Because KIPP has many critics, they responded to those critics in their recently released annual report card. In it, they address the six main concerns that critics of the program have raised: 1. Are we serving the children who need us? 2. Are our students staying with us? 3. Are KIPP students progressing and achieving academically? 4. Are KIPP alumni climbing the mountain to and through college? 5. Are we building a sustainable people model? 6. Are we building a sustainable financial model? Of course their answers to these questions will be ‘proved’ to be ‘yes’ for all six questions in their report. In this post, I’ll focus on point 2, attrition. On page 15 of the report they write “A KIPP school with great test scores—but high student attrition—is not meeting our mission. By choosing KIPP, students make a commitment to excellence and in return, KIPP promises to help each student on the path to and through college. We believe these promises are sacred and we hold ourselves accountable to fulfilling these promises to every student. Our second essential question asks us to consider whether we are making good on the commitment we have made to every single one of our KIPPsters. This means making sure that the students who join us stay with us year after year. We highlight this question because we believe it is as important to a school’s health as its test results. The reality is that a school with great test scores and high student attrition is not realizing our mission.” Then they show the attrition for each school on the 99 pages that summarize the success of their 99 schools. Some schools boast 2% attrition, while others are as high as 47%. But the most telling statistic is the pictograph on page 15. Well, 88% doesn’t sound too bad when you read it as quickly as most rich donors do. But when you look at it more closely, this does not mean that 88 percent of students who start the middle schools as 5th graders will eventually graduate as 8th graders (most KIPPs are 5-8 middle schools). The 12% attrition is PER YEAR. So this means that 88%, on average, make it to 6th grade, then they lose 12% of those, which takes us down to 77% for 7th graders, 68% for 8th graders, and finally 60% for graduating 8th grade. Suddenly it doesn’t look so good. Then I thought I’d take an individual school ‘KIPP Academy New York’ which is the first New York KIPP, and check their actual school report cards against their claims on the KIPP report card. 
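(For readers who want to check the compounding for themselves, here is a tiny arithmetic sketch in Python; the 12% and 4% annual attrition figures are the ones quoted in this post, and the rest is just exponentiation.)

    def retention_by_grade(annual_retention, years=4):
        """Share of an entering 5th-grade cohort still enrolled after each year."""
        return [round(annual_retention ** y, 3) for y in range(1, years + 1)]

    # The reported 88% annual retention (12% annual attrition):
    print(retention_by_grade(0.88))   # [0.88, 0.774, 0.681, 0.6]   -> about 60% finish 8th grade

    # A school reporting 4% annual attrition (96% retention):
    print(retention_by_grade(0.96))   # [0.96, 0.922, 0.885, 0.849] -> about 15% gone over four years
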
They claimed to have a 4% attrition rate, which really means that compounded over four years is really a 15% attrition, but is still way better than the 40% attrition based on their published overall 12% attrition rate. So I downloaded the 2009 and 2010 school report cards. I learned that there were 203 5th, 6th, and 7th graders in 2009 who became 192 6th, 7th, and 8th graders in 2010, which is 95% which is very close to the 96% that they published. But then I looked closer at the numbers and also followed a particular cohort from 5th grade in 2007 to 8th grade in 2010 to get a fuller picture of what is going on. For this I needed the 2007 and 2008 school report cards. One thing I noticed was that in 2009 they had 30 students with disabilities in their 5th, 6th, and 7th grades combined while in 2010 they had only 22 in their 6th, 7th, and 8th combined. So of the net 11 students that they lost, 8 of them were students with disabilities. Things got more interesting, though, when I followed the 2010 cohort from the time they were in 5th grade. In 2007 that class had 72 students with 44 girls and 28 boys. Four years later they have lost a total of nine students so they have 63. But, and here’s the strange part. Those 63 kids are 16 boys and 47 girls. (some ratio!) So it seems like they lost AT LEAST 12 students from the 72 (the 12 boys, that is) which is about 17%. Since the number of girls actually increased, it shows that they have REPLACED some of the students they lost with other kids. This is not factored into the attrition rate, though it is possible that the new students are better at the standardized tests than the ones who left, while the attrition rate is not affected. So when we look at the improvements in test scores from 5th to 8th grade, we are looking at two different groups of kids. With sample sizes of 70, a few kids makes a big difference. I also noticed that in 5th grade there were 5 kids with Limited English Proficiency, while in 8th grade there were only 2. Now, this could mean that students lose their LEP status, so I’m not sure if this is relevant. I haven’t fully immersed myself in the KIPP data, but I hope I’ve given enough information to demonstrate that KIPP misleads when they report an 88% retention rate and also that the raw data conclusively demonstrates that they fill in some students with students who leave which makes their attrition rate seem better than it is. Now, I’m making an assumption that the kids that fill in those spots are better, academically, than the ones who are counseled out. Even if I’m incorrect about that, I hope I’ve introduced enough for people to think about when they hear data about KIPP’s success. Also, I hope I’ve given some tips to others who want to investigate miracle schools to find some of their own anomalies and share them. Brilliant Job. Your fatal logic flaw is assuming that students who leave are counseled out. Here are other reasons they may leave. 1) Family moves. Very, very common in transient urban neighborhoods. 2) Students and/or parents dissatisfied with high workload or behavior expectations. 3) Students miss neighborhood peers and return to public school. 4) Parents dissatisfied that their previous B student is now a C or D student because of higher expectations. 5) Student is retained and changes schools, rather than repeat the grade. These are all reasons I have seen kids leave charters. Never, though, have I met a kid who was counseled out. 
Not saying it doesn’t happen, but it is far, far, less common than you think. Honestly, who cares why they leave? The point is that KIPP and other charter schools claim to have a certain level of academic success over time, but this success may be driven by the fact that a large number of students are leaving. If they matched to test scores of 6th graders only with the scores of those still present in the 8th grade, the substantial increase in scores they tout would be more convincing. Comparing a sample size of say 100 with a sample of 70 doesn’t make much sense. • Reasons 2-5 are all implicit forms of “counseling out”. Renaming the privileges KIPP (and all charters have) with respect to enrollment doesn’t change their nature: they are privileges that have the impact of leaving a culled class of students who are lower-needs. Since higher-needs students do still need an education, they go to regular public schools, who now have student populations that are more concentrated with high needs students. If a school is using retention, parent contracts, etc. in a way that causes students to leave, it is counseling students out. The difference for KIPP is they do not have to do all the crazy NCLB reporting. A public high school must account for every one of those students and report where the students “went” as part of OTGR. A school with low SES will more than likely have a high transient rate because of the job market. This become very resource dependent for the school and take away from instruction. Then schools with high transient rates because of poverty, are labeled as failures because of their low OTGR numbers. CRAZY Thanks for doing this work, Gary Rubinstein. I live in Baltimore City, where KIPP has successfully negotiated a deal with the teachers union to extend the school day. There are two KIPP charter schools here, and the former chair of the KIPP board in Baltimore is running for mayor. So, it’s very helpful to have some insight into the data. I’d like to see these data presented alongside a representative traditional public school to serve as acomparison group. Also, though it is KIPP’s mission to retain these students, the beauty of the charter school movement is that it empowers students and parents with CHOICE. I agree with the assumption that these students are likely NOT being counseled out, but rather are leaving on their own volition. That’s school choice at its finest. • This response is SPIN at its finest. The public middle school in which I work – in a district w KIPP and w a similar demographic and SES – just graduated a cohort of 8th graders that was 78 students as a 6th grade (the school’s entry point), 84 students as a 7th grade and 89 as an 8th grade, close to a 15 percent increase in size. “Things got more interesting, though, when I followed the 2010 cohort from the time they were in 5th grade. In 2007 that class had 72 students with 44 girls and 28 boys. Four years later they have lost a total of nine students so they have 63. But, and here’s the strange part. Those 63 kids are 16 boys and 47 girls. (some ratio!) So it seems like they lost AT LEAST 12 students from the 72 (the 12 boys, that is) which is about 17%.” The key problem is your use of the word “LOST” there. For all you know, some or all those boys were retained. @Heather and others that try to “interpret” the analysis: The data support the conclusion that there is nothing essentially different about being a student in a charter school. The outcomes are typical of the system. 
If I may add, typical of a lot of Title 1 schools nationwide. This should not come as a surprise. Here is a clear explanation of how drivers of change have to work. It is a PDF and the research is from the U of Toronto. Canada is among the top ten ranked nations and their population diversity issues are quite similar to ours. • Is a student who would be successful at KIPP better off at KIPP or at their neighborhood school? It depends on what both of those schools look like, but it’s entirely possible they get a better education at KIPP. So for that student, going to a charter school is a smart move. I’m using my anecdotal observations as a teacher here – in teaching at a traditional school, I saw many students who would probably be better off in a more structured environment and safer classroom. Great job, Gary! There’s also another good study by Gary Miron that I’ll send you soon unless you have it already. Thanks for doing this important work. I think it is also critical to remember that because KIPP schools engage in behavior like this, they leave the regular public schools with a higher percentage of high-needs students. The demographic changes KIPP creates span their host districts, and KIPP’s success is predicated on making things harder on the schools around them. After all, those kids who leave are going somewhere. Many of the kids who are going “somewhere” are not actually going back to any school, they are leaving school altogether. And E. Rat – if a family is dissatisfied with a heavy workload, how the heck is that “counseling out”? It’s called making a C-H-O-I-C-E! The same applies for a child returning to her/his old school because they miss their peers – no logically-thinking person could interpret that to be a form of counseling out a student. Finally, if a child’s grades drop at KIPP because they’d been inflated elsewhere, and this leads the family to withdraw the child from KIPP, that decision can be attributed to the parent, not to the school. You seem to be suggesting that schools should morph into whatever a given constituent wants, and you’re suggesting that if schools don’t bend their rules and policies, then they’re counseling students out. I suppose that’s true – in the same way that if I brought a firearm to the airport, I’d be “counseled out” of my flight. Whatever. Paula, how does a 5th, 6th, 7th, or 8th grader end their education and never return to school? Needless to say, I agree with E.Rat. □ Sure, it’s a choice: but one intentionally created to have certain outcomes. After all, there is not one iota of evidence even suggesting that massive amounts of homework lead to increased student success. Structuring choices to create the outcomes you want – a specific student group – is counseling out. Similarly, if a child misses his or her peers, it suggests that the social climate at the new school may be challenging. KIPP, which uses exceptionally punitive disciplinary methods that include ostracizing students; I suspect that this management “technique” is so unpleasant for its victims that they do feel excluded and unwelcome – and leave. What I’m really suggesting is that KIPP’s “success” comes on the back of educators at real public schools who serve the students and families KIPP does not want. And to be honest: I’m not suggesting it, I’m living it. Your example is more than spurious; it’s silly. I’m typing about the way schools can create systems that drive enrollment and retention; you respond by attacking a straw man. 
Your statements about “real public schools” and about “living it” – “it” being serving the students that KIPP supposedly does not want, suggest to me that you have a serious ‘ax to grind’…which is your right. The trouble with all of what you are saying is that you presume to know the intent of a person’s or entity’s actions or statements, whether that person is me or whether that entity is KIPP. So, you are convinced that KIPP assigns tons of homework, not to improve educational outcomes but to deliberately purge certain students from their schools. Because of your own narrowly defined interpretation of research, you are only thinking about ‘homework’ per se, and not framing the homework issue from the perspective of the benefit of practice, overlearning, or the effort effect, all of which have a significant research base to support KIPP’s approach to homework. In your response regarding children who miss their peers, you use words like, “I suspect”, and “it suggests”, which makes this aspect of your rebuttal an easy one to dismiss – obviously your statements are pure conjecture and have no identifiable basis in fact. You should fully admit this in all of your posts rather than writing with some supposed level of confidence as though that which you state has a solid base in reality. My example is neither spurious nor silly – the fact is that standards and rules exist in every institution or setting, not to push people out, but to encourage the kind of behavior that will make things optimal for the benefit of the larger group; towards the common good. Thus, in the same way that violating the rules of air travel results in removing an offender from that space, to send a clear message that participating in air travel requires a certain level of responsibility, similarly, a child in a KIPP class who is removed and/or isolated for being disruptive is supposed to get the message (and in fact, most do), that s/he does not have the right to compromise anyone else’s learning, AND that s/he should be prioritizing her/his own learning as well. Most children in KIPP don’t leave, rather they internalize that message, albeit at varying rates, and are ultimately better off for doing so. There are behaviorally and academically challenging kids in KIPP just like there are in your ‘real’ public school…and if you don’t like that heat, perhaps you should consider getting out of the kitchen, rather than whining about the children who are sitting in your I would elaborate even further, but I think my time would be better spent thinking deeply about, and preparing for, the challenging children who will walk into my charter school’s doors in less than 2 months. They will be sitting and waiting for me to deliver an impactful, life-changing education to them, and all of the lamenting about who and what they are won’t get me any closer to teaching them well. Only preparation will accomplish that, so off I go to do just that. • Far be it from me to keep you from spending July working on your classroom, but accusing me of blaming children for demonstrating the real trauma that comes from abusive situations in which they feel targeted is offensive in the extreme. I suggest that you spend your deep reflection time considering cultural competence in management strategies used in classrooms. Excellent work! I’ve been looking at the Mathematica reports on KIPP schools which go to great lengths to claim that attrition at KIPP is the same as at public schools. 
This is done in a very misleading fashion by treating transfers from one public school to another as “attrition”. Obviously there is no qualitative similarity between leaving KIPP to go to another school and moving from one (equivalent) public school to another, and the claims are completely invalid. It just shocks me that they can get away with such nonsense. But when students leave KIPP schools, they are by and large not replaced. That’s not due to an official policy, as I understand it; just that KIPP doesn’t encourage transfers in at higher grades. Also, it seems clear, KIPP’s supposed “long waiting lists” are an But when students leave a public school by transferring from one to another, they aren replaced by students transferring from other schools. Since research shows that the most academically challenged students are the ones who leave KIPP schools, that means the KIPP schools end up with a much smaller class, consisting of the higher achievers. Public schools, on the other hand, generally end up with no change in the size of the class, and with a random assortment of achievement levels. Kipp has a policy of not accepting transfers in the middle of the year, as I understand it. They are allowed by whatever idiots who make up the state budget accounting rules to keep the funds for educating a student who has dropped out for the remainder of the entire school year, so they have a financial incentive to push students out as early as possible in the fall. Does the propaganda ever cease: “when students leave KIPP schools, they are by and large not replaced.” Mathematica found the opposite: “Averaging across all sites, KIPP schools in the sample enrolled 13 new students per year in grade 6 (accounting for 19 percent of average total enrollment in that grade), 7 new students per year in grade 7 (12 percent of total enrollment), and 3 new students per year in grade 8 (6 percent of total enrollment).” See also Table 5 from the report: http://www.mathematica-mpr.com/publications/pdfs/education/KIPP_middle_schools_wp.pdf As Table 5 shows, KIPP schools on average gain 2 MORE students than they lose between 5th and 6th grades; and on average they lose 2 more students than they gain between 6th and 7th grades and between 7th and 8th grades. So if you combine all three of the transitions between years, KIPP schools are losing an average of TWO students between 5th and 8th grades that are not replaced. This is a lower replacement rate than the comparison public schools, to be sure, but only a diehard hack would say that a mere TWO students not replaced between 5th and 8th grades means that students are “not replaced.” Wouldn’t this all be moot if test data wasn’t aggregated? If data were available anonymously on each student and teacher, the people could do their own statistics. Sure, most people aren’t good a statistics, but without the raw data, you don’t have a chance. Stuart Buck would have us believe that the 60% of KIPP students who disappear from each grade cohort between grades 5 and 8 were all retained to repeat a grade. Silly me — I would have thought that meant that the grades behind were increasing by that same number instead of showing the same attrition.
{"url":"http://garyrubinstein.teachforus.org/2011/07/08/kipp-on-trickin-looking-at-the-raw-data/","timestamp":"2014-04-18T15:40:11Z","content_type":null,"content_length":"95142","record_id":"<urn:uuid:1ecb1589-5ce5-44ec-9c19-849e827887ee>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
The Geomblog

A new computer science blog, this one on electronic voting. It has been started by a number of experts in e-voting, including AT&T Labs alumnus Avi Rubin. Here is the first message, posted by Ed Felten:

Welcome to evoting-experts.com. Our panelists will be posting news and commentary on e-voting issues. We hope our site will serve as a central source of information and insight about e-voting, straight from some of the leading independent experts. This is a non-partisan site. Our goal here is not to advance any political agenda, but to help ensure that all votes are counted fairly and accurately, and to provide honest expert commentary to the public and the press.

Given the amount of scrutiny that this election will face, and the amount of FUD that will be generated by the parties and the media over voting processes, this will hopefully be a good resource to understand the real issues behind e-voting and how it affects this year's prelude to a recount.

This amusing excerpt is from an interview of Bradley Efron, inventor of the bootstrap method in statistics:

I thought I was going to be a mathematician. I went to CalTech, and I think I would have stayed a mathematician if mathematics was like it was a hundred years ago where you computed things, but I have no talent at all for modern abstract mathematics. And so I wanted to go into something that was more computational. After CalTech I came to Stanford. And statistics was definitely better.

Incidentally, Efron's book is a very nice introduction to the bootstrap method, striking that delicate balance between 'So how do I use this stuff' and 'So why does this really work'.

I would have called this 'NSF news' except for the fact that it is over three months old. The CRA had a conference in Snowbird in July, and among the presentations there was one on trends in NSF funding, by Greg Andrews from CISE. The presentation itself is quite short; some interesting facts:
• Submissions to CISE have gone up 125% from 1997-2003, and are expected to grow even faster in 2004.
• The CCF (which includes most of theory, graphics and geometry, but not databases) had a 15% accept rate for CAREER awards, but only 5% for other proposals.
• It appears that there will be no non-CAREER competition for CCF grants in FY 2006. This is attributed to budget pressures, and it is indicated that things will be 'back to normal' in FY 2007.
As a comparison, acceptance across the board in CISE is roughly 16%. CNS (Computer and Network Systems) had much higher success rates (30% CAREER, 17% otherwise). Not being at a university, I don't know how much of this is already old news. This post was prompted by a lunchtime discussion on pressures to publish, write grants etc. It also appears from anecdotal evidence that there are more people submitting more proposals than ever (akin to the situation with paper submission to conferences).

A binary search tree is probably the most basic data structure in computer science, and is one of the first structures you come across when learning algorithms. Given a set of keys and an order defined on the keys, a binary search tree maintains the property that the key of a node is greater than all keys of its left descendants and less than all keys of its right descendants. One of the most important outstanding problems in data structures is designing an optimal BST.
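To make the ordering invariant concrete before getting to splay trees, here is a bare-bones, unbalanced sketch in Python; it only illustrates the invariant, not any of the self-adjusting schemes discussed next.

    class Node:
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def insert(root, key):
        """Insert key, preserving: left descendants < node.key < right descendants."""
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root                      # duplicates are ignored

    def search(root, key):
        while root is not None and root.key != key:
            root = root.left if key < root.key else root.right
        return root is not None

    root = None
    for k in [8, 3, 10, 1, 6, 14]:
        root = insert(root, k)
    print(search(root, 6), search(root, 7))   # True False
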
Structures like red-black trees and splay trees can be shown to take O(log n) time (amortized) per insert/delete/find operation, and this is tight, in the sense that there exist sequences of updates of length m for which Omega(m log n) tree operations are required. For charging purposes, a single 'splay' or 'rotation' takes one unit of time. However, if we restrict ourselves to a static universe, with only searches, can we do better with respect to the optimal offline solution ? The Dynamic Optimality conjecture, first stated by Sleator and Tarjan, conjectures that splay trees cost only a constant factor more than an optimal offline algorithm. Note that this optimal algorithm must change the tree (hence the term 'Dynamic'); if the optimal offline algorithm is forced to choose a fixed tree, then splay trees are asymptotically optimal. A new FOCS 2004 paper by Demaine, Harmon, Iacono and Patrascu takes a crack at the Dynamic Optimality conjecture, presenting an algorithm that comes within O(log log n) of the optimal offline solution.

Jordan Ellenberg, novelist and math prof at Princeton, does the impossible: he provides a lucid explanation of both Bayesian analysis and Nash equilibria in the context of electoral strategy. He also reads an awful lot...
Update: We could have a course on electoral math: Tall, Dark and Mysterious points out yet another set of articles on election math, this time focussing on the Banzhaf power index (a method for determining the relative power of blocs in a block voting system). For extra credit, consider the following: The runtime of the programs doing these computations is already pretty high (O(2^n)), but I wonder if there are any probabilistic variations on this index as applied to the electoral college. Is there a more efficient approach ?

A few months back, I had mentioned an abomination known as the INDUCE Act, that would generalize wildly the class of actions that could be construed as copyright infringement (e.g. making a device that could be used for copyright infringement). It is worth pointing out here that the software industry (one of the largest holders of copyrights) was against this bill. Congress (or more specifically Orrin Hatch) was trying to get this bill passed, and it appears to be DOA at least for this term. Read more about it in this engadget interview with Wendy Seltzer, an attorney with the EFF.

The first computer I ever programmed on was the ZX Spectrum 48k (yep, 48K memory). External storage was a tape recorder (and watch that volume control otherwise the data transfer gets hosed !). I wrote BASIC programs, and a lot of assembly code video games. Come to think of it, I learnt some of my first AI techniques on this machine. I also got an early lesson in technology envy; a friend of mine then acquired the 128k Spectrum, and thus I went very quickly from king of the hill to insanely jealous second-best :) This nostalgic rant was brought on by this site, allegedly one of the oldest continually running websites on the web.

William Gibson talks about writing, and didactic novels, arguing that a novel loses aesthetic quality if it seeks to further a point of view. In another way of saying it,
...no genuinely valuable interrogation of reality can take place, and the result will be a literary virtuality built as exclusively from the author's expressed political philosophy as that author can manage. This is best understood, an excellent teacher of mine said, by asking ourselves whether or not a fascist can write a good novel.
...A fascist can't write a good novel because writing a good novel, in the end, is about relinquishing control of the text. In a way, this could be true for research as well. In theoretical work we often possess a hammer, and go hunting nails to pound, but some of the best kind of research is the kind that beautifully slots into the problem being addressed, to the extent that one can only say 'How else could this problem have been tackled ?', and yet, possesses a generality (akin to universal truths in good literature) that appeals to our shared aesthetic of beauty and enriches the field as a whole. More proof parodies, this set more in the line of philosophical proofs rather than mathematical proofs (via Oxblog). All the 'proofs' deal with proving the claim 'p', and here is one of the best; Most people find the claim that not-p completely obvious and when I assert p they give me an incredulous stare. But the fact that they find not- p obvious is no argument that it is true; and I do not know how to refute an incredulous stare. Therefore, p. Mathematics has a long tradition of using counter-examples as a way of illuminating structure in theory. Especially in more abstract areas like topology, canonical counterexamples provide a quick way of teasing out fine structure in sets of axioms and assumptions. A brief foray through Amazon.com revealed catalogues of well known counter examples in topology, analysis, and graph theory. On the web, there are pages on counterexamples in functional analysis, Clifford algebras, and mathematical programming. What would be good candidate areas for a list of counter-examples in theory ? Complexity theory springs to mind: simple constructions (diagonalization, what have you) that break certain claims. In combinatorial geometry, one might be able to come up with a list of useful structures. Personally, I find the projective plane to be a useful example to demonstrate the limits of combinatorial arguments when reasoning about geometric objects. Adam Klivans notes that FOCS attendance is down, to about 172 registered attendees (which is like the reverse of announced attendance at sports events; more people show up than the official registered list), in comparison to STOC 2004, which had 100 more people. The number of papers accepted at STOC this year was 72, in comparison to 62 at FOCS. In general though, (and I only went back a few years because I got tired of opening proceedings), STOC appears to accept 15-20 papers more than FOCS on average (75-80 vs 60-65). There was no significant submission increase (surprising given the trend lines for other conferences), and so one can only surmise that location had at least something to do with the low attendance. After all, if you are submitting (and getting accepted) more papers than before, and if funding is down, you'd have to be pretty careful about choosing meetings to go to, especially if you are not presenting, and are thus not constrained to attend. Although the number of accepted papers is not out of the norm for FOCS, one does have to wonder whether there are really only that few papers worth accepting ? I suspect this is constrained by the whole multi-track vs single-track, conference-as-prestige-stamp vs conference-as-meeting-place issue, and FOCS represents one extreme point. Today, Arnold Schwarzenegger endorsed a proposition supporting funding for embryonic stem cell research. 
One of the many claims made by supporters of stem cell research is that it can help find a cure for Alzheimer's Disease, which afflicts nearly 3% of Americans between the ages of 65 and 74. The concept of using embryonic stem cells to cure such diseases is tantalizing: in principle, the idea that such cells can be "nudged" into forming different kinds of adult cells indicates that cells (like brains cells) that do not regenerate can be replaced/replenished using stem cells. What worries me though are the kinds of claims that are being made on behalf of stem cell research. Strategically, one can understand the desire to relate this to actual disease prevention (almost all NIH grant proposals mention a connection cancer in the first few paragraphs !), but it also appears at least plausible that there is a long way to go from the basic science of stem cell development to an actual disease treatment. Suppose a cure for Alzheimer's is really fifty years away, or longer. Is there a risk of a 'crying wolf' effect, where the promises of the research are so far that policy makers start becoming more skeptical ? The reason I even bring this up is because I am reminded of a similar plight that overtook AI after its heyday in the 60s. Extreme amounts of hype, and the claim that soon computers could mimic humans, gave way to a serious backlash, and then finally a more nuanced understanding of the potential and limits of AI-related disciplines (fields like robotics/vision/learning appeared to flourish once they were not bound to the chains of "intelligence" and had more specific, local goals). This may sound heretical, but sometimes working away from the limelight can be a lot better for a field; the real questions can be answered without having to worry about politics and controversy entering the picture (as in warming, global). There is an element of blaming the victim here, I admit. After all, stem cell researchers would probably have been content to labor in obscurity if the issue hadn't been brought front and center by administration policy way back in 2001. Critics may complain that there is a serious ethical issue at stake here; I actually feel the ethical dilemma is more manufactured than real, hinging as it does on 'angels on the point of a pin'-like discussions about exactly when life starts. As in, tomorrow ! This year's workshop has a special emphasis on open problems, and as always, previously published work is also welcome. The CG Workshop is one of the nicest venues to meet with other geometers and discuss new and old problems. Via Not Even Wrong, a fascinating interview with Michael Atiyah and Isadore Singer, winners of the 2004 Abel Prize. I took the liberty of reproducing some sections of this rather long interview: it was a pleasure to hear their views on matters that we all wrestle with. On proofs: Any good theorem should have several proofs, the more the better. For two reasons: usually, different proofs have different strengths and weaknesses, and they generalize in different directions - they are not just repetitions of each other. And that is certainly the case with the proofs that we came up with. There are different reasons for the proofs, they have different histories and backgrounds. Some of them are good for this application, some are good for that application. They all shed light on the area. If you cannot look at a problem from different directions, it is probably not very interesting; the more perspectives, the better! 
On the specialization in math: this has been a topic of discussion in theory as well, and their holistic view of mathematics is comforting to those of us who see the inevitable splitting of the subareas of theoretical computer science. It also reminds me of Avi Wigderson's lecture at STOC this year on the way different areas in theory are connected. It is artificial to divide mathematics into separate chunks, and then to say that you bring them together as though this is a surprise. On the contrary, they are all part of the puzzle of mathematics. Sometimes you would develop some things for their own sake for a while e.g. if you develop group theory by itself. But that is just a sort of temporary convenient division of labour. Fundamentally, mathematics should be used as a unity. I think the more examples we have of people showing that you can usefully apply analysis to geometry, the better. And not just analysis, I think that some physics came into it as well: Many of the ideas in geometry use physical insight as well - take the example of Riemann! This is all part of the broad mathematical tradition, which sometimes is in danger of being overlooked by modern, younger people who say "we have separate divisions". We do not want to have any of that kind, really. On why researchers tend to get specialized (too) quickly: In the United States I observe a trend towards early specialization driven by economic considerations. You must show early promise to get good letters of recommendations to get good first jobs. You can't afford to branch out until you have established yourself and have a secure position. The realities of life force a narrowness in perspective that is not inherent to mathematics. We can counter too much specialization with new resources that would give young people more freedom than they presently have, freedom to explore mathematics more broadly, or to explore connections with other subjects, like biology these day where there is lots to be discovered. And finally, an eloquent argument for simplicity in proofs: The passing of mathematics on to subsequent generations is essential for the future, and this is only possible if every generation of mathematicians understands what they are doing and distils it out in such a form that it is easily understood by the next generation. Many complicated things get simple when you have the right point of view. The first proof of something may be very complicated, but when you understand it well, you readdress it, and eventually you can present it in a way that makes it look much more understandable - and that's the way you pass it on to the next generation! Without that, we could never make progress - we would have all this messy stuff. Maria Farrel laments that learning statistics does require mathematical skill, contrary to what instructors might think. However, the really interesting quote comes from a commenter at Kevin Drum's if you understand the central limits theorm, then stats is (are?) relatively easy to understand. the problem is to get an instructor who understands and can teach the central limits theorm. i had one when i was getting my master's degree at a southern state directional university. the stats prof at the national university where i got my ph.d. was so impressed that i understood the SLT that he exempted me from 2 semesters of graduate statistics, and that hasn't hurt me in the least in my profession. 2 semesters for knowing the CLT ? 
I wonder if proving an NP-hardness result in the right direction warrants exemption from an algorithms class :)

I often find it amusing to rant (in appropriate company) about the lack of understanding of (or even interest in) science and mathematics shown by people in the 'humanities', even though at the same time, the exclusivity and inaccessibility of what I do can be a source of (shameful) joy. But to be fair, the argument goes both ways. I like to think that I try to read and be aware of the arts (to whatever extent possible), but I know nothing about probably the most important literary theorist of this century, a revolutionary probably comparable to Einstein in the way he shook the foundations of his discipline. Jacques Derrida died last week, and beyond the Cliff Notes-like 'Derrida... deconstructionism...text is everything...no meaning outside the text...', there is little I can say about his work. Is it shameful ? Probably not. Should I stop complaining about the lack of awareness of the sciences among humanities folks ? Probably yes. Should I stop talking like Donald Rumsfeld ? Most definitely.

Dealing with real numbers (as opposed to arbitrary fixed precision rationals) has always been an annoying problem facing the theory of computation. Many problems that we tend to address in theory are combinatorial in nature, and so it has been possible to either ignore the issue of how one deals with reals (ed: hmm...'deals with reals'...I like the sound of that), or pay a token obeisance via notions like strong NP-completeness. Why is dealing with real numbers tricky ? Well, the first problem is one of representation: clearly we can't write down a real number. The second one is one of computation; assuming we have some black box that can store a real number, how can we add two such numbers ? multiply them ? do anything more complicated ? And how do we assign a cost to these operations ? These are very complex questions, and there are far too many angles to cover in a single entry (or even in a single survey). But to emphasize that this is not merely a toy intellectual question, let me present some examples.
• TSP in the plane is not known to be in NP. The problem here is that the computation of a distance involves calculating square roots, and this introduces precision issues with the number of bits needed to represent an answer.
• A similar situation arises for Minimum Weight Triangulation, one of the original vexing problems in CG.
• A careless use of operators can collapse complexity classes rather easily. As Jeff points out in this post, the floor function can be exploited to collapse PSPACE to P !
It is therefore rather important to understand the complexity of operations on reals and be very careful about the definitions one uses in this regard. In practice what one often does is assume a set of operations and assign unit cost to each (the so-called unit-cost RAM model). Of course, the floor example above shows that we have to be somewhat careful in doing so. The work by Ben-Or, and later by Steele and Yao, on algebraic decision trees is based on performing single algebraic operations with unit cost, and then proving lower bounds on the complexity of algorithms in terms of the number of "sign tests" needed. One can think of this as a generalization of a simple comparison test, and it is fairly effective at proving lower bounds in settings where standard models don't work too well.
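As a small aside illustrating why assigning unit cost to arbitrary operations needs care (this is a generic Python sketch, not the specific floor-function construction alluded to above): with exact arithmetic, a handful of "unit-cost" multiplications already produces numbers whose bit-length has exploded.

    # Repeated squaring: k "unit-cost" multiplications produce a number of about 2**k bits.
    x = 2
    for k in range(1, 11):
        x = x * x                    # one multiplication per step...
        print(k, x.bit_length())     # ...but the operand doubles in size each time
    # After 10 steps x equals 2**1024, a 1025-bit integer: charging one unit per
    # operation hides this growth, which is why the choice of allowed operations
    # (and how they are charged) matters so much in models of computing over the reals.
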
In fact, one can generalize algebraic sign tests to polynomial satisfiability tests; given a collection of polynomials over a set of variables, the decision is an appropriate sign signature (this polynomial must be positive, that one must be negative, etc). These are the so-called semi-algebraic sets, and going back to Tarski's work on the first-order decision theory of the reals (it is decidable to check whether a given sign signature can be achieved), much work has been done on understanding the structure of these sets. It is worth noting that the Collins decomposition, a kind of 'vertical decomposition' for semi-algebraic sets, is one of the tools exploited by Mulmuley in his proof separating P from NC without bit operations (Mishra has a nice survey on computational algebraic geometry).

The idea for this post started when Chandra Chekuri pointed me to a survey by Lenore Blum in the Notices of the AMS titled 'Computing over the Reals: Where Turing Meets Newton'. What spurred me further was a discussion I had at dinner the other night; a graduate student was asking the following question: Is the Mandelbrot set undecidable ? The discussion continued onto a longer argument over the algorithms from numerical analysis and how they compare to algorithms in the traditional 'Turing' sense that operate over 0-1 inputs.

The Blum survey is a fascinating overview of the landscape of real computations (Blum, Shub and Smale have a book on complexity and real computation as well). Critically, it develops tools for talking about the complexity of algorithms on reals, by developing ideas of computation over a ring with "black-box" rational, algebraic operators as atomic computing units. Thus, an answer to the above question becomes immediate: The Mandelbrot set is undecidable. She also talks about the P vs NP question over complex numbers and the reals (the "classical" P vs NP question is really over Z2), and establishes a connection between Hilbert's Nullstellensatz and satisfiability via the choice of ring. A quote from the article:

I have endeavored to give an idea of how machines over the reals tempered with condition, approximation, round-off, and probability enable us to combine tools and traditions of theoretical computer science with tools and traditions of numerical analysis to help better understand the nature and complexity of computation.

Studying computation over reals is important stuff. Computational complexity theory cannot claim to be a definitive theory of computation if it has no language to express the vast array of numerical algorithms that operate on reals. Moreover, at least a few people seem to think that the inability of a Turing machine to represent non-discrete computations is a severe blow to the Church-Turing thesis. Whether this is plausible or not, a well developed theory of computing with reals would go a long way towards clarifying the matter.

I have noted in the past that graph drawing is an interesting area on the boundary of geometry, combinatorics, graph theory, and visualization. This year's graph drawing conference was held in Harlem at the City College from Sep 29 to Oct 2, and Yehuda Koren was kind enough to submit a conference report. Here it is, with minor edits:

Graph Drawing 2004 was held on Sept. 29-Oct. 2, in City College, NY. More than 50 full papers were presented, dealing with all aspects of drawing graphs/digraphs.
These include works ranging from pure graph theory to the design of user interfaces, and geometry-related papers along with heuristics for graph embedding and much more. The invited speakers were Paul Seymour from Princeton University, who spoke on "The Structure of Claw-Free Graphs", and Erik Demaine from M.I.T. on "Fast Algorithms for Hard Graph Problems: Bidimensionality, Minors, and Local Treewidth" (ed: Erik and Mohammed Taghi Hajiaghayi have two papers on this topic in SODA 2005).

As usual some talks were more interesting, some less, and some I missed. To give some impression for those of you who missed this event, I want to mention here three works with a strong geometry motivation.

1. "Partitions of Complete Geometric Graphs into Plane Trees" by P. Bose, F. Hurtado, E. Rivera-Campo and D.R. Wood. They deal with a known open problem: "Does every complete geometric graph with an even number of vertices have a partition of its edge set into plane spanning trees?" A geometric graph is one in which the vertices are a set of points in the plane in general position (ed: amen) (that is, no three are collinear) and edges are closed segments connecting the points. Plane spanning trees contain all n vertices of the original graph and no two edges intersect. Of course, for a complete graph with n vertices we have n(n-1)/2 edges, so the partition must be into n/2 spanning trees. The answer is known to be affirmative for convex complete graphs (where convex means that the vertices are in convex position), so what was left for the authors in this case was to characterize all such partitions for convex complete graphs, which is pretty nice. The authors also show the existence of such partitions for a broader family of graphs. And for dessert they show that when the trees are not necessarily spanning (and then we may partition into more than n/2 trees), an upper bound for the number of trees would be n - sqrt(n/12).

2. "Reconfiguring Triangulations with Edge Flips and Point Moves", by G. Aloupis, P. Bose and P. Morin. The authors deal with transformations between different triangulations. They show that O(n log n) edge flips and point moves can transform any geometric near-triangulation on n points to any other near-triangulation on n (different) points. Thus, they improve a previous O(n^2) bound. They also suggest a more general point move that further reduces the complexity to O(n).

3. "No-three-in-line-in-3D" by A. Por and D. Wood. Their work is based on Erdös' proof in 1951 (of a problem proposed by Dudeney in 1917) that one can place O(n) points in an n x n grid with no three points collinear. The authors prove a smooth generalization to 3-D: one can place O(n^2) points in an n x n x n grid where no three are collinear. They also consider additional generalizations more relevant to graph drawing.
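The no-three-in-line condition in the third paper is easy to check by brute force for small instances; here is a Python sketch (the point set below is just an arbitrary small example, and the cubic triple loop is only reasonable for tiny inputs).

    from itertools import combinations

    def collinear(p, q, r):
        """True if the three grid points lie on a common line (zero cross product)."""
        return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

    def no_three_in_line(points):
        return not any(collinear(p, q, r) for p, q, r in combinations(points, 3))

    # Eight points in a 4x4 grid, two per row and column, with no three collinear.
    pts = [(0, 1), (0, 2), (1, 0), (1, 3), (2, 0), (2, 3), (3, 1), (3, 2)]
    print(no_three_in_line(pts))   # True
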
The social networking side of blogging became very important here, she says. Her blog helped her build links and share ideas with researchers in the area at other universities. and the more creative: University tutors are also experimenting with blogs as teaching tools, using them to disseminate links and information to classes, sometimes as places where students can collaborate on group Crooked Timber has a stunning list of academic bloggers, and it is not even close to being comprehensive (especially in the area of scientific blogs) [They don't have a section on computer science, that I hope to rectify :), and now have: look in Computers/Media/Communications..]. And this year's BloggerCon has a session on blogging in the academic world. I don't know if blogging can create discussions where none exist, but I have seen many interesting discussions over at Lance's blog (and some here) on topics that are either technical or fairly closely related to technical matters (mechanics of our academic process etc). I am (now) naturally suspicious of any claim that a new technology can revolutionize existing systems, but it is definitely plausible that academic blogging can change in some small way the manner in which research is conducted, as well as influence (more obviously) the dissemination of research into the An interesting anecdote about Andrei Kolmogorov, from Moscow Summer, by Mihajlo Mihajlov: Voznesensky triumphantly told me about the well-known Soviet mathematician and designer of electronic computers Andrei Kolmogorov, who for several years analyzed texts and sent his assistant to poetry-readings with the intention of finding the 'linguistic key' to versification and constructing a computer which would replace various poets. What Kolmogorov needed was Trurl's Electronic Bard.... Tall, Dark And Mysterious talks about women in math, and says this: On the academic front: long hair aside, there are few ways in which I am conventionally feminine. One way, however, in which I adhere closely to my biological (or socialized?) fate is that I am interested in, and good at, distilling my subjects of interest to non-experts. Hmm. That sounds like blogging to me. So does blogging reveal the feminine side ? Or is it just an ego trip... Read the article: it's good... (Via 3d Pancakes) Although I disagreed with Lance and Jeff on whether theory folks are too nice to ask hard questions, let us assume for argument's sake that there is some merit to this argument (there definitely is some just in comparison with other non-CS fields). What might be the cause of this restraint ? A distinct possibility is the fairly objective nature of our discipline itself. Suppose that tomorrow I present a paper describing an algorithm that can do range searching on collections of fetid schizoid tapeworms in logarithmic time. Now, what is clearly true is that my algorithm takes logarithmic time; about this there can be no objection. However, it is likely that some folks will find the area of fetid schizoid tapeworms somewhat uninteresting; they might be partial towards energetic mobile ringworms instead. There might still be others that claim that most schizoid tapeworms are merely malodorous, for which logarithmic time range searching queries are trivial. However, they cannot dispute the objective validity of my algorithm, and their dislike of my choice of object is not in itself objectively defensible. This, to me, is one possible reason why theoreticians don't argue so much. 
It is not that we don't have strong opinions; if you talk to people one-on-one they will often criticize results based on non-technical reasons. It is that maybe people are less comfortable with non-formal reasoning processes. Aha, but then you will say that "pure" mathematicians should not be the cantankerous bunch that they often can be. Indeed. However, theoretical computer science is most close in spirit to combinatorics, a field defined not by a foundational structure of axioms and theorems, but by a myriad of problems all connected together in countless different ways. It is hard to make objective judgements about the value of problems; most problems are quite interesting to think about in the abstract, at least to someone. It is a lot easier to evaluate the quality of a theoretical edifice, and in fact one of the knocks on combinatorics is precisely the "apparent" lack of such an edifice. I can only hope the folks at the University of Michigan don't read my blog... Two new blogs to add to the blogroll: • Fresh Tracks by Fernando Pereira (AT&T Research alumnus and chair at UPenn CS) • Special Circumstances by Anoop Sarkar (Linguistics/Learning @ Simon Fraser) Maybe soon we can have our own carnival ! Lance asks, and Ernie elaborates on, the following question: Are theoretical computer scientists too nice ? Lance uses evidence (lack of questioning at theory seminars) to infer niceness. Jeff goes further and argues that this is true well beyond theory. I'm not sure I agree. I think this issue goes to the heart of another discussion that Lance initiated: the balkanization of theory into subfields. The assumption is that if a theoretician gives a talk that is attended by other theoreticians, then lively discussion should ensue. However, given the vast scope of theory, how realistic is it for an audience consisting of (say) geometers to be able to discuss details of a speaker's talk on (say) the latest randomized rounding trick for LP-based approximations ? (example chosen completely at random :)). Obviously one can discuss things like the general motivation etc, but as some commenters pointed out, the motivation is often not the most interesting part of a work. And in theory talks, where one can measure fairly objecively the contribution of a particular work, discussions about contribution are either trivial, or require a much deeper understanding of the sub-area of the talk. The reason I bring this up is because it is NOT my experience that theory talks are viewed as missives from the Gods, to be watched reverentially and applauded solemnly. At the Math Seminar talks at AT&T, we have heated discussions, and it is very hard for a speaker to get beyond the first ten slides without some serious pounding :). I think the 'message from God' model applies more to conference presentations, where 1. the problem of sub-area matching comes up: there are probably only a few people who really understand the speaker's work, and they probably figure they can catch the person later on so as not to bore the audience (I have felt that way, and maybe I am wrong to feel so) 2. 
Time is really limited: you can't get into any kind of discussion, and it takes a while to grok some of the more technical details.Although Lance uses the Econ Workshop as a model for aggressive questioning, that is really a seminar, and not a typical conference venue To support this, I can provide as evidence our AT&T one-hour Cookie Talks, that are typically less theoretical and directed at a larger audience of systems, database, programming languages, algorithms folks, and so are less technical. In these talks there are lots of discussions, and these can often get acrimonious (we sometimes have to warn visiting speakers !). From the NYT: American researchers Richard Axel and Linda B. Buck shared the 2004 Nobel Prize in physiology or medicine on Monday for their work on the sense of smell -- showing how, for example, a person can smell a lilac in the spring and recall it in the winter But this is the best quote: Told of his honor, Axel told Swedish public radio: "That's really marvelous, I'm so honored." [... snip ...] Asked what he would do first, he replied: "I'm going to have a cup of coffee." Amen to that.... I shall now propose a new law of technology discovery: If the New York Times reports on a new technological phenomenon, the only people who don't know about it are the ones without computers, or any technology whatsoever. The latest issue of 'Tech Files' is an article by James Fallows (a journalist whose reporting in the Atlantic is always magnificient) on new trends in technology to ease the consumer's life, and talks about Firefox as a new alternative to IE. [S:He also gets some important points wrong::S]See end of post.... I also keep using IE, meanwhile, because parts of the Internet are coded to accept nothing else. The handy Google toolbar, for example, works only with IE. Actually Firefox has a default built-in Google toolbar, and in fact allows plugins for a number of different search engines (like Opera, from what I am told). It also has a cool extension to load a page in IE if necesssary. This can be convenient for some annoying misbehaved pages Update: Talk about the speed of the internet. I emailed NYT and got a response from James Fallows immediately (I guess the fact that I had commented on an article that hadn't yet been published helped somewhat :)). He points out correctly that the Google toolbar itself cannot be used on Firefox. However, one can search Google using the search toolbar extension that Firefox provides, and that I was referring to. Update II: Ken Clarkson points out that in fact Firefox DOES have a google toolbar. So I was wrong about being wrong.... The H-1 visa is a work permit that allows foreigners to work full-time in the US (student internships are different). There is a quota of 65,000 H-1 visas each year; these are issued starting in October and continue to be issued until the quota runs out. In the late 90s, in response to the growing tech boom and the need for extra manpower, Congress agreed to increase the limit to 195,000 for a three year period, ending in 2002. This was necessitated by an absurd situation where people had to apply for visas in March to have any hope of being far enough along in the queue to get a visa by October. Since most graduating students only got jobs in The end of the three-year period coincided with the tech crash, and the extra quota was no longer needed. Plus, with outsourcing quickly becoming a political hot potato, it became politically infeasible to keep a larger quota. 
And so here we are today, having come full circle (emphasis mine): A federal official on Friday said the annual limit for the controversial guest worker program has been met for fiscal year 2005, which runs from Oct. 1, 2004, to Sept. 30, 2005. United States Citizenship and Immigration Services, which processes applications for the H-1B program, is no longer accepting petitions for visas for initial employment for this fiscal year, said the official, who spoke on condition of anonymity. The visas, which allow skilled foreign workers to work in the United States for up to six years, have frequently been used by technology companies. That the cap has been reached as of the first day of the fiscal year is sure to stir up debate over the visa program. Businesses are seeking an exemption from the annual cap for foreign students graduating from U.S. schools with master's and doctorate degrees. Labor groups oppose the proposal. If you are reading this and wondering why you should care, here's why: 1. If you are a graduating foreign student, you have to plan your job search very carefully. Given that you will probably have to file an H-1 application in Mar/Apr, you need to have a job lined up as well as a commitment from your employer to file for the visa. 2. If you are involved in any way in recruiting in a[S:n academic or:S] corporate setting, you should know that any foreigner you hire will have this problem, and so you need to be able to act quickly on filing their visa petition. Since most students typically only have a year's worth of grace via their student visa, you only get one shot and filing the H-1 petition. Although I now have this thing, I spent my share of time in H-1 hell. It is not pleasant. Update: A reader points out: The H-1B cap doesn't apply to nonprofit educational institutions (such as universities) From Tim Gower: If you are trying to solve a problem, see if you can adapt a solution you know to a similar problem. By using this principle, one can avoid starting from scratch with each new problem. What matters is not the difficulty of the problem itself but the difficulty of the difference between the problem and other problems whose solutions are known. On the self-absorption of science, a pithy line from Bitch, Ph.D: People are not brains on sticks. People have lives.
{"url":"http://geomblog.blogspot.com/2004_10_01_archive.html","timestamp":"2014-04-21T02:01:00Z","content_type":null,"content_length":"326253","record_id":"<urn:uuid:f04240c5-d426-4a4e-9520-011a607d904e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction to Partial Differential Equations with MATLAB This is an excellent textbook that may not, I fear, be particularly well served by its title. The phrase “with MATLAB” that appears there may suggest to a prospective user of this book that MATLAB is an essential part of the exposition, and thus might scare away any person who does not wish to use that software as part of a course on the subject. This would be unfortunate for two reasons: first, the book can be used by a person who has no interest in MATLAB at all, and, second, this book deserves to be considered by — in fact, should be at the top of the list of — any professor looking for an undergraduate text in PDEs. To deal with the former point first: MATLAB is used in this book in two principal ways. Diagrams and tables have been constructed using this software, and a number of exercises (carefully marked) require MATLAB for their solution. However, neither MATLAB nor any other software is required to understand any explanation in this book; a person using this book with nothing but pencil and paper at hand would get a great deal out of it. And there are so many exercises in the book that a reader can simply ignore any of the MATLAB exercises and still have a more than adequate sample for any class. So, while MATLAB definitely serves to enhance the discussion here, it is by no means necessary for somebody using this book. As for the second point above, there are several reasons why I view this book as being in the upper echelon of undergraduate PDE textbooks. One is the extremely high quality of exposition. Coleman writes clearly and cleanly, with a conversational tone and a high regard for motivation. He clearly has a great deal of experience teaching this subject and has learned what points are likely to cause confusion and therefore need expanded discussion. The author also employs the nice pedagogical feature of page-long “preludes” to each chapter, which not only summarize what the chapter will cover and how it fits into the general theme of things, but also typically provide some brief historical commentary as well. In general, the overall effect of this book is like listening to a discussion by a good professor in office hours. In addition, as in any good textbook, there are, as noted above, lots of exercises, covering a reasonable range of difficulty, although most seemed to be of the routine computational variety. A 20-page Appendix gives answers (generally without accompanying computation) to a selection of these. In addition, the publisher’s webpage for the book references a solutions manual for instructors who have adopted the text, but an email to the publisher resulted in the information that this manual will not be available until the end of October. In keeping with the general student-friendly tone of the book, strict mathematical rigor is sometimes sacrificed in favor of readability. The reader will, for example, sometimes see a reference for a proof rather than the proof itself. For an elementary undergraduate course, this seems entirely appropriate. For a graduate course in mathematics, though, this point may be more troublesome (although graduate students in other disciplines who want to know how to use partial differential equations rather than prove things about them may find much here of interest). Another feature of the book that I like (except perhaps for one small point, about which I am somewhat conflicted) is its organization. 
I came to this book with no formal training at all in PDEs; whatever I knew about them was largely self-taught, mostly learned from the (now long out of print) book Partial Differential Equations: An Introduction by Eutiqio Young. That book convinced me that the subject was attractive enough for me to look at other books over the years, and it quickly became apparent to me that there were several ways to organize the material. Some books, for example, introduce Fourier series quite early (on page 17, for example, of Asmar’s Partial Differential Equations with Fourier Series and Boundary Value Problems) while others wait until the need for them has been more extensively motivated. Some books (like Haberman’s Applied Partial Differential Equations) plunge more or less immediately into a detailed study of one major example (in Haberman’s case, the heat equation); others do not. Young’s book started with first-order PDEs (linear and quasi-linear), which was then followed by a discussion of linear equations and the classification of second-order linear PDEs with constant coefficients into three canonical forms. He then discussed in some detail the representatives of each form — first the wave equation, and then (after a chapter on Fourier series motivated by the wave equation) chapters on the heat equation and Laplace equation. This seemed logical enough, but the book under review takes the different approach of looking not at each equation individually but instead considering solution methods as the motivating theme. (Having never taught a course in PDEs, I can’t recount from personal experience how successful this approach would be in a classroom, but it certainly reads well enough.) The author begins with an excellent introductory chapter discussing basic terminology and examples and then examining very simple PDEs that can actually be solved from basic principles — for example, PDEs that are really ODEs in disguise. The method of separation of variables and eigenvalue problems, make their first appearance here as well. The second chapter introduces the “big three” second-order linear PDEs with constant coefficients that were mentioned above. It is stated, but not yet shown, that these are the three canonical forms for the general linear second-order constant coefficient PDE. The emphasis in this chapter is on the physical derivation of each of these three equations, but the method of separation of variables is used to make a start at solving them, and this leads directly to Fourier series, the subject of chapter 3. Fourier series, and separation of variables, are then applied in chapter 4 to discuss solutions of the big three PDEs on finite domains — i.e., with boundary conditions as well as initial conditions. First homogenous equations with homogenous boundary conditions are treated; then the boundary conditions are allowed to be non-homogenous, and finally the equations themselves are allowed to be The next chapter introduces the method of characteristic curves. Here again, the level of complexity gradually rises: the first equations considered are first order linear equations with constant coefficients (with the convection equation derived and used as a motivating example), followed by first order linear equations with variable coefficients. Quasi-linear equations are referred to in the exercises. Then, second order equations are considered. 
The first to be discussed is the wave equation on an infinite domain, solvable by a simple change of variable which results in the notion of characteristic lines. In the next section, these ideas are extended to study the wave equation on semi-infinite and finite domains, and then in the next section they are extended to the case of the general second-order linear PDE, first with constant coefficients and then (briefly) with general coefficients. In chapter 6, the notion of Fourier transform is introduced and applied to study other second-order linear PDEs on unbounded domains. Distributions make their first appearance in this chapter, introduced first in a very formal way and then made more precise as linear functionals on the space of test functions.

These six chapters amount to what the author calls the “core” of the book, and correspond to what might be covered in a one-semester course (or perhaps just a bit more). They only amount to about half of the text, however: the remaining chapters cover more advanced topics such as special functions and orthogonal polynomials, Sturm-Liouville theory, PDEs in higher dimensions, the Green’s function and numerical methods. There is no chapter dependence chart, but these chapters seem to be fairly independent of one another, and the author does suggest some ways in which parts of these chapters could be used to flesh out a one-semester course if time permits.

The one potential concern that I have with this arrangement of the material (at least as far as the first half of the book is concerned) is that the fairly late introduction of characteristic curves results in some topics that might be seen as candidates for early discussion being deferred until fairly late into the semester. For example, as noted above, linear first order PDEs with constant coefficients can be easily solved by a simple change of variable, yet don’t really get discussed here until after the arguably more complicated second-order PDEs are. Also, it can be readily observed early on that the wave equation ($u_{tt} = c^2 u_{xx}$) has the general solution $f(x+ct) + g(x-ct)$, which in turn has a visually pleasing interpretation as the sum of two moving waves; since this is directly relevant to the theory of characteristic curves, it, too, is deferred until late in the semester. However, this does not seem like a terribly serious issue; people who are determined to introduce these topics earlier could easily modify the path through the book to accommodate this desire.

There is yet a third reason why I put this book high on the list of potential texts for an undergraduate course, and that is the reasonable price: prices fluctuate, of course, but as I write this, a new copy of this book can be obtained on amazon.com for less than 72 dollars, a much more reasonable price than is charged there for either Strauss (almost $139) or Haberman ($129). More and more, I find myself thinking seriously about the price of a book when I select one for a course, and a disparity like this, it seems to me, should weigh reasonably heavily in favor of this book.

Verdict: very highly recommended. I don’t know when or if I will ever teach an undergraduate PDE course, but if I ever do, this book will certainly be on my short list of possible texts.

Mark Hunacek (mhunacek@iastate.edu) teaches mathematics at Iowa State University.
{"url":"http://www.maa.org/publications/maa-reviews/an-introduction-to-partial-differential-equations-with-matlab-0?device=mobile","timestamp":"2014-04-17T06:51:46Z","content_type":null,"content_length":"39347","record_id":"<urn:uuid:627274dd-83b4-423f-b7fc-8bd054b5f70f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Man Page Manual Section... (3) - page: atan2l

NAME
atan2, atan2f, atan2l - arc tangent function of two variables

SYNOPSIS
#include <math.h>

double atan2(double y, double x);
float atan2f(float y, float x);
long double atan2l(long double y, long double x);

Link with -lm.

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

atan2f(), atan2l(): _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE; or cc -std=c99

DESCRIPTION
The atan2() function calculates the principal value of the arc tangent of y/x, using the signs of the two arguments to determine the quadrant of the result.

RETURN VALUE
On success, these functions return the principal value of the arc tangent of y/x in radians; the return value is in the range [-pi, pi].

If y is +0 (-0) and x is less than 0, +pi (-pi) is returned.
If y is +0 (-0) and x is greater than 0, +0 (-0) is returned.
If y is less than 0 and x is +0 or -0, -pi/2 is returned.
If y is greater than 0 and x is +0 or -0, pi/2 is returned.
If either x or y is NaN, a NaN is returned.
If y is +0 (-0) and x is -0, +pi (-pi) is returned.
If y is +0 (-0) and x is +0, +0 (-0) is returned.
If y is a finite value greater (less) than 0, and x is negative infinity, +pi (-pi) is returned.
If y is a finite value greater (less) than 0, and x is positive infinity, +0 (-0) is returned.
If y is positive infinity (negative infinity), and x is finite, pi/2 (-pi/2) is returned.
If y is positive infinity (negative infinity) and x is negative infinity, +3*pi/4 (-3*pi/4) is returned.
If y is positive infinity (negative infinity) and x is positive infinity, +pi/4 (-pi/4) is returned.

ERRORS
No errors occur.

CONFORMING TO
C99, POSIX.1-2001. The variant returning double also conforms to SVr4, 4.3BSD, C89.

COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
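For a quick sanity check of the sign conventions above, Python's math.atan2 (a wrapper over the C library function, so it follows the same conventions) can be used; the snippet below is only an illustration, not part of the manual text.

# Quick check of atan2's quadrant behaviour; note the argument order atan2(y, x).
import math

points = [
    ( 1.0,  1.0),   # first quadrant   -> pi/4
    ( 1.0, -1.0),   # second quadrant  -> 3*pi/4
    (-1.0, -1.0),   # third quadrant   -> -3*pi/4
    (-1.0,  1.0),   # fourth quadrant  -> -pi/4
    ( 1.0,  0.0),   # y > 0, x = +0    -> pi/2
    ( 0.0, -1.0),   # y = +0, x < 0    -> pi
]

for y, x in points:
    print(f"atan2({y:+.1f}, {x:+.1f}) = {math.atan2(y, x):+.4f} rad")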
{"url":"http://linux.co.uk/documentation/man-pages/subroutines-3/man-page/?section=3&page=atan2l","timestamp":"2014-04-18T08:03:54Z","content_type":null,"content_length":"54989","record_id":"<urn:uuid:111a20e8-68d1-47eb-bebc-ed97cb800091>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
SuperKids Math Review math worksheets > > fractions > > adding fractions SuperKids Math Review How to Add Fractions Remember . . . Here's a memory trick: the Denominator is the bottom, or Down number in a fraction -- and both Denominator and Down start with the letter D. Adding Fractions with COMMON Denominators Adding fractions with COMMON denominators is simple. Just add the top numbers (the numerators) together, and place the resulting answer in the top of a fraction using the existing denominator for the bottom number. Then reduce the fraction, if possible Example 1: Simple fraction addition No reduction is possible, so we have found the answer! Example 2: Reducing the fraction answer Then reduce: Example 3: Converting the answer to a mixed number Then convert the improper fraction to a mixed number: Creating Common Denominators How do we do that? Simple! Remember, if you multiply the top and bottom of a fraction by the same number, it doesn't affect the value of the fraction. Example 1: If we have the fraction 2/3, we can multiply the top and bottom by 2, and not change its value: (2/2) x (2/3) = 4/6 Then if we reduce 4/6, we still get the original number, 2/3 Example 2: If we have the fraction 2/3, we could multiply top and bottom by 5, and not change its value: (5/5) x (2/3) = 10/15. Then if we reduce 10/15, we still get the original number, 2/3. Why does this work? Because any number divided by itself equals one. 2/2 = 1, 5/5 = 1, etc. And any number multiplied by 1 equals itself! The point is, you don't change the value of a fraction if you multiply its top and bottom numbers by the same number! Adding Fractions with DIFFERENT denominators You can only add together fractions which have the same denominator, so you must first change one or both of the fractions so that you end up with two fractions having a common denominator. The easiest way to do this, is to simply select the opposite fraction's denominator to use as a top and bottom multiplier. Example 1: Say you have the fractions 2/3 and 1/4 Select the denominator of the second fraction (4) and multiply the top and bottom of the first fraction (2/3) by that number: Select the denominator of the first fraction (3) and multiply the top and bottom of the second fraction (1/4) by that number: These two fractions (8/12 and 3/12) have common denominators - the number 12 on the bottom of the fraction. Add these two new fractions together: Example 2: Say you have the fractions 3/5 and 2/7 Select the denominator of the second fraction (7) and multiply the top and bottom of the first fraction (3/5) by that number Select the denominator of the first fraction (5) and multiply the top and bottom of the second fraction (2/7) by that number These two fractions (21/35 and 10/35) have common denominators -- the number 35 on the bottom of the fraction. We can now add these two fractions together, because they have common denominators: Got it? Great! Then go to the SuperKids Math Worksheet Creator for Basic Fractions, and give it a try! [Questions?] Make this your browser's home page! Questions or comments regarding this site? webmaster@superkids.com Copyright © 1998-2014 Knowledge Share LLC. All rights reserved. Privacy Policy
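If you have Python handy, you can also check Example 1 of the "different denominators" section (2/3 + 1/4) the same way: build the common denominator by hand, then let the exact Fraction type confirm the answer. This is just a check of the steps above, not part of the lesson itself.

# Check 2/3 + 1/4 by building a common denominator, as in Example 1 above.
from fractions import Fraction

a_num, a_den = 2, 3          # the fraction 2/3
b_num, b_den = 1, 4          # the fraction 1/4

common_den = a_den * b_den   # 3 * 4 = 12
new_a_num = a_num * b_den    # 2 * 4 = 8   -> 8/12
new_b_num = b_num * a_den    # 1 * 3 = 3   -> 3/12
total_num = new_a_num + new_b_num            # 8 + 3 = 11 -> 11/12

print(f"{new_a_num}/{common_den} + {new_b_num}/{common_den} = {total_num}/{common_den}")

# Fraction does the same arithmetic exactly and reduces automatically.
print(Fraction(2, 3) + Fraction(1, 4))       # prints 11/12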
{"url":"http://www.superkids.com/aweb/tools/math/fraction/commond/add.shtml","timestamp":"2014-04-18T13:38:19Z","content_type":null,"content_length":"14562","record_id":"<urn:uuid:fef3dcb0-caff-4e5d-8b4a-f64a60a5d32b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: simulation: finding the winner, and generating unique values Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: RE: simulation: finding the winner, and generating unique values From Kieran McCaul <kieran.mccaul@uwa.edu.au> To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu> Subject st: RE: simulation: finding the winner, and generating unique values Date Sat, 25 May 2013 11:40:30 +0800 I'm not quite sure what the intent of the simulation is. 1. How can I simulate B so that only unique values of 1-5 are created? I don't understand what you mean here, particularly given that the code you use is under a comment "* simulate variable: B (values 1-5 sampled without replacement) ". There are 20 observations, 5 per level of "a". The only way that "b" can be assigned the values 1 to 5 without replacement is by assigning them within "a". I don't think this is what you are trying to do, but if it is, the following will do that: set seed 12358 * simulate A set obs 20 gen byte a = mod(_n,4) + 1 sort a by a:gen byte b=_n gen byte c = uniform()>=0.5 -----Original Message----- From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Michael McCulloch Sent: Saturday, 25 May 2013 3:31 AM To: statalist@hsphsun2.harvard.edu Subject: st: simulation: finding the winner, and generating unique values * I have three questions related to the following data simulation. * 1. How can I simulate B so that only unique values of 1-5 are created? * 2. Is there a more elegant way to code simulation of variable B? * 3. How can I identify which variable B had the highest total count of C, within levels of A? set seed 12358 * simulate A set obs 20 //create 20 observations gen a=1 in 1/5 replace a=2 in 6/10 replace a=3 in 11/15 replace a=4 in 16/20 * simulate variable: B (values 1-5 sampled without replacement) gen b=. replace b=(1 + int(5*runiform())) if a==1 replace b=(1 + int(5*runiform())) if a==2 replace b=(1 + int(5*runiform())) if a==3 replace b=(1 + int(5*runiform())) if a==4 * simulate variable: C gen c1=uniform() //Generate n (equal chance of 0-1) gen c=. replace c=0 if c1<0.5 replace c=1 if c1>=0.5 drop c1 list, noobs Best wishes, Michael McCulloch, LAc MPH PhD Pine Street Foundation, since 1989 124 Pine Street | San Anselmo | California | 94960-2674 P: (415) 407-1357 | F: (206) 338-2391 | http://www.PineStreetFoundation.org * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/faqs/resources/statalist-faq/ * http://www.ats.ucla.edu/stat/stata/
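Not a Stata answer, but in case it helps to see the logic spelled out in pseudocode form: below is a rough Python sketch of the same simulation. The per-group shuffle gives unique values of b within each level of a (question 1), and the tally at the end is one way of finding which b has the highest total c within each a (question 3). The variable names simply mirror the Stata example; treat it only as an illustration, not a drop-in replacement for the Stata code.

# Rough sketch (Python, not Stata) of the simulation discussed above:
# 20 observations, a = 1..4 with 5 rows each, b = a permutation of 1..5
# within each level of a (sampled without replacement), c ~ Bernoulli(0.5).
import random
from collections import defaultdict

random.seed(12358)             # same seed value as the Stata example, for flavor

rows = []
for a in range(1, 5):
    b_values = random.sample(range(1, 6), 5)   # unique values 1..5 within this a
    for b in b_values:
        c = 1 if random.random() >= 0.5 else 0
        rows.append((a, b, c))

# Question 3: within each level of a, which b has the highest total c?
totals = defaultdict(int)
for a, b, c in rows:
    totals[(a, b)] += c

for a in range(1, 5):
    winner = max(range(1, 6), key=lambda b: totals[(a, b)])
    print(f"a={a}: b={winner} has the highest total c ({totals[(a, winner)]})")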
{"url":"http://www.stata.com/statalist/archive/2013-05/msg00810.html","timestamp":"2014-04-20T18:36:28Z","content_type":null,"content_length":"10650","record_id":"<urn:uuid:590c5459-88f7-4a68-ba13-eab3d59b7b67>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Iterated function systems and the global Construction of fractals Results 1 - 10 of 90 - Bulletin of Symbolic Logic , 1997 "... We present a survey of the recent applications of continuous domains for providing simple computational models for classical spaces in mathematics including the real line, countably based locally compact spaces, complete separable metric spaces, separable Banach spaces and spaces of probability dist ..." Cited by 48 (10 self) Add to MetaCart We present a survey of the recent applications of continuous domains for providing simple computational models for classical spaces in mathematics including the real line, countably based locally compact spaces, complete separable metric spaces, separable Banach spaces and spaces of probability distributions. It is shown how these models have a logical and effective presentation and how they are used to give a computational framework in several areas in mathematics and physics. These include fractal geometry, where new results on existence and uniqueness of attractors and invariant distributions have been obtained, measure and integration theory, where a generalization of the Riemann theory of integration has been developed, and real arithmetic, where a feasible setting for exact computer arithmetic has been formulated. We give a number of algorithms for computation in the theory of iterated function systems with applications in statistical physics and in period doubling route to chao... - IEEE Trans. Image Processing , 1997 "... Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work well? These are the central issues we address. We introduce a new waveletbased framework for analyzing block ..." Cited by 42 (2 self) Add to MetaCart Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work well? These are the central issues we address. We introduce a new waveletbased framework for analyzing block-based fractal compression schemes. Within this framework we are able to draw upon insights from the well-established transform coder paradigm in order to address the issue of why fractal block coders work. We show that fractal block coders of the form introduced by Jacquin[1] are a Haar wavelet subtree quantization scheme. We examine a generalization of this scheme to smooth wavelets with additional vanishing moments. The performance of our generalized coder is comparable to the best results in the literature for a Jacquin-style coding scheme. Our wavelet framework gives new insight into the convergence properties of fractal block coders, and leads us to develop an unconditionally convergen... - Information and Computation , 1996 "... We introduce the notion of weakly hyperbolic iterated function system (IFS) on a compact metric space, which generalises that of hyperbolic IFS. Based on a domain-theoretic model, which uses the Plotkin power domain and the probabilistic power domain respectively, we prove the existence and uniquene ..." Cited by 30 (10 self) Add to MetaCart We introduce the notion of weakly hyperbolic iterated function system (IFS) on a compact metric space, which generalises that of hyperbolic IFS. 
Based on a domain-theoretic model, which uses the Plotkin power domain and the probabilistic power domain respectively, we prove the existence and uniqueness of the attractor of a weakly hyperbolic IFS and the invariant measure of a weakly hyperbolic IFS with probabilities, extending the classic results of Hutchinson for hyperbolic IFSs in this more general setting. We also present finite algorithms to obtain discrete and digitised approximations to the attractor and the invariant measure, extending the corresponding algorithms for hyperbolic IFSs. We then prove the existence and uniqueness of the invariant distribution of a weakly hyperbolic recurrent IFS and obtain an algorithm to generate the invariant distribution on the digitised screen. The generalised Riemann integral is used to provide a formula for the expected value of almost everywhere continuous functions with respect to this distribution. For hyperbolic recurrent IFSs and Lipschitz maps, one can estimate the integral up to any threshold of accuracy.] 1996 Academic Press, Inc. 1. , 1994 "... This paper is concerned with function approximation and image representation using a new formulation of Iterated Function Systems (IFS) over the general function spaces L p (X; ¯): An N-map IFS with grey level maps (IFSM), to be denoted as (w; \Phi), is a set w of N contraction maps w i : X ! X o ..." Cited by 29 (10 self) Add to MetaCart This paper is concerned with function approximation and image representation using a new formulation of Iterated Function Systems (IFS) over the general function spaces L p (X; ¯): An N-map IFS with grey level maps (IFSM), to be denoted as (w; \Phi), is a set w of N contraction maps w i : X ! X over a compact metric space (X; d) (the "base space") with an associated set \Phi of maps OE i : R ! R. Associated with each IFSM is an operator T which, under certain conditions, may be contractive with unique fixed point u 2 L p (X; ¯). A rigorous solution to the following inverse problem is provided: Given a target v 2 L p (X; ¯) and an ffl ? 0, find an IFSM whose attractor satisfies k u \Gamma v k p ! ffl. An algorithm for the construction of IFSM approximations of arbitary accuracy to a target set in L 2 (X; ¯), where X ae R D and ¯ = m (D) (Lebesgue measure), is also given. The IFSM formulation can easily be generalized to include the "local IFSM" (LIFSM) which considers the... , 1996 "... With the increase in the number of digital networks and recording devices, digital images appear to be a material, especially still images, whose ownership is widely threatened due to the availability of simple, rapid and perfect duplication and distribution means. It is in this context that several ..." Cited by 27 (0 self) Add to MetaCart With the increase in the number of digital networks and recording devices, digital images appear to be a material, especially still images, whose ownership is widely threatened due to the availability of simple, rapid and perfect duplication and distribution means. It is in this context that several European projects are devoted to finding a technical solution which, as it applies to still images, introduces a code or Watermark into the image data itself. This Watermark should not only allow one to determine the owner of the image, but also respect its quality and be difficult to remove. An additional requirement is that the code should be retrievable by the only mean of the protected information. In this paper, we propose a new scheme based on fractal coding and decoding. 
In general terms, a fractal coder exploits the spatial redundancy within the image by establishing a relationship between its different parts. We describe a way to use this relationship as a means of embedding a Watermark. Tests have been performed in order to measure the robustness of the technique against JPEG conversion and low pass filtering. In both cases, very promising results have been - Adv. Appl. Prob , 1995 "... We present a systematic method of approximating, to an arbitrary accuracy, a probability measure ¯ on [0; 1] q ; q 1, with invariant measures for Iterated Function Systems by matching its moments. There are two novel features in our treatment: (1) An infinite number of fixed affine contraction ma ..." Cited by 12 (6 self) Add to MetaCart We present a systematic method of approximating, to an arbitrary accuracy, a probability measure ¯ on [0; 1] q ; q 1, with invariant measures for Iterated Function Systems by matching its moments. There are two novel features in our treatment: (1) An infinite number of fixed affine contraction maps on X; W = fw 1 ; w 2 ; : : :g, subject to an "ffl-contractivity" condition, is employed. Thus, only an optimization over the associated probabilities p i is required. (2) We prove a Collage Theorem for Moments which reduces the moment matching problem to that of minimizing the "collage distance" between moment vectors. The minimization procedure is a standard quadratic programming problem in the p i which can be solved in a finite number of steps. Some numerical calculations for the approximation of measures on [0,1] are presented. AMS Subject Classifications: 28A, 41A, 58F 1. Introduction This paper is concerned with the approximation of probability measures on a compact metric space X ... , 1993 "... this memory requirement issue may become a factor, in which case the Random Iteration Algorithm could be adapted to overcome the shortcomings mentioned here with some simple checks on the path of the calculations. 2.3 Conclusion ..." Cited by 9 (0 self) Add to MetaCart this memory requirement issue may become a factor, in which case the Random Iteration Algorithm could be adapted to overcome the shortcomings mentioned here with some simple checks on the path of the calculations. 2.3 Conclusion - IMA Vol. Math. Appl , 2002 "... . We give a survey of some results within the convergence theory for iterated random functions with an emphasis on the question of uniqueness of invariant probability measures for place-dependent random iterations with finitely many maps. Some problems for future research are pointed out. 1. ..." Cited by 8 (1 self) Add to MetaCart . We give a survey of some results within the convergence theory for iterated random functions with an emphasis on the question of uniqueness of invariant probability measures for place-dependent random iterations with finitely many maps. Some problems for future research are pointed out. 1. - Proceedings of IEEE International Conference on Image Processing, (ICIP 2002), I-836 – I-839 , 2002 "... I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be electronically available to the public. ii The need for image enhancement and restoration is encounter ..." Cited by 7 (1 self) Add to MetaCart I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. 
I understand that my thesis may be electronically available to the public. ii The need for image enhancement and restoration is encountered in many practical applications. For instance, distortion due to additive white Gaussian noise (AWGN) can be caused by poor qual-ity image acquisition, images observed in a noisy environment or noise inherent in communication channels. In this thesis, image denoising is investigated. After reviewing standard image denoising methods as applied in the spatial, frequency and wavelet domains of the noisy image, the thesis embarks on the endeavor of developing and experimenting with new image denoising methods based on fractal and wavelet transforms. In particular, three new image denoising methods are proposed: context-based wavelet thresholding, predictive fractal image denoising and fractal-wavelet image denoising. The proposed context-based thresholding strategy adopts localized hard and soft thresh-olding operators which take in consideration the content of an immediate neighborhood of a wavelet coefficient before thresholding it. The two fractal-based predictive schemes are based on a simple yet effective algorithm for estimating the fractal code of the original noise-free image from the noisy one. From this predicted code, one can then reconstruct a fractally denoised estimate of the original image. This fractal-based denoising algorithm can be applied in the pixel and the wavelet domains of the noisy image using standard fractal and fractal-wavelet schemes, respectively. Furthermore, the cycle spinning idea was implemented in order to enhance the quality of the fractally denoised estimates. Experimental results show that the proposed image denoising methods are competitive, or sometimes even compare favorably with the existing image denoising techniques reviewed in the thesis. This work broadens the application scope of fractal transforms, which have been used mainly for image coding and compression purposes. iii
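Several of the abstracts above revolve around attractors of iterated function systems and the random iteration ("chaos game") algorithm. As a generic illustration of that algorithm (not tied to any particular paper listed here), the short Python script below approximates the attractor of the classic three-map Sierpinski IFS by random iteration:

# Random iteration ("chaos game") approximation of an IFS attractor.
# The three maps below each send (x, y) halfway toward a fixed vertex;
# together they form the classic Sierpinski-triangle IFS.
import random

vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def step(point):
    vx, vy = random.choice(vertices)      # pick one contraction map uniformly
    x, y = point
    return ((x + vx) / 2.0, (y + vy) / 2.0)

random.seed(0)
point = (0.2, 0.3)                        # arbitrary starting point
for _ in range(100):                      # short burn-in toward the attractor
    point = step(point)

samples = []
for _ in range(20000):                    # points approximating the attractor
    point = step(point)
    samples.append(point)

# Crude text rendering of the samples on a 40x20 character grid.
grid = [[" "] * 40 for _ in range(20)]
for x, y in samples:
    grid[int((1.0 - y) * 19.99)][int(x * 39.99)] = "*"
print("\n".join("".join(row) for row in grid))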
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=375149","timestamp":"2014-04-17T16:09:56Z","content_type":null,"content_length":"39028","record_id":"<urn:uuid:49d84012-b183-4abf-aae0-70e8e09fd520>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating the (Unlikely) Odds of NASA Satellite Casualties How did NASA come up with the 1 in 3200 chance of anyone on Earth being hit by pieces of the Upper Atmosphere Research satellite (UARS) due to crash on Friday? We contacted NASA yesterday to ask them what goes into their estimate of the risks that incoming satellites present. In short, the answer we received from NASA's Nick Johnson is "A complex computer program called ORSAT is used," to do the calculation. (The complete NASA reply is available at the end of this post.) That's cool, but computer output is only as good as the input and programming. In my experience, it's always a good idea to get out a pencil and paper to make sure the computer's answer makes sense. So I thought I'd try to estimate the odds UARS hitting anyone all by myself. Here goes . . . One thing we know, thanks to NASA press officers, is that UARS will hit somewhere between 57 north and 57 south latitude. If you happen to have a globe handy, you will see that the impact zone includes most of the populated areas of the planet. Northern Europe, as well as the upper parts of Siberia, China and Canada are safe, as it Antarctica, but the places where the vast majority of the planet's 7 billion people live are between 57 north and 57 south latitude. By just eyeballing the globe, I'd say that roughly 4/5 of the planet's surface falls in that region as well. About three quarters of the Earth is covered in water, and I'm going to assume for simplicity that there are so few people on the water at any given moment that we can just ignore all the wet portions. That means there's only a 1 in 4 chance of the satellite hitting places where people might be. I'm going to count it as a hit if the satellite gets within one square meter of any person, which means there's about 7 billion square meters of human that we have to worry about. Now, there's approximately 150,000,000 square kilometers of dry land on Earth. Because 4/5 of that land is in the danger zone, we only have about 120,000,000 square kilometers at risk. We have to convert the area humans cover from square meters to square kilometers to figure out the odds that a person will be hit. (7,000,000,000 square meters)/(1,000,000 square meters per square kilometer) = 7,000 square kilometers of human To figure out the chances of the satellite landing on a bit of land with a human on it, we just have to divide the total land area into the area of land covered in people. (7,000 square kilometers of people)/(120,000,000 square kilometers of land) = about 1/17,000 So there's roughly a 1 in 17,000 chance that the satellite will hit a person, if it hits land at all. But there's only a 1 in 4 chance of it hitting land, so divide 1/17,000 by four to get 1/68,000 that the satellite will hit a person. One thing I didn't include yet is that the satellite will break up into several pieces as it reenters the atmosphere. If you guess that it will break into about 10 pieces, the odds get worse by a factor of 10 (because the same calculation applies to each piece) to about 1 in 6800, which is only about twice the NASA estimate of 1 in 3200. There you have it, the roughest of rough estimates agrees with NASA's computer calculations. I'm sure that makes everyone more comfortable with the risks we face tomorrow. Update: I just learned that NASA expects the satellite to break into about two dozen pieces. If I take that into account in my calculation, my estimate changes to 1 in 2830, which is shockingly close to the NASA estimate. 
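Here is the same back-of-the-envelope arithmetic in a few lines of Python, with the number of surviving pieces left as a parameter; the inputs are just the rough figures used above, so the output is only as good as those guesses.

# Back-of-the-envelope odds that some piece of the satellite hits a person,
# using the same rough figures as the calculation above.
people = 7e9                     # people on Earth
area_per_person_m2 = 1.0         # count a "hit" if debris lands within ~1 m^2 of someone
land_area_km2 = 150e6            # total dry land on Earth
fraction_land_in_band = 4 / 5    # rough share of land between 57 N and 57 S
chance_piece_hits_land = 1 / 4   # about 3/4 of the surface is water

def odds_of_hit(pieces):
    human_area_km2 = people * area_per_person_m2 / 1e6        # about 7,000 km^2
    land_in_band_km2 = land_area_km2 * fraction_land_in_band  # about 120,000,000 km^2
    p_one_piece = (human_area_km2 / land_in_band_km2) * chance_piece_hits_land
    return pieces * p_one_piece   # small-probability approximation, as in the text

for pieces in (1, 10, 24):
    p = odds_of_hit(pieces)
    print(f"{pieces:2d} piece(s): about 1 in {round(1 / p):,}")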
(In fact, I wouldn't have believed it if I hadn't done the calculation from scratch myself.) If you want to know more about how NASA did the calculations, here are Nick Johnson's answers to our questions. 1. How did you/NASA arrive at the "estimated human casualty risk" of 1/3,200 people? Was this done through computer modeling or a back-of-the-envelope type calculation? A complex computer program called ORSAT is used. A lower fidelity routine can be found in NASA's DAS software, which can be downloaded from http://www.orbitaldebris.jsc.nasa.gov/mitigate/das.html. 2. What are the assumptions that went into the calculation? The initial state of the vehicle at an altitude of 122 km, e.g., flight trajectory and temperature, are needed for an assessment. Breakup altitude (typically 78 km) must also be input. However, these values are run parametrically if there is any uncertainty which might affect the answer. Note that most satellite components will survive or will demise under a wide variety of initial conditions. Only those components which appear to be borderline, i.e., heat of ablation is not quite reached or just barely reached, normally require further evaluation. 3. How many assumptions were made? See above. 4. Is there redundancy in the risk assessment and if so, how was it accomplished? (For example, if a computer model calculated the risk, did a human do a paper and pencil calculation to make sure the number was reasonable?) Paper and pencil is not an option for a complex spacecraft. The ORSAT model has been verified and validated in a number of different ways. 5. Could you explicitly show your calculations to us? A summary of the assessment is provided in the attachment, a paper delivered at the 2002 World Space Congress. (see the abstract of the paper here, or email us for a full copy) No comments:
{"url":"http://physicsbuzz.physicscentral.com/2011/09/calculating-ulikely-odds-of-nasa.html","timestamp":"2014-04-18T05:30:37Z","content_type":null,"content_length":"92435","record_id":"<urn:uuid:2d277006-d7fe-4196-8748-11967f0e6ab7>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Erdös and the Quantum Method March 28, 2009 Proving theorems with the quantum method Paul Erdos needs no introduction to our community, but I will give him one anyway. He won the AMS Cole Prize in 1951 and in 1983 the Wolf Prize. He is famous for so many things: an elementary proof of the Prime Number Theorem, the solver of countless open problems, the creator of countless more open problems, and his ability to see to the core of a problem. There are, by the way, several versions of how he and Atle Selberg found their respective elementary proofs of the Prime Number Theorem, and more importantly where the credit should go and to whom. See this for a history of the Our friends at Wikipedia have a story about Erdos’s use of amphetamines, but the version I heard from Ron Graham is a bit different. Ron bet Erdos $500 dollars that he could not stop taking the drug for a month. Exactly one month later Erdos met Ron, took his $500 dollars in winnings, and immediately popped a pill. Erdos said to Ron, “I felt mortal.” Perhaps my favorite story about Erdos is the “dinner conversation story.” Apparently Erdos was at a conference and found himself seated at a table of experts all from some arcane area of mathematics–let’s call it area A. Paul knew nothing about their area. Nothing. The rest of the table was having a lively discussion about the big open question in their area A. Erdos, of course, could not follow any of the discussion, since he knew none of the terms from A. Slowly, by asking lots of questions, “what does that term mean?” and “why is that true?” and so on, Erdos began to understand the open problem of area A. Once he finally understood the problem, once the definitions and basic facts had been made clear to him, he was able to solve their open problem. Right there. Right then. During the rest of the dinner Paul explained his solution to the table of experts–who I am sure had mixed feelings. Whether true or not, I think this story captures one of Erdos’s abilities. The ability to cut to the core of an open problem, and more often than not the core was a combinatorial problem. Not always. But very often. And when it was combinatorial there was a good chance that he could solve it. A personal story–alas my Erdos number is only 2–is about the time I told Erdos about the “Planar Separator Theorem”. This is of course joint work with Bob Tarjan–I plan to discuss separator theorems of all kinds in another post. After listening politely to me nervously explain the theorem, I was quite junior at the time, Paul smiled and said, “that is not to be expected.” I took this as a compliment. It made my day. The Probabilistic Method Erdos and the Quantum Method? What am I talking about? Actually Erdos is famous, of course, for the probabilistic method. I believe similar methods had been used by others, again I am not trying to be a perfect historian; but Erdos certainly deserves credit for making the probabilistic method a standard tool that everyone working in theory must know. The method is amazingly powerful, clearly many of today’s results would be unprovable without the method. You probably, no pun intended, know the general method: If you wish to construct an object with some property often a “random” object can be proved to have the property with positive probability. If this is the case, then there must exist some object with the desired property. The best current explanation of this method is the great book of Noga Alon and Joel Spencer. 
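To recall the flavor of the method, here is the standard first example, stated here for concreteness rather than taken from either book: color each edge of the complete graph $K_n$ red or blue independently with probability $1/2$. The expected number of monochromatic copies of $K_k$ is

$\displaystyle \binom{n}{k}\, 2^{\,1-\binom{k}{2}},$

so whenever this quantity is less than $1$, some coloring has no monochromatic $K_k$ at all, and hence the Ramsey number satisfies $R(k,k) > n$. The statement about Ramsey numbers says nothing about randomness; the coin flips appear only inside the proof.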
However, the first book on the probabilistic method was the thin small blue book of Erdos and Spencer. The book consisted of a series of chapters, each gave another example of the probabilistic method. There was no motivation, no exercises, no overview, no frills. Just chapter after chapter of “We will prove X. Consider a random object ${\dots}$ They used the chilling phrase in the book: “it follows by elementary calculation that this formula is true”, that always scared me. Often such statements could be checked in a hour or two, while I remember one that took me a few days. The ”elementary calculation”, in this case, used Stirling’s approximation for ${n!}$ out to four terms. I loved the book. I still remember my first use of the probabilistic method to solve an open problem. A year before studying the book I was asked about a certain open problem by Arnie Rosenberg of IBM Yorktown. I never got anywhere on it, and eventually forgot about it. After studying the book, one day I ran into Andy Yao at a coffee shop near MIT, and asked him what he was working on. He stated the problem, and it sounded familar. I suddenly realized two things: one it was the problem Arnie had asked me a year earlier, and second that I knew how to solve it. The answer was the probabilistic method. In about two days Andy and I had worked out all the details and we had solved the problem. The Quantum Method I have my doubts about the future of Quantum Computing. But I always think of: Clarke’s First Law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong. Arthur Clarke So my doubts are probably wrong. However, whether or not quantum computers are ever built is not important once you realize that there is a new proof method based on quantum theory. There is a small but growing collection of theorems that have nothing to do with quantum computation, yet are proved by quantum arguments. This is extremely important. This–to me–changes everything. If quantum theory becomes a new technology for proving theorems, then we have to become experts in it or be left behind. There is an uncanny parallel with the probabilistic method. The amazing part of the probabilistic method is that it can solve problems that have nothing to do with probability theory. The statement of the theorems do not talk about random objects. The only place randomness is used is inside the proof itself. This is the magic of the method. Take a pure statement that is completely deterministic, and prove by probability theory that the statement is true. That is where the power of the probabilistic method lies, in my opinion. This is beginning to happen with quantum theory. There are now a small number of theorems that can be proved by using quantum arguments. The statements of the theorems have nothing to do with anything quantum. Nothing. I believe that such quantum arguments could parallel the probabilistic method, and help solve many currently open problems. It is early and we do not yet know if this new method could be as powerful as the probabilistic method, but time will tell. The quantum method yields proofs that look something like this: • Assume, by way of contradiction, that some classic result ${\mathsf{CR}}$ is true. • Use simulation, or another method, to show that this implies that a quantum result ${\mathsf{QR}}$ is true. 
• Reach a contradiction by showing that $\mathsf{QR}$ is too good and violates some known result from quantum theory. Thus, the classic result $\mathsf{CR}$ is false.

Very cool.

Examples of The Quantum Method

One of the major players in this new area is Ronald de Wolf. He has been able to prove a number of theorems using the quantum method; I will highlight one of his results on locally decodable codes. His other results include: a result on the degrees of symmetric functions, and another on a lower bound on matrix rigidity. The key in all of these is that the results have nothing in their statements about quantum theory. The degree of the symmetric function is a classic measure, and matrix rigidity is the classic notion. And so on. Again, notice the parallel with the probabilistic method.

The quantum method is in its infancy, as de Wolf has pointed out to me. There are a few examples, yet there is no underlying structure, and there is still no sense of where the method will go. Will we look back, in a few years, and see a few isolated results, or is this the beginning of a new chapter in theory? Only time will tell, but as an outsider I am excited by the initial results. Others are working on results that may also fall under this umbrella: apparently Scott Aaronson is one of them. Of course I am not surprised, since Scott is one of the experts on all things quantum.

Let’s turn to one important example of the quantum method due to Iordanis Kerenidis and de Wolf. They prove a lower bound on the size of certain codes. An error-correcting code is said to be a locally decodable code (LDC) if a randomized algorithm can recover any single bit of a message by reading only a small number of symbols of a possibly corrupted encoding of the message, with an adaptive decoding algorithm. A $(q,\delta,\epsilon)$-locally decodable code encodes $n$-bit strings $x$ into $m$-bit code words $C(x)$. For any index $i$, the bit $x_{i}$ can be recovered with probability $1/2 + \epsilon$ with $q$ queries, even if $\delta m$ bits have been corrupted. The theorem of Kerenidis and de Wolf is:

Theorem: Any $(2,\delta,\epsilon)$-locally decodable code has length $m \ge 2^{\Omega(n)}$.

Here is an outline of their proof, reproduced here with de Wolf’s kind permission. Given a $2$-query LDC $C:\{0,1\}^n \rightarrow \{0,1\}^m$ we will proceed as follows to get a lower bound on $m$. Assume that the classical 2-query decoder recovers the bit $x_i$ by choosing a random element $(j,k)$ from some perfect matching $M_i$ of $[m]$; it then queries bits $j$ and $k$ of the (possibly corrupted) codeword, and returns their XOR as its guess for $x_i$. It was already known from Jonathan Katz and Luca Trevisan that this “normal form” can be achieved, at the expense of reducing the success probability by a small amount. Now consider the quantum state (“quantum code for x”) $|\phi_x\rangle$ equal to

$\displaystyle \frac{1}{\sqrt{m}} \sum_{j=1}^m (-1)^{C(x)_j}\, |j\rangle$

This is the uniform superposition over all entries of the codeword $C(x)$ (with bits turned into signs). This state is an $m$-dimensional vector, so it has $\log(m)$ qubits. There is a quantum measurement (slightly technical but not hard) that, when applied to this state, gives us the XOR of the two bits of a random element from $M_i$, for each $i$ of our choice, and hence allows us to make the same prediction as the classical $2$-query decoder. Thus, we have a quantum state that allows us to predict each of the $n$ bits of $x$. It follows from a known quantum information theory result (the random access code lower bound of Ashwin Nayak) that the number of qubits is $\Omega(n)$. Accordingly, we have $\log(m)=\Omega(n)$, hence $m \ge 2^{\Omega(n)}$.

The exponential lower bound matches – up to the small constant factors in the exponent – the Hadamard code, which is a 2-query LDC. This is still the only super-polynomial lower bound known for LDCs.

Open Problems

The obvious open questions are: can we use the quantum method to solve other open problems? Or to get cleaner proofs of old theorems? Or to extend and sharpen known theorems? I do not understand the limits of this new method, but here are some potential problems that could be attacked with the quantum method:

• Circuit lower bounds.
• Data structure lower bounds.
• Coding theory lower bounds.
• Separation theorems, i.e. P=NP.
• Other suggestions?

One more point: de Wolf suggested that “very metaphorically, quantum physics stands to complex numbers as classical physics stands to real numbers.” I pointed out to him that there is a famous quote due to Jacques Hadamard that is relevant: The shortest path between two truths in the real domain passes through the complex domain.

Finally, I would like to thank de Wolf for his generous help with this post. As usual all opinions, errors, and mistakes are mine.

1. March 28, 2009 9:21 am
The method seems very promising but there is one big difference with the probabilistic method, which could – in the long run – make it that much less effective: The probabilistic method is ‘elementary’, in the sense that in many cases you can almost fully understand the proof without knowing too much advanced stuff. For instance, you can understand most of the requisite background for the probabilistic method in a few days. However, for the quantum method it seems to me that there’s a steep learning curve (If you think this is not the case, I’ll be really glad if you can point out some (short) resources for picking up the basics). □

March 28, 2009 9:49 am
I think when the Erdos and Spencer book came out the probabilistic method was new. My story about it tried to make that point. Now we all know the method and it seems simple. Perhaps the same will happen with the quantum method. Perhaps we need someone to write the book.

2. March 28, 2009 12:36 pm
This is a lovely post. I can’t resist mentioning one of my fondest hopes when I was still working on quantum computing, namely, the idea that it would turn out to be more natural to prove lower bounds for quantum circuit size than classical circuit size. In particular, I hoped that it would be possible to formulate the problem of proving quantum lower bounds in a “smooth” way, a way that would allow the ideas of calculus to be applied. The motivation behind this hope was the elementary observation that it’s often easier to lower bound the value of a smooth function defined over the reals than over some discrete set, simply because the ideas of calculus are available in the first case. It turns out that this elementary observation has a natural analogue in the quantum world (see, e.g., http://arxiv.org/abs/quant-ph/0701004), which gives rise to some beautiful mathematical structures. Not surprisingly, though, I never succeeded in proving a lower bound this way.

3. March 30, 2009 8:06 am
Hello Michael, can you tell us what’s behind the “not surprisingly” in the last sentence of your comment ?
Is there a fundamental flaw in this lower bound idea that you had, or is it “just” that the problem is mathematically difficult ? Of course lower bound results are often difficult, but after all we already have lots of such results both in classical and quantum computing. 4. March 30, 2009 12:15 pm Hi Pascal – Just mathematical difficulty. Indeed, with some technical caveats I won’t describe here (email me if you want an explanation), it is possible to establish that the minimal geodesic path lengths on a particular Riemannian manifold are equal to minimal quantum circuit size, up to polynomial factors. Thus, if it’s possible to establish bounds on quantum circuit size by any means, such bounds will necessarily simply bounds on the path length problem I was considering. I hoped, however, that the availability of the calculus of variations and similar tools would provide a powerful “in” to attack the minimal path length problem. Stated another way, I thought the minimal path length problem might be more natural than the circuit size problem. I still like the idea very much, but have moved on. 5. March 30, 2009 2:56 pm Michael, that looks like a terrific idea. I hope there will be some follow-up work on it. 6. March 30, 2009 4:56 pm Pascal – Thanks! I don’t know of much going on. There was some interest from geometers and control theorists early on, but I don’t know of any actively working on it. For non-geometers, I think the geometry may be a barrier to entry. 7. April 14, 2009 11:06 am I would like to echo Michael Nielsen in saying this is a lovely post. In the engineering field of model order reduction too, we find that modern quantum methods provide us with powerful tools and ideas for solving purely classical problems. There is some sense, perhaps, in which we are coming to appreciate that the quantum theory of information and simulation represents the natural “complexification” of the classical theory of information and simulation. 8. October 20, 2009 9:23 pm Andy Drucker and Ronald de Wolf have a new survey out of the quantum method at http://arxiv.org/PS_cache/arxiv/pdf/0910/0910.3376v1.pdf . They cite this blog entry as a partial motivation! 9. December 27, 2012 2:00 pm I quite like looking through a post that will make men and women think. Also, many thanks for allowing for me to comment!
{"url":"http://rjlipton.wordpress.com/2009/03/28/erds-and-the-quantum-method/","timestamp":"2014-04-17T16:03:51Z","content_type":null,"content_length":"105200","record_id":"<urn:uuid:ff3226a3-7b49-43bd-b11b-2019dace1cf3>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Free Online Percentage Calculator 1.) This online percentage calculator will help you calculate and find out what the percent of another number is: % of Answer: 2.) This tool will help you find what percent a number is out of another: Is What % Of Answer: 3.) This tool will help you see what your profit margin is: Total Revenue: Expenses: Profit margin: Profit margin percent: %
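For anyone who wants the arithmetic behind the three tools rather than the form fields, here is a short Python sketch of the calculations as I read them from the descriptions above; the example numbers are made up.

def percent_of(percent, number):
    # Tool 1: "what is percent % of number?"
    return number * percent / 100.0

def what_percent(part, whole):
    # Tool 2: "part is what percent of whole?"
    return 100.0 * part / whole

def profit_margin(total_revenue, expenses):
    # Tool 3: profit margin in dollars, and as a percent of revenue
    # (taking the percentage relative to revenue is my assumption about the tool).
    margin = total_revenue - expenses
    return margin, 100.0 * margin / total_revenue

print(percent_of(25, 80))        # 20.0
print(what_percent(30, 120))     # 25.0
print(profit_margin(1000, 650))  # (350, 35.0)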
{"url":"http://www.onlinepercentagecalculator.com/","timestamp":"2014-04-19T22:16:11Z","content_type":null,"content_length":"6103","record_id":"<urn:uuid:4f0117e4-52be-40d4-9887-326aa5b29ffb>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Origami Inspirations by Meenakshi Mukerji

If you are familiar with Meenakshi Mukerji's previous origami publications, then her third book, Origami Inspirations, will be familiar territory: modular origami, a class of origami where two or more simple units are combined to form a more complex model.

Section 1: Introduction
Origami Inspirations is 120 pages long and it is divided into six sections. The first section has information on origami symbols, bases, tools, paper, and tips & hints for success. This section also includes a brief introduction to Platonic, Archimedean, and Kepler-Poinsot solids. Most of the origami models made in this book are based on these polyhedral shapes.

Section 2: Simple Cubes
As suggested by the title, the models in this section are all cubes. All of the cubes require folding a square sheet of paper into thirds. The Plain Cube and Plain Cube 2 are the easiest and require 6 and 12 units respectively. Note how the patterns on the cube can be more complex when more sheets of paper are used.
The cube is one of the humblest of the Platonic solids; however, by using many sheets of paper even the basic cube can be made quite spectacular: the Ray Cube, Thatch Cube, and Whirl Cube. These cubes are made with 24 sheets of paper each. The units are folded such that the white side of the paper (back side) is also visible. Because so many units are used, many patterns can be generated by:
• using different sides of the paper as the top side,
• interchanging pockets and tabs, and
• positioning of the units based on their color.
For example, there are 12 variations of the Ray Cube, four of which are shown on the right.

Section 3: Four-Sink Based Models
The 3rd section of the book is devoted to Floral Cubes, which are made from the "four-sink windmill" base. Though this origami base is not particularly difficult to fold, it is quite involved. Readers who wish to make these floral cubes should be comfortable with the sink fold and should expect to spend about 10 to 15 minutes to fold each unit. Each model is made with 6 units; the results are effectively cubes with fancy faces. Shown are: Flower Cube and variations; Flower Cube 2 and variations; Flower Cube 3, Flower Cube 4, and Butterfly Cube.

Section 4: Folding with Pentagons
Straying from the typical "square sheet of paper", section 4 of Origami Inspirations uses paper in the shape of a pentagon. Mukerji provides clear instructions on how to obtain a pentagon from a square. Despite these good instructions, creating a perfect pentagon is not trivial; most of the difficulty comes from errors due to the thickness of the paper and errors in folding exactly along the indicated lines.
Of this set, the Oleander is the easiest and most forgiving model. It can be a standalone model, or 12 units can be assembled into an Oleander Ball. Because only 12 units are required, this model is also the fastest one to complete.
Flower Dodecahedron 1 through 5 (below) are all folded in a similar manner: they require 12 pentagon units (for the 12 faces of the model) and 30 connector units. Because you need to cut, fold, and assemble a total of 42 units for each model, these Flower Dodecahedra take a considerable amount of time to complete. In addition, care must be taken when cutting the pentagons and connector units. Small discrepancies in the unit sizes will amplify themselves dramatically in the final model.
Flower Dodecahedron 1 & 2 are folded in a similar manner. The flaps point outwards forming curled spikes.
Flower Dodecahedron 3 & 4 are similar to one another and only slightly different from the previous two models. Here the flap are folded down to form central patterns. Flower Dodecahedron 5 appears to be the simplest model in this set; however, it is in many ways the most challenging. All the flaps fold towards the center of the pentagonal face. Errors in paper size, paper shape, or careless folding will cause an undesirable gap at the center where points should meet. Section 5: Miscellaneous Meenakshi Mukerji's final chapter is devoted to miscellaneous models: models that don't quite fit with the other categories but are too good to omit. Windmill Base Cube Windmill Base Cube 2 Wave In the entire book, the two easiest models are Windmill Base Cube and Windmill Base Cube 2. Both of these models require 6 easy-to-fold units and 12 super-easy connector units. Assembly is trivial and results are quite satisfying. Again, straying from the classic "square sheet of paper", the model Wave uses a rectangle in a 1:6 ratio. The units are very easy to fold; however, assembly of Waves is easier if you use miniature clothespins to hold the units in place during assembly. A stunning model that is relatively easy to make. Whipped Cream Star Star with Spirals Whipped Cream Star and Star with Spirals are two delicious models! They are so named because they look like icing on a cake! The models shown use 30 units, are folded from 1:2 ratio rectangles, and are assembled in a dodecahedron manner. Different models can be made with 12 units, 24 units, and variations in assembly. 3 unit Hexahedron 4-unit Tetrahedron 6 unit Cube Lastly, Mukerji offers Whipped Cream Polyhedra: The units are folded like the Whipped Cream unit but assembled like Sonobe units. Easy to assemble and stable too! Section 6: Fear not, there is one more section! The last section of the book is a collection of models from origami artists from around the world. The good part of this section is that new artists bring in fresh ideas - something different and a change from the familiar. And, it's just nice to let other artists have a moment of fame and enjoy the limelight. by Daniel Kwan from USA: Truncated Rhombic Triacontahedron Four interlocked Triangular Prisms (top image) by Carlos Cabrino (Leroy) from Brazil: Chrysanthemum Leroy Chrysanthemum Leroy variation Carnation Leroy (second image) by Tanya Vysochina from Ukraine Camellia (third image) Lily of the Nile by Aldo Marcell from Nicaragua Adaptable Dodecahedron (bottom image) Adaptable Dodecahedron2 All in all, this is a great origami book for those who are dedicated to modular origami. Many of the models require over 30 units to accomplish, thus one must be committed to the process. Although the units are not necessarily hard to fold, some of them are quite involved and require a certain amount of tenacity. This book is not for beginners; it is well suited for intermediate folders who are stout of character and have a strong sense of determination.
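A side note on the unit counts quoted in this review: the 12 pentagon units plus 30 connectors of a Flower Dodecahedron, or the 6, 12, 24 and 30 unit assemblies elsewhere, come straight from the face and edge counts of the underlying Platonic solids. A quick sketch using the standard counts:

# (faces, edges, vertices) of the five Platonic solids
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (6, 12, 8),
    "octahedron":   (8, 12, 6),
    "dodecahedron": (12, 30, 20),
    "icosahedron":  (20, 30, 12),
}

for name, (F, E, V) in platonic.items():
    assert V - E + F == 2   # Euler's formula, as a sanity check
    print(name, "faces:", F, "edges:", E, "vertices:", V)

A Flower Dodecahedron therefore uses one pentagon unit per face (12) plus one connector per edge (30), which is where the 42-unit total comes from, and a 6-unit cube places one unit per face.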
{"url":"http://www.origami-resource-center.com/origami-inspirations.html","timestamp":"2014-04-21T12:08:04Z","content_type":null,"content_length":"17113","record_id":"<urn:uuid:81e5e47b-9c83-4e82-94ab-834d9e631799>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Why it's hard to estimate small effects

Here's a great 2009 paper (.pdf) by Andrew Gelman and David Weakliem (whom I'll call "G/W"), on the difficulty of finding small effects in a research study. I'm translating to baseball to start.

Let's suppose you have someone who claims to be a clutch hitter. He's a .300 hitter, but, with runners on base, he claims to be a bit better. So, you say, show us! You watch his 2012 season, and see how well he hits in the clutch. You decide in advance that if it's statistically significantly different from .300, that will be evidence he's a clutch hitter.

Will that work? No, it won't. Over 100 AB, the standard deviation of batting average is about 46 points. To find statistical significance, you want 2 SD. That means to convince you, the player would have to hit .392 in the clutch.

The problem is, he's not a .392 hitter! He, himself, is only claiming to be a little bit better than .300. So, in your study, the only evidence you're willing to allow, is evidence that you *know* can't be taken at face value.

Let's say the batter actually does meet your requirement. In fact, let's suppose he exceeds it, and hits .420. What can you conclude?

Well, suppose you didn't know in advance that you were looking for a small effect. Suppose you were just doing a "normal" paper. You'd say, "look, he beat his expectation by 2.6 SD, which is statistically significant. Therefore, we conclude he's a clutch hitter." And then you write a "conclusions" section with all the implications of having a .420 clutch hitter in your lineup.

But, in this case, that would be wrong, because you KNOW he's not a .420 clutch hitter, even though that's what he hit and you found statistical significance. He's .310 at best, maybe .320, if you stretch it. You KNOW that the .420 was mostly due to luck.

Still ... even if you can't conclude that the guy is truly a .420 clutch hitter, you SHOULD be able to at least conclude that he's better than .300, right? Because you did get that statistical significance.
But what if you don't know if clutch hitting talent is less than 92 points? Well, fine. But you're still never going to find an effect less than 92 points. And so, your experiment is biased, in a way: it's set up to only find effects of 92 points or more. That means that if the effect is small, no matter how many scientists you have independently searching for it, they'll never find it. Moreover, they will frequently find a LARGE effect. No matter what happens, the experiment will either be wrong too high, or wrong too low. It is impossible for it to be accurate for a small effect. The only way to find a small effect is to increase the sample size. But even then, that doesn't eliminate the problem: it just reduces it. No matter what your experiment, and how big your sample size, if the effect your looking for is smaller than 2 SDs, you'll never find it. That's G/W's criticism. It's a good one. G/W's example, of course, is not about clutch hitting. It's about a previously-published paper, which found that good-looking people are more likely to produce female offspring than male offspring. That study found an 8 percentage point difference between the nicest-looking parents and the worst-looking parents -- 52 percent girls vs. 44 percent girls. And what G/W are saying is, that 8 point difference is HUGE. How do they know? Well, it's huge as compared to a wide range of other results in the field. Based on the history of studies on birth sex bias, two or three points is about the limit. Eight points, on the other hand, is unheard of. Therefore, they argue, this study suffers from the "can't find the real effect" problem. The standard error of the study was over 4 points. How can you find an effect of less than 3 points, if your standard error is 4 points? Any reasonable confidence interval will cover so much of the plausible territory, that you can't really conclude anything at all. Gelman and Weakliem don't say so explicitly, but this is a Bayesian argument. In order to make it, you have to argue that the plausible effect is small, compared to the standard error. How do you know the plausible effect is small? Because of your subject matter expertise. In Bayesian terms, you know, from your prior, that the effect is most likely in the 0-3 range, so any study that can only find an 8-point difference must be biased. Every study has its own limits of how the standard error compares to the expected "small" effect. You need to know what "small" is. If a clutch hitting study was only accurate to within .0000001 points of batting average ... well, that would be just fine, because we know, from prior experience, that a clutch effect of .0000002 is relatively plausible. On the other hand, if it's only accurate to within .046, that's too big -- because a clutch effect of .092 is much too large to be plausible. It's our prior that tells us that. As I've argued, interpreting the conclusions of your study is an informal Bayesian process. G/W's paper is one example of how that kind of argument works. Hat tip: Alex Tabarrok at Marginal Revolution Labels: academics, bayes, statistics 1 Comments: At Wednesday, November 30, 2011 1:17:00 PM, said... Links to this post:
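For readers who want to check the figures in the baseball example (the 46-point standard deviation over 100 AB, the .392 significance threshold, and the 2.6 versus 2.2 SD comparison), here is a small Python sketch; the inputs are the ones quoted in the post and the formula is just the usual binomial standard deviation.

from math import sqrt

n = 100        # at-bats
p0 = 0.300     # the batter's established average
sd = sqrt(p0 * (1 - p0) / n)
print(round(sd, 3))              # about 0.046, i.e. 46 points of batting average

print(round(p0 + 2 * sd, 3))     # about 0.392, the 2-SD cutoff mentioned above

observed = 0.420
for p in (0.300, 0.320):
    s = sqrt(p * (1 - p) / n)
    print(p, round((observed - p) / s, 2))   # roughly 2.6 SDs from .300, 2.1-2.2 from .320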
{"url":"http://blog.philbirnbaum.com/2011/11/why-its-hard-to-estimate-small-effects.html","timestamp":"2014-04-16T10:23:05Z","content_type":null,"content_length":"32427","record_id":"<urn:uuid:d544ad14-c123-4480-96a1-8474e6e301c6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Example 2014.1: "Power" for a binomial probability, plus: News!
January 14, 2014 By Ken Kleinman

Hello, folks! I'm pleased to report that Nick and I have turned in the manuscript for the second edition of SAS and R: Data Management, Statistical Analysis, and Graphics. It should be available this summer. New material includes some of our more popular blog posts, plus reproducible analysis, RStudio, and more. To celebrate, here's a new example.

Parenthetically, I was fortunate to be able to present my course: R Boot Camp for SAS users at Boston University last week. One attendee cornered me after the course. She said: "Ken, R looks great, but you use SAS for all your real work, don't you?" Today's example might help a SAS diehard to see why it might be helpful to know R.

OK, the example: A colleague contacted me with a typical "5-minute" question. She needed to write a convincing power calculation for the sensitivity-- the probability that a test returns a positive result when the disease is present, for a fixed number of cases with the disease. I don't know how well this has been explored in the peer-reviewed literature, but I suggested the following process:
1. Guess at the true underlying sensitivity
2. Name a lower bound (less than the truth) which we would like the observed CI to exclude
3. Use basic probability results to report the probability of exclusion, marginally across the unknown number of observed positive tests.
This is not actually a power calculation, of course, but it provides some information about the kinds of statements that it's likely to be possible to make.

In R, this is almost trivial. We can get the probability of observing x positive tests simply, using the dbinom() function applied to a vector of numerators and the fixed denominator. Finding the confidence limits is a little trickier. Well, finding them is easy, using lapply() on binom.test(), but extracting them requires using sapply() on the results from lapply(). Then it's trivial to generate a logical vector indicating whether the value we want to exclude is in the CI or not, and the sum of the probabilities we see a number of positive tests where we include this value is our desired result.

> truesense = .9
> exclude = .6
> npos = 20
> probobs = dbinom(0:npos,npos,truesense)
> cis = t(sapply(lapply(0:npos,binom.test, n=npos),
    function(bt) return(bt$conf.int)))
> included = cis[,1] < exclude & cis[,2] > exclude
> myprob = sum(probobs*included)
> myprob
[1] 0.1329533

(Note that I calculated the inclusion probability, not the exclusion probability.) Of course, the real beauty and power of R is how simple it is to turn this into a function:

> probinc = function(truesense, exclude, npos) {
    probobs = dbinom(0:npos,npos,truesense)
    cis = t(sapply(lapply(0:npos,binom.test, n=npos),
      function(bt) return(bt$conf.int)))
    included = cis[,1] < exclude & cis[,2] > exclude
    return(sum(probobs*included))
  }
> probinc(.9,.6,20)
[1] 0.1329533

My SAS process took about 4 times as long to write. I begin by making a data set with a variable recording both the number of events (positive tests) and non-events (false negatives) for each possible value. These serve as weights in the proc freq I use to generate the confidence limits.

%let truesense = .9;
%let exclude = .6;
%let npos = 20;

data rej;
do i = 1 to &npos;
  w = i; event = 1; output;
  w = &npos - i; event = 0; output;
  end;
run;

ods output binomialprop = rej2;
proc freq data = rej;
by i;
tables event /binomial(level='1');
weight w;
run;

Note that I repeat the proc freq for each number of events using the by statement.
After saving the results with the ODS system, I have to use proc transpose to make a table with one row for each number of positive tests-- before this, every statistic in the output has its own row.

proc transpose data = rej2 out = rej3;
where name1 eq "XL_BIN" or name1 eq "XU_BIN";
by i;
id name1;
var nvalue1;
run;

In my fourth data set, I can find the probability of observing each number of events and multiply this with my logical test of whether the CI included my target value or not. But here there is another twist. The proc freq approach won't generate a CI for both the situation where there are 0 positive tests and the setting where all are positive in the same run. My solution to this was to omit the case with 0 positives from my for loop above, but now I need to account for that possibility. Here I use the end= option to the set statement to figure out when I've reached the case with all positive (sensitivity = 1). Then I can use the reflexive property to find the confidence limits for the case with 0 events. Then I'm finally ready to sum up the probabilities associated with the number of positive tests where the CI includes the target value.

data rej4;
set rej3 end = eof;
prob = pdf('BINOMIAL',i,&truesense,&npos);
prob_include = prob * ((xl_bin < &exclude) and (xu_bin > &exclude));
if eof then do;
  prob = pdf('BINOMIAL',0,&truesense,&npos);
  prob_include = prob * (((1 - xu_bin) < &exclude) and ((1 - xl_bin) > &exclude));
  end;
run;

proc means data = rej4 sum;
var prob_include;
run;

Elegance is a subjective thing, I suppose, but to my eye, the R solution is simple and graceful, while the SAS solution is rather awkward. And I didn't even make a macro out of it yet!

An unrelated note about aggregators: We love aggregators! Aggregators collect blogs that have similar coverage for the convenience of readers, and for blog authors they offer a way to reach new audiences. SAS and R is aggregated by , and with our permission, and by at least 2 other aggregating services which have never contacted us. If you read this on an aggregator that does not credit the blogs it incorporates, please come visit us at SAS and R. We answer comments there and offer direct subscriptions if you like our content. In addition, no one is allowed to profit by this work under our ; if you see advertisements on this page, the aggregator is violating the terms by which we publish our work.
{"url":"http://www.r-bloggers.com/example-2014-1-power-for-a-binomial-probability-plus-news/","timestamp":"2014-04-19T09:27:48Z","content_type":null,"content_length":"42823","record_id":"<urn:uuid:39c7d72c-73df-4f5c-a276-b553932ae03a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 5. Final inversion S_3(2t_p) for sin-squared pulse as a function of carrier envelope phase φ. The carrier envelope phase is in multiples of π. The excitation is on-resonance and the pulse area is θ = π. (a) n_p = 2 and (b) n_p = 1. Solid curve: N = 10^9 cm^−2, dotted curve: N = 3 × 10^11 cm^−2, dashed curve: N = 5 × 10^11 cm^−2, and dot-dashed curve: N = 7 × 10^11 cm^−2.
Paspalakis and Boviatsis, Nanoscale Research Letters 2012, 7:478, doi:10.1186/1556-276X-7-478
{"url":"http://www.nanoscalereslett.com/content/7/1/478/figure/F5","timestamp":"2014-04-16T07:42:42Z","content_type":null,"content_length":"12118","record_id":"<urn:uuid:65c8fef0-7672-4eeb-bf67-5c8a7ea056d5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
N Richlnd Hls, TX Algebra 2 Tutor Find a N Richlnd Hls, TX Algebra 2 Tutor I graduated from Brigham Young University in 2010 with a degree in Statistical Science and I am looking into beginning a master's program soon. I have always loved math and took a variety of math classes throughout high school and college. I taught statistics classes at BYU for over 2 years as a TA and also tutored on the side. 7 Subjects: including algebra 2, statistics, geometry, SAT math ...I have also taught applied math for year 12. I have completed a double degree in Mechanical Engineering (Mechatronics) and Computer Science at the University of Melbourne. During this degree, I completed 1st and 2nd Year Electronic Engineering subjects. 56 Subjects: including algebra 2, chemistry, physics, calculus ...I work in a special education life skills classroom that includes many autistic students. I passed the state teacher certification test in special education. I was a police officer in California for 30 years. 15 Subjects: including algebra 2, geometry, algebra 1, special needs I am a Mechanical Engineer by education and currently working as a Mechanical, Electrical, and Plumbing coordinator in a large commercial construction. Advanced mathematics is applied everyday at my work. Someone once said “Everything should be made as simple as possible, but not simpler.” My approach to tutoring is “make it simple” to help understand math. 8 Subjects: including algebra 2, calculus, physics, geometry ...When I teach or tutor I focus on discovering a student's specific learning pattern; this made me an excellent ballet teacher and a great tutor. I love working around problems and thinking outside the box to find solutions. I also truly love the life sciences and research; they are fun for me and I enjoy talking about them and sharing them. 31 Subjects: including algebra 2, chemistry, SAT math, GED Related N Richlnd Hls, TX Tutors N Richlnd Hls, TX Accounting Tutors N Richlnd Hls, TX ACT Tutors N Richlnd Hls, TX Algebra Tutors N Richlnd Hls, TX Algebra 2 Tutors N Richlnd Hls, TX Calculus Tutors N Richlnd Hls, TX Geometry Tutors N Richlnd Hls, TX Math Tutors N Richlnd Hls, TX Prealgebra Tutors N Richlnd Hls, TX Precalculus Tutors N Richlnd Hls, TX SAT Tutors N Richlnd Hls, TX SAT Math Tutors N Richlnd Hls, TX Science Tutors N Richlnd Hls, TX Statistics Tutors N Richlnd Hls, TX Trigonometry Tutors
{"url":"http://www.purplemath.com/N_Richlnd_Hls_TX_Algebra_2_tutors.php","timestamp":"2014-04-20T09:21:20Z","content_type":null,"content_length":"24430","record_id":"<urn:uuid:2e862d05-d74e-40bc-8168-72664fdb75c3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Volatility Is Your Friend - Comic Book Daily Related Posts Volatility Is Your Friend Like any financial asset, commodity, or collectible, comic book values fluctuate in the course of natural course of trading. We can measure this fluctuation by calculating its variance or standard deviation. Variance is a measure of how far a string of variables lie from their mean, or expected value. Often it is easier to work with standard deviation, which is the square root of variance, because the dispersion is expressed in the same units as the data we’re interested in. For comic book values, the standard deviation will be measured in dollars. Certain stocks and other tradable commodities have more variance and standard deviation than others. In the financial world, a stock’s fluctuation is typically referred to as volatility. Volatility is used by many as a gauge of a stock’s risk. So a large company like Procter & Gamble or Nike will be deemed to be less risky than a high-flying tech stock like Groupon or Salesforce.com. Over the course of a year, Nike will probably have a standard deviation similar to that of the whole stock market (15-20%) whereas Groupon will have volatility that could be twice as much (30-50%). What this means for an investor is that if Nike averages $90 for the year, he or she could expect it to fluctuate between approximately $70 and $100. On the other hand, Groupon could average $20 and swing violently between $10 and $30. While a high standard deviation can mean high risk, volatility can actually be your friend. How? By giving you the opportunity to buy a stock at a low price. The same goes for comic books! I spend a lot of time examining completed CGC graded comic book transactions from eBay, the major auction houses (e.g., Heritage, Pedigree, etc.), and other online auction sites and private sales. Given the frequency at which many of the key titles trade, and the healthy supply across all the auction sites, I’m always surprised when I see a title trade at the top end of its recent range. Frequently collectors will buy comics at values one or more standard deviations away from their mean value. These collectors simply aren’t taking advantage of trading patterns. By being a tad more patient—and aware of recent trends—they could save themselves a lot of money. Let’s look at a recent example of this. Below is a graph of recent sales for Wolverine Limited Series #1 with a 9.6 grade. I chose this book because it’s frequently traded with a high census. From the chart it is clear that recent transactions have fluctuated between $40 and $100. Note that page colour does not seem to matter in this case; copies with off-white-to-white and off-white pages valued just as highly as those with white pages (page colour will be a topic for a future By the Numbers). The chart below overlays the mean value and +1 and -1 standard deviations. The standard deviation for this book is approximately $17. Expressed as a percentage of the mean value, this equates to 27%. Let’s check out another example. Below is a graph of recent sales for 9.8 graded copies of New Mutants #98. New Mutants #98 is popular because it features the first appearance of Deadpool and its recent value of around $200 allows many collectors a chance to get their hands on a copy. Now we can overlay its mean value of $221 and standard deviation of $37. As a percent of the mean, this works out to 17%. With this information, books can be bought a bit more wisely. 
Look at all the transactions for Wolverine #1 and New Mutants #98 that occurred near 1 standard deviation above their means. Notice that these were frequently followed by sales at the mean and below the mean. At this point the collector has to ask themselves: what is the future going to look like? Perhaps the recent Deadpool values have started to rise because collectors know a Deadpool movie is right around the corner. Alternatively, the fluctuation could be random, like how the values spiked to the $260+ area several times in 2011. If you have a strong opinion that a certain book is going to begin a major uptrend then buying at or below the mean isn’t going to that relevant. Just be careful: history has shown that such trends have a strong tendency to reverse quickly after the hype Another caveat relates to low census silver age titles or modern variants. These titles do not trade as frequently so collectors may pay a higher price to secure them. Paying up a bit feels a lot better than letting a rare, long desired title out of your grasp. What it is really fascinating is that comic book volatility tends to be very similar to stock market volatility. I have calculated the standard deviation for at least 50 titles—with about 4 grades each—and the annual standard deviation tends to be between 15-30%. Do as the pros do: be aware of the stats and don’t pay up for frequently traded titles. Patience pays. Make It Good. Cancel reply
{"url":"http://www.comicbookdaily.com/collecting-community/by-the-numbers/volatility-is-your-friend/","timestamp":"2014-04-16T16:26:18Z","content_type":null,"content_length":"71223","record_id":"<urn:uuid:ec1e32ea-e923-4316-a9e7-2295560d54c4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Andrew Odlyzko: Correspondence about the origins of the Hilbert-Polya Conjecture The Hilbert-Polya Conjecture says that the Riemann Hypothesis is true because non-trivial zeros of the zeta function correspond (in a certain canonical way) to the eigenvalues of some positive operator. This conjecture is often regarded as the most promising way to prove the Riemann Hypothesis. Very little is known about its origins. Mathematical folk wisdom has usually attributed its formulation to Hilbert and Polya, independently, some time in the 1910s. However, there appears to be no published mention of it before Hugh Montgomery's 1973 paper on the pair correlation of zeros of the zeta function. Enclosed here are copies of some letters that attempted to trace the history of the Hilbert-Polya Conjecture. The first letter from Polya appears to present the only documented evidence about the origins of the conjecture. Correspondence with George Polya: The two letters by Polya (1887-1985) were written when he was 94. According to N. G. de Bruijn, at that stage in his life, Polya usually dictated letters to his wife, and only signed them himself. The fact that he wrote both letters out in his own handwriting suggests he was very interested in the subject. The account of the formulation of the conjecture in the first letter is consistent with what Polya had told Dennis Hejhal in a personal conversation. • Andrew Odlyzko to George Polya, December 8, 1981: Dear Professor Polya: I have heard on several occasions that you and Hilbert had independently conjectured that the zeros of the Riemann zeta function correspond to the eigenvalues of a self-adjoint hermitian operator. Could you provide me with any references? Could you also tell me when this conjecture was made, and what was your reasoning behind this conjecture at that time? The reason for my questions is that I am planning to write a survey paper on the distribution of zeros of the zeta function. In addition to some theoretical results, I have performed extensive computations of zeros of the zeta function, comparing their distribution to that of random hermitian matrices, which have been studied very seriously by physicists. If a hermitian operator associated to the zeta function exists, then in some respects we might expect it to behave like a random hermitian operator, which in turn ought to resemble a random hermitian matrix. I have discovered that the distribution of zeros of the zeta function does indeed resemble the distribution of eigenvalues of random hermitian matrices of unitary type. Any information or comments you might care to provide would be greatly appreciated. Sincerely yours, Andrew Odlyzko • George Polya to Andrew Odlyzko, January 3, 1982: Dear Mr. Odlyzko: Many thanks for your letter of December 8. I can only tell you what happened to me. I spent two years in Goettingen ending around the begin of 1914. I tried to learn analytic number theory from Landau. He asked me one day: "You know some physics. Do you know a physical reason that the Riemann hypothesis should be true." This would be the case, I answered, if the nontrivial zeros of the Xi-function were so connected with the physical problem that the Riemann hypothesis would be equivalent to the fact that all the eigenvalues of the physical problem are real. I never published this remark, but somehow it became known and it is still remembered. With best regards. 
Your sincerely, George Polya • Andrew Odlyzko to George Polya, January 18, 1982: Dear Professor Polya: Thank you very much for your letter of January 3 and the information about the origins of your conjecture about zeros of the zeta function. As you may know, the physicists have extensively studied the distribution of eigenvalues of random hermitian matrices. Now the idea is that if there is an operator associated to the zeta function, its eigenvalues might in some respects behave like those of a random hermitian matrix. This chain of reasoning is, of course, very weak; but surprisingly enough, it seems to work as is shown by the enclosed graph, and other results that I have obtained. In any case, I will send you copies of my papers on this subject for comment as soon as they are ready. Sincerely yours, Andrew M. Odlyzko • George Polya to Andrew Odlyzko, April 26, 1982: Dear Dr. Odlyzko: Please, excuse the delay of this answer to your January letter. I am sick since almost two years, I was unable to read. A few days ago I acquired a reading machine which enabled me to read your letter. I do not understand yet the graphs but I am awaiting your announced papers. Sincerely yours, G. Polya Correspondence with Olga Taussky-Todd: Since David Hilbert (1862-1943) could not be consulted, I tried the only person I knew who had worked extensively with Hilbert, namely Olga Taussky-Todd. Unfortunately, she did not have any information on the subject. • Andrew Odlyzko to Olga Taussky-Todd, January 19, 1982: Dear Professor Taussky-Todd: Unfortunately, I did not have much of a chance to talk to you at the Santa Barbara meeting in October; and, therefore, I would like to ask for your assistance through this letter. I am trying to write up some of my results on the distribution of zeros of the zeta function; and in order to make the paper complete, I would like to find out about the origin of the famous conjecture, usually ascribed to Hilbert and Polya, that the zeros of the zeta function are associated to eigenvalues of a self-adjoint hermitian operator. As far as I know, this conjecture was published neither by Hilbert nor by Polya. I recently wrote to Polya, and received in return a description of how he came to make this conjecture. However, I know nothing about Hilbert's reasoning. In your work with him, did he ever mention this conjecture to you? Any information you could supply would be greatly appreciate. Sincerely yours, Andrew M. Odlyzko • Olga Taussky-Todd to Andrew Odlyzko, January 25, 1982: Dear Dr. Odlyzko: It is nice to hear from you. I too am sorry not to have had some conversation with you in Santa Barbara. I am also sorry that I am rather ignorant about the question you are asking. In fact I would appreciate to see Polya's reply to you. I never had any conversations with Hilbert on number theory. At the time of my work on his papers in Goettingen he had no interest in number theory, only in logic. However, I think that H. Kisilewsky, now at Concordia University in Montreal, talked to me about the fact you are mentioning, and I think that he connected it with A. Weil's ideas. Maybe you ought to write to him. He would appreciate hearing from you anyhow. He does, however, usually not reply immediately. I would be grateful to share further information about this with you. With best wishes, Olga Taussky-Todd Up [ Return to home page ]
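As an aside on the comparison described in Odlyzko's January 1982 letters (zeta zeros against eigenvalues of random Hermitian matrices "of unitary type", i.e. the Gaussian Unitary Ensemble), the random-matrix side of that comparison is easy to sketch numerically. The following Python fragment is my own illustration, not Odlyzko's code: it samples one GUE matrix and compares its normalized nearest-neighbour eigenvalue spacings, taken from the middle of the spectrum where the density is roughly flat, with the GUE Wigner surmise p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi).

import numpy as np

rng = np.random.default_rng(0)
N = 1000
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2                      # a GUE-distributed Hermitian matrix
eig = np.sort(np.linalg.eigvalsh(H))

mid = eig[N // 4 : 3 * N // 4]                # central part of the spectrum
s = np.diff(mid)
s = s / s.mean()                              # crude unfolding: normalize to mean spacing 1

def wigner_gue(x):
    return (32 / np.pi ** 2) * x ** 2 * np.exp(-4 * x ** 2 / np.pi)

hist, edges = np.histogram(s, bins=20, range=(0.0, 3.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
for c, h in zip(centers, hist):
    print(round(float(c), 2), round(float(h), 3), round(float(wigner_gue(c)), 3))

The zeta side of the comparison requires the zeros themselves, which are not reproduced here; the point of the sketch is only what the GUE spacing statistics that Odlyzko mentions look like.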
{"url":"http://www.dtc.umn.edu/~odlyzko/polya/index.html","timestamp":"2014-04-20T15:58:17Z","content_type":null,"content_length":"8118","record_id":"<urn:uuid:35c2fd89-5de9-4790-9f71-929d52873557>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparison of the "analytically" obtained and simulated values of the damping rate γ varying with q (k = 0.4, ).
Plot of damping rate γ as a function of q for values of k = 0.4, 0.7, 1.2. One can see that as k increases, the values of |γ| are higher than those for a lower k.
A run corresponding to Run I of Manfredi [5], who had shown the oscillations to continue till t = 1600. We extended the run till t = 5000. The vertical grey line indicates the duration of Manfredi's simulations. Notice the continuation of the oscillations. For q = 1, at t = 5000, the phase-space vortex can be seen around at v = 3.21.
Plot of relative entropy S_rel with time. The vertical line represents the duration of Manfredi's simulation.
Plots for the amplitude of the first harmonic of the electric field E_1 with time for Set I. One can notice that the oscillatory structures are not found for . Also, as damping rate increases, one can notice that the amplitude of oscillations decreases. This is similar to the result obtained by Valentini [14]. The vertical line represents the time of Valentini's simulations.
Plot of relative entropy S_rel with time for till t = 3000. The vertical line represents the time up to which Valentini's simulations were performed.
Plot of distribution function for the run with q = 0.85, around , at t = 3000.
Plots for the amplitude of the first harmonic of the electric field E_1 with time. The vertical line represents the time of Valentini's simulations. As we can see, the field has not saturated within this time.
Plot of relative entropy S_rel with time for . It can be seen that the entropy saturates within t = 3000. The vertical line represents the time up to which Valentini's simulations were performed.
Plot of distribution function for the run with q = 1.15, around , at t = 3000.
Plot of velocity distribution function at t = 3000 comparing cases with .
Nonlinear Landau damping and formation of Bernstein-Greene-Kruskal structures for plasmas with q-nonextensive velocity distributions
{"url":"http://scitation.aip.org/content/aip/journal/pop/20/3/10.1063/1.4794320","timestamp":"2014-04-17T07:04:18Z","content_type":null,"content_length":"91917","record_id":"<urn:uuid:f838239f-3f2a-4874-8015-372164554dfe>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
Given the function f(x), the fraction (f(x+h) - f(x))/h is the difference quotient associated with f. The determination of a function's difference quotient is the first step in determining the function's derivative and is an important topic in Calculus. For each of the following functions determine its difference quotient.
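As a worked illustration of the definition above (using f(x) = x^2, which is my own example and not necessarily one of the functions in the original problem set), the difference quotient simplifies as follows:

\[
\frac{f(x+h)-f(x)}{h} \;=\; \frac{(x+h)^2 - x^2}{h} \;=\; \frac{2xh + h^2}{h} \;=\; 2x + h, \qquad h \neq 0.
\]

Letting h tend to 0 then gives the derivative 2x, which is why the difference quotient is described above as the first step toward the derivative.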
{"url":"http://openstudy.com/updates/4f746e79e4b0b478589db2d4","timestamp":"2014-04-19T19:42:07Z","content_type":null,"content_length":"38464","record_id":"<urn:uuid:61694ea0-df64-4162-9cd2-68dcc703d906>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Young's modulus problem -- need a hint

There are two wires, one brass the other copper, both 50 cm long and 1.0 mm diameter. They are somehow connected to form a 1 m length. A force is applied to both ends, resulting in a total length change of 0.5 mm. Given the respective Young's moduli of 1.3 x 10^11 and 1.0 x 10^11, I'm supposed to find the amount of length change in each section. Apparently a variation of Hooke's law should be used here, such as F/A = Y(change in length/original length). I'm stuck on how I can solve this with 2 unknowns (force and change in length)?

Hello redshift! I'm going to rewrite your problem in terms of stress [tex]\sigma[/tex] (Pa) and unitary deformation [tex]\epsilon=\frac{L-L_o}{L_o}[/tex] where Lo is the original length. So the stress exerted is the same in each section of the wire:
Hooke's law: [tex] \sigma=E_t \epsilon_t=E_1 \epsilon_1=E_2 \epsilon_2[/tex] where "Et" (N/m^2) is the apparent Young modulus of the complete wire.
Compatibility of deformations: [tex]\bigtriangleup L=\bigtriangleup L_1 + \bigtriangleup L_2[/tex]
Then, you have three equations for three unknowns: Et, epsilon1 and epsilon2. Hope this helps you a bit.

You've got two unknowns for
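Following the reply above (the same force, and hence the same stress, in each section, with the two extensions summing to 0.5 mm), here is a small Python sketch of the arithmetic. It takes the two moduli in the order they appear in the problem statement; if brass and copper are assigned the other way around, the two extensions simply swap.

from math import pi

L0 = 0.50                  # length of each section, m
d = 1.0e-3                 # diameter, m
A = pi * (d / 2) ** 2      # cross-sectional area, m^2
E1, E2 = 1.3e11, 1.0e11    # Pa, in the order given in the problem
dL_total = 0.5e-3          # total extension, m

# Same force F in both sections: dL_i = F * L0 / (A * E_i), and dL_1 + dL_2 = dL_total.
F = dL_total / (L0 / (A * E1) + L0 / (A * E2))
dL1 = F * L0 / (A * E1)
dL2 = F * L0 / (A * E2)
print(round(F, 1), "N")                                  # about 44 N
print(round(dL1 * 1e3, 3), round(dL2 * 1e3, 3), "mm")    # about 0.217 mm and 0.283 mm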
{"url":"http://www.physicsforums.com/showthread.php?t=44642","timestamp":"2014-04-17T21:32:55Z","content_type":null,"content_length":"26949","record_id":"<urn:uuid:9e623b58-1e51-4be9-acb8-b50af50f794c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
An in-depth analysis of Zed

I'm seeing a lot of discussion about Zed in the forums, along with a few posts debating his power relative to similar champions, Talon in particular. Let's go into the maths behind these two champions in a vacuum first, then go on to address their roles and the reality behind the two champions.

Let's start with Zed. I'm seeing people complain about his low burst, quoting numbers such as his "pathetic 2.9 bonus ad scaling." Let's slow down and actually do the math here. Stick with me on this (or skip to the totals if you'd like.)

Full burst in a vacuum (two shadows up, uninterrupted and unmitigated damage), not including auto attacks:
Q: 430(+2.0) <- two shadows, each dealing 50%, plus the normal one from you
E: 180(+.9)
R: 305(+1.45) <- 50% of above damage
Taking into account his 25% increased overall scaling from W, this becomes
Total: 915(+5.4375)
Total with passive: 915(+5.4375) + (15% hp) <- 5% more due to ult proc
Total with passive and one auto attack: 1079.7(+7.3125) + (15% hp)
*Zed's auto attacks during the ult scale. Hard. His base (109.8) times 1.5 gives 164.7 base, and each point of extra damage he buys is actually 1.25 points of damage, increased by 50%, meaning his autos do damage of 164.7(+1.875) during the ult*

Wow. A far cry from the 610(+2.9) that I've seen in many threads. But let's keep going. Talon's damage becomes this:
Q: Burst increased by 15%, half of the dot (3s) increased by 15%: (150(+.3) + 45(+.6))*1.15 + 45(+.6) = 269.25(+1.635)
W: 260(+1.2)*1.15 = 299(+1.38)
R: 520(+1.8)*1.15 = 598(+2.07)
Total: 1166.25(+5.085)
Total including passive: 1282.875(+5.5935)
Total including passive and one auto-attack: 1399.255(+6.6935)

So, it would seem Zed actually scales better than Talon, having both an advantage in bonus ad (+.619 in Zed's favor), and scaling with the enemy's total hp. So, at what point does Zed outburst a Talon? Well, let's see. We get the equation for their break-even point, in which "x" is bonus ad on Talon or Zed and "y" is the enemy's total hp:
1079.7 + 7.3125x + 0.15y = 1399.255 + 6.6935x
If we solve this for y, we get
y = (319.555 - 0.619x)/0.15, or roughly y = 2130.4 - 4.13x
What does this mean? Well, for any given bonus ad that talon and Zed have, it gives the amount of enemy hp required for Zed to outburst Talon. Let's figure out what this implies:
Lowest base HP in the game: Sona at 1600 HP
Highest base HP in the game: Nunu at 2381 HP
Think about this for a moment. The more health a champion buys, the better Zed scales. And for the absolute lowest amount of hp you can have (in which case, Zed's base damage kills you anyway), Zed outbursts talon when they each have at least 128.52 AD. So enough complaining about Zed's burst.

You remember that time when you stupidly forgot that armor exists in this game and talon killed you, then pentakilled your team? Yeah, Zed can kill you better. Except wait... Talon does mostly AOE damage, and Zed does mainly single target. Talon will clearly do more damage in an area than Zed can ever dream of, so that pentakill may not be happening for Zed. So, because Talon is rarely played and Zed is worse than Talon, Zed must be underpowered and is a troll pick, right?

Hold on, we still have a while to go. Follow me on this journey into my next post. (So that people can respond to what is here so far).
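A quick numerical check of the totals and the break-even point above, using only the figures quoted in the post (the ability totals are taken at face value from the text):

# Zed with passive and one ult-empowered auto, per the totals above:
#   damage = 1079.7 + 7.3125 * x + 0.15 * y    (x = bonus AD, y = target max HP)
# Talon with passive and one auto:
#   damage = 1399.255 + 6.6935 * x

def zed(x, y):
    return 1079.7 + 7.3125 * x + 0.15 * y

def talon(x):
    return 1399.255 + 6.6935 * x

def breakeven_hp(x):
    # Solve zed(x, y) = talon(x) for y: the HP at which Zed's burst catches Talon's.
    return (1399.255 - 1079.7 - (7.3125 - 6.6935) * x) / 0.15

print(round(breakeven_hp(128.52), 1))   # about 1600, Sona's base HP quoted above
print(round(breakeven_hp(0), 1))        # about 2130 HP if neither has any bonus AD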
{"url":"http://forums.na.leagueoflegends.com/board/showthread.php?s=&t=2795536","timestamp":"2014-04-23T08:24:28Z","content_type":null,"content_length":"49874","record_id":"<urn:uuid:3ba8b77a-5895-44af-b346-6ef1b1624e13>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry & Topology, Vol. 9 (2005) Paper no. 27, pages 1187--1220. Algebraic cycles and the classical groups II: Quaternionic cycles H Blaine Lawson Jr, Paulo Lima-Filho, Marie-Louise Michelsohn Abstract. In part I of this work we studied the spaces of real algebraic cycles on a complex projective space P(V), where V carries a real structure, and completely determined their homotopy type. We also extended some functors in K-theory to algebraic cycles, establishing a direct relationship to characteristic classes for the classical groups, specially Stiefel-Whitney classes. In this sequel, we establish corresponding results in the case where V has a quaternionic structure. The determination of the homotopy type of quaternionic algebraic cycles is more involved than in the real case, but has a similarly simple description. The stabilized space of quaternionic algebraic cycles admits a nontrivial infinite loop space structure yielding, in particular, a delooping of the total Pontrjagin class map. This stabilized space is directly related to an extended notion of quaternionic spaces and bundles (KH-theory), in analogy with Atiyah's real spaces and KR-theory, and the characteristic classes that we introduce for these objects are nontrivial. The paper ends with various examples and applications. Keywords. Quaternionic algebraic cycles, characteristic classes, equivariant infinite loop spaces, quaternionic K-theory AMS subject classification. Primary: 14C25. Secondary: 55P43, 14P99, 19L99, 55P47, 55P91. E-print: arXiv:math.AT/0507451 DOI: 10.2140/gt.2005.9.1187 Submitted to GT on 24 April 2002. (Revised 28 April 2005.) Paper accepted 6 June 2005. Paper published 1 July 2005. Notes on file formats H Blaine Lawson Jr, Paulo Lima-Filho, Marie-Louise Michelsohn BL, MM: Department of Mathematics, Stony Brook University Stony Brook, NY 11794, USA PL: Department of Mathematics, Texas A&M University College Station, TX 77843, USA Email: blaine@math.sunysb.edu, plfilho@math.tamu.edu, mlm@math.sunysb.edu GT home page Archival Version These pages are not updated anymore. They reflect the state of . For the current production of this journal, please refer to http://msp.warwick.ac.uk/.
{"url":"http://www.emis.de/journals/UW/gt/GTVol9/paper27.abs.html","timestamp":"2014-04-16T16:03:06Z","content_type":null,"content_length":"3882","record_id":"<urn:uuid:d0f3bf65-3d0a-41fb-b2c0-ead18108e969>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Ashland, MA Precalculus Tutor Find an Ashland, MA Precalculus Tutor ...I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/discrete variety. I got an A in undergraduate linear algebra. I have also absorbed many additional linear algebra concepts in the process of taking graduate classes in functional analysis and abstract algebra. 14 Subjects: including precalculus, calculus, geometry, algebra 1 ...As a doctoral student I assistant-taught several semesters of Calculus, and taught summer courses myself on a few occasions. For both, I held well-attended office hours in which I did small-group and one-on-one tutoring. Calculus is about a beautiful connection between symbolic manipulation and... 29 Subjects: including precalculus, reading, calculus, English I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. 16 Subjects: including precalculus, French, elementary math, algebra 1 My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. 36 Subjects: including precalculus, English, reading, calculus ...While tutoring in my junior and senior year of college, I tutored freshman in calculus. I took geometry in high school. I have been able to use the principles over and over in my career as a practicing engineer. 10 Subjects: including precalculus, physics, calculus, algebra 2 Related Ashland, MA Tutors Ashland, MA Accounting Tutors Ashland, MA ACT Tutors Ashland, MA Algebra Tutors Ashland, MA Algebra 2 Tutors Ashland, MA Calculus Tutors Ashland, MA Geometry Tutors Ashland, MA Math Tutors Ashland, MA Prealgebra Tutors Ashland, MA Precalculus Tutors Ashland, MA SAT Tutors Ashland, MA SAT Math Tutors Ashland, MA Science Tutors Ashland, MA Statistics Tutors Ashland, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/Ashland_MA_precalculus_tutors.php","timestamp":"2014-04-19T19:43:59Z","content_type":null,"content_length":"24069","record_id":"<urn:uuid:22eea1d2-aecc-46e8-8fb1-d80c79d1578b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
equation (15) gives a family of curves that penetrate further below T = 1 for weaker haline forcing. In this section we will consider the results obtained by stochastic forcing of a version of the Stommel model with no restoring of the salinity field. The stability of the Stommel two-box model has been discussed by previous authors (Huang et al., 1992; Marotzke, 1990; Stommel, 1961; Walin, 1985). The results of the linear analysis of our model are shown in Figure 4. Damped harmonic solutions exist for T < S < T/(2T − 1)^2. Unstable real roots exist in the region bounded by S = T(2 − T)/2 and T = S. The remaining areas have simple damped solutions. In the North Atlantic, geologic evidence suggests that a thermally dominated thermohaline regime has existed since the close of the last ice age, 10,000 years ago. Therefore, it appears that the stable regime in which S < T(2 − T)/2 in our model corresponds to the present climate of the North Atlantic. The remainder of this paper is concerned with the analysis of the response to forced oscillation about the stable equilibrium in the thermally dominated regime of the model. To check our calculations we calculated the linear response analytically and compared it to the results of direct numerical integration of the full nonlinear model with very small stochastic forcing. It is appealing to think of the effect of air-sea fluxes associated with cyclones and anticyclones passing over the ocean as causing a random walk in vertically integrated water-mass properties. Results from the GFDL coupled model show that heating and net evaporation-minus-precipitation at the ocean surface are negatively correlated. The results of Delworth et al. (1995) show that surface fluxes tend to heat and simultaneously freshen the ocean surface or, conversely, cool the ocean surface and make it more saline. Rather than making heating and evaporation-minus-precipitation independent random variables, we made them proportional to one another, with opposite sign, in our stochastic model. Analytic spectra for the three cases are shown in Figure 5. Figure 5a corresponds to the case in which η = 2 and the steady component of E = 0.1. Q' is 10 times larger than E' and of opposite sign, causing the temperature fluctuations to be larger than the salinity fluctuations at all frequencies. At high frequencies, both spectra have a slope of ω^–2. Damping becomes important for salinity only at frequencies less than 0.1 cycles/unit time, which corresponds to a period of 240 years. Figure 5b shows a case that is the same as that of Figure 5a, except that ηQ' is now twice −E'. |T^2| is exactly four times longer than |S^2| at high frequencies, but a crossover point is reached at a period between 50 and 100 years. The effect of removing the thermohaline coupling term is shown in Figure 5c. This case corresponds to the
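The regime boundaries quoted above can be turned into a quick classifier. The sketch below (Python) encodes only the three curves named in the text, S = T, S = T/(2T - 1)^2 and S = T(2 - T)/2; which side of each curve belongs to which regime is one reading of "bounded by" and should be checked against Figure 4, the function name and labels are illustrative, and the curve T/(2T - 1)^2 is left undefined at T = 1/2.

def classify_regime(T, S):
    """Classify a (T, S) point using the boundary curves quoted in the text.

    Damped harmonic (oscillatory) solutions: T < S < T / (2T - 1)**2.
    Unstable real roots: the region bounded by S = T(2 - T)/2 and S = T
    (read here as lying between those two curves).
    Everything else: simple damped solutions.
    """
    if T < S < T / (2.0 * T - 1.0) ** 2:
        return "damped harmonic"
    lo, hi = sorted((T, T * (2.0 - T) / 2.0))
    if lo < S < hi:
        return "unstable real roots"
    return "simple damped"

# A thermally dominated stable state (S < T*(2 - T)/2), the regime the text
# identifies with the present-day North Atlantic; prints "simple damped".
print(classify_regime(1.0, 0.3))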
{"url":"http://www.nap.edu/openbook.php?record_id=5142&page=360","timestamp":"2014-04-19T02:00:56Z","content_type":null,"content_length":"36202","record_id":"<urn:uuid:037297d2-76f1-4bd0-a849-3d05882323ad>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
3. ANALYSIS

The mathematical formulation of the decomposition of a given distribution of coplanar points (in our case HII regions) has been given by Considère & Athanassoula (1982). We will only recall here the essential information briefly. Let (r_j, θ_j) be the radii and angles of the individual HII regions, and u_j = ln r_j. Their distribution can be written as a sum of delta functions at the points (u_j, θ_j), and its Fourier transform A(p, m) can then be written as a normalized sum over the HII regions of phase factors in (p u_j + m θ_j). The m = 0 term corresponds to the axisymmetric component, m = 1 to the one-armed one, m = 2 to the two-armed, etc. In this notation an m-armed logarithmic spiral is given by r = r_0 exp(-(m/p) θ) and its pitch angle i by tan(i) = -m/p. As the galaxy is projected on the sky, its axisymmetric component is elliptical in shape and contributes a spurious m = 2 component around p = 0 which is added to the real m = 2 due e.g. to spiral arms or a bar. If the galaxy had been observed face-on, instead of this spurious m = 2 one would have seen an axisymmetric component. Thus a way of finding the position angle and inclination angle (hereafter PA and IA respectively) is to find the values for which the m = 0 component is maximum with respect to the other components; that is, defining the integrated amplitude of each Fourier component, we have to find the values of the two angles that maximize the ratio of the m = 0 amplitude to the sum of the m ≠ 0 amplitudes. We used in this work M = 6 while omitting m = 5. A similar method has been proposed by Iye et al. (1982), not for the HII regions but for digitized images of the galaxy; the main difference is that Iye et al. take the Fourier transform of the surface density µ(r, θ) of the image rather than of a discrete set of points.
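A minimal numerical sketch (Python) of the point-based decomposition described above: it evaluates the Fourier amplitude of a set of (r_j, θ_j) positions in the (u = ln r, θ) plane for a given arm multiplicity m over a grid of p values, and converts the p of the peak into a pitch angle through tan(i) = -m/p. The discrete transform below, a normalized sum of phase factors over the points, is the standard form for this kind of analysis; the paper's exact normalization and sign conventions are not reproduced, and the deprojection step (choosing PA and IA to maximize the m = 0 component) is omitted.

import numpy as np

def fourier_amplitude(r, theta, m, p_values):
    """|A(p, m)| for a point set, with A(p, m) = (1/N) * sum_j exp(-i*(p*u_j + m*theta_j))
    and u_j = ln r_j.  Sign and normalization conventions vary between papers."""
    u = np.log(np.asarray(r, dtype=float))
    theta = np.asarray(theta, dtype=float)
    phases = np.exp(-1j * (np.outer(p_values, u) + m * theta))  # shape (n_p, n_points)
    return np.abs(phases.mean(axis=1))

# Synthetic two-armed logarithmic spiral with a 15 degree pitch angle
# (tan i = -m/p, so p_true = -m / tan i); the amplitude should peak near p_true.
np.random.seed(0)
pitch = np.radians(15.0)
m = 2
p_true = -m / np.tan(pitch)
theta = np.random.uniform(0.0, 4.0 * np.pi, 500)
r = 1.0 * np.exp(-(m / p_true) * theta)                  # r = r0 * exp(-(m/p) * theta)
theta_obs = np.mod(theta + 0.02 * np.random.randn(500), 2.0 * np.pi)

p_grid = np.linspace(-50.0, 50.0, 2001)
amp = fourier_amplitude(r, theta_obs, m, p_grid)
p_peak = p_grid[np.argmax(amp)]
print("recovered p:", p_peak, " pitch angle (deg):", np.degrees(np.arctan(-m / p_peak)))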
{"url":"http://ned.ipac.caltech.edu/level5/March02/Garcia2/Garcia3.html","timestamp":"2014-04-16T08:22:25Z","content_type":null,"content_length":"4366","record_id":"<urn:uuid:a1b25296-cd2d-43fc-a84d-a420300221be>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Pitfalls in Asset and Liability Management: One Factor Term Structure Models and the Libor-Swap Curve The Kamakura blog “Pitfalls in Asset and Liability Management: One Factor Term Structure Models” is available at this link: In that blog, we followed the approach of Jarrow, van Deventer and Wang (2003) regarding model validation. In that paper, Jarrow, van Deventer and Wang tested the Merton model of risky debt by examining the validity of the Merton model of risky debt by determining whether the implications of the model were in fact true. Their Merton model test was a “non-parametric test,” i.e. a test of implications which would prevail for any parameter values. In our November 7 blog, we did a similar non-parametric test of one factor term structure models. We described the basis for the test on November 7 as follows: “Among one factor term structure models, for any set of parameters, there is almost always one characteristic in common. The majority of one factor models make all interest rates positively correlated (which implies the time-dependent interest rate volatility sigma(T) is greater than 0). We impose this condition on a one factor model. The non-random time-dependent drift(T) in rates is small and, if risk premia are consistent and positive (for bonds) across all maturities T, then the drift will be positive as in the Heath Jarrow and Morton (1992) drift condition. With more than 200 business days per year, the daily drift is almost always between plus one and minus one basis point. Hence, one might want to exclude from consideration yield curve changes that are small, so the effect of the drift is removed. We ignore the impact of drift in what follows.” We then examined the extent to which U.S. Treasury coupon bearing bond yields, zero coupon bond yields, and forward rates were consistent with this common characteristic of single factor term structure models. To what extent did all rates rise, fall, or remain constant together? The answer is summarized in this table for the U.S. Treasury market: One factor term structure model yield movements occurred only 37.7% of the time with the par coupon bond yields quoted on the Federal Reserve’s H15 statistical release. Using smoothed yield curves and the associated monthly zero coupon bond yields and forward rates, the conclusions are even more dramatic. In the case of zero coupon bond yields, movements were consistent with one factor term structure models only 24.8% of the 12,386 business days examined. In the case of forward rates, movements were consistent with one factor models only 5.7% of the time. We now use U.S. dollar Libor rates and interest rate swap rates to do a similar analysis on the heavily used Libor-swap curve. Data Regime for Libor-Swap Curve Data was loaded for Eurodollar deposit rates reported by the Federal Reserve on its H15 statistical release beginning on January 4, 1971. Note that these rates are slightly different from “official” Libor rates reported by the British Bankers Association on www.bbalibor.com. We used U.S. dollar interest rate swap yields reported by the Federal Reserve on the H15 release. The original source of the data is that collected by the broker ICAP on behalf of ISDA. This data series begins on July 3, 2000. We supplement the H15 data with interest rate swap data from Bloomberg, which for the most part extends back to November 1, 1988. 
We have four distinct data regimes since 1971: Using only this raw data, with no yield curve smoothing analytics employed, we asked this question: On what percent of the business days were yield curve shifts all positive, all negative, or all zero? We report the results in the next section. Consistency with One Factor Term Structure Models in the Libor-Swap Market The results vary by the data regime studied. Of the 10,664 days on which some data was available, we eliminated all dates where the data was less than the full data regime maturity spectrum and those dates which represented the first day of a new data regime, which would distort the reported shift amounts. After deleting these data points, we are left with 10,150 business days from January 4, 1971 to November 17, 2011. From January 4, 1971 to October 31, 1988, there were only three observable points on the Libor-swap yield curve, at 1, 3 and 6 months. In large part because of the paucity of data points, it was more likely that all rates moved up together, down together, or remained unchanged today. This happened 51.6% of the time during this data regime, so one factor term structure models were consistent with actual rate movements a narrow majority of the time. For regimes with 9, 11, or 12 observable points on the yield curve, however, the percentage of business days that were consistent with one factor term structure models was only 7.2%. Over the full sample, including (perhaps inappropriately) the long period of time where there were only three observable points on the yield curve, the consistency ratio was 26.9%. As in the case of the U.S. Treasury yield curve, we conclude that one factor term structure models are grossly inaccurate approximations to the true historical movements of the Libor-swap curve. Implications of Results As we noted on November 7, there are a number of very serious errors that can result from an interest rate risk and asset and liability management process that relies solely on the assumption that one factor term structure models are an accurate description of potential yield movements: 1. Measured interest rate risk will be incorrect, and the degree of the error will not be known. Using the data above, 92.8% of the time actual yield curves will show a twist, but the modeled yield curve shifts will never show a twist. 2. Hedging using the duration or one factor term structure model approach assumes that interest rate risk of one position (or portfolio) can be completely eliminated with the proper short position in one instrument with a different maturity. The duration/one factor term structure model approach assumes that if interest rates on the first position rise, interest rates will rise on the second position as well so “going short” is the right hedging direction. The data above shows that on 92.8% of the days from November 1, 1988 to November 17, 2011, this “same direction” assumption was potentially false (some maturities will show same direction changes and some will show opposite direction changes) and the hedge could actually ADD to risk, not reduce risk. 3. All estimates of prepayments and interest rate-driven defaults will be measured inaccurately 4. Economic capital will be measured inaccurately 5. Liquidity risk will be measured inadequately 6. Non-maturity deposit levels will be projected inaccurately For both U.S. Treasury data and Libor-swap data, this is an extremely serious list of deficiencies. 
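The consistency test described above is simple to reproduce. The sketch below (Python with pandas) is an illustration of that computation rather than the code behind the reported figures; it assumes yields is a date-indexed DataFrame with one column per maturity and that days with missing quotes have already been removed.

import pandas as pd

def one_factor_consistency(yields: pd.DataFrame) -> float:
    """Fraction of business days on which all yield changes share one sign
    (all > 0, all < 0, or all == 0), i.e. days consistent with a one-factor
    model in which all rates are positively correlated."""
    changes = yields.diff().dropna()          # day-over-day change at every maturity
    all_up = (changes > 0).all(axis=1)
    all_down = (changes < 0).all(axis=1)
    all_flat = (changes == 0).all(axis=1)
    return (all_up | all_down | all_flat).mean()

# Toy three-maturity panel: the middle change-day is a "twist" (short rate up,
# long rates down), so 2 of the 3 change-days are same-direction.
toy = pd.DataFrame(
    {"1y": [1.0, 1.1, 1.2, 1.1], "5y": [2.0, 2.1, 2.0, 1.9], "10y": [3.0, 3.1, 2.9, 2.8]},
    index=pd.date_range("2011-11-14", periods=4, freq="B"),
)
print(one_factor_consistency(toy))   # prints 0.666..., i.e. 2 of 3 days consistent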
The only remedy is to move as soon as possible to a more general N-factor model of interest rate movements. This should be done using the best available econometric results like those from Kamakura Risk Information Services and a simulation system like Kamakura Risk Manager. Academic assumptions about the stochastic processes have been too simple to be realistic. Jarrow (2009) notes, “Which forward rate curve evolutions (HJM volatility specifications) fit markets best? The literature, for analytic convenience, has favored the affine class but with disappointing results. More general evolutions, but with more complex computational demands, need to be studied. How many factors are needed in the term structure evolution? One or two factors are commonly used, but the evidence suggests three or four are needed to accurately price exotic interest rate derivatives.” As noted in our November 7 blog, this view is shared by the Basel Committee on Banking Supervision. In its December 31, 2010 Revisions to the Basel II Market Risk Framework, the Committee states its requirements clearly on page 12: “For material exposures to interest rate movements in the major currencies and markets, banks must model the yield curve using a minimum of six risk factors.” We have just emerged (perhaps briefly) from a credit crisis which stemmed to a large degree from the use of credit models based on assumptions which were known at the time to be false. This blog and our November 7 blog present evidence that interest rate risk managers today are relying heavily on interest rate risk models and risk systems that are based on assumptions that are known to be false. Let us hope that history does not repeat itself before the industry moves to more realistic models of interest rate risk. Adams, Kenneth J. and Donald R. van Deventer. "Fitting Yield Curves and Forward Rate Curves with Maximum Smoothness.” Journal of Fixed Income, June 1994. Black, Fischer, “Interest Rates as Options,” Journal of Finance (December), pp. 1371-1377, 1995. Black, Fischer, E. Derman, W. Toy, “A One-Factor Model of Interest Rates and Its Application to Treasury Bond Options,” Financial Analysts Journal, pp. 33-39, 1990. Black, Fischer and Piotr Karasinski, “Bond and Option Pricing when Short Rates are Lognormal,” Financial Analysts Journal, pp. 52-59, 1991. Cox, John C., Jonathan E. Ingersoll, Jr. and Stephen A. Ross, "An Analysis of Variable Rate Loan Contracts," Journal of Finance, pp. 389-403, 1980. Cox, John C., Jonathan E. Ingersoll, Jr., and Stephen A. Ross, “A Theory of the Term Structure of Interest Rates,” Econometrica 53, 385-407, 1985. Dai, Qiang and Kenneth J. Singleton, “Specification Analysis of Affine Term Structure Models,” The Journal of Finance, Volume LV, Number 5, October 2000. Dai, Qiang and Kenneth J. Singleton, “Term Structure Dynamics in Theory and Reality,” The Review of Financial Studies, Volume 16, Number 3, Fall 2003. Dickler, Daniel T., Robert A. Jarrow and Donald R. van Deventer, “Inside the Kamakura Book of Yields: A Pictorial History of 50 Years of U.S. Treasury Forward Rates,” Kamakura memorandum, September 13, 2011. Dickler, Daniel T., Robert A. Jarrow and Donald R. van Deventer, “Inside the Kamakura Book of Yields, Volume II: A Pictorial History of 50 Years of U.S. Treasury Zero Coupon Bond Yields,” Kamakura memorandum, September 26, 2011. Dickler, Daniel T., Robert A. Jarrow and Donald R. van Deventer, “Inside the Kamakura Book of Yields, Volume III: A Pictorial History of 50 Years of U.S. 
Treasury Par Coupon Bond Yields,” Kamakura memorandum, October 5, 2011. Dickler, Daniel T. and Donald R. van Deventer, “Inside the Kamakura Book of Yields: An Analysis of 50 Years of Daily U.S. Treasury Forward Rates,” Kamakura blog, www.kamakuraco.com, September 14, Dickler, Daniel T. and Donald R. van Deventer, “Inside the Kamakura Book of Yields: An Analysis of 50 Years of Daily U.S. Treasury Zero Coupon Bond Yields,” Kamakura blog, www.kamakuraco.com, September 26, 2011. Dickler, Daniel T. and Donald R. van Deventer, “Inside the Kamakura Book of Yields: An Analysis of 50 Years of Daily U.S. Par Coupon Bond Yields,” Kamakura blog, www.kamakuraco.com, October 6, 2011. Duffie, Darrell and Rui Kan, “Multi-factor Term Structure Models,” Philosophical Transactions: Physical Sciences and Engineering, Volume 347, Number 1684, Mathematical Models in Finance, June 1994. Duffie, Darrell and Kenneth J. Singleton, “An Econometric Model of the Term Structure of Interest-Rate Swap Yields,” The Journal of Finance, Volume LII, Number 4, September 1997. Duffie, Darrell and Kenneth J. Singleton, “Modeling Term Structures of Defaultable Bonds,” The Review of Financial Studies, Volume 12, Number 4, 1999. Heath, David, Robert A. Jarrow and Andrew Morton, "Bond Pricing and the Term Structure of Interest Rates: A New Methodology for Contingent Claims Valuation," Econometrica, 60(1), January 1992. Ho, Thomas S. Y. and Sang-Bin Lee, “Term Structure Movements and Pricing Interest Rate Contingent Claims,” Journal of Finance 41, 1011-1029, 1986. Hull, John and Alan White, "One-Factor Interest-Rate Models and the Valuation of Interest-Rate Derivative Securities," Journal of Financial and Quantitative Analysis 28, 235-254, 1993. Jamshidian, Farshid, "An Exact Bond Option Formula," Journal of Finance 44, pp. 205-209, March 1989. Jarrow, Robert A., “The Term Structure of Interest Rates,” Annual Review of Financial Economics, 1, 2009. Jarrow, Robert A., Donald R. van Deventer, and Xiaoming Wang, “A Robust Test of Merton’s Structural Model for Credit Risk,” Journal of Risk, 6 (1), 2003. Macaulay, Frederick R. Some Theoretical Problems Suggested by Movements of Interest Rates, Bond Yields, and Stock Prices in the United States since 1856. New York, Columbia University Press,1938. Merton, Robert C. “A Dynamic General Equilibrium Model of the Asset Market and Its Application to the Pricing of the Capital Structure of the Firm,” Working Paper No. 497-70, A. P. Sloan School of Management, Massachusetts Institute of Technology, 1970. Reproduced as Chapter 11 in Robert C. Merton, Continuous Time Finance, Blackwell Publishers, Cambridge, Massachusetts, 1993. van Deventer, Donald R. “Pitfalls in Asset and Liability Management: One Factor Term Structure Models.” Kamakura blog, www.kamakuraco.com, November 7, 2011. van Deventer, Donald R. and Kenji Imai, Financial Risk Analytics: A Term Structure Model Approach for Banking, Insurance, and Investment Management, Irwin Professional Publishing, Chicago, 1997. van Deventer, Donald R., Kenji Imai, and Mark Mesler, Advanced Financial Risk Management, John Wiley & Sons, 2004. Translated into modern Chinese and published by China Renmin University Press, Beijing, 2007. Vasicek, Oldrich A., "An Equilibrium Characterization of the Term Structure," Journal of Financial Economics 5, pp. 177-188, 1977. Donald R. van Deventer Kamakura Corporation Honolulu, Hawaii November 23, 2011 © Copyright 2011 by Donald R. van Deventer, All Rights Reserved.
{"url":"http://www.kamakuraco.com/Blog/tabid/231/EntryId/350/Pitfalls-in-Asset-and-Liability-Management-One-Factor-Term-Structure-Models-and-the-Libor-Swap-Curve.aspx","timestamp":"2014-04-20T18:54:42Z","content_type":null,"content_length":"131344","record_id":"<urn:uuid:cdeb7491-b1e5-4c4f-996f-e254c18c4d9c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Why is $ \frac{\pi^2}{12}=ln(2)$ not true ? up vote 13 down vote favorite This question may sound ridiculous at first sight, but let me please show you all how I arrived at the afore mentioned 'identity'. Let us begin with (one of the many) equalities established by Euler: $$ \displaystyle f(x) = \frac{sin(x)}{x} = \prod_{n=1}^{\infty} \Big(1-\frac{x^2}{n^2\pi^2}\Big) $$ as $(a^2-b^2)=(a+b)(a-b)$, we can also write: (EDIT: We can not write this...) $$ \displaystyle f(x) = \prod_{n=1}^{\infty} \Big(1+\frac{x}{n\pi}\Big) \cdot \prod_{n=1}^{\infty} \Big(1-\frac{x}{n\pi}\Big) $$ We now we arrange the terms with $ (n = 1 \land n=-2)$, $ (n = -1 \land n=2$), $( n=3 \land -4)$ , $ (n=-3 \land n=4)$ , ..., $ (n = 2n \land n=-2n-1) $ and $(n=-2n \land n=2n+1)$ together . After doing so, we multiply the terms accordingly to the arrangement. If we write out the products, we get: $$ f(x)=\big((1-x/2\pi + x/\pi -x^2/2\pi^2)(1+x/2\pi-x/\pi - x^2/2\pi^2)\big)... $$ $$ ...\big((1-\frac{x}{(2n)\pi} + \frac{x}{(2n-1)\pi} -\frac{x^2}{(2n(n-1))^2\pi^2})(1+\frac{x}{2n\pi} -\frac{x} {(2n-1)\pi} -\frac{x^2}{(2n(2n-1))^2\pi^2)})\big) $$ Now we equate the $x^2$-term of this infinite product, using Newton's identities (notice that the'$x$'-terms are eliminated) to the $x^2$-term of the Taylor-expansion series of $\frac{sin(x)}{x}$ . $$ -\frac{2}{\pi^2}\Big(\frac{1}{1\cdot2} + \frac{1}{3\cdot4} + \frac{1}{5\cdot6} + ... + \frac{1}{2n(2n-1)}\Big) = -\frac{1}{6} $$ Multiplying both sides by $-\pi^2$ and dividing by 2 yields $$\sum_{n=1}^{\infty} \frac{1}{2n(2n-1)} = \pi^2/12 $$ That (infinite) sum 'also' equates $ln(2)$, however (According to the last section of this paper). So we find $$ \frac{\pi^2}{12} = ln(2) $$ . Of course we all know that this is not true (you can verify it by checking the first couple of digits). I'd like to know how much of this method, which I used to arive at this absurd conclusion, is true, where it goes wrong and how it can be improved to make it work in this and perhaps other cases (series). Thanks in advance, Max Muller (note I: 'ln' means 'natural logarithm) (note II: with 'to make it work' means: 'to find the exact value of) ca.analysis-and-odes taylor-series tag-removed 7 Wouldn't it be cool if it were true, though! – Nate Eldredge Jun 9 '10 at 16:03 4 This forum is aimed at research-level mathematicians, but the topic of this question lies firmly in the domain of undergraduate mathematics. I think that "Ask Dr. Math" and "Art of Problem Solving" would be more appropriate venues for this question. – Ian Morris Jun 9 '10 at 17:41 3 @ Ian Morris: I guess you're right. I didn't expect my 'proof' to be torn apart that quickly. I thought that if the infinite product could be expressed by a product of two infinite products, I could produce documented mathematical thought on open problem(s) at research level. – Max Muller Jun 9 '10 at 17:56 11 Well, we all are safe once again. I'm a bit worried that young Max may eventually succeed in finding an inner contradiction in the building of Mathematics. Then, everybody go home, and this site will be terminated too :-( – Pietro Majer Jun 9 '10 at 21:09 @Ian Morris: Understanding when this type of calculation is valid is very definitely research-level. Most (valid) calculations of this type are actually short-hand for manipulating meromorphic 6 functions, and it is often easier to guess the calculation first and then check it (this is why Euler was such a great mathematician). 
One place where these calculations turn up in spades is in analytic number theory, at least if the two weeks of the course I took form R. Borcherds is any indication. Another place is quantum field theory, where there are outstanding problems to define certain integrals. – Theo Johnson-Freyd Jun 12 '10 at 16:38 show 7 more comments 3 Answers active oldest votes You cannot split (1-(x/n)^2) into (1 -x/n) (1 + x/n), since the products no longer converge. up vote 33 down vote accepted Could you please elaborate on that? Why is it necessary for these prodcucts to converge? – Max Muller Jun 9 '10 at 16:21 See en.wikipedia.org/wiki/Infinite_product and en.wikipedia.org/wiki/Conditional_convergence – efq Jun 9 '10 at 16:31 1 The statements: - The product of (1 + a_n) converges - The sum of a_n converges are equivalent. So we know that prod (1 + a_n), where a_2n = (1 + x/n) and a_2n = (1 - x/n) does not converge unconditionally. So by reordering it can achieve any positive value by the same argument as for sums. Just rewrite product (1 + a_n) = exp(sum log(1 + a_n)) to show this. – Helge Jun 9 '10 at 16:50 6 Yes, you treated divergent infinite products as convergent. You get the same sort of problems as you do with treating divergent series as convergent, like $$0 = (1 - 1) + (1 - 1) +\cdots=1-(1-1)+(1-1)\cdots =1.$$ – Robin Chapman Jun 9 '10 at 17:40 Ok.. thank you Helge! I guess you can imagine I'm a bit disappointed that my flow of (il)logical arguments was interupted that quickly... but you're right, of course. Suppose we can write sin(x)/x as the product of two infinite (divergent) products. Would the rest of the 'proof' be correct? – Max Muller Jun 9 '10 at 17:46 show 3 more comments Eisenstein defined elliptic functions by working with conditionally convergent series. In particular he studied how a series changes when you rearrange the terms in a specific way. You can find a lot about his work in this direction in Weil's beautiful book Elliptic Functions according to Eisenstein and Kronecker. An analogous question would be what happens to your product up vote formula when you use a different way of pairing positive and negative indices. I do not know whether this has been studied before . . . A look into Weil's book will convince you (if you 5 down didn't know that already) that some functions are most interesting at those places where convergence fails. Thank you for the reference! As for the different pairing methods, that was what I was thinking about as well... An infinite amount of different (infinite) sum series would all converge to the same value! By the way, do you think there's a way to describe my double product formula in a closed form? I know it diverges, so perhaps it could be an exponential function? Like sin(x)^x? – Max Muller Jun 10 '10 at 13:31 P.S. I'm not sure why (some) functions are most interesting at those places where convergence fails... could you please explain? I am very interested in transforming divergent to convergent series, though, to make the double product equal the single product (and sin(x)/x). Do you think this is possible, one way or another? Or do you know any texts/papers on this subject matter? – Max Muller Jun 10 '10 at 13:44 1 values for divergent series ... see Hardy's book, DIVERGENT SERIES – Gerald Edgar Jun 10 '10 at 15:17 I was of course thinking of zeta functions, L-series and theta functions. 
For methods of assigning finite values to interesting divergent sums (as well as for plenty of other reasons as well), I strongly advise you to read <em>Euler through time: a new look at old themes</em> by Varadarajan. – Franz Lemmermeyer Jun 10 '10 at 17:52 I thank both of you for the references! I think I'll have some good reading for the summer holidays... – Max Muller Jun 10 '10 at 18:30 show 1 more comment It is a common trick, found in many elementary calculus texts: take a conditionally convergent series, and rearrange it to have any sum you like. up vote 3 down Ok, thanks mister Edgar. Please look at the comments I posted to Franz Lemmermeyer's answer, I'd like to know if you have some answers on the questions I posed there as well! – Max Muller Jun 10 '10 at 13:45 You probably already know this Edgar, but this is usually called the Riemann rearrangement theorem, or the Riemann series theorem – Daniel Barter Jun 13 '10 at 3:55 add comment Not the answer you're looking for? Browse other questions tagged ca.analysis-and-odes taylor-series tag-removed or ask your own question.
{"url":"http://mathoverflow.net/questions/27592/why-is-frac-pi212-ln2-not-true/27679","timestamp":"2014-04-20T11:16:21Z","content_type":null,"content_length":"81493","record_id":"<urn:uuid:5b661525-6349-4c2c-888c-d7b984eee2c0>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] Quick Question - MLE for Geometric Brownian Motion Process with Jumps [R] Quick Question - MLE for Geometric Brownian Motion Process with Jumps Dieter Menne dieter.menne at menne-biomed.de Sun Apr 5 16:48:22 CEST 2009 John-Paul Taylor <johnpaul.taylor <at> ryerson.ca> writes: > I am tying to run a maxlik regression and keep getting the error, > NA in the initial gradient > My Code is below: > gbmploglik<-function(param){ > mu<-param[1] > sigma<-param[2] > lamda<-param[3] > nu<-param[4] > gama<-param[5] > logLikVal<- - n*lamda - .5*n*log(2*pi) + sum(log(sum(for(j in > 1:10)(cat((lamda^j/factorial(j))*(1/((sigma^2+j*gama^2)^.5)*exp( - > logLikVal > } > rescbj<- maxLik(gbmploglik, grad = NULL, hess = NULL, start=c(1,1,1,1,1), method = "Newton-Raphson") > I was wondering if there is something that I have to do with the grad= and maybe put something other then NULL. > The other issue is that there might be something wrong with the loglikelihood function, because of the > loop that I put in it. I have not used that function, but the error message is rather clear in telling you that your start values are not good. Try to plot the function and the gradient at that point. Sometime moving by a very small amount More information about the R-help mailing list
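For reference, the structure of the quoted (truncated) likelihood, a Poisson-weighted sum over jump counts j of normal densities with variance sigma^2 + j*gama^2, is the usual Merton jump-diffusion log-likelihood for log-returns. The sketch below is a reconstruction of that likelihood under standard assumptions (unit time step, jump sizes normal with mean nu and standard deviation gama), written in Python rather than R and mirroring the post's parameter names and spelling; it is not a repair of the poster's exact code.

import math

def jump_diffusion_loglik(returns, mu, sigma, lamda, nu, gama, dt=1.0, max_jumps=10):
    """Log-likelihood of log-returns under a Merton jump-diffusion: conditional on
    j jumps in an interval, the return is normal with mean (mu - sigma**2/2)*dt + j*nu
    and variance sigma**2*dt + j*gama**2; j is Poisson(lamda*dt), truncated at max_jumps."""
    total = 0.0
    for x in returns:
        dens = 0.0
        for j in range(max_jumps + 1):
            w = math.exp(-lamda * dt) * (lamda * dt) ** j / math.factorial(j)
            m = (mu - 0.5 * sigma ** 2) * dt + j * nu
            v = sigma ** 2 * dt + j * gama ** 2
            dens += w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2.0 * math.pi * v)
        total += math.log(dens)
    return total

# Toy call with made-up returns and parameters, just to show the signature.
print(jump_diffusion_loglik([0.01, -0.02, 0.005], mu=0.05, sigma=0.2, lamda=0.1, nu=0.0, gama=0.05))

A starting point that makes the density underflow to zero for some observation gives a non-finite log-likelihood, which is one common way to end up with the "NA in the initial gradient" error discussed in the reply.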
{"url":"https://stat.ethz.ch/pipermail/r-help/2009-April/194130.html","timestamp":"2014-04-18T15:50:26Z","content_type":null,"content_length":"4308","record_id":"<urn:uuid:dad0ccbb-11a4-4fa8-8f39-565a1ee6d1e6>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
The Algorithm in the Code: Building an Inquiry Tool A couple of days ago, I posted a Math Challenge posed by David Wees some weeks ago. The code emulated Euclid’s Algorithm of Coprimes and GCFs. First analysis Analysis of the code reveals that, when a=0, b=b and, when b=0, a=a. However, a reaches zero at the code’s onset, while b does the same after the code runs through scenarios when b≠0. This implies that one of the two values reaching zero is key to the code and the quantity of the other value when this happens is informative. Tabulating the difference between possible values of a and b within an arbitrary range of integers might illustrate how b=0 is reached. This process falls in the Planning and Implementation steps of David Coffey’s thinking-stage charts. Here is my table for mapping the “moves” of the code within the range of -9<a<9 and -9<b<9. Playing with b≠0 Notice that a>b below the a=b or 0 diagonal. So, for instance, the difference between a=6 and b=4 is 2, found in the bottom-left triangle of the table. In this triangle, according to the code, the difference a-b equals a. So, now a=2 and b=4. Repeating the process using a=2 and b=4 produces a difference of 2, this time in the top-right triangle. The new difference b-a equals b. Now a=2 and b=2, which produces a difference b-a of 0. Since b-a = b, the code ends with b=0 and a=2. There are six distinct cases where the code returns unique case results. Case 1: a = b When a=b, the code returns their common value. Why? As shown in the example above, the step after a=b is b=0 and the value of a is returned. This value is that when a=b. Case 2: either a = 0 or b = 0 A starting value of a=0 returns b. A value of b=0, returns a. This is a rule built into the code. But what would happen if the rule were not followed? Let’s take our b=0 and a=2 example beyond termination. Continuing the while-loop produces a difference a-b of 2, in the bottom-right triangle. This difference returns a=2 and b=0, exactly where we What if a=0 and b=2? The difference b-a returns b=2 and a=0, another recursive repeat. So, a=0 returns b and b=0 returns a. If a=b=0, zero is returned (in agreement with Case 1). Case 3: a < 0, b < 0 or both < 0 When either a or b or both are negative, the code never resolves to termination (except when a=b, Case 4). In fact, the greater value iterates to infinity in steps of the lesser negative value. Let us try a=3 and b=-2 (we could easily have tried a=-2 and b=3). The difference a-b returns a=5 and b=-2, which in turn returns a=7 and b=-2, then a=9 and b=-2, ad infinitum. a=-3 and b=-2, on the other hand, returns (a=-3,b=1), (a=-3,b=4), (a=-3,b=7), again ad infinitum. Case 4: a = b < 0 Contrary to Case 3, when a=b<0, the common value of a and b is returned, in agreement with Case 1. Ignoring this, Case 3 is followed; however, there is no condition that rectifies the ambiguity of which direction, toward infinite a or infinite b, the map should follow. Case 5: +a and +b share a common factor When a and b share a set of common factors, the greatest of these factors is returned, as per the a=6 and b=4 example which returned 2, the greatest common factor of 6 and 4. Case 6: +a and +b are coprime, or relatively prime When a and b do not share a common factor, 1 is returned, since 1 is the only natural number that is a divisor of both. Let’s map a=3 and b=8. As you can see from the table below, 1 is returned. The analysis of cases weaves over and through David Coffey’s thinking-stage charts’ Analysis through Verification stages. 
Interpretation and Pedagogy I was introduced to the formal Extended Euclid’s Algorithm via induction within a discrete mathematics university course. It was taught to me as a means to learn modular mathematics, so not much emphasis was placed on explaining the Extended Algorithm nor the induction. In fact, given this challenge posed by David Wees, or perhaps more so the table derived from it, the manner in which I learned the Extended Algorithm was probably the worst possible. David’s challenge and the table offer great entry tasks into the study of GFCs, coprimes, Euclid’s Algorithm and several branches of mathematics that build from them. Before the Algorithm is even named and formalized, students get to explore its mechanisms and formalize their own rules based on their mapping activities. Once they master the code and table, they can learn the corresponding Algorithm schema with emphasis on matching the items of the schema to the mapping on the table and the methods in the code. Then the Algorithm can be named and its uses illustrated. For those students who do not know code, the teacher can interpret the code with them and offer scaffolding afterward. The code is probably easier to understand than an instruction list, if instead of treating it as code, the teacher treats it as an outline of process. Notice the subtle difference here between instruction (do this) and process (this is how this works). The table doesn’t just determine GFCs and coprimes, it illustrates how greatest common factors and relatively prime, or coprime, numbers are calculated. It also illustrates why negative integers do not produce finite results, except where a=b, and why a=0 returns b and b=0 returns a. One question that might remain is what the table and code return. In the case of positive integers, the returns are obviously GCFs or 1. Interpretation can determine whether the initial values are coprime or related by common factor. But what does the return of a when b=0 and the return of b when a=0 mean? Quite simply the returns are the divisors of the numbers being analyzed. So, if one of those numbers is zero, it stands to reason that the other number is a viable divisor of zero. For instance, when b=0 and a=3, the return of 3 signifies that zero is divisible by three. Arithmetically, when a=b=0, infinity or undefinable should be returned, since conventionally no number “can” be divided by zero. This is the one flaw in this code and table. In order for the constructed table to be a viable tool for learning Euclid’s Algorithm, it should be printed out or created with non-erasable ink and the mapping should be done with pencil and eraser. The table can be used several times then to build literacy, mastery and fluency of Euclid’s Algorithm. Do you have any tasks that engage students in active learning of the outcomes, content, skills and concepts you are teaching? Follow @stefras 3 thoughts on “The Algorithm in the Code: Building an Inquiry Tool” 1. I really like how systematic you were in solving this challenge. One of the interesting things I’ve noticed when I pose this challenge to teachers is that very few of them approach the challenge with the same skills they teach their students to use (systematic reasoning, guess and check, collaborate with a neighbour, etc…). I’m collecting some other algorithms to use as challenges so I can differentiate the challenges a bit better for other groups. Also, I’d like to find a way to make this pseudocode clearer for a non-coder to understand. 
Maybe I need to write the algorithm in □ Thanks David. It was a fun challenge. I thrive on challenges that engage me in thinking. What really wowed me about this challenge is that interpreting the code was just the beginning. I liked that the “answer” to the code (creating the table) produced a new puzzle that worked hand in hand with the code to actively investigate an otherwise “abstract” or hard-to-understand concept and algorithm. I further like that this table, and the code (I really should be using pseudocode, shouldn’t I?), offered a tool that students can use to learn Euclid’s Algorithm. This, as far as I am concerned, is the essence of good teaching, certainly one that will influence student understanding and confidence for an extended period. I have always wondered if being a substitute teacher, rather than a contract one, influences how I learn and teach. I am unsure whether I would have approached and solved this problem the way I did if I had lesson plans to design with the nuances of my students in mind. I think I would be more explicit and less constructive than I am now. As it is now, I teach teachable moments and diagnostically and formatively. I rather enjoy that. It keeps me modelling and learning along with my kids. I don’t think I can help you with expressing the pseudocode in some other way. I have some programming training, and before that I scripted Javascript. So the pseudocode, though requiring analysis, made sense to me (it was approachable). As I mentioned in my post, rewriting the pseudocode as instructions might actually bog the procedure down in linguistic and logical minutiae. I rather think of the pseudocode as a point-form outline. As such, it compartmentalizes the details of the procedure in a quick “cheat sheet” format. No matter how you express the procedure, it will still require analysis — actual reading and note-taking of the expression — in order to learn it. In addition, I believe more students would get more out of process (this is how this works) than instruction (do this). Process places instruction into context and disguises threatening instruction (do this) as friendly puzzle (what is happening?). The students might balk at process when first encountered, since it is foreign to instruction, but they will have lasting understanding, and more importantly a better method of successful autonomous thinking and problem solving. Why don’t you express the algorithm in several forms in a post and ask the rest of us for feedback on which form we believe is clearest and why? You could even ask for alternatives. You would have to remind us who the intended audience is though. 2. a=0 in the first table is different from a=0 in the second one The difference influences puzzles where a=0 and b<0, and is a source of ambiguity. Assume we ignore Case 2 (either a=0 or b=0). The map would then follow Case 3 (a<0, b<0 or both<0) when b<0. As with other instances involving negative numbers, except when a=b<0 (Case 4), b<0 causes the code to loop forever to infinite a. For example, a=0 and b=-5 returns a=5 and b=-5 in the second table. The code would then follow Case 3 as the values of a tend to infinity. Nothing will be returned, and your computer will hang. The first table on the other hand returns a=0 and b=-5 recursively if we follow Case 2. So which is the right answer? Since Case 2 is stated in the code, it takes priority over Case 3. The “correct” return then is -5 and not ∞. This is reasonable since zero can indeed be divided by negative five. 
However, it contradicts the pattern of behaviour of all other instances where a and b are negative except a=b<0. (b=0 is never reached, unless initially, when either number is negative as Case 3 takes hold.) This illustrates that the code was meant for whole numbers only. Actually, given the results of a=b=0, which contradict convention, only natural numbers should be considered. Still, keeping the code open as it is informs a lot of Math. It explains how and why negative numbers do not return values, how and why a=0 returns b and b=0 returns a, and how and why the GCF or 1 is returned when natural numbers are used. One has to remember that the ambiguity between the return of b or ∞ when a=0 and b<0 is a viable source of confusion and misconception. Two solutions One can resolve the problem by returning ∞ in the first a=0 table when b<0 even though this contradicts the code and any initial puzzles where b=0, a<0 and b is returned. One could also resolve it by replacing the a=0 row in the second table with either the first table as written or a statement, such as “Return b (table one)”, referring the user to the first Either way, an arbitrary decision must be made. The code chooses the second option. Both the row of a=0 and the column of b=0 must be altered to include negative numbers. This interestingly enough violates another section of the code, namely that, when a>b, a = a-b (a positive number), and when a<b, b = b-a (another positive number). (The isocline of a=b is the 0 diagonal.) Perhaps another solution to the problem is to replace the a=0 row and the b=0 column with statements to the effect that b or a respectively should be returned. However, the visual and active explanation of how and why a=0 returns b and b=0 returns a will be lost for all values of b and a. Case 2 would need to be arbitrarily taught, rather than necessarily figured out, creating an alternate source of confusion and misconception. PS: I decided not to correct the tables in order to point out that this ambiguity is one your students are likely to face. Addressing the ambiguity in class will deepen reflection and understanding of the underlying Math of the table and code. Have an opinion? What do you think? I'd love to hear from you.
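For readers who want to experiment with the case analysis above, here is a minimal executable rendering (Python) of the subtraction-based procedure the post describes. The iteration cap is added so that the divergent negative-input cases of Case 3 terminate instead of looping forever; everything else follows the stated rules: return b when a = 0, return a when b = 0, and otherwise replace the larger value by the difference.

def subtraction_gcd(a, b, max_steps=1000):
    """Euclid's algorithm by repeated subtraction, as described in the post.

    Returns the GCD for positive integers (1 if they are coprime), b if a == 0,
    a if b == 0, and the common value if a == b.  For unequal inputs involving
    negatives (Case 3) the loop would never terminate, so we stop after
    max_steps and report that instead.
    """
    if a == 0:
        return b
    if b == 0:
        return a
    for _ in range(max_steps):
        if a == b:            # Cases 1 and 4: the next step drives one value to 0
            return a
        if a > b:
            a = a - b
        else:
            b = b - a
        if a == 0:
            return b
        if b == 0:
            return a
    return None               # did not terminate: the divergent Case 3 behaviour

print(subtraction_gcd(6, 4))    # 2    (greatest common factor)
print(subtraction_gcd(3, 8))    # 1    (coprime)
print(subtraction_gcd(0, -5))   # -5   (Case 2 takes priority, as discussed in comment 2)
print(subtraction_gcd(3, -2))   # None (Case 3: a grows without bound)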
{"url":"http://shawnurban.wordpress.com/2012/03/12/building-an-inquiry-tool/","timestamp":"2014-04-19T01:49:32Z","content_type":null,"content_length":"104054","record_id":"<urn:uuid:5ca49e0c-399c-4721-b46d-5738d1cb9cca>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
John Couch Adams
For other people named John Adams, see John Adams (disambiguation).
John Couch Adams (June 5, 1819 – January 21, 1892) was a British mathematician. His most famous achievement was predicting the existence and position of Neptune, using only mathematics. The calculations were made to explain discrepancies with Uranus's orbit and the laws of Kepler and Newton. At the same time, but unknown to each other, the same calculations were made by Urbain Le Verrier. Le Verrier would assist Johann Gottfried Galle in locating the planet (September 23, 1846), which was found within 1° of its predicted location, a point in Aquarius. Adams was born in Laneast, England and died in Cambridge, England.
External link
• http://www-history.mcs.st-andrews.ac.uk/history/Mathematicians/Adams.html
{"url":"http://july.fixedreference.org/en/20040724/wikipedia/John_Couch_Adams","timestamp":"2014-04-17T09:36:02Z","content_type":null,"content_length":"3530","record_id":"<urn:uuid:10717d06-3d1e-4c8e-bbc0-b6cce42beb14>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex Equation I found this video on youtube of an MIT Physics lecture. If you jump ahead to around 45 minutes, the professor offers a math problem that I found interesting. It took me a long time to work through it, but if you know the trick, it can be solved in less than 10 seconds. i^i, where i = sqrt(-1) the clue is that i=e^i(pi/2 plusminus 2*pi*n), n=0,1,2,... Have fun with it, let me know what you think.
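For anyone who wants to check the trick: substituting the polar form of i given in the clue and using the rules of exponents turns i^i into a real number. This is a summary of the standard argument, not a transcript of the lecture:

i^i = (e^(i*(pi/2 ± 2*pi*n)))^i = e^(i*i*(pi/2 ± 2*pi*n)) = e^(-(pi/2 ± 2*pi*n)),  n = 0, 1, 2, ...

So i^i takes infinitely many real values, and the principal value (n = 0) is e^(-pi/2) ≈ 0.2079. A quick numerical check in Python: print((1j)**(1j)) prints (0.20787957635076193+0j).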
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=15629","timestamp":"2014-04-19T22:13:26Z","content_type":null,"content_length":"13860","record_id":"<urn:uuid:f2460bf3-e000-4bd9-8dc3-08eda43803f6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Gambrills Math Tutor Find a Gambrills Math Tutor ...I can teach Physics and Mathematics at any level, and any sub-discipline. My Bachelor's degree is in Theoretical Physics, from Imperial College, London. As such I am very comfortable in teaching both Physics and Mathematics. 17 Subjects: including calculus, precalculus, trigonometry, statistics My love of teaching, especially on a one-to-one basis, began in high school, where the school guidance counselor enlisted me in tutoring struggling algebra and geometry students. Since then, I informally assisted fellow students in my college studies in engineering. I worked successfully for many ... 12 Subjects: including calculus, precalculus, algebra 1, algebra 2 ...I have used both differential and integral calculus in my work and got straight A's in all my math course in both undergraduate and graduate work at Drexel University and Caltech. I have taken graduate courses in Linear Algebra at both CalTech and Johns Hopkins and received an "A" grade in each. I have used linear algebra in my work as an electrical engineer for many years. 17 Subjects: including actuarial science, linear algebra, algebra 1, algebra 2 I have received my BA from The George Washington University few years ago and now am attending George Mason University pursuing a chemistry degree to finish medical school pre-requisites. I was on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help student... 17 Subjects: including trigonometry, SAT math, linear algebra, precalculus ...I believe that understanding the "why" of mathematics is crucial. I don't believe in equation memorization, in most instances, but rather believe in core equation understanding. Once you understand why an equation exists and how it can be manipulated and used, then the follow up equations become intuitive. 11 Subjects: including differential equations, mechanical engineering, algebra 1, algebra 2
{"url":"http://www.purplemath.com/gambrills_md_math_tutors.php","timestamp":"2014-04-19T05:34:31Z","content_type":null,"content_length":"23815","record_id":"<urn:uuid:2a0288fd-cfda-484d-91db-548c7a7f1f6b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Games Level 1 1.0 Math Games Level 1 1.0 Ranking & Summary RankingClick at the star to rank Ranking Level User Review: 0 (0 times) File size: 1.15MB Platform: Windows XP License: Freeware Price: Free Downloads: 520 Date added: 2008-08-27 Math Games Level 1 1.0 description Undoubtedly, mathematics is one of the most important subjects taught in school. It is thus unfortunate that some students lack elementary mathematical skills. An inadequate grasp of simple every-day mathematics can negatively affect a persons life. Clearly, students with weak understanding of basic math will have difficulty in advancing their education. But the negative impact goes beyond academics. With modern life getting more complicated, people with inadequate knowledge of basic-level math find themselves at a disadvantage when competing for jobs, leasing cars, investing, and in countless other real-life situations. Studying math is certainly not an easy task especially when memorizing mathematical facts from a textbook is the only way of learning. Enter Math Games! Math Games Level 1 teaches addition and multiplication with numbers from 1 to 12. Using the mouse, a student can select the values to calculate. Then, he or she clicks on the "Ask Me" button and lets the robot do the calculation and report the result. The game window can be maximized to make it even more exciting for young children. The software is free for personal use. Educational institutions, for-profit and governmental entities should contact Sierra Vista Software about site licenses if they wish to use the software beyond the 30-day trial period. Math Games line of software helps those teachers and parents who want to make math education more exciting. This software can be used as a valuable supplement to teacher instruction and other computer teaching tools. Math Games Level 1 1.0 Screenshot Math Games Level 1 1.0 Keywords Bookmark Math Games Level 1 1.0 Math Games Level 1 1.0 Copyright WareSeeker.com do not provide cracks, serial numbers etc for Math Games Level 1 1.0. Any sharing links from rapidshare.com, yousendit.com or megaupload.com are also prohibited. Related Software Practice multiplication (numbers 1 - 10) with the smart robot and his friends. Free Download Math Table displays a table of addition and multiplication facts from 0 to 31 Free Download The Math Slate is a Windows program that provides elementary-level students with practice in the basic math skills. Free Download The Math Homework Maker,is a FREE software which can solve all your homework. If you are a parent,pupil,student,needing help or just want to check your math homework, Then you must have this Free Download to help teach the four basic arithmetic operations of ddition,subtraction,multip Free Download Math Arcade is a collection of 8 different games with 5 skill levels each. Free Download EQUALS uses the power of jigsaw puzzle and visual learning to learn basic math Free Download
{"url":"http://wareseeker.com/Home-Education/math-games-level-1-1.0.zip/340644","timestamp":"2014-04-17T08:01:43Z","content_type":null,"content_length":"33085","record_id":"<urn:uuid:35c6a24b-012e-4f5f-ba5b-671f0c367791>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US20020001412 - System for variable quantization in jpeg for compound documents [0001] The present invention relates generally to JPEG data compression and more specifically to JPEG data compression for compound images having pictures and text. [0002] JPEG is the name of both a committee and a standard. JPEG stands for Joint Photographic Experts Group, the original name of the committee that wrote the JPEG standard. The JPEG standard is an international standard which applies to the lossy and lossless compression of either full-color or gray-scale images of natural, real-world scenes. [0003] Lossy image compression compresses by striving to discard as much of the image data as possible without significantly affecting the appearance of the image to the human eye. Lossless compression is compression achieved without discarding any of the image data. [0004] The JPEG standard works well on still photographs, naturalistic artwork, and similar material (which are generally referred to herein as “pictures”), but not so well on lettering, simple cartoons, or line drawings (which are generally referred to herein as “text”). Compound images are those which contain both pictures and text (which are collectively referred to herein as “images”). In some cases, compound images contain pictures which also contain text within the picture itself. [0005] This standard is being used in the computer industry. Popular graphics-capable browsers on the World Wide Web can read and write this particular type of image data format, so if a compressed image is sent across the Web to such a browser, it knows how to decompress the image and display it. [0006] Compression is important for two main reasons. The first is storage space. If there will be a large number of images on a hard drive, the hard drive will fill up very quickly unless the data can be greatly compressed. Computers have fixed size buffers and limited memory, and an image has to fit in them otherwise, the image cannot be stored in them. [0007] The second is bandwidth. If data is being sent through a browser or through electronic mail, the more bits that need to be transmitted, the more time is required. For example, with a 28.8K modem it may take half an hour of waiting for a picture to be completely transmitted. If a 50 to 1 compression can be achieved, the same picture can be transmitted completely in about thirty seconds, and if compressed properly, the recipient will not notice the difference between the original and the compressed version. [0008] For full-color images, the uncompressed data is normally 24 bits per pixel. JPEG can typically achieve 10:1 to 20:1 compression on pictures without visible loss, bringing the effective storage requirement down to 1 to 2 bits per pixel. This is due to the fact that small color changes are perceived less accurately than small changes in brightness. Even 30:1 to 50:1 compression is possible with small to moderate defects, while for very low quality purposes such as previews or archive indexes, 100:1 compression is quite feasible. [0009] For gray-scale, and black and white images such large factors of compression are difficult to obtain because the brightness variations in these images are more apparent than the hue variations. A gray-scale JPEG file is generally only about 10%-25% smaller than a full-color JPEG file of similar visual quality with the uncompressed gray-scale data at only 8 bits/pixel, or one-third the size of the color data. 
The threshold of visible loss is often around 5:1 compression for gray-scale images. [0010] Although there are a number of settings that can be predefined to achieve different compression ratios, there is only one parameter, called the quality factor, that is adjusted regularly in JPEG on an image-by-image basis with one setting for an active image. The quality factor is a single number in an arbitrary, relative scale. A high quality factor will provide a relatively high quality decompressed image, but will require a relatively large file. And, of course the lower the quality, the rougher the approximation of the image and the more compression with a correspondingly smaller file size, but also, the more visible defects, or artifacts, will be in the decompressed final image. Text generally shows significant compression artifacts at higher quality factors than pictures. Further, the quality factor will only give an approximate end file size. [0011] Therefore, a long sought goal in image compression has been to maintain maximum perceptible image quality while achieving maximum compression. [0012] This goal is becoming more difficult to attain because compound documents are just starting to become more and more important. It has only been recently that it has become possible to drop pictures into text documents as much as can be done now. Before, electronic transmissions were either a text document or a picture document. Now, it is more and more common to see a compound image where someone is making a newsletter or setting up a website. People want to drop in some pictures but also want to have text as well. So compound documents are becoming a more important, whether it is just photocopying or just sending to a printer or transmitting across the internet, these have become a more important class of images. [0013] Also, most of the techniques that have been developed in the past for compound documents are based on proprietary (non-standard) compression techniques, so the images could only be decompressed using a specific company's product. [0014] It has long been known that the inability to minimize file size while maintaining high perceptual quality would lead to detrimental compromises in performance so process improvements have been long sought but have eluded those skilled in the art. Similarly, it has long been known that the problems would become more severe with compound documents and thus a generally applicable solution has been long sought. [0015] The present invention provides a simple metric for picture/text segmentation of compound documents in the discrete cosine transform domain. This allows areas of high frequency content such as text to be compressed at a better quality than pictures, thus improving the overall perceptual quality while minimizing the file size. The metric is computed using the quantized output of the discrete cosine transform. No other information is needed from any other part of the JPEG coder. [0016] The present invention provides an image compression system which can be used to apply different, appropriate quantization factors to small blocks of pictures and text to provide significant image compression. [0017] The present invention further provides an image compression system capable of distinguishing between text and pictures in compound images. [0018] The present invention still further provides for preserving the text quality without sacrificing bandwidth while at the same time being JPEG compliant. 
[0019] The present invention also provides an image compression system which is fully compliant with the latest extensions of the current JPEG standard. [0020] The above and additional advantages of the present invention will become apparent to those skilled in the art from a reading of the following detailed description when taken in conjunction with the accompanying drawings. [0021]FIG. 1 PRIOR ART is a schematic of a prior art baseline JPEG encoder; [0022]FIG. 2 is a schematic of a JPEG Part 3 encoder that supports variable quantization; and [0023]FIG. 3 is a schematic of a variable quantization subsystem of the present invention. [0024] Referring now to FIG. 1 PRIOR ART, therein is shown a baseline JPEG encoder system 10 for digital cameras, scanners, printers, imaging servers, etc. The JPEG encoder system 10 is for an image with a single color component. For color images, there would be a JPEG encoder system 10 for each of the color components. [0025] The system 10 receives image pixels, or input digital image data, at an input 12 which is connected to a discrete cosine transformer 14. The discrete cosine transformer 14 first divides the input digital image data 12 into non-overlapping, fixed length image blocks, generally 8 by 8. After a normalization step, the discrete cosine transformer 14 reduces data redundancy and transforms each fixed length image block by applying a discrete cosine transform to a corresponding block of discrete cosine transform coefficients. This transform converts each fixed length image block into the frequency domain as a new frequency domain image block. The first coefficient in the block, the lowest frequency coefficient, is the DC coefficient and the other coefficients are the AC coefficients (e.g., for an 8 by 8 block, there will be one DC coefficient and 63 AC coefficients). [0026] Quantization tables 16 are operatively connected to the discrete cosine transformer 14. The quantization tables 16 contain lossy quantization factors (scaled according to the factor) to be applied to each block of discrete cosine transform coefficients. One set of sample tables is given in Annex K of the JPEG standard (ISO/IEC JTC1 CD 10918; ISO, 1993). These tables and the user-defined quality factors do not actually provide compression ratios per se, but provide factors indicating how much the image quality can be reduced on a given frequency domain coefficient before the image deterioration is perceptible. [0027] It should be understood that the tables represent tabulations of various equations. The look-up tables could be replaced by subroutines which could perform the calculations to provide the [0028] A quantizer 18 is connected to the discrete cosine transformer 14 and the quantization tables 16 to divide each frequency domain image block by the corresponding element from the quantization table 16 to output the quantized discrete cosine transform output. [0029] An entropy coder 20 is connected to the quantizer 18 and to Huffman tables 22. The entropy coder 20 receives the output from the quantizer 18 and rearranges it in zigzag order. The zigzag output is then compressed using run-length encoding in the entropy encoder 20 which is a lossless entropy coding of each block of quantized discrete cosine transform output. The entropy encoder 20 of the present invention uses Huffman codes from the Huffman tables 22 although arithmetic coding can also be used. The Huffman codes exploit similarities across the quantized discrete cosine transform coefficients. 
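As a purely illustrative aside (this is not code from the patent, and the flat quantization table below is a placeholder rather than the Annex K table), the step performed by the quantizer 18 on one 8-by-8 block of DCT coefficients amounts to an elementwise divide-and-round:

```python
def quantize_block(dct_block, qtable):
    """Divide each of the 64 DCT coefficients by the matching quantization
    table entry and round to the nearest integer.  Larger table entries mean
    coarser quantization; many high-frequency results become zero, which is
    what the later entropy coding exploits."""
    return [[int(round(dct_block[j][k] / qtable[j][k])) for k in range(8)]
            for j in range(8)]

# Hypothetical example: a uniform table of 16s and a block with a large DC term.
flat_table = [[16] * 8 for _ in range(8)]
block = [[100 if (j, k) == (0, 0) else 3 for k in range(8)] for j in range(8)]
print(quantize_block(block, flat_table)[0][:3])   # [6, 0, 0]
```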
JPEG contains two sets of typical Huffman tables, one for the luminance or grayscale components and one for the chrominance or color components. Each set has two separate tables, one for the DC components and the other for the AC. [0030] The bitstream out of the entropy coder 20 at output 24 is a JPEG file 26 which contains headers 28, tables 30, and data 32. The tables 30 contain information from the quantization tables 16 and the Huffman tables 22 of the appropriate information used in the processing of each block of data so the data can be properly decompressed. The data 32 contains the output from the entropy coder 20 in a form of a compressed block such that a sequence of all of the compressed blocks forms the compressed digital image data. [0031] Referring now to FIG. 2, therein is shown a JPEG encoder system 50 that supports the variable quantization of the present invention. The system 50 is compliant with the JPEG Part 3 standard. The same elements as in FIG. 1 are given the same numbers in FIG. 2. Thus, the system 50 receives input digital image data 12 into the discrete cosine transformer 14 which is connected to the quantizer 18. [0032] The quantization tables 16 are connected to a multiplying junction 52 which is connected to the quantizer 18. Also connected to the multiplying junction 52 is a variable quantization subsystem 54 which is also connected to the entropy coder 20. [0033] The entropy coder 20 is connected to the quantizer 18 and to the Huffman tables 22. The bitstream out of the entropy coder 20 at output 24 is a JPEG file 26 that contains headers 28, tables 30 , and data 32. The tables 30 contain compression-related information from the quantization tables 16 and the Huffman tables 22 which can be used by a JPEG decompresser system (not shown) to decompress the data 32 from the entropy coder 20. The quantization scale factor from the variable quantization subsystem 54 are incorporated into the data 32 by the entropy encoder 20. [0034] Referring now to FIG. 3, therein is shown the variable quantization subsystem 54 which is operatively connected to the discrete cosine transformer 14. The discrete cosine transformer 14 is connected to a quantizer 58 in the variable quantization subsystem 54. The quantizer 58 has quantization tables 56 connected to it which are the factors relating to the activity metrics. The quantizer 58 is the same as the quantizer 18. For simplicity, the quantization tables 56 for the activity metrics are the same as the quantization tables 16 for the encoding, but this is not [0035] The quantizer 58 is further connected to an activity computer 60 for computing an activity metric, M[i], as will later be described. The activity computer 60 is connected to a scale computer 62 for computing qscale as will also later be described. The scale computer 62 is connected to the multiplying junction 52 to which the quantization table 16 is connected. The discrete cosine transformer 14 as well as the multiplying junction 52 are connected to the quantizer 18 as also shown in FIG. 2. [0036] In operation in the FIG. 1 PRIOR ART baseline JPEG encoder system 10, image pixels are divided into non-overlapping 8×8 blocks where y[i ]denotes the i-th input block. This division applies to all images including text, pictures, and compound images as well as pictures containing text. After a normalization step, each block is transformed into the frequency domain using the discrete cosine [0037] Mathematically JPEG uses the discrete cosine transform in its processing. 
A discrete cosine transform in the frequency domain is based on the assumption that an image mirrors itself on both boundaries. This assures a smooth transition without high frequency spikes, because those are very hard to compress. And the higher frequencies are very close to zero if there is a smoothly varying [0038] The output of the above step will be a new 8×8 matrix Y[i]. Next, each element of Y[i ]is divided by the corresponding element of the encoding quantization table Q[e]. Given Q[e][j, k], [0039] In the baseline JPEG encoder system 10, a single set of quantization tables 16 is used for the whole image. For a cmpnd 1 ISO test image, the whole image would be 512 pixels by 513 pixels. [0040] After quantization, the quantized discrete cosine transformer 14 output is rearranged in raster, or zigzag, order and is compressed using run-length encoding which is a lossless, entropy coding using the Huffman tables 22 in the entropy coder 20. According to the JPEG standard, the quantization tables for each color component can be defined in the header tables of the JPEG file 26. The output of the entropy coder 20 is a sequence of compressed blocks which form the compressed image which can be decompressed by a standard JPEG decompression system. [0041] In operation, the variable quantization JPEG encoder system 50 of the present invention shown in FIG. 2 uses the JPEG adjunct standard (ISO/IEC JTC1 CD 10918; ISO, 1993) which has been extended to support variable quantization. Under variable quantization, the values of the original quantization matrix can be rescaled on small blocks of pixels as small as 8 pixels by 8 pixels. Normally the quantization matrix stays the same for the entire image, but the adjunct standard allows these changes on a block by block basis. This change was designed primarily for a rate control problem of getting the proper number of bits at the output. If a block is changed, the information can be put into the bitstream so that a decoder on the receiving end can undo it later. Thus, the various scaling factors are also encoded as part of the data bitstream. In principal, variable quantization allows for better rate control or more efficient coding which is the original purpose for the JPEG extension. [0042] Even though the latest JPEG extensions provide the syntax for the support of variable quantization, the actual way to specify the scaling factors is application dependent and not part of the JPEG adjunct standard. [0043] The present invention allows for the variable quantization of compound images. The JPEG encoder system 50 of FIG. 2 automatically detects the text-part and the image-part of a document by measuring how quickly pixels are changing in the incoming data. Black text on a white background changes very quickly from black to white and back again even within a very small block of pixels. Picture pixels, for example the image of a human face, change much more slowly through gradations of color. [0044] In the JPEG standard, the quantization is being done in a transformed domain, not on the pixels directly. The pixels are transformed through a linear matrix transform (the discrete cosine transform) into a frequency domain representation, and the quantization is performed in the frequency domain. It is also in the frequency domain that the frequency components can be determined to find how active a particular block is. The mathematical equation used is what provides an “activity metric”. 
The larger this activity metric turns out to be, the more things are changing within the 8 by 8 block. [0045] Also, the discrete cosine transform has the advantage that it takes real numbers and transforms them into real numbers which can be quantized. In this domain, it is possible to predict, roughly, how many bits it takes to represent the data that is actually in the block. [0046] To represent a given number, such as every number between 0 and 15, the largest number is taken, and the log base 2 of this provides the number of bits required. In this case it would be 4 because with 4 bits every number between 1 and 15 can be represented. [0047] Thus by taking the absolute value of the real numbers, taking the log base 2 of them, which tells how many bits needed for each one and then summing them up, and that provides how many bits are needed to represent the data in the entire block. This number is going to be very large if there is a lot of activity because a lot of the frequency components will be big, and it is going to be very small if the image changes slowly because all the high frequencies will be close to zero. [0048] Based on the discrete cosine transform activity of a block or a macroblock (a 16×16 block), quantization scaling factors are derived that automatically adjust the quantization so that text blocks are compressed at higher quality than image blocks. Those skilled in the art would be aware that text is more sensitive to JPEG compression because of its sharp edges which, if compressed too greatly, would blur or have ringing artifacts (ripples around the edges). At the same time, images can be compressed greatly without drastically affecting human-eye perceived differences in quality of the image. [0049] There are many ways to measure activity in a block. One method to determine the discrete cosine transform activity is to let Y[i][j, k] denote the elements of the i-th output block of the discrete cosine transform so Y[i][0, 0] denotes the DC component of the i-th block. In the present invention, the activity computer 60 uses the following activity metric: [0050] where the summation is performed over all elements of the Y[i][j, k] matrix, except Y[i][0, 0]. (The above formulation assumes that arguments in the log[2 ]function are always greater than [0051] The motivation behind this metric is based on the fact that: (a) JPEG uses differential coding for the encoding of the discrete cosine coefficients, and (b) the number of bits needed to code a DC transform coefficient are proportional to the base-two logarithm of its magnitude. The equation does not (and does not need to) account for additional coding bits needed for the Huffman coding of either size or run/size information of the discrete cosine transform coefficients. The number of these bits ranges from 2 to 16 per non-zero coefficient. Assuming that on the average c bits per non-zero coefficient are required for this purpose, then the following method can be used to compute M[i], (and the overall bit rate) more accurately. [0052] In experiments, c=4. [0053] After defining an activity metric, the next step is to define the relationship between the activity measure and the quantization scale. [0054] The cmpnd 1 ISO test image is used as the standard compound image with computer generated text, a photographic-type color image, and computer generated text within the image. The top half is text and the bottom half the color image. This is a 512×513 pixels total image, with 1,056 macroblocks (16×16 pixel blocks). 
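A sketch of the activity computation described here might look as follows. It illustrates the stated idea -- sum the base-2 logarithms of the non-DC quantized coefficients and add roughly c bits per non-zero coefficient -- and is not the patent's implementation: it works on a single 8x8 block rather than a 16x16 macroblock, and whatever normalization puts the patent's M[i] values on the 0.6-1.2 scale quoted below is not spelled out in this text, so the absolute scale here is only indicative.

```python
import math

def activity_metric(q_block, c=4):
    """Estimate the coding activity of a quantized 8x8 DCT block.

    Sums log2(|coefficient|) over the 63 AC positions (skipping the DC term
    at [0][0]) and adds c bits for every non-zero AC coefficient, mirroring
    the bit-count motivation given in the text.  c=4 follows the experiments
    mentioned in paragraph [0052].
    """
    bits = 0.0
    for j in range(8):
        for k in range(8):
            if j == 0 and k == 0:
                continue                      # skip the DC coefficient
            v = abs(q_block[j][k])
            if v > 0:
                bits += math.log2(v) + c      # size bits + run/size overhead
    return bits
```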
The color image part starts at approximately the 508-th macroblock. [0055] The values of the activity metric, M[i], calculated in the activity computer 60 for each of the luminance macroblocks in the ISO test image is higher in the text regions of the image. However, discrimination between image and text areas is even better if M[i ]is computed using quantized values of Y[i]; that is, Y[QM,i]. When the quantization matrices Q[M ]and Q[e ]in quantization tables 56 and 16, respectively, are both the same as the one given in Annex K of the JPEG standard, activity values larger than 1.2 correspond to text areas in the image. It should be understood that the two quantization tables 56 and 16 could be different. Experiments show that the range of values for M[i ]for the ISO test image is consistent with the range of values obtained from other test images. [0056] Basically quantization varies inversely with the metric. If a higher metric means a higher activity, thus scale by less to quantize more finely or compress less. And then with a very small metric, scale the quantization very coarsely or compress more because the image is such a smooth block, that it does not matter how much quantization since it will not be perceptible. [0057] The output of the scale computer 62 is qscale, which denotes the parameter used to scale the AC values of the original quantization matrix, a value of qscale=0.5 is quite acceptable for the compression of text. On the other hand, values of qscale larger than 2 may yield serious blocky artifacts on an image. To simplify implementation, a linear, but bounded, relationship between qscale and the activity metric (M[i]) is superior, such as [0058] where a and b are constants to be defined based on desired output quality and compression ratios. One way a and b can be defined is the follows. Let m[l ]denote the value of the activity metric, M[i], for which qscale=1. Let m[u ]denote the value of M[i ]for which qscale=0.5. After solving two equations with two unknowns: [0059] For example, if m[l]=0.6 and m[u]=1.2, then a=−0.83 and b=1.498. [0060] The choices for m[l ]and m[u ]effect compression ratios as follows. If m[l ]is increased, in effect more blocks are quantized with a qscale>1; thus compression is improved but image quality may be reduced. If m[u ]is increased, the number of blocks that are quantized with qscale=0.5 is decreased; thus the quality of text is decreased, but the compression ratios are improved. [0061] In the variable quantization subsystem of FIG. 3 showing the variable quantization method, the Q[M ]quantization matrix is the same as Q[e], but this may not be always the case. For example, Q [M ]may be the same as Appendix K of the JPEG standard, but Q[e], may be a custom quantization table. [0062] Using the above metric, the upper text areas in the ISO test image were identified as areas of high frequency activity, but also the text inside the color picture at the bottom half. [0063] The qscale is provided to the multiplying junction 52 to control Q[e], from the quantization table 16 to the quantizer 18. [0064] It should be understood that the same method as described above may be used to adjust the chroma quantization tables independently from the luminance tables using the same scaling factors. [0065] JPEG, itself, as a standard doesn't specify what to do about color. 
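The bounded linear mapping from activity metric to quantizer scale described in paragraphs [0057]-[0060] can be sketched as follows; the clamping range [0.5, 2] is an assumption taken from the surrounding discussion of acceptable qscale values rather than a quoted formula:

```python
def qscale_params(m_l, m_u):
    """Solve qscale = a*M + b from the two anchor points
    (M = m_l -> qscale = 1) and (M = m_u -> qscale = 0.5)."""
    a = (0.5 - 1.0) / (m_u - m_l)
    b = 1.0 - a * m_l
    return a, b

def qscale(metric, a, b, lo=0.5, hi=2.0):
    """Bounded linear scale: fine quantization (0.5) for busy text blocks,
    coarser quantization (up to 2.0) for smooth picture blocks."""
    return max(lo, min(hi, a * metric + b))

a, b = qscale_params(0.6, 1.2)
print(round(a, 2), round(b, 3))   # -0.83 1.5, essentially the a=-0.83, b=1.498 example in [0059]
```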
But, what is commonly done, is to convert a color image into a luminance and chrominance representation so that it shows the brightness of the image with two other components that show the colorfulness. And it turns out that the human eye is much more sensitive to the luminance. A slow transition from a red to an orange versus a sharp one will not even be noticeable, but a slow transition in luminosity versus a sharp one will be noticeable as a blur. In the present invention, the activity metric is computed only for the luminance component to save on computation and then the chrominance is scaled the same way. That turns out to work reasonably well on the chrominance as well, because compound documents usually have black text on a white background and errors in the color are particularly noticeable. A little red fringe around each letter will be seen immediately but it is less likely to be seen in an [0066] As would be understood by those skilled in the art, the present invention has been described in terms of discrete components but it may be carried out in software or in dedicated integrated [0067] While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the aforegoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations which fall within the spirit and scope of the included claims. All matters set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.
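Putting the earlier sketches together, the per-block data flow of FIG. 3 can be caricatured as below. This reuses the hypothetical quantize_block, activity_metric, qscale_params and qscale helpers defined above, is illustrative only, and omits the DC handling, macroblock grouping, chroma scaling and entropy coding described in the text; the m_l and m_u defaults are the example values from paragraph [0059].

```python
def encode_block(dct_block, q_table, m_l=0.6, m_u=1.2):
    """Caricature of the FIG. 3 flow for one 8x8 block: measure activity on
    the quantized coefficients, map it to a scale factor, then quantize the
    block with the rescaled table.  Returns (quantized block, scale used)."""
    a, b = qscale_params(m_l, m_u)
    metric = activity_metric(quantize_block(dct_block, q_table))
    scale = qscale(metric, a, b)
    scaled_table = [[q * scale for q in row] for row in q_table]
    return quantize_block(dct_block, scaled_table), scale
```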
{"url":"http://www.google.com/patents/US20020001412?dq=U.S.+Patent+No.+4,528,643","timestamp":"2014-04-18T13:33:13Z","content_type":null,"content_length":"92927","record_id":"<urn:uuid:82fbf277-2c39-4242-8acc-b2cdec1b97f5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Can Time in Special Relativity Appear Frozen despite the Clock Hypothesis Says it Cannot? B&E Scientific Ltd, United Kingdom According to general relativity, time in a gravitational field will appear slowed down, or close to a black hole even frozen to complete standstill. From an assumed equivalence between gravity and acceleration, one might thus expect that time in special relativity could similarly appear to be slowed down, or even frozen, when observing a system in strong acceleration even at moderate relativistic velocities. Specifically, this would seem to be the case for hyperbolic space time motion when accelerated motion takes place along a hyperbola corresponding to constant time in the Minkowski diagram. On the other hand, the original postulates in Einstein’s theory of special relativity are today normally supplemented with a new postulate, the clock hypothesis, stating that time is unaffected by accelerations. The present study concludes that there is however no inconsistency here: Without being in conflict with the clock hypothesis, time can still appear to be slowed down or even frozen in the special case of hyperbolic motion. This is then due to the special scaling properties of this type of motion, which happen to imitate a constant acceleration. Slowing-down of time can thus occur not only at extreme velocities close to light speed, but also at moderate relativistic velocities for sufficiently powerful accelerations. At a glance: Figures Keywords: gravity-acceleration equivalence, hyperbolic motion, scaling International Journal of Physics, 2013 1 (6), pp 146-150. DOI: 10.12691/ijp-1-6-3 Received October 31, 2013; Revised November 20, 2013; Accepted November 22, 2013 © 2013 Science and Education Publishing. All Rights Reserved. Cite this article: • Bergstrom, Arne. "Can Time in Special Relativity Appear Frozen despite the Clock Hypothesis Says it Cannot?." International Journal of Physics 1.6 (2013): 146-150. • Bergstrom, A. (2013). Can Time in Special Relativity Appear Frozen despite the Clock Hypothesis Says it Cannot?. International Journal of Physics, 1(6), 146-150. • Bergstrom, Arne. "Can Time in Special Relativity Appear Frozen despite the Clock Hypothesis Says it Cannot?." International Journal of Physics 1, no. 6 (2013): 146-150. Import into BibTeX Import into EndNote Import into RefMan Import into RefWorks 1. Introduction Presently there seems to be considerable consensus about the validity of the so-called clock hypothesis ^[1], which comparatively recently has been introduced into special relativity, and which states that accelerated motion of a system does not affect the passage of time in that system as observed from a reference frame at rest. Arguments for the clock hypothesis will be described below and shown to be persuasive. However, as with all statements based on generalisations of observations, it does not seem obvious that the clock hypothesis necessarily has the general validity normally attributed to it. To early men on some tropical island, there would similarly have been convincing empirical evidence that water could never exist in any odd frozen, solid form – a conclusion based on their observations of water in the ocean, in lakes, small ponds, rivers, wells, rain, etc. But, although outside their experience and unknown to them, ice would still exist. 
Could there analogously be special cases when time could be affected by accelerations and perhaps even be frozen to standstill when observed from a stationary system, and where thus the clock hypothesis might seem not be valid? We know that this can be the case in general relativity in the heavily curved spacetime at black holes, but could this perhaps also possibly happen in special relativity, i e in flat spacetime? One such candidate for frozen time in special relativity is the peculiar properties of hyperbolic motion as studied below, where the spacetime trajectory of a relativistic motion might be made to coincide with the hyperbola representing constant time in the co-moving frame in a Minkowski diagram. This would then suggest that time in some particular accelerated systems could indeed appear to be arbitrarily slowed down ^[2] or maybe even frozen when observed from a stationary system. This is the type of situation considered in detail below as a candidate for a case in special relativity when time is indeed affected by accelerations. The problem of accelerated systems in flat spacetime and hyperbolic motion has been extensively studied in the literature ^[3, 4] (cf also Rindler coordinates ^[5, 6]). In the following, special attention will be paid to the case of frozen time, since this is the case that most clearly differs from the normal relativistic time dilation occurring at velocities close to the speed of light. However, the reader is reminded that not just frozen time but even the slowing-down of time due to acceleration in hyperbolic motion as discussed below is in apparent conflict with the clock 2. The Case for Frozen Time The Lorentz transformation relates a stationary frame x’-axis with velocity in the positive direction along the x-axis of the frame. The corresponding Lorentz transformation is given as follows ^[7]. Figure 1. Minkowski diagram showing how the x’ and ’ axes in a moving frame relate to the x and x’ and ’ axes tilt more and more towards the light ray x, with a spacetime point (x[0],0) scaling as a hyperbola for increasing velocities. Note that this hyperbola may be identical to the hyperbolic trajectory (red) observed in the stationary frame due to a particle in constant acceleration in its co-moving frame. Hence the time shown by a clock moving along such a hyperbolic spacetime trajectory may appear frozen to a stationary observer as discussed in the text. Where = t/c and β = v/c, with t and v being the time and velocity in ordinary units, and c being the speed of light. The Lorentz transformation has the counterintuitive property that events that are simultaneous in one system are not simultaneous in a system moving with some velocity relative to this system. This is clearly illustrated in a Minkowski diagram ^[8] as in Figure 1. Note that from (1) follows that ^[9] As an example, we study the case when we assume the time to be frozen in the e g, x’, say x’ = x[0]. Then (1) gives from which we thus have From (3) we get a hyperbolic trajectory in the stationary frame (x[0] constant), which combined with (4) gives and we thus get the following expression for the velocity, and thus Differentiating (7) with respect to Figure 2. The position x(x[0] = 1) in the frozen-time case (3). Note that these curves are mathematically identical to the corresponding curves (red, coinciding) calculated from a different starting point using (10) through (15) with α’ = 1/x[0] Figure 2 shows the position in the stationary frame for the case of frozen time in (3). 3. 
Main Argument against Frozen Time However, the case of frozen time derived above would seem to be contradicted by the following argument. The relationship between rest-frame time and time in the co-moving frame ^[4, 10] to be as The relationship between the space coordinate x in the stationary frame and the time ^[10], We note that (10) and (11) give a hyperbolic motion ^[10] in the is a constant, cf (5) above), The velocity β and acceleration ^[4] (cf the identical expressions in the more general case in (19)-(21) below, and in particular also (4) and (9) above), Note that the time-dependent acceleration ^[4] to a constant acceleration (as it should). We now want to consider the possibility of frozen time in the moving frame as observed from the stationary frame. Solving for ^[4] From (16) follows that there seems to be no way (except asymptotically for i e have constant ) when observed during some interval in times 4. Counterargument in Favour of Frozen Time However, the above expression (16) for But that is not the only way to get hyperbolic motion. It is important to note that spacetime motion according to (10) and (11) above is not equivalent to hyperbolic motion: The motion defined by (10) and (11) above implies the hyperbolic motion in (12), but a hyperbolic motion as in (12) does not necessarily imply a motion according to (10) and (11) Thus (10) and (11) above represent only one example of possible relationships which, with a suitably chosen in the relativistic regime, would obviously not correspond to the motion discussed in (10) and (11) above, but which nevertheless gives the same hyperbola as in (12), Differentiating (17) we get for the velocity as measured in the stationary frame, and thus also Differentiating (19) and (17) with respect to time and using (20), we then get for the acceleration which, like in (15), thus again transforms to a constant acceleration Specifically, the relationship with frozen time in (3) can be regarded as a limiting case of (17) with []). Hence, even though (3) does not describe a legitimate trajectory resulting from a constant acceleration in the co-moving frame as derived in (10) and (11), it nevertheless gives the same hyperbola as shown by the equivalence between (5), (12), and (18) above. Hence constant accelerations in the co-moving frame lead to hyperbolic motion as in (12), but also other types of motion like (17), or the frozen-time case (3), lead to the same hyperbolic motion in the stationary frame, and also (surprisingly, cf explanation below) with seemingly the same acceleration (if in (5) we set the constant ). A somewhat bizarre fact is thus the following: Differentiating in order to get the accelerations in the three cases (3), (10) & (11), and (17), is thus shown above to give the same constant That also the other cases (3) and (17) seemingly give the same result for the acceleration is due to the fact that the constant ^ on the right-hand side in (5), (12), and (18), has the dual property of also being a scale factor, as clearly seen in (17). This is why it appears in (5) and (18) as if it would be an acceleration (since acceleration scales as length/time^2 and we thus specifically in this case get a net scale factor In summary, hyperbolic motion is thus more general than just the result of the specific constant proper acceleration given in (10) and (11) above. Time may thus indeed seem to be frozen when observing a system in hyperbolic motion from a stationary reference system. 
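The displayed equations referred to by number above do not appear in this rendering of the paper. For orientation, in units with c = 1 (so that τ = ct and β = v/c) the standard relations they presumably denote are the following; this is a hedged reconstruction from the surrounding prose, not the paper's own equations:

```latex
% Lorentz transformation, cf. (1):
x' = \gamma\,(x - \beta\tau), \qquad \tau' = \gamma\,(\tau - \beta x), \qquad \gamma = (1-\beta^2)^{-1/2}

% Locus of the point (x_0, 0) under increasing boosts -- the frozen-time hyperbola, cf. (3)-(7):
x^2 - \tau^2 = x_0^2, \qquad x(\tau) = \sqrt{x_0^2 + \tau^2}, \qquad \beta = \frac{\tau}{\sqrt{x_0^2 + \tau^2}}

% Constant proper acceleration \alpha' (hyperbolic motion), cf. (10)-(16), with \alpha' = 1/x_0:
\tau = \tfrac{1}{\alpha'}\sinh(\alpha'\tau'), \quad x = \tfrac{1}{\alpha'}\cosh(\alpha'\tau'), \quad
x^2 - \tau^2 = \tfrac{1}{\alpha'^2}, \quad \beta = \tanh(\alpha'\tau') = \frac{\tau}{x}, \quad
\frac{d\tau'}{d\tau} = \frac{1}{\sqrt{1 + \alpha'^2\tau^2}}
```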
This thus regardless of restrictions seemingly imposed by reference to results as in (10) and (16), which are derived for the particular constant proper acceleration discussed above. There are also other cases when we can have hyperbolic motion with what looks like a mathematically equivalent, constant proper acceleration . But this is then due to the appearance of 5. Frozen Time is a Limiting Case It should to be noted that frozen time in the accelerating frame corresponds to the special limiting case as discussed earlier. In accordance with (17) above, other functions ^[2], but not frozen, when observed from the stationary frame. The clock hypothesis introduced ^[11] into special relativity in the 1990s would, if generally valid, exclude that accelerated motion of a system could at all affect the apparent passage of time in that system as observed from a system at rest. On the other hand, some variant of a local acceleration/gravity equivalence ^[12] as originally introduced by Einstein ^[13] would still be expected to exist between a system in a uniform gravitational field and a system in constant acceleration, ie in hyperbolic motion. If so, then a system in hyperbolic motion would be expected to show an apparent slowing-down of time as discussed above, just as a system in a gravitational field does. In the limiting case, this slowing-down could then in both cases even result in an apparent freezing of time in systems subjected, respectively, to strong accelerations or extreme gravitational fields when observed from a reference system. The argument involving scaling as discussed above thus explains how time, despite the clock hypothesis, can still appear to be slowed down – or in the special case 6. Experimental Results Vs Frozen Time The clock hypothesis ^[1, 11], ie the assumption that the passage of time is unaffected by accelerations, is supported by some experiments. In these experiments, particles experiencing even as extreme transverse accelerations as of the order of 10^19 m/s^2 ^[14], or longitudinal accelerations of the order of 10^17 m/s^2 ^[15], respectively, have shown no effects of the acceleration in addition to the normal relativistic time dilation. These experimental results thus indicate that perhaps most types (cf the tropical island in the Introduction) of accelerations do not influence the passage of (dilated) time – not even larger ones than considered necessary in this paper. However, experimental results like those just mentioned do not contradict that time could be frozen or dilated in the special case of hyperbolic spacetime motion as discussed in this paper. In hyperbolic motion one starts, e g, with velocity zero and then increases this velocity with a constant proper acceleration. This is thus a very special type of acceleration, compared to which results from the types of accelerations used in the experiments mentioned above do not seem to be relevant. In these experiments there seems to be no correspondence to hyperbolic spacetime motion with its especially designed, constant proper longitudinal acceleration, which is the essence of the mechanism for frozen or dilated time discussed in this paper. 7. Conclusion So can time in a system in hyperbolic motion appear frozen when observed from a stationary system as the discussion above seems to suggest? Or is this not the case as the clock hypothesis states and as is also discussed above? 
Actually, the two alternatives could indeed both be right and not be in conflict, as will now be described in the following summary of the above arguments. Assume the clock hypothesis to be true. Then the successive inertial frames used in the derivation above would strictly behave as an accelerating frame as assumed in the derivation. According to the clock hypothesis, time would then not be affected by this acceleration. Completely unrelated to the clock hypothesis, the special case of hyperbolic motion would by definition give a spacetime trajectory in the form of a hyperbola as in (5) or (18), The hyperbola in (22) is thus described by the parameter Although this scale factor then only determines the size of the hyperbola and thus in principle has nothing to do with any acceleration, it nevertheless appears in the hyperbolic expression in exactly the same way as an acceleration, as seen by the equivalence between (12) and (22). And since it does exactly that, it also indeed happens to define a constant acceleration in the co-moving frame – even though it is actually only a scale factor. In the case this hyperbolic spacetime trajectory of the motion is arranged to coincide with a hyperbola in the Minkowski diagram describing constant time in the co-moving frame, then time in this co-moving frame would appear to a stationary observer to be frozen to standstill as discussed in this paper, and equivalently to what happens at a black hole in general relativity. But, as discussed above, this is then an effect only of the particular scaling properties of the hyperbolic motion and is thus not in conflict with the clock hypothesis as such. The above analysis thus shows that dilated or frozen time may occur even at moderate relativistic velocities for a special type of strong accelerations. This could in some future have important technological implications, and it seems essential that this fact should not be forgotten and buried in some general view that the clock hypothesis excludes also dilated or frozen time at moderate relativistic velocities in the special case of hyperbolic spacetime motion. [1] Barton G 1999, Introduction to the Relativity Principle (Wiley), Ch. 6. [2] Misner C, Thorne K, and Wheeler J 1973, Gravitation (Freeman), Exercise 6.3 (b), p. 167. [3] Misner C, Thorne K, and Wheeler J 1973, Gravitation (Freeman), Ch. 6. [4] Barton G 1999, Introduction to the Relativity Principle (Wiley), Ch. 8. [5] Rindler W 2001, Relativity: Special, General and Cosmological (Oxford University Press). [6] http://en.wikipedia.org/wiki/Rindler_coordinates. [7] Pauli W 1981, Theory of Relativity (Dover), p. 10. [8] Aharoni J 1985, The Special Theory of Relativity (Dover), Ch. 1.9. [9] Møller C, The Theory of Relativity 2nd Ed (Oxford University Press), p. 37. [10] Misner C, Thorne K, and Wheeler J 1972, Gravitation (Freeman 1973), p. 166. [11] Mainwaring S and Stedman G 1993, Phys. Rev. A, 47 3611. [12] http://en.wikipedia.org/wiki/Equivalence_principle. [13] Einstein A 1911, Ann. Phys., 35 898, Eng. transl. in The Principle of Relativity (Dover 1952). [14] Bailey J et al. 1977, Nature, 268 301. [15] Roos C et al. 1980, Nature, 286 24.
{"url":"http://pubs.sciepub.com/ijp/1/6/3/index.html","timestamp":"2014-04-20T03:11:37Z","content_type":null,"content_length":"82677","record_id":"<urn:uuid:c4e54125-f2ec-49d2-abc1-dbaada0a63de>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
MTMS: March 2013, Volume 18, Issue 7

Second Look - Permutations, Combinations, and Counting Problems

Burgers, Graphs, and Combinations
Article recounts sixth graders' investigations of combinations and graph theory that arose from a claim made on a Steak 'n Shake menu. Several problems for use in class are provided.

From Tessellations to Polyhedra: Big Polyhedra
Students explore relationships among polygons to discover which combinations tessellate; which combinations form polyhedra. Activity sheets, and answers, included.

Counting Attribute Blocks: Constructing Meaning for the Multiplication Principle
Attribute blocks help middle school students understand one of mathematics "big ideas", the fundamental counting principle, thus laying a good foundation for future studies in probability. Lesson plan included.

Illuminations Lesson: Fibonacci Trains
Students use Cuisenaire Rods to build trains of different lengths and investigate patterns, and make algebraic connections by writing rules and representing data in tables and graphs.

Data Analysis and Probability Standard for Grades 6-8
In grades 6-8, teachers should build on this base of experience to help students answer more-complex questions, such as those concerning relationships among populations or samples and those about relationships between two variables within one population or sample. Toward this end, new representations should be added to the students' repertoire. Box plots, for example, allow students to compare two or more samples, such as the heights of students in two different classes. Scatterplots allow students to study related pairs of characteristics in one sample, such as height versus arm span among students in one class. In addition, students can use and further develop their emerging understanding of proportionality in various aspects of their study of data and statistics.
{"url":"http://www.nctm.org/publications/toc.aspx?jrnl=MTMS&mn=3&y=2013","timestamp":"2014-04-19T00:12:42Z","content_type":null,"content_length":"50428","record_id":"<urn:uuid:47862395-cc2d-40f0-a1da-94474c8a91bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
injectivity : string -> thm

Produce injectivity theorem for an inductive type.

A call injectivity "ty", where "ty" is the name of a recursive type defined with define_type, returns an ``injectivity'' theorem asserting that the constructors of that type are injective, i.e. that equal constructed values must have been built from equal arguments. The effect is exactly the same as if prove_constructors_injective were applied to the recursion theorem produced by define_type, and the documentation for prove_constructors_injective gives a lengthier discussion.

Fails if ty is not the name of a recursive type, or if all its constructors are nullary.

# injectivity "num";;
val it : thm = |- !n n'. SUC n = SUC n' <=> n = n'

# injectivity "list";;
val it : thm = |- !a0 a1 a0' a1'. CONS a0 a1 = CONS a0' a1' <=> a0 = a0' /\ a1 = a1'

See also: cases, define_type, distinctness, prove_constructors_injective.
{"url":"http://www.cl.cam.ac.uk/~jrh13/hol-light/HTML/injectivity.html","timestamp":"2014-04-18T05:34:41Z","content_type":null,"content_length":"1859","record_id":"<urn:uuid:6c2c13cd-a5e9-4689-ab85-3317033cb194>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
ACT Math Houston, TX 77090 Successful, Experienced Educator ...In my home, I have taught piano, organ, and flute students, as well as tutoring all elementary, middle school, high school, and college students. My students have included students from surrounding public and private schools. Since I have retired, I am seeking to... Offering 10+ subjects including ACT Math
{"url":"http://www.wyzant.com/geo_Katy_ACT_Math_tutors.aspx?d=20&pagesize=5&pagenum=5","timestamp":"2014-04-17T11:05:29Z","content_type":null,"content_length":"58012","record_id":"<urn:uuid:9c69369c-b533-4d93-b6af-179b705938e5>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Pythagorean Identities

This kind of identity is unique: it is the only identity that has exponents in it. This leads to most of the confusion that occurs when people use these identities. These identities are probably the most recent ones, as there is no real underlying principle as there is with the Cofunction Identities. This one Greek guy saw the problems that come up when exponents are combined with trig functions, so he tried some things out and made our lives easier. He made the Pythagorean Identities. They make sense, but the only way to really figure them out is to plug in numbers. Luckily, in my experience, these functions are used more for answers than anything else, meaning we don't have to deal with sin^2(39,076°). Also, there are only three of them, which makes our lives infinitely easier. These identities are as follows:
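The list itself did not survive in this copy of the page; the three standard Pythagorean identities it refers to are:

```latex
\sin^2\theta + \cos^2\theta = 1, \qquad
1 + \tan^2\theta = \sec^2\theta, \qquad
1 + \cot^2\theta = \csc^2\theta.
```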
{"url":"http://draconicmath.weebly.com/pythagorean-identities.html","timestamp":"2014-04-16T13:03:14Z","content_type":null,"content_length":"16250","record_id":"<urn:uuid:2ca86c88-5b01-4a30-9994-49fc78a0b9a8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

May 8th 2011, 10:07 AM   #1   Senior Member   Dec 2010
Write the equation of the ellipse that meets each set of conditions.
1. The endpoints of the major axis are at (-11,5) and (7,5). The endpoints of the minor axis are (-2,9) and (-2,1).
2. The ellipse has center at the origin, a=1 and e=3/4.
How would I write these equations? I mean, for number one I got (2,5) as the center using the midpoint formula. But how would I get the denominators in the formula (x-h^2)/a^2 + (y-k^2)/b^2 =1

Quote: [original post quoted in full]
For 1), the midpoint is (-2, 5). I presume that is a typo. You are given, for example, (-11, 5) and (7, 5) as the major axis. In terms of a how long is the major axis?

I mean I meant in the formula I got the (x+2)^2+(y-5)^2 (x-h^2)/a^2 + (y-k^2)/b^2 =1 but for the center with the midpoint I did get (-2,5). Would I use the distance formula to find how long the major axis is?

Ummmm...What is (x+2)^2+(y-5)^2 (x-h^2)/a^2 + (y-k^2)/b^2 =1? It looks like part of a circle equation and a general ellipse equation? Yes, use the distance formula to get the size of the major axis. How do you find a from this number?

I got 18 so would a be 9, or half the major axis? For b I got 4.

May 8th 2011, 10:15 AM   #2
May 8th 2011, 10:21 AM   #3   Senior Member   Dec 2010
May 8th 2011, 10:36 AM   #4
May 8th 2011, 12:02 PM   #5   Senior Member   Dec 2010
May 8th 2011, 12:17 PM   #6
May 8th 2011, 01:33 PM   #7   Senior Member   Dec 2010
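For reference, a worked check of the numbers discussed in the thread (added here as a sketch, not part of the original posts; the orientation assumed for problem 2 is not stated in the problem):

```latex
% Standard form: \frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1.

% Problem 1: centre = midpoint of (-11,5) and (7,5) = (-2,5);
% major axis length |7-(-11)| = 18, so a = 9; minor axis from (-2,1) to (-2,9)
% has length 8, so b = 4.  The major axis is horizontal, hence
\frac{(x+2)^2}{81} + \frac{(y-5)^2}{16} = 1.

% Problem 2: centre at the origin, a = 1 and e = c/a = 3/4 gives c = 3/4,
% so b^2 = a^2 - c^2 = 1 - 9/16 = 7/16.  Taking the major axis along x,
x^2 + \frac{y^2}{7/16} = 1, \qquad\text{i.e.}\qquad x^2 + \frac{16\,y^2}{7} = 1.
```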
{"url":"http://mathhelpforum.com/pre-calculus/179901-ellipses.html","timestamp":"2014-04-17T16:25:58Z","content_type":null,"content_length":"49419","record_id":"<urn:uuid:f9da854e-94bd-47b6-89a3-999f63b66d90>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Interesting Psyco/Numeric/Numarray comparison
Tim Hochberg tim.hochberg at ieee.org
Tue Feb 4 08:52:05 CST 2003

I was inspired by Armin's latest Psyco version to try and see how well one could do with NumPy/NumArray implemented in Psycotic Python. I wrote a bare bones, pure Python, Numeric array class loosely based on Jnumeric (which in turn was loosely based on Numeric). The buffer is just Python's array.array. At the moment, all that one can do to the arrays is add and index them and the code is still a bit of a mess. I plan to clean things up over the next week in my copious free time <0.999 wink> and at that point it should be easy to add the remaining operations.

I benchmarked this code, which I'm calling Psymeric for the moment, against NumPy and Numarray to see how it did. I used a variety of array sizes, but mostly relatively large arrays of shape (500,100) and of type Float64 and Int32 (mixed and with consistent types) as well as scalar values. Looking at the benchmark data one comes to three main conclusions:

* For small arrays NumPy always wins. Both Numarray and Psymeric have much larger overhead.
* For large, contiguous arrays, Numarray is about twice as fast as either of the other two.
* For large, noncontiguous arrays, Psymeric and NumPy are ~20% faster than Numarray.

The impressive thing is that Psymeric is generally slightly faster than NumPy when adding two arrays. It's slightly slower (~10%) when adding an array and a scalar, although I suspect that could be fixed by some special casing a la Numarray. Adding two (500,100) arrays of type Float64 together results in the following timings:

                 psymeric    numpy       numarray
    contiguous   0.0034 s    0.0038 s    0.0019 s
    stride-2     0.0020 s    0.0023 s    0.0033 s

I'm not sure if this is important, but it is an impressive demonstration of Psyco! More later when I get the code a bit more cleaned up.
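The Psymeric code itself was not attached to this message. For a concrete picture, a minimal sketch of the kind of class described might look like the following; this is a reconstruction for illustration only (plain Python, no Psyco required), not the actual Psymeric code:

```python
from array import array

class PsyArray:
    """Toy 2-D numeric array over a flat array.array buffer, supporting
    element indexing and elementwise addition -- roughly the feature set
    benchmarked in the post."""
    def __init__(self, shape, typecode='d', data=None):
        self.shape = shape
        self.typecode = typecode
        n = shape[0] * shape[1]
        self.buffer = array(typecode, data if data is not None else [0] * n)

    def __getitem__(self, idx):
        i, j = idx
        return self.buffer[i * self.shape[1] + j]

    def __add__(self, other):
        # Elementwise add; a fuller implementation would also broadcast
        # scalars and honour strides for non-contiguous views.
        out = PsyArray(self.shape, self.typecode)
        a, b, o = self.buffer, other.buffer, out.buffer
        for k in range(len(a)):
            o[k] = a[k] + b[k]
        return out

a = PsyArray((500, 100), 'd', [1.0] * 50000)
b = PsyArray((500, 100), 'd', [2.0] * 50000)
c = a + b
print(c[0, 0])   # 3.0
```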
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2003-February/001947.html","timestamp":"2014-04-16T04:15:58Z","content_type":null,"content_length":"4473","record_id":"<urn:uuid:2f967caf-5c0f-40b6-9e31-dd2ba8940576>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
: LOGIC, PART 1 CIS587: LOGIC, PART 1 Herbrand Universe Reduction to Clausal Form Soundness, Completeness, Consistency, Satisfiability, Our interest is about: • Perceiving, that is, how we acquire information from our environment, • Knowledge Representation, that is, how we represent our understanding of the world, • Reasoning, that is, how we infer the implications of what we know and of the choices we have, and • Acting, that is, how we choose what we want to do and carry it out. Logic is a crucial tool in Knowledge Representation and Reasoning. In the following we deal with Logic in a fairly formal manner [but, sorry, no proofs here and essentially no examples]. The treatment of Logic in Rich and Knight, Chapter 5, is less complete than desirable. I highly recommend, for the basic ideas and definitions, to look at the course cs157: Logic and Automated Reasoning taught at Stanford by professor Genesereth and at his book Logical Foundations of Artificial Intelligence . You may also look at the lecture notes available for Logic in PAIL. Both PAIL and FLAIR have modules on various aspects of Logic and Theorem Proving. For the utterly devoted to Theorem Proving there is a very powerful tool called Otter. Logic is a formal system and it is not trivial to covert the statements that we use in normal speech into statements in logic. Here are two examples of english statements that you may wish to convert into a logic language: (Barwise and Etchemendy) If the unicorn is mythical then it is immortal, but if it is not mythical then it is a mortal mammal. If the uniforn is either immortal or a mammal, then it is horned. The unicorn is magical if it is horned. (Rich and Knight) 1. Marcus was a man 2. Marcus was a Pompeian 3. All Pompeian were Romans 4. Caesar was a ruler 5. All Romans were either loyal to Caesar or hated him 6. Everyone is loyal to someone 7. People only try to assassinate rulers they are not loyal to 8. Marcus tried to assassinate Caesar. In the following we introduce First Order Logic (FOL), and just mention that it is a generalization of Propositional Logic . Let us first introduce the symbols, or alphabet, being used. Beware that there are all sorts of slightly different ways to define FOL. Also, beware that I have problems in HTML in writing down some of the logical symbols. • Logical Symbols: These are symbols that have a standard meaning, like: AND, OR, NOT, ALL, EXISTS, IMPLIES, IFF, FALSE, =. • Non-Logical Symbols: divided in: □ Constants: ☆ Predicates: 1-ary, 2-ary, .., n-ary. These are usually just identifiers. ☆ Functions: 0-ary, 1-ary, 2-ary, .., n-ary. These are usually just identifiers. 0-ary functions are also called individual constants. Where predicates return true or false, functions can return any value. □ Variables: Usually an identifier. One needs to be able to distinguish the identifiers used for predicates, functions, and variables by using some appropriate convention, for example, capitals for function and predicate symbols and lower cases for variables. A Term is either an individual constant (a 0-ary function), or a variable, or an n-ary function applied to n terms: F(t1 t2 ..tn) [We will use both the notation F(t1 t2 ..tn) and the notation (F t1 t2 .. tn)] Atomic Formulae An Atomic Formula is either FALSE or an n-ary predicate applied to n terms: P(t1 t2 .. tn). In the case that "=" is a logical symbol in the language, (t1 = t2), where t1 and t2 are terms, is an atomic formula. 
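As a concrete illustration of the definitions above (an encoding chosen here for illustration, not part of the course notes), terms and atomic formulae can be represented as nested Python tuples, with small checkers that follow the definitions:

```python
# Encoding (an assumption for illustration): a variable is a lowercase string,
# e.g. 'x'; a function or predicate application is a tuple whose first element
# is the capitalized symbol, e.g. ('F', 'x') for F(x) and ('A',) for the
# individual constant A (a 0-ary function).

def is_variable(e):
    return isinstance(e, str) and e[:1].islower()

def is_term(e):
    """A term is a variable, or an n-ary function applied to n terms
    (individual constants being 0-ary functions)."""
    if is_variable(e):
        return True
    return (isinstance(e, tuple) and len(e) >= 1
            and isinstance(e[0], str) and e[0][:1].isupper()
            and all(is_term(arg) for arg in e[1:]))

def is_atomic_formula(e):
    """An atomic formula is FALSE or an n-ary predicate applied to n terms.
    (Syntactically, predicate and function symbols look alike here.)"""
    if e == 'FALSE':
        return True
    return (isinstance(e, tuple) and len(e) >= 1
            and isinstance(e[0], str) and e[0][:1].isupper()
            and all(is_term(arg) for arg in e[1:]))

print(is_term(('F', 'x', ('G', ('A',)))))         # True:  F(x G(A))
print(is_atomic_formula(('P', ('F', 'x'), 'y')))  # True:  P(F(x) y)
```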
A Literal is either an atomic formula (a Positive Literal), or the negation of an atomic formula (a Negative Literal). A Ground Literal is a variable-free literal. A Clause is a disjunction of literals. A Ground Clause is a variable-free clause. A Horn Clause is a clause with at most one positive literal. A Definite Clause is a Horn Clause with exactly one positive Literal. Notice that implications are equivalent to Horn or Definite clauses: (A IMPLIES B) is equivalent to ( (NOT A) OR B) (A AND B IMPLIES FALSE) is equivalent to ((NOT A) OR (NOT B)). A Formula is either: • an atomic formula, or • a Negation, i.e. the NOT of a formula, or • a Conjunctive Formula, i.e. the AND of formulae, or • a Disjunctive Formula, i.e. the OR of formulae, or • an Implication, that is a formula of the form (formula1 IMPLIES formula2), or • an Equivalence, that is a formula of the form (formula1 IFF formula2), or • a Universaly Quantified Formula, that is a formula of the form (ALL variable formula). We say that occurrences of variable are bound in formula [we should be more precise]. Or • a Existentially Quantified Formula, that is a formula of the form (EXISTS variable formula). We say that occurrences of variable are bound in formula [we should be more precise]. An occurrence of a variable in a formula that is not bound, is said to be free. A formula where all occurrences of variables are bound is called a closed formula, one where all variables are free is called an open formula. A formula that is the disjunction of clauses is said to be in Clausal Form. We shall see that there is a sense in which every formula is equivalent to a clausal form. Often it is convenient to refer to terms and formulae with a single name. Form or Expression is used to this end. • Given a term s, the result [substitution instance] of substituting a term t in s for a variable x, s[t/x], is: □ t, if s is the variable x □ y, if s is the variable y different from x □ F(s1[t/x] s2[t/x] .. sn[t/x]), if s is F(s1 s2 .. sn). • Given a formula A, the result (substitution instance) of substituting a term t in A for a variable x, A[t/x], is: □ FALSE, if A is FALSE, □ P(t1[t/x] t2[t/x] .. tn[t/x]), if A is P(t1 t2 .. tn), □ (B[t/x] AND C[t/x]) if A is (B AND C), and similarly for the other connectives, □ (ALL x B) if A is (ALL x B), (similarly for EXISTS), □ (ALL y B[t/x]), if A is (ALL y B) and y is different from x (similarly for EXISTS). The substitution [t/x] can be seen as a map from terms to terms and from formulae to formulae. We can define similarly [t1/x1 t2/x2 .. tn/xn], where t1 t2 .. tn are terms and x1 x2 .. xn are variables, as a map, the [simultaneous] substitution of x1 by t1, x2 by t2, .., of xn by tn. [If all the terms t1 .. tn are variables, the substitution is called an alphabetic variant, and if they are ground terms, it is called a ground substitution.] Note that a simultaneous substitution is not the same as a sequential substitution. For example, as we will see in the next section, P(x)[z/x u/z] is P(z), but P(x)[z/x][u/z] is P(u) • Given two substitutions S = [t1/x1 .. tn/xn] and V = [u1/y1 .. um/ym], the composition of S and V, S . V, is the substitution obtained by: □ Applying V to t1 .. tn [the operation on substitutions with just this property is called concatenation], and □ adding any pair uj/yj such that yj is not in {x1 .. xn}. For example: [G(x y)/z].[A/x B/y C/w D/z] is [G(A B)/z A/x B/y C/w]. Composition is an operation that is associative and non commutative • A set of forms f1 .. 
fn is unifiable iff there is a substitution S such that f1.S = f2.S = .. = fn.S. We then say that S is a unifier of the set. For example {P(x F(y) B) P(x F(B) B)} is unified by [A/x B/y] and also unified by [B/y]. • A Most General Unifier (MGU) of a set of forms f1 .. fn is a substitution S that unifies this set and such that for any other substitution T that unifies the set there is a substitution V such that S.V = T. The result of applying the MGU to the forms is called a Most General Instance (MGI). Here are some examples: FORMULAE MGU MGI (P x), (P A) [A/x] (P A) (P (F x) y (G y)), [x/y x/z] (P (F x) x (G x)) (P (F x) z (G x)) (F x (G y)), [(G u)/x y/z] (F (G u) (G y)) (F (G u) (G z)) (F x (G y)), Not Unifiable (F (G u) (H z)) (F x (G x) x), Not Unifiable (F (G u) (G (G z)) z) This last example is interesting: we first find that (G u) should replace x, then that (G z) should replace x; finally that x and z are equivalent. So we need x->(G z) and x->z to be both true. This would be possible only if z and (G z) were equivalent. That cannot happen for a finite term. To recognize cases such as this that do not allow unification [we cannot replace z by (G z) since z occurs in (G z)], we need what is called an Occur Test . Most Prolog implementation use Unification extensively but do not do the occur test for efficiency reasons. The determination of Most General Unifier is done by the Unification Algorithm. Here is the pseudo code for it, and here is the corresponding Lisp code. Before we can continue in the "syntactic" domain with concepts like Inference Rules and Proofs, we need to clarify the Semantics, or meaning, of First Order Logic. An L-Structure or Conceptualization for a language L is a structure M= (U,I), where: • U is a non-empty set, called the Domain, or Carrier, or Universe of Discourse of M, and • I is an Interpretation that associates to each n-ary function symbol F of L a map I(F): UxU..xU -> U and to each n-ary predicate symbol P of L a subset of UxU..xU. The set of functions (predicates) so introduced form the Functional Basis (Relational Basis) of the conceptualization. Given a language L and a conceptualization (U,I), an Assignment is a map from the variables of L to U. An X-Variant of an assignment s is an assignment that is identical to s everywhere except at x where it differs. Given a conceptualization M=(U,I) and an assignment s it is easy to extend s to map each term t of L to an individual s(t) in U by using induction on the structure of the term. • M satisfies a formula A under s iff □ A is atomic, say P(t1 .. tn), and (s(t1) ..s(tn)) is in I(P). □ A is (NOT B) and M does not satisfy B under s. □ A is (B OR C) and M satisfies B under s, or M satisfies C under s. [Similarly for all other connectives.] □ A is (ALL x B) and M satisfies B under all x-variants of s. □ A is (EXISTS x B) and M satisfies B under some x-variants of s. • Formula A is satisfiable in M iff there is an assignment s such that M satisfies A under s. • Formula A is satisfiable iff there is an L-structure M such that A is satisfiable in M. • Formula A is valid or logically true in M iff M satisfies A under any s. We then say that M is a model of A. • Formula A is Valid or Logically True iff for any L-structure M and any assignment s, M satisfies A under s. Some of these definitions can be made relative to a set of formulae GAMMA: • Formula A is a Logical Consequence of GAMMA in M iff M satisfies A under any s that also satisfies all the formulae in GAMMA. 
• Formula A is a Logical Consequence of GAMMA iff for any L-structure M, A is a logical consequence of GAMMA in M. At times instead of "A is a logical consequence of GAMMA" we say "GAMMA entails We say that formulae A and B are (logically) equivalent iff A is a logical consequence of {B} and B is a logical consequence of {A}. EXAMPLE 1: A Block World Here we look at a problem and see how to represent it in a language. We consider a simple world of blocks as described by the following figures: |a | |e | +--+ +--+ |a | |c | +--+ +--+ +--+ |b | |d | ======> |d | +--+ +--+ +--+ |c | |e | |b | --------------- -------------------- We see two possible states of the world. On the left is the current state, on the right a desired new state. A robot is available to do the transformation. To describe these worlds we can use a structure with domain U = {a b c d e}, and with predicates {ON, ABOVE, CLEAR, TABLE} with the following meaning: • ON: (ON x y) iff x is immediately above y. The interpretation of ON in the left world is {(a b) (b c) (d e)}, and in the right world is {(a e) (e c) (c d) (d b)}. • ABOVE: (ABOVE x y) iff x is above y. The interpretation of ABOVE [in the left world] is {(a b) (b c) (a c) (d e)} and in the right world is {(a e) (a c) (a d) (a b) (e c) (e d) (e b) (c d) (c b) (d b)} • CLEAR: (CLEAR x) iff x does not have anything above it. The interpretation of CLEAR [in the left world] is {a d} and in the right world is {a} • TABLE: (TABLE x) iff x rests directly on the table. The interpretation of TABLE [in the left world] is {c e} and in the right world id {b}. Examples of formulae true in the block world [both in the left and in the right state] are [these formulae are known as Non-Logical Axioms]: • (ON x y) IMPLIES (ABOVE x y) • ((ON x y) AND (ABOVE y z)) IMPLIES (ABOVE x z) • (ABOVE x y) IMPLIES (NOT (ABOVE y x)) • (CLEAR x) IFF (NOT (EXISTS y (ON y x))) • (TABLE x) IFF (NOT (EXISTS y (ON x y))) Note that there are things that we cannot say about the block world with the current functional and predicate basis unless we use equality. Namely, we cannot say as we would like that a block can be ON at most one other block. For that we need to say that if x is ON y and x is ON z then y is z. That is, we need to use a logic with equality. Not all formulae that are true on the left world are true on the right world and viceversa. For example, a formula true in the left world but not in the right world is (ON a b). Assertions about the left and right world can be in contradiction. For example (ABOVE b c) is true on left, (ABOVE c b) is true on right and together they contradict the non-logical axioms. This means that the theory that we have developed for talking about the block world can talk of only one world at a time. To talk about two worlds simultaneously we would need something like the Situation Calculus that we will study later. Herbrand Universe It is a good exercise to determine for given formulae if they are satisfied/valid on specific L-structures, and to determine, if they exist, models for them. A good starting point in this task, and useful for a number of other reasons, is the Herbrand Universe for this set of formulae. Say that {F01 .. F0n} are the individual constants in the formulae [if there are no such constants, then introduce one, say, F0]. Say that {F1 .. Fm} are all the non 0-ary function symbols occurring in the formulae. 
Then the set of (constant) terms obtained starting from the individual constants using the non 0-ary functions is called the Herbrand Universe for these formulae. For example, given the formula (P x A) OR (Q y), its Herbrand Universe is just {A}. Given the formula (P x (F y)) OR (Q A), its Herbrand Universe is {A (F A) (F (F A)) (F (F (F A))) ...}.

Reduction to Clausal Form

In the following we give an algorithm for deriving from a formula an equivalent clausal form through a series of truth-preserving transformations. You can find this algorithm here and the corresponding Lisp code here. We can state an (unproven by us) theorem:

Theorem: Every formula is equivalent to a clausal form.

We can thus, when we want, restrict our attention to such forms only.

An Inference Rule is a rule for obtaining a new formula [the consequence] from a set of given formulae [the premises]. A most famous inference rule is Modus Ponens, which from the premises {A, (NOT A) OR B} derives the consequence B. For example, from
{Sam is tall, if Sam is tall then Sam is unhappy}
we derive
Sam is unhappy.
When we introduce inference rules we want them to be Sound, that is, we want the consequence of the rule to be a logical consequence of the premises of the rule. Modus Ponens is sound. But the following rule, called Abduction, which from the premises {B, (NOT A) OR B} derives A, is not. For example, from
John is wet
If it is raining then John is wet
Abduction concludes
It is raining,
a conclusion that is usually, but not always, true [John takes a shower even when it is not raining].

A Logic or Deductive System is a language, plus a set of inference rules, plus a set of logical axioms [formulae that are valid].
A Deduction or Proof or Derivation in a deductive system D, given a set of formulae GAMMA, is a sequence of formulae B1 B2 .. Bn such that:
• for all i from 1 to n, Bi is either a logical axiom of D, or an element of GAMMA, or is obtained from a subset of {B1 B2 .. Bi-1} by using an inference rule of D.
In this case we say that Bn is Derived from GAMMA in D and, in the case that GAMMA is empty, that Bn is a Theorem of D.

Soundness, Completeness, Consistency, Satisfiability

A Logic D is Sound iff for all sets of formulae GAMMA and any formula A:
• if A is derived from GAMMA in D, then A is a logical consequence of GAMMA.
A Logic D is Complete iff for all sets of formulae GAMMA and any formula A:
• if A is a logical consequence of GAMMA, then A can be derived from GAMMA in D.
A Logic D is Refutation Complete iff for all sets of formulae GAMMA and any formula A:
• if A is a logical consequence of GAMMA, then the union of GAMMA and (NOT A) is inconsistent.
Note that if a Logic is Refutation Complete then we can enumerate all the logical consequences of GAMMA and, for any formula A, we can reduce the question of whether or not A is a logical consequence of GAMMA to the question of whether or not the union of GAMMA and (NOT A) is consistent.
We will work with logics that are both Sound and Complete, or at least Sound and Refutation Complete.

A Theory T consists of a logic and of a set of Non-Logical Axioms. For convenience we may refer, when not ambiguous, to the logic of T, or to the non-logical axioms of T, just as T.
The common situation is that we have in mind a well defined "world", or set of worlds. For example, we may know about the natural numbers and the arithmetic operations and relations. Or we may think of the block world. We choose a language to talk about these worlds. We introduce function and predicate symbols as is appropriate.
We then introduce formulae, called Non-Logical Axioms, to characterize the things that are true in the worlds of interest to us. We choose a logic, hopefully sound and (refutation) complete, to derive new facts about the worlds from the non-logical axioms.

A Theorem in a theory T is a formula A that can be derived in the logic of T from the non-logical axioms of T.

A Theory T is Consistent iff there is no formula A such that both A and NOT A are theorems of T; it is Inconsistent otherwise.
If a theory T is inconsistent then, for essentially any logic, any formula is a theorem of T. [Since T is inconsistent, there is a formula A such that both A and (NOT A) are theorems of T. It is hard to imagine a logic in which we cannot infer FALSE from A and (NOT A), or in which we cannot infer an arbitrary formula from FALSE. We will say that a logic that is at least this powerful is Adequate.]

A Theory T is Unsatisfiable if there is no structure where all the non-logical axioms of T are valid. Otherwise it is Satisfiable.

Given a Theory T, a formula A is a Logical Consequence of T if it is a logical consequence of the non-logical axioms of T.

Theorem: If the logic we are using is sound, then:
1. If a theory T is satisfiable, then T is consistent.
2. If the logic used is also adequate, then: if T is consistent, then T is satisfiable.
3. If a theory T is satisfiable, and by adding to T the non-logical axiom (NOT A) we get a theory that is not satisfiable, then A is a logical consequence of T.
4. If a theory T is satisfiable, and by adding the formula (NOT A) to T we get a theory that is inconsistent, then A is a logical consequence of T.
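To make the last theorem and the earlier remark on refutation completeness concrete, here is a small sketch — not part of the original notes, restricted to propositional formulae where satisfiability can be decided by a truth table — that tests logical consequence by checking whether GAMMA together with (NOT A) is unsatisfiable. The tuple encoding of formulae and the helper names are my own choices for illustration.

from itertools import product

# Formulae are nested tuples:
#   ("var", "P"), ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g)

def variables(f, acc=None):
    """Collect the propositional variables occurring in a formula."""
    acc = set() if acc is None else acc
    if f[0] == "var":
        acc.add(f[1])
    else:
        for sub in f[1:]:
            variables(sub, acc)
    return acc

def evaluate(f, v):
    """Truth value of formula f under the assignment v (a dict var -> bool)."""
    tag = f[0]
    if tag == "var":
        return v[f[1]]
    if tag == "not":
        return not evaluate(f[1], v)
    if tag == "and":
        return evaluate(f[1], v) and evaluate(f[2], v)
    if tag == "or":
        return evaluate(f[1], v) or evaluate(f[2], v)
    if tag == "implies":
        return (not evaluate(f[1], v)) or evaluate(f[2], v)
    raise ValueError("unknown connective: " + tag)

def satisfiable(formulas):
    """True iff some assignment makes every formula in the set true."""
    vs = sorted(set().union(*(variables(f) for f in formulas)))
    return any(
        all(evaluate(f, dict(zip(vs, bits))) for f in formulas)
        for bits in product([False, True], repeat=len(vs))
    )

def logical_consequence(gamma, a):
    """A is a logical consequence of GAMMA iff GAMMA + {NOT A} is unsatisfiable."""
    return not satisfiable(list(gamma) + [("not", a)])

# Modus Ponens pattern: from {A, A IMPLIES B} we should get B.
A, B = ("var", "A"), ("var", "B")
print(logical_consequence([A, ("implies", A, B)], B))   # True
# Abduction is unsound: {B, A IMPLIES B} does not entail A.
print(logical_consequence([B, ("implies", A, B)], A))   # False

The two printed lines mirror the soundness discussion above: the Modus Ponens pattern is a genuine logical consequence, while the Abduction pattern is not.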
{"url":"http://www.cis.temple.edu/~ingargio/cis587/readings/logic1.html","timestamp":"2014-04-20T10:46:21Z","content_type":null,"content_length":"23468","record_id":"<urn:uuid:f5b010d0-7a90-4bb9-ac6c-64f68ad4c333>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
USAMO 1973 #1

August 7, 2009
Posted by lumixedia in Problem-solving. Tags: contest math, geometry, olympiad math, USAMO, USAMO 1973

USAMO 1973 #1. Two points, ${P}$ and ${Q}$, lie in the interior of a regular tetrahedron ${ABCD}$. Prove that angle ${PAQ<60^{\circ}}$.

Solution. Let ${AP}$ intersect the interior of triangle ${BCD}$ at ${P_1}$ and ${AQ}$ intersect the interior of triangle ${BCD}$ at ${Q_1}$. Suppose WLOG that ${C}$ and ${D}$ are on the same side of ${P_1Q_1}$, and let ${P_1Q_1}$ intersect ${BC}$ at ${P_2}$ and ${BD}$ at ${Q_2}$. Clearly ${\angle PAQ<\angle P_2AQ_2}$, so it would suffice to prove that ${\angle P_2AQ_2\le60^{\circ}}$. Note that the shortest side of a triangle is always opposite the smallest angle, and that the smallest angle in a triangle is at most ${60^{\circ}}$. So we just need to show that ${P_2Q_2}$ is the shortest side of ${\triangle P_2AQ_2}$.

Consider the triangle ${P_2Q_2D}$. Let ${E}$ be on ${CD}$ so that ${Q_2E}$ is parallel to ${BC}$. We can see that ${\angle P_2Q_2D>\angle EQ_2D=60^{\circ}}$ and ${\angle P_2DQ_2<\angle EDQ_2=60^{\circ}}$, so ${\angle P_2Q_2D>\angle P_2DQ_2}$. This implies that ${P_2D>P_2Q_2}$. But from SAS congruency on triangles ${P_2CA}$ and ${P_2CD}$, we have ${P_2A=P_2D}$. Thus ${P_2A>P_2Q_2}$. By symmetry, ${Q_2A>P_2Q_2}$ and ${P_2Q_2}$ is the shortest side of ${\triangle P_2AQ_2}$, as desired.

I didn’t entirely follow your solution — is this the heuristic principle? We can at least show that the angle is $\leq 60$ for $P$ in the interior or on the boundary of the tetrahedron. Project $P, Q$ into the triangle $BCD$ by extending $AP, AQ$, which doesn’t change the angle. Then $PAQ$ will be as large as possible when $P, Q$ are as far away as possible (it should be possible to make this rigorous), which occurs when $P,Q$ are opposing vertices of the triangle $BCD$. So just check $P=B, Q=C$ by symmetry.

It’s just transitivity. We’re showing PAQ<60 by showing PAQ<P_2AQ_2 and P_2AQ_2<=60. The former is true because the angles being compared are in the same plane and one "contains" the other. The proof of the latter is basically what you just said, where much of my post is for the (it should be possible to make this rigorous) part.
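As a purely numerical sanity check of the claim (an illustration added here, not part of the original post — the coordinates below are one standard embedding of a regular tetrahedron, and all names are my own), one can sample random interior points and confirm that the angle at $A$ stays below $60^{\circ}$:

import random
import math

# One standard regular tetrahedron: four alternating vertices of a cube.
A = (1.0, 1.0, 1.0)
B = (1.0, -1.0, -1.0)
C = (-1.0, 1.0, -1.0)
D = (-1.0, -1.0, 1.0)

def random_interior_point():
    """Random point strictly inside ABCD via random barycentric weights."""
    w = [random.random() + 1e-9 for _ in range(4)]
    s = sum(w)
    w = [t / s for t in w]
    return tuple(sum(w[i] * V[k] for i, V in enumerate((A, B, C, D))) for k in range(3))

def angle_at_A(P, Q):
    """Angle PAQ in degrees."""
    u = [P[k] - A[k] for k in range(3)]
    v = [Q[k] - A[k] for k in range(3)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    cosang = max(-1.0, min(1.0, dot / (nu * nv)))   # clamp against rounding error
    return math.degrees(math.acos(cosang))

worst = max(angle_at_A(random_interior_point(), random_interior_point())
            for _ in range(100_000))
print("largest sampled angle PAQ:", round(worst, 3), "degrees (should stay below 60)")

With enough samples the largest observed angle creeps toward, but never reaches, $60^{\circ}$, matching the strict inequality in the problem.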
{"url":"http://deltaepsilons.wordpress.com/2009/08/07/usamo-1973-1/","timestamp":"2014-04-20T01:15:55Z","content_type":null,"content_length":"78087","record_id":"<urn:uuid:e801318f-9b0a-4a65-83f1-5710ff93661a>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Union, NJ Science Tutor Find an Union, NJ Science Tutor ...Of course that is worthwhile and important, however I am more interested in the students understanding the material. First of all, the material- the subject matter of the sciences is seriously the most incredible and interesting stuff you (or your child) might ever study. So teaching to the tes... 11 Subjects: including pharmacology, genetics, biology, chemistry ...It allows my students to better understand the relevance and necessity of knowing and using science in their everyday lives. Thank you for taking the time to read a little bit about myself. I look forward to working with you! 8 Subjects: including chemistry, algebra 2, biology, organic chemistry ...Additionally, I tutored philosophy/logic in addition to Latin and Greek for the athletics department; I was the only non-graduate student to be so employed. I also tutored a wider variety of subjects (English, study skills, etc.) for an interdepartmental peer tutoring group. All told, during my... 17 Subjects: including philosophy, reading, English, writing ...My major for my degree was in physiology, and I have been teaching physiology for the last five years. I completed a nutritional therapy diploma followed by a masters in nutrition at Kings College London University. I worked as a nutritionist in the UK for 3 years. 7 Subjects: including ecology, ACT Science, biology, physiology ...I have been tutoring for the STEM (Science, Technology, Engineering and Mathematics) for about a year. I tutor Math, Biology, Chemistry, Anatomy, Physics, Trigonometry, Algebra, Pre-calculus, Calculus, and Elementary (K-6th). I am very patient and persistent. I tend to change complicated problems to easy ones, by changing them into different steps. 13 Subjects: including biology, calculus, chemistry, trigonometry
{"url":"http://www.purplemath.com/Union_NJ_Science_tutors.php","timestamp":"2014-04-17T19:23:34Z","content_type":null,"content_length":"23817","record_id":"<urn:uuid:4b6c2107-b544-4b2a-b9e1-61adc68ba3db>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
Module Help November 5th 2011, 01:38 PM Module Help I've taken an undergraduate course in ring theory, but in my graduate algebra course we just started modules, and I'm kind of confused. - What's the difference between modules and Ideals? - Are modules not closed under multiplication? - What criteria is necessary to show something is a submodule? In my definition I just have, "closed under the action of ring elements (rn in N for a r in R n in N). But I'm not quite sure what that means. In some examples they just show a + rb is in N for all r in R, a,b in N. But sometimes they show r1(a+r2b) in N. What do I have to show in general? In the book they keep talking about how modules are very similar to vector spaces... but I'm not familar with the definition of a vector space, so could someone please explain it just in relation to rings and Ideal? Thanks. November 5th 2011, 02:49 PM Re: Module Help rings are a special kind of module. modules do not, in general, have a multiplication MxM-->M, but just a weaker "action" RxM-->M. suppose we have a submodule N of a module M. this means that (N,+) is a subgroup of (M,+) and that the action of R on M restricted to N yields an action on N, rn is in N for all r in R, and n in N. well, if M = R, then in fact, a sub-module of R is the same thing as an ideal of R. the idea is that modules are "more general" than rings, we don't have to define m'm, for all m,m' in M, we just need rm to be defined for some ring R (if the ring is Z, we just get an abelian group, so modules generalize abelian groups as well). the idea is this: suppose we have an abelian group, M. well, if we have a multiplication on M, such that left-multiplication by a in M and right-multiplication by a is an additive homomorphism: a(m+m') = am + am' (m+m')a = ma + m'a we have a ring. but we could just have: 1. r(m+m') = rm + rm', 2. (r+r')m = rm + r'm, where R is just a ring (not necessarily even a subset of M). the first says that multiplication by r is a group homomorphism M-->M (that is, multiplication by r is an endomorphism of M). the second says that the map r--->r.__ is a homomorphism from (R,+) to (End(M),+) (where the addition of two endomorphism f,g in End(M) is given by (f+g)(m) = f(m) + g(m)). but End(M) has a natural ring structure on it, not only can we add endomorphisms, but we can compose them, too, and composition is distributive over the addition of End(M). so it natural to require that the map r-->r.__ be a ring-homomorphism, as well: 3. (rs._)(m) = (rs)m = r(sm) = (r._)o(s._)(m). and if R has unity: 4. (1._)(m) = 1m = m = id_M(m). obviously, if we have a ring as our abelian group (R,+) = (M,+), the ring-homomorphism r-->r._ satisfies 1-4 as a straight-forward consequence of the distributive laws. that is, all rings are modules (over themselves, at the very least), but not all modules are rings. the criteria for showing N is a submodule of M are straight-forward: 1. N cannot be empty (we know that at the very least, 0 must be in N). 2. if a,b are in N, a+b must be in N. 3. if a is in N, and r is in R, ra must be in N. (condition 1 is often glossed over, but you should keep it in mind). note that if R has unity, then a+rb in N (for all a,b in N, r in R) is equivalent to 2 & 3: take r = 1, and we get a+b in N, take a = 0, and we get rb in N. if R has unity, the third condition is equivalent to the second, just take r1 = 1. these are just "one-step tests" as compared to "two-step tests" (we're just combining conditions 2&3 into one condition). 
November 5th 2011, 03:13 PM Re: Module Help Ohh, okay, thank you so much!
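A small worked example, added here for illustration (it is not from the original thread): take $R=\mathbb{Z}$, $M=\mathbb{Z}$ viewed as a module over itself, and $N=2\mathbb{Z}$. Then $N$ is nonempty since $0\in N$, and for $a=2k$, $b=2l$ in $N$ and any $r\in\mathbb{Z}$,

$a+rb = 2k + r(2l) = 2(k+rl) \in N,$

so the single condition "$a+rb\in N$ for all $a,b\in N$, $r\in R$" packages both closure under addition (take $r=1$) and closure under the ring action (take $a=0$), exactly as described above. By contrast, the set of odd integers fails already at the first step, since it does not contain $0$. In the same spirit, $2\mathbb{Z}$ is an ideal of the ring $\mathbb{Z}$, illustrating the point that a submodule of $R$ regarded as a module over itself is the same thing as an ideal of $R$.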
{"url":"http://mathhelpforum.com/advanced-algebra/191242-module-help-print.html","timestamp":"2014-04-20T09:57:51Z","content_type":null,"content_length":"7336","record_id":"<urn:uuid:68d3ffcf-b47c-4b68-b5ed-b3bba80f553c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
How to make a syllabus part 4: Getting it out there January 7, 2008, 9:22 pm This is the last in the series of “How to Make a Syllabus” articles, and I wanted to focus on an element of syllabi that I don’t hear talked about much: their life cycle. Namely, now that we know what a syllabus is for and what sorts of things ought to be on one (and not be on one), let’s talk about how to disseminate it and — very importantly — how to keep it in the game as the semester moves past day 1. A well-constructed syllabus is a one-stop shop for all the information students should need in a course. Any question, any piece of information that pertains to the course and is not already easily available elsewhere ought to be clearly written and easily accessible in the syllabus. A well-written syllabus has the power to remove a lot of guesswork and unpleasantness from the task of course management. But only if the syllabus is itself easily available, and only if students are constantly made aware of how useful it is. That is, there are two very important things to keep in mind about your syllabus once it is made: (1) it must be ubiquitous, being distributed in as many different formats and locations as possible; and (2) you must constantly refer to it as the main information source about course management to the students. Making the syllabus available is usually a no-brainer — you just photocopy the thing and hand it out on the first day of class. And most teachers realize that making the syllabus available in multiple formats is important; you can post a copy on your course web site, or email it out as an attachment after the first day. So this point isn’t difficult to grasp. The only thing to keep in mind is to carry this to extremes. Make the syllabus available in as many formats as possible: on paper, to be sure, and electronically in multiple file formats, making sure that PDF is one of those formats. For my part I usually do the following with my syllabi: • Print up paper copies for the first day of class. • Print up some more just to have on hand in the office if a student needs one. • Make electronic copies in PDF, MS Word, and RTF formats and post those on the course Angel site. This way, a student in the class is going to be practically bumping in to a copy of the syllabus wherever they go. That’s the idea — make the syllabus not only logical and transparent but also easy to find, or rather hard to get away from. In the past, I’ve also posted HTML versions of the syllabus on the web. HTML is an especially good format for syllabi because syllabi work well as hyperlinked documents. Students usually don’t read the syllabus in a linear way, starting from the beginning and working to the end; they read nonlinearly, diving in and searching for whatever piece of information is relevant to the question they have about the course. So I’ve made my syllabi before with hyperlinks to the main concepts and sections of the syllabus, allowing for nonlinear reading. Nowadays, you don’t really need to make an HTML document to accomplish this searchability, because PDF, Word, and RTF files can be searched. But how many students know how to implement a word search in their PDF viewer? So there’s still something to be said for hyperlinked syllabi. Or you might try making a syllabus wiki instead, using Wikispaces or something similar. (Wikispaces allows for on-the-fly LaTeX typesetting which makes it an especially good solution for hyperlinked online mathematical documents.) 
Now to the second point: What happens to the syllabus after day one. It’s very easy for the instructor to forget about the syllabus after the first day or the first week, and if the instructor forgets, then surely the students will too. So the instructor has to refer to the syllabus constantly when informational questions come up. I’ve had to develop the discipline, whenever a student asks an informational question such as “When are your office hours?” or “How many points can we total in the class?”, to NOT answer these questions directly, but rather answer with “That’s in the syllabus.” Where is your office? That’s in the syllabus. When is the final exam? That’s in the syllabus. What do I need to make on the final to get a C+ for the class? Use the formula I gave you in the syllabus. To the student, I’m sure my flat answer of “that’s in the syllabus” sounds like I am brushing them off. But what I’m doing is referring them to the place where all that stuff is written down. And frankly, a syllabus is good because it is a place where it’s all written down, and you don’t have to remember any of it. (Sound familiar?) Besides, students begin to realize that any question of this sort is always going to be answered the same way, and so they simply stop asking and look it up instead. Which is the whole I don’t do this personally, but I have also heard of profs who include syllabus-related questions on tests and quizzes, perhaps as extra credit. That’s a pretty good way to make sure students are looking at the syllabus occasionally throughout the semester and come to see it as a “friendly” document, a document that is on their side and helping them navigate the course. That’s about all I have to contribute on the topic of course syllabi. To sum up: • A syllabus is an information dump for all the parametric and structural information in the course. • A syllabus can have too little information in it, and too much information in it. Hitting the sweet spot is the challenge. • An item is to be included in the syllabus if and only if it is information that is relevant to the course that is not readily available elsewhere. • Make the syllabus readily available in a multitude of different formats and locations. • Refer to the syllabus constantly and explicitly throughout the semester as the main repository of course management information. Do these, and I think you’ll find a great weight lifted from your shoulders as you teach your course. And your students will have a little more brain power to devote to actually learning things in your class. This entry was posted in Education, Higher ed, Life in academia, Teaching and tagged college, course design, Education, higher education, syllabi, syllabus, Teaching. Bookmark the permalink.
{"url":"http://chronicle.com/blognetwork/castingoutnines/2008/01/07/how-to-make-a-syllabus-part-4-getting-it-out-there/","timestamp":"2014-04-21T09:48:20Z","content_type":null,"content_length":"62800","record_id":"<urn:uuid:208a58d9-6625-476e-94ad-cc0e31ed0563>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
matplot {graphics} Plot the columns of one matrix against the columns of another. matplot(x, y, type = "p", lty = 1:5, lwd = 1, lend = par("lend"), pch = NULL, col = 1:6, cex = NULL, bg = NA, xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL, ..., add = FALSE, verbose = getOption("verbose")) matpoints(x, y, type = "p", lty = 1:5, lwd = 1, pch = NULL, col = 1:6, ...) matlines (x, y, type = "l", lty = 1:5, lwd = 1, pch = NULL, col = 1:6, ...) vectors or matrices of data for plotting. The number of rows should match. If one of them are missing, the other is taken as y and an x vector of 1:n is used. Missing values (NAs) are allowed. character string (length 1 vector) or vector of 1-character strings indicating the type of plot for each column of y, see plot for all possible types. The first character of type defines the first plot, the second character the second, etc. Characters in type are cycled through; e.g., "pl" alternately plots points and lines. vector of line types, widths, and end styles. The first element is for the first column, the second element for the second column, etc., even if lines are not plotted for all columns. Line types will be used cyclically until all plots are drawn. character string or vector of 1-characters or integers for plotting characters, see points. The first character is the plotting-character for the first plot, the second for the second, etc. The default is the digits (1 through 9, 0) then the lowercase and uppercase letters. vector of colors. Colors are used cyclically. vector of character expansion sizes, used cyclically. This works as a multiple of par("cex"). NULL is equivalent to 1.0. vector of background (fill) colors for the open plot symbols given by pch = 21:25 as in points. The default NA corresponds to the one of the underlying function plot.xy. xlab, ylab titles for x and y axes, as in plot. xlim, ylim ranges of x and y axes, as in plot. Graphical parameters (see par) and any further arguments of plot, typically plot.default, may also be supplied as arguments to this function. Hence, the high-level graphics control arguments described under par and the arguments to title may be supplied to this function. logical. If TRUE, plots are added to current one, using points and lines. logical. If TRUE, write one line of what is done. Points involving missing values are not plotted. The first column of x is plotted against the first column of y, the second column of x against the second column of y, etc. If one matrix has fewer columns, plotting will cycle back through the columns again. (In particular, either x or y may be a vector, against which all columns of the other argument will be plotted.) The first element of col, cex, lty, lwd is used to plot the axes as well as the first line. Because plotting symbols are drawn with lines and because these functions may be changing the line style, you should probably specify lty = 1 when using plotting symbols. Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole. 
matplot((-4:5)^2, main = "Quadratic") # almost identical to plot(*) sines <- outer(1:20, 1:4, function(x, y) sin(x / 20 * pi * y)) matplot(sines, pch = 1:4, type = "o", col = rainbow(ncol(sines))) matplot(sines, type = "b", pch = 21:23, col = 2:5, bg = 2:5, main = "matplot(...., pch = 21:23, bg = 2:5)") x <- 0:50/50 matplot(x, outer(x, 1:8, function(x, k) sin(k*pi * x)), ylim = c(-2,2), type = "plobcsSh", main= "matplot(,type = \"plobcsSh\" )") ## pch & type = vector of 1-chars : matplot(x, outer(x, 1:4, function(x, k) sin(k*pi * x)), pch = letters[1:4], type = c("b","p","o")) lends <- c("round","butt","square") matplot(matrix(1:12, 4), type="c", lty=1, lwd=10, lend=lends) text(cbind(2.5, 2*c(1,3,5)-.4), lends, col= 1:3, cex = 1.5) table(iris$Species) # is data.frame with 'Species' factor iS <- iris$Species == "setosa" iV <- iris$Species == "versicolor" op <- par(bg = "bisque") matplot(c(1, 8), c(0, 4.5), type = "n", xlab = "Length", ylab = "Width", main = "Petal and Sepal Dimensions in Iris Blossoms") matpoints(iris[iS,c(1,3)], iris[iS,c(2,4)], pch = "sS", col = c(2,4)) matpoints(iris[iV,c(1,3)], iris[iV,c(2,4)], pch = "vV", col = c(2,4)) legend(1, 4, c(" Setosa Petals", " Setosa Sepals", "Versicolor Petals", "Versicolor Sepals"), pch = "sSvV", col = rep(c(2,4), 2)) nam.var <- colnames(iris)[-5] nam.spec <- as.character(iris[1+50*0:2, "Species"]) iris.S <- array(NA, dim = c(50,4,3), dimnames = list(NULL, nam.var, nam.spec)) for(i in 1:3) iris.S[,,i] <- data.matrix(iris[1:50+50*(i-1), -5]) matplot(iris.S[, "Petal.Length",], iris.S[, "Petal.Width",], pch = "SCV", col = rainbow(3, start = 0.8, end = 0.1), sub = paste(c("S", "C", "V"), dimnames(iris.S)[[3]], sep = "=", collapse= ", "), main = "Fisher's Iris Data") Documentation reproduced from R 3.0.2. License: GPL-2.
{"url":"http://www.inside-r.org/r-doc/graphics/matplot","timestamp":"2014-04-19T22:49:43Z","content_type":null,"content_length":"49489","record_id":"<urn:uuid:2a398f5b-bb95-4ee9-a0b5-2c78a813a90b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector Integral? April 17th 2013, 01:38 AM #1 Apr 2013 Vector Integral? Hey guys! I'm working through a proof from an old paper for my project and I'm really stumped with this notation. I want to understand the meaning of: For $x$, $y \in S^{2}$, $k e l$ $\int_{S^2} P_k({x}{z})P_l({y}{z}) dz=0$ Where the $P_k$ are Legendre polynomials degree $k$. Now I know that this is a kind of more generalised orthogonality relation and the legendre polynomials take scalar arguments (at least I think I know that...) in which case I think it would mean For $\boldsymbol{x}$, $\boldsymbol{y} \in S^{2}$, $k e l$ $\int_{S^2} P_k(\boldsymbol{x}\cdot \boldsymbol{z})P_l(\boldsymbol{y} \cdot \boldsymbol{z}) d\boldsymbol{z}=0$ But then I dont understand the meaning of $d\boldsymbol{z}$. Ive read in an old text that used the notation "for $x=(x_1,x_2, \ldots, x_n) ...$where $dx=dx_1 dx_2 \cdots dx_n$ is the ordinary Lebesque measure", but I haven't been taught the Lebesque stuff and it looks real tricky... So if anyone could shed a little light on whats going on, or point me to somewhere that does if its elementary (and im just being stupid) then id be so grateful! Re: Vector Integral? The "z" is the variable of integration and so is a "dummy" variable. It means "for constant vectors x and y, take the dot product with the vector z (so that the arguments of the Legendre functions are numbers), then integrate over the plane". Re: Vector Integral? When you say integrate over the plane, does that make sense when $S^2$ is the surface of the sphere so we have a surface integral? Re: Vector Integral? Solved $dz$ instead of $d^2 \Omega$ the usual area element on a sphere so it is just a lovely scalar function integrated over the surface of a sphere Thanks anyway! Re: Vector Integral? by the way so when you take the surface integral of the dot product of a vector function dotted into a normal vector (or other function)-for the dot product you get the component of the vector function in the normal direction, but to find the surface area of a surface don't you only need the area of the face? eg electromagnetism there is a dot product of vector function F(x,y,z) dot n also why is it important that both sides of the face in the limit go to zero? can you define surface integral as the limit sum like, it seems that you only need the area of the face without dot product $\sum_{I=1}^{\infty }F_{i}(x,y,z)\cdot \hat{n}\Delta S_{i}$ I guess sometimes you want to project different parts of a surface onto different regions in the coordinate planes, xy, yz, etc...? Thanks very much! Last edited by mathlover10; May 3rd 2013 at 03:54 AM. April 17th 2013, 05:08 AM #2 MHF Contributor Apr 2005 April 17th 2013, 05:53 AM #3 Apr 2013 April 24th 2013, 09:43 AM #4 Apr 2013 May 3rd 2013, 02:08 AM #5 Junior Member Dec 2012
{"url":"http://mathhelpforum.com/calculus/217650-vector-integral.html","timestamp":"2014-04-17T03:08:29Z","content_type":null,"content_length":"44502","record_id":"<urn:uuid:f96bd3f6-7d46-429b-a40c-8672e8575c6c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Grants and Special Grants and Special Projects Table of Contents Madelaine Bates and Roman Kossak: Graphing Workshops / Presidential Faculty Staff Development Grant (2000) Mathematics in Advanced Technological Education (ATE) Programs / NSF-sponsored project (1999) CUNY Logic Workshop / Faculty Development Program Grant from the Office for Instructional Technology and External Programs of CUNY Graduate Center. (1998-2000) Structure of Nonstandard Models of Arithmetic / PSC-CUNY Grant Riemannian Manifolds associated with two step nilpotent Lie groups (With A. Koranyi, Distinguished Professor, Z. Szabo, Professor, Lehman College, M.Moskowitz, Graduate Center)/ CUNY Collaborative Research Grant 2000-2002 Asymptotic Behavior of Maxwell-Klein-Gordon Equations in 4-d Minkowski Space / PSC-CUNY Research Grant, 2000-2002 Automorphisms of Riemann Surfaces / PSC-CUNY Grant (1999) A Relationship between Vertices and Quasi-isomorphisms for a Class of Bracket Groups / PSC-CUNY Grant (1999) On Quasi-representing Graphs for a Class of Butler Groups / PSC-CUNY Grant (2000) Graphing Workshop (M. Bates & R. Kossak): During the workshops, students explore the relation between a function and its graph using graphing calculator. The understanding of this concept is essential for precalculus and calculus. These workshops are open to any student in MTH 05 or MTH 06. The workshop is held for two hours a week for six weeks. ATE Programs (S. Forman): A working draft of resources and reports from an NSF-sponsored project intended to strengthen the role of mathematics in Advanced Technological Education (ATE) programs. Intended as a resource for ATE faculty and members of the mathematical community. CUNY Logic Workshop (R. Kossak): The workshop has been financed in 1998-2000 by the Faculty Development Program Grant from the Office for Instructional Technology and External Programs of CUNY Graduate Center. Roman Kossak of Bronx Community College and Joel Hamkins of the College of Staten Island are the co-directors of the workshop. Structure of Nonstandard Models of Arithmetic (R. Kossak): This is a two year grant to support preparation of the book with the same title. The book will cover the last 20years of research in model theory of arithmetic, in particular the study of the lattices of elementary substructures and the automorphism groups of nonstandard models of Peano Arithmetic. This is a joint project with Professor James Schmerl of the University of Connecticut. Riemannian Manifolds associated with two step nilpotent Lie groups (M. Psarelli): This is a CUNY Collaborative Research Grant with investigators A. Koranyi, Distinguished Professor, Z. Szabo, Professor, Lehman College. The investigators plan a collaboration between Bronx Community College, Lehman College and the Graduate Center in the study of differential geometry and topology of Riemannian Manifolds associated with certain 2-step nilpotent Lie groups, which include as a special case groups of Heisenberg type. Asymptotic Behavior of Maxwell-Klein-Gordon Equations in 4-d Minkowski Space (M. Psarelli): This research project is concerned with issues that arise in the electrodynamics of continuous media and are described within the framework of gauge field theory. The focus is on the study of the solutions of coupled Maxwell-Klein-Gordon fields equations in the 4-dimensional Minkowski space-time in the presence of mass and arbitrary size initial data that have charge. 
The problem belongs in the area of non-linear systems of hyperbolic partial differential equations and abelian gauge field theory. Automorphisms of Riemann Surfaces (A. Weaver): I will pursue several different approaches to the problem of matching a finite group with the the set of genera of surfaces on which it acts as a group of automorphisms. During the current grant cycle, I plan to (1) determine the genus spectrum of the cyclic group of order $pq$, where $p$ and $q$ are primes; (2) determine the genus spectrum of the dihedral $2$-groups; (3) determine the complete list of odd-order groups acting in general less than $50$; (4) classify the actions of elementary abelian $p$ groups up to topological equivalence. A Relationship between Vertices and Quasi-isomorphisms for a Class of Bracket Groups (P. Yom): I will characterize the class of Bracket groups up to quasi-isomorphisms by showing there is a sequence of vertex switches, where Bracket group is the cokernel of the diagonal embedding of intersection of $n$ subgroups $Ai$of $Q$ to the direct sum of $Ai$. That is, two Bracket groups $ [A1,A2,...,An]$ and $[B1,B2,...,Bn]$ are quasi-isomorphic if and only if there is a sequence of vertex switches which successfully replaces $Ai$ by $Bi$. On Quasi-representing Graphs for a Class of Butler Groups (P. Yom): Represent a group of the form $G(A1,...,An)$ in terms of type-labelled graph and analyze the quasi-isomorphisms of two groups using graph theoretic properties.
{"url":"http://fsw01.bcc.cuny.edu/mathdepartment/Grants/Grants.htm","timestamp":"2014-04-17T18:22:43Z","content_type":null,"content_length":"8773","record_id":"<urn:uuid:1a0686b4-77f0-4d18-99e7-fecc66a63c6f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
The unfolding of non-finitist arithmetic , 2010 "... The concept of the (full) unfolding U(S) of a schematic system S is used to answer the following question: Which operations and predicates, and which principles concerning them, ought to be accepted if one has accepted S? The program to determine U(S) for various systems S of foundational significan ..." Cited by 3 (3 self) Add to MetaCart The concept of the (full) unfolding U(S) of a schematic system S is used to answer the following question: Which operations and predicates, and which principles concerning them, ought to be accepted if one has accepted S? The program to determine U(S) for various systems S of foundational significance was previously carried out for a system of non-finitist arithmetic, NFA; it was shown that U (NFA) is prooftheoretically equivalent to predicative analysis. In the present paper we work out the unfolding notions for a basic schematic system of finitist arithmetic, FA, and for an extension of that by a form BR of the so-called Bar Rule. It is shown that U(FA) and U(FA + BR) are proof-theoretically equivalent, respectively, to Primitive Recursive Arithmetic, PRA, and to Peano Arithmetic, "... We study weak theories of truth over combinatory logic and their relationship to weak systems of explicit mathematics. In particular, we consider two truth theories TPR and TPT of primitive recursive and feasible strength. The latter theory is a novel abstract truth-theoretic setting which is able t ..." Cited by 2 (2 self) Add to MetaCart We study weak theories of truth over combinatory logic and their relationship to weak systems of explicit mathematics. In particular, we consider two truth theories TPR and TPT of primitive recursive and feasible strength. The latter theory is a novel abstract truth-theoretic setting which is able to interpret expressive feasible subsystems of explicit mathematics. 1 , 2012 "... In this paper we continue Feferman’s unfolding program initiated in [11] which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates and principles concerning them, which are implicit in the acceptance of S. The program has been carried thro ..." Cited by 1 (1 self) Add to MetaCart In this paper we continue Feferman’s unfolding program initiated in [11] which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates and principles concerning them, which are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm [13] and for a system FA (with and without Bar rule) in Feferman and Strahm [14]. The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words for all three unfolding systems are precisely those being computable in polynomial time. The upper bound computations make essential use of a specific theory of truth TPT over combinatory logic, which has recently been introduced in Eberhard and Strahm [7] and Eberhard [6] and whose involved proof-theoretic analysis is due to Eberhard [6]. 
The results of this paper were first announced in [8]. , 2005 "... Abstract. We reevaluate the claim that predicative reasoning (given the natural numbers) is limited by the Feferman-Schütte ordinal Γ0. First we comprehensively criticize the arguments that have been offered in support of this position. Then we analyze predicativism from first principles and develop ..." Add to MetaCart Abstract. We reevaluate the claim that predicative reasoning (given the natural numbers) is limited by the Feferman-Schütte ordinal Γ0. First we comprehensively criticize the arguments that have been offered in support of this position. Then we analyze predicativism from first principles and develop a general method for accessing ordinals which is predicatively valid according to this analysis. We find that the Veblen ordinal φΩω(0), and larger ordinals, are predicatively provable. The precise delineation of the extent of predicative reasoning is possibly one of the most remarkable modern results in the foundations of mathematics. Building on
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1253289","timestamp":"2014-04-18T13:40:12Z","content_type":null,"content_length":"20171","record_id":"<urn:uuid:544f8740-20e0-497c-a804-71e44a4b0896>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
approximation with taylor series April 27th 2009, 03:19 PM #1 Feb 2009 approximation with taylor series Over the interval $3<x<5,$ how poorly does the tangent line at $x=4$ approximate $\sqrt {3+x}$? How about the quadratic taylor polynomial? I'm really not sure of how to approach this one, help please! Start by writing the Taylor polynomials of order 1 and 2 out. Now there are two approaches one is to use the remainder term for the Taylor series. The second approach is to plot the three functions over the interval (3,5). Last edited by CaptainBlack; April 29th 2009 at 08:24 PM. April 29th 2009, 12:03 AM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/calculus/86077-approximation-taylor-series.html","timestamp":"2014-04-20T01:15:28Z","content_type":null,"content_length":"34027","record_id":"<urn:uuid:8ff07468-9e50-460c-870d-c63040374440>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
A Linear Equation; IVP; Piecewise Defined Funciton January 23rd 2011, 10:44 AM #1 Super Member Jun 2009 United States A Linear Equation; IVP; Piecewise Defined Funciton $\frac{dy}{dx}+2xy=f(x)$ such that $y(0)=2$ $f(x)=\left\{\begin{array}{cc} x, &\mbox { if } 0\leq x< 1\\ 0,&\mbox { if } x\geq 1\end{array}\right$ The integrating factor is $e^{\int 2xdx}=e^{x^2}$, so: $\int\frac{d}{dx}\left[e^{x^2}y\right]dx=\int xe^{x^2}dx$ The point $(0,2)$ gives: For $x\geq 1$ the equation is separable. $\int \frac{1}{y}\frac{dy}{dx}dx=-\int 2x dx=-x^2+C_2$ $ln\mid y\mid =-x^2+C_3$ $\mid y \mid = C_4e^{-x^2}$ $y=\pm C_4e^{-x^2}$ Problem: The solution in the book gives $\left(\frac{1}{2}e+\frac{3}{2}\right)e^{-x^2}$ for $x \geq 1$ and I don't see how to find C_4 because the point $(0,2)$ isn't on the interval $[1,\infty)$ Last edited by adkinsjr; January 23rd 2011 at 11:45 AM. Your solution looks fine, I don't see how they could have done that either. On a side note, you could also have used the Integrating Factor method with $\displaystyle x \geq 1$ (and it would probably have been easier, since you already had it). Yeah, the family of functions $y=Ce^{-x^2}$ is a solution for any value of C, and basically they have $C=\frac{1}{2}e+\frac{3}{2}$ which is a constant. The problem is that you can't get to this It turns out that this constant makes the piecewise defined solution continous at x=1 if you change the original intervals to $0\leq x \leq 1$ and $x>0$. The you can say: $lim_{x->1}Ce^{-x^2}=y(1)=\frac{1}{2}+\frac{3}{2e^{(1)^2}}=\frac{1 }{2}+\frac{3}{2e}$ However, this isn't correct because $y(1)eq \frac{1}{2}+\frac{3}{2e}$ on the original interval. But I'm pretty sure I see their mistake now. January 23rd 2011, 03:31 PM #2 January 24th 2011, 11:47 AM #3 Super Member Jun 2009 United States
{"url":"http://mathhelpforum.com/differential-equations/169120-linear-equation-ivp-piecewise-defined-funciton.html","timestamp":"2014-04-18T13:47:10Z","content_type":null,"content_length":"43179","record_id":"<urn:uuid:3992efd2-5361-4ebf-ba34-0f47f1cc5f17>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Differentiating Data Structures Mutually recursive types Last Friday I gave a talk about differentiating data structures at local student scientific group seminar; but I only read the original Conor McBride paper, I haven't known about this one. I must admit that I didn't understand fully his approach. What bothers me that I cannot tell if it works also for mutually recursive types, such as type B = leaf | node(B,C) and C = leaf | node(C,B,C) It appears that the one-hole context of such type is not of a form (a' list), but rather a mixed-step-type mutual lists: B' = nil | (B',C) | (C',B) C' = nil | (C', (B,C)|(C,B) ) | (B',(C,C)) However, when we convert the type definition to polynomial equation system (converting leaf to x) B(x) = x + B(x)C(x) C(x) = x + C(x)B(x)C(x) and differntiate it, we get B'(x) = 1 + B'(x)C(x) + C'(x)B(x) C'(x) = 1 + 2C'(x)B(x)C(x) + B'(x)C(x)^2 which is pretty similar to the type of one-hole context. To sum up - I wonder if the differentiaion of mutually recursive types is also covered. Perhaps it is, but lacks a good example to explain? Marcin Stefaniak at Thu, 2005-05-26 12:29 | to post comments mutually recursive datatypes .. are a special instance of nested types. e.g. your type can be written as B(x) = mu Y.X+mu C.X+CxBxC and similar for C(x). However, it may be worthwhile to spell out the rules for mutual types directly. Have a go! at Sat, 2005-05-28 15:17 | to post comments An informal explanation... Suppose F[X] is the type of containers of objects of type X. Then consider F[X+E]. This is the type of containers of objects that are either X or E. Suppose we can rewrite F[X+E]=F0[X]+E.F1[X]+E^2.F2 [X]+... where the Fi[X] are independent of E. Then the first term in the series should be containers with no E's in them (so we expect F0[X]=F[X]) and the second term corresponds to containers where there is precisely one E instead of an X somewhere in the structure. Remove that E to get the one hole context and we get F1[X]. Elementary calculus shows F1[X]=F'[X]. at Thu, 2005-05-26 15:01 | to post comments Andre Joyal... was differentiating datatypes in 1980. Read Andre Joyal, Une theorie combinatoire des series formelles, Adv. Math. 42 (1981), 1-82. for the beginnings of this theory, which continue in Andre Joyal, Foncteurs analytiques et especes de structures, in Combinatoire Enumerative, Springer Lecture Notes in Mathematics 1234, Springer, Berlin (1986), 126-159. But they are not all that easy to read. Instead, I recommend the excellent book F. Bergeron, G. Labelle, and P. Leroux, Combinatorial species and tree-like structures, Cambridge, Cambridge U. Press, 1998 This has already drawn the attention of many others, John Baez in particular. Jacques Carette at Fri, 2005-05-27 00:37 | to post comments I'm glad to hear this is a go I'm glad to hear this is a good book -- I've been thinking about reading it, and this is probably going to push me over the edge. at Fri, 2005-05-27 15:29 | to post comments
{"url":"http://lambda-the-ultimate.org/node/729","timestamp":"2014-04-16T22:07:08Z","content_type":null,"content_length":"16921","record_id":"<urn:uuid:afc631fe-471b-47a3-b338-eb87d65157f6>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
The Browder spectrum of an elementary operator Kitson, Derek (2011) The Browder spectrum of an elementary operator. In: Elementary operators and their applications. Operator Theory: Advances and Applications . Birkhäuser Verlag, Basel, pp. 17-24. ISBN 9783034800365 Full text not available from this repository. We relate the ascent and descent of n-tuples of multiplication operators Ma,b(u)=aub to that of the coefficient Hilbert space operators a, b. For example, if a=(a1,…,an) and b∗=(b∗1,…,b∗m) have finite non-zero ascent and descent s and t, respectively, then the (n+m) -tuple (La,Rb) of left and right multiplication operators has finite ascent and descent s+t−1. . Using these results we obtain a description of the Browder joint spectrum of (La,Rb) and provide formulae for the Browder spectrum of an elementary operator acting on B(H) or on a norm ideal of B(H). Actions (login required)
{"url":"http://eprints.lancs.ac.uk/58211/","timestamp":"2014-04-18T16:56:14Z","content_type":null,"content_length":"15073","record_id":"<urn:uuid:600e2705-025c-4885-aba8-de7e06b9818c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
10x^2-3x-3= A(x^2+1)+(Bx+C) (x-1) find A B C y=kx-5 y=6x-x^2-6 solve for k Last edited by amath9; June 12th 2009 at 08:05 PM. $10x^2-3x-3 =A(x^2+1)+(Bx+C)(x-1)$ $10x^2-3x-3=Ax^2+A +Bx^2-Bx+xC-C$ the factors which multiply with $x^2$ in the right side should equal to the factor of $x^2$ in the left side so you will have $10= A+B$ and the factors which are multiply with x in the right side should equal to the factor with x in the left side so you will have $-3=-B+C$ and finally the terms with no x in the right side should equal to the terms with no x in the left side so you will have this $-3 = A -C$ now you have three equation with three variables so you can solve them $10=A+B$ $-3=-B+C$ $-3 = A -C$ find the sum of the second and the third $-6=A-B$ find the sum of this and the first one $4=2A \Rightarrow A=2$ the rest for you Multiply out the right side: $10x^2- 3x- 3= Ax^2+ A+ Bx^2- Bx+ Cx- C$. $10x^2- 3x- 3= (A+ B)x^2+ (C- B)x- C$ In order that these be equal for all x, corresponding coefficients must be equal: 10= A+ B, -3= C- B, and -3= -C. Solve for A, B, and C. y=kx-5 y=6x-x^2-6 solve for k $y= kx- 5= 6x- x^2- 6$. Because the left side does not have an " $x^2$" term, there is NO value of k that will make this true for all x.
{"url":"http://mathhelpforum.com/algebra/92686-polynomials.html","timestamp":"2014-04-20T19:56:32Z","content_type":null,"content_length":"40186","record_id":"<urn:uuid:d382bfdf-b267-4184-b7e8-b0f5e0e03fb1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Lectures on Heat and Thermodynamics Heat and Thermodynamics Lecture Notes File : pdf, 1.1 MB, 89 pages HEAT Feeling and seeing temperature changes Classic Dramatic Uses of Temperature-Dependent Effects The First Thermometer Newton’s Anonymous Table of Temperatures Fahrenheit’s Excellent Thermometer Amontons’ Air Thermometer: Pressure Increases Linearly with Temperature Thermal Equilibrium and the Zeroth Law of Thermodynamics Measuring Heat Flow: a Unit of Heat Specific Heats and Calorimetry A Connection With Atomic Theory Latent Heat Coefficients of Expansion Gas Pressure Increase with Temperature Finding a Natural Temperature Scale The Gas Law Avogadro’s Hypothesis When Heat Flows, What, Exactly is Flowing? Lavoisier’s Caloric Fluid Theory The Industrial Revolution and the Water Wheel Measuring Power by Lifting Carnot’s Caloric Water Wheel How Efficient are these Machines? Count Rumford Rumford’s Theory of Heat Robert Mayer and the Color of Blood James Joule But Who Was First: Mayer or Joule? The Emergence of Energy Conservation Bernoulli’s Picture The Link between Molecular Energy and Pressure Maxwell finds the Velocity Distribution Velocity Space Maxwell’s Symmetry Argument What about Potential Energy? Degrees of Freedom and Equipartition of Energy Brownian Motion Introduction: the Ideal Gas Model, Heat, Work and Thermodynamics The Gas Specific Heats CV and CP Tracking a Gas in the (P, V) Plane: Isotherms and Adiabats Equation for an Adiabat The Ultimate in Fuel Efficiency Step 1: Isothermal Expansion Step 2: Adiabatic Expansion Steps 3 and 4: Completing the Cycle Efficiency of the Carnot Engine The Laws of Thermodynamics How the Second Law Limits Engine Efficiency Heat Changes along Different Paths from a to c are Different! But Something Heat Related is the Same: Introducing Entropy Finding the Entropy Difference for an Ideal Gas Entropy in Irreversible Change: Heat Flow Without Work Entropy Change without Heat Flow: Opening a Divided Box The Third Law of Thermodynamics Searching for a Molecular Description of Entropy Enter the Demon Boltzmann Makes the Breakthrough Epitaph: S = k ln W But What Are the Units for Measuring W ? A More Dynamic Picture The Removed Partition: What Are the Chances of the Gas Going Back? Demon Fluctuations Entropy and “Disorder” Summary: Entropy, Irreversibility and the Meaning of Never Everyday Examples of Irreversible Processes Difficulties Getting the Kinetic Theory Moving How Fast Are Smelly Molecules? The Mean Free Path Gas Viscosity Doesn’t Depend on Density! Gas Diffusion: the Pinball Scenario; Finding the Mean Free Path in Terms of the Molecular Diameter But the Pinball Picture is Too Simple: the Target Molecules Are Moving! If Gases Intermingle 0.5cm in One Second, How Far in One Hour? Actually Measuring Mean Free Paths Why did Newton get the Speed of Sound Wrong? Introduction: Jiggling Pollen Granules Einstein’s Theory: the Osmosis Analogy An Atmosphere of Yellow Spheres Langevin’s Theory Microscopic Picture of Conduction American Units Download : link please sent me the chemical engineering books.
{"url":"http://artikel-software.com/blog/2010/07/23/lectures-on-heat-and-thermodynamics/","timestamp":"2014-04-18T00:13:59Z","content_type":null,"content_length":"45857","record_id":"<urn:uuid:8660e02a-c0d9-48df-985a-3717deec6db8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Perpendicular distance between two lines November 30th 2008, 01:57 AM Perpendicular distance between two lines The line L1 passes through the pt A, whose position vector is i-j-5k and is parallel to the vector i-j-4k. The line L2 passes through the point B, whose position vector is 2i-9j-14k and is parallel to the vector 2i+5j+6k. The point P on L1 and point Q on L2 are such that PQ is perpendicular to both L1 and L2. a) Find the length of PQ I know the vector line of L1 and L2 is L1=i-j-5k+ t(i-j-4k) The line PQ is perpendicular to both L1 and L2 so I know the common perpendicular vector. (1,-1,-4) and (2,5,6) gives a directonal vector of (14i-14j+7k) or As the distance is perpendicular it must be the minimum? But as each line has a different parameter , s and t, I can't put them into the straight line equation and differentiate. The book gives an answer of 3, which is the smallest distance I was able to find between the two lines by using some trial and error with different values of s and t. November 30th 2008, 03:17 AM Andreas Goebel what are i, j, k? Are those the unit vectors? The perpendicular distance is the minimum distance, yes. You can compute that by taking a plane that contains L1 and is parallel to L2. The distance of B to that plane is the perpendicular distance. If you want to compute the exact points P and Q, you can tage the general Points of L1 and L2, compute the vector between them, and then solve the system of linear equations where that difference is perpendicular to both L1 and L2. November 30th 2008, 07:25 PM Thanks for your help. As you said to, I found the perpendicular and substituted the parametric equations of one line into this. These 3 equations equal a pt on L2 and I found the parameter of the unit vector in the line PQ. December 1st 2008, 10:37 PM The line L1 passes through the pt A, whose position vector is i-j-5k and is parallel to the vector i-j-4k. The line L2 passes through the point B, whose position vector is 2i-9j-14k and is parallel to the vector 2i+5j+6k. The point P on L1 and point Q on L2 are such that PQ is perpendicular to both L1 and L2. a) Find the length of PQ If you are only interested to find the length PQ without calculating the coordinates of P and Q then there is a shortcut: Let $\vec u , \vec v$ denote the direction vectors of the 2 lines. Then the equations of the lines become: $l_1: \vec r= \vec a + t\cdot \vec u$ $l_2: \vec r= \vec b + t\cdot \vec v$ With $\vec n = \vec u \times \vec v$ the perpendicular distance between the 2 skewed lines is calculated by: $d=\dfrac{(\vec b - \vec a) \cdot \vec n}{|\vec n|}$ With your values: $\vec n = [1,-1,-4] \times [2,5,6] = [14, -14, 7]$ and $|[14, 14, -7]| = 21$ Then $d = \dfrac{([2, -9, -14] - [1, -1, -5]) \cdot [14, -14, 7]}{21}=\dfrac{[1, -8, -9] \cdot [14, -14, 7]}{21}=\dfrac{63}{21}=3$ December 2nd 2008, 04:47 PM Thanks for that. That is a cleaner approach.
{"url":"http://mathhelpforum.com/geometry/62309-perpendicular-distance-between-two-lines-print.html","timestamp":"2014-04-20T12:40:25Z","content_type":null,"content_length":"9272","record_id":"<urn:uuid:c79673ec-9add-4180-8396-f5b1d61dbf73>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"}