content
stringlengths
86
994k
meta
stringlengths
288
619
How to Calculate Force of Gravity Edit Article Edited by Star*star, Carolyn Barratt, Teresa, Kieran Thompson and 5 others Gravity is one of the fundamental forces of Physics. 1. 1 Decide the range of the force. If you are calculating the force of gravity at the surface of a planet, use F = mg. If you are calculating the force of gravity between objects in space, use F = 2. 2 Convert all of your units into standard units. Distances should be in metres, and masses in kilograms. 3. 3 If you are using F = mg: □ Find g, the acceleration due to gravity on the planet in question. On the Earth's surface, this is 9.8 metres per second squared. □ Multiply g by the mass of the object. This gives the force of gravity in Newtons. 4. 4 If you are using F = GmM/r2: □ Find m and M, the masses of the two objects in kilograms. Multiply them together. □ Find r, the distance between the two objects in metres. Square it. □ Multiply the product of m and M by the gravitational constant G, 6.67 x 10^-11 (0.0000000000667). □ Divide the result of this by r squared to find the force of gravity in Newtons. • These two formulas should give the same result, but the shorter formula is simpler to use when discussing objects on a planet's surface. Article Info Categories: Science Recent edits by: RockyRaccoon, Maxwell K, Anamika In other languages: Español: Cómo calcular la fuerza de gravedad, Italiano: Come Calcolare la Forza di Gravità Thanks to all authors for creating a page that has been read 53,510 times. Was this article accurate?
{"url":"http://www.wikihow.com/Calculate-Force-of-Gravity","timestamp":"2014-04-19T08:20:44Z","content_type":null,"content_length":"63005","record_id":"<urn:uuid:4114135a-df70-4fd8-8676-fbdbdf7e0d77>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
HoF debate: Kirby Puckett looptid wrote: joshheines wrote:His career OPS+ is +124. Which means his OPS was 24% higher than the average player of his time. His career RC/27 was 6.34 over about 7800 plate appearances. Mattingly had a career OPS+ of 127 and 6.29 RC/27 of 7700 career plate appearances. They both played about the same defense over their careers too. I don't think he's in either, but don't compare the defensive contributions of a center fielder to a first baseman. Being a great defensive first baseman is like being the prettiest girl at fat According to Baseball Prospectus, when Mattingly is compared to 1b of his time and when Kirby is compared to other CF of his time, the two's defensive prowess is equal and BP calculates that each saved their teams comparable runs with said defense. It's easy, fun, and 100% safe! The reason Puckett got in is the same reason Jeter will get in, they were both the face of the game for their respective eras. I think a few things are being missed: 1) For those that lived through it, don't you remember the irrepressible joy you had watching this guy play?!?!? I remember watching replay after replay of his various exploits with my mouth hanging open. And despite the personal-life problems we would later discover about him, at the time there was no one who could pump you up about the game like Kirby. He brought a joy to the game that was 2) Hits: This guy was a hitting machine. Someone mentioned quickest to 2000 hits and then someone else said it's a slippery slope since what about 1750, 1500, or 1000? Well, he was the second fastest to 1000 until Ichiro came along, but the analogy is not quite the same. 3) Stats: His lifetime batting average of .318 was the highest of any right-handed batter since Joe DiMaggio retired in 1951. (from Wikpedia). Rarefied air... 4) Charity work and community involvement. 5) More stats: Hit .300 8 times, and over .290 3 times (in fact, .296 and .298), had an OBP+ of 132 in his injury-ended season - showing he probably could have gone on for several more years, led total bases twice despite averaging about 20HR per year. ukrneal wrote:I think a few things are being missed: 1) For those that lived through it, don't you remember the irrepressible joy you had watching this guy play?!?!? I remember watching replay after replay of his various exploits with my mouth hanging open. And despite the personal-life problems we would later discover about him, at the time there was no one who could pump you up about the game like Kirby. He brought a joy to the game that was inspiring. . I never thought of Kirby as the face of baseball. I was about 10 when Kirby started playing and was a pure baseball fan. Maybe it's my east coast bias but I always thought Mattingly, Boggs, Strawberry and Ripken were the faces of baseball. I'd even say Gwynn was more of the face of baseball than Puckett was. However, that's all a completely subjective argument that can be made for or against any HOFer or borderline HOFer because you have to be phenomenal to even be considered a borderline HOFer. ukrneal wrote:2) Hits: This guy was a hitting machine. Someone mentioned quickest to 2000 hits and then someone else said it's a slippery slope since what about 1750, 1500, or 1000? Well, he was the second fastest to 1000 until Ichiro came along, but the analogy is not quite the same. That's not the point of a slippery slope argument. The point of the slippery slope argument is that we have always considered 3000 hits the magical benchmark, right? So why would it matter how fast one individual got to 2000 hits? If Puckett was the fastest person to 1000 hits, but then just trolled along for the next 15 years as a journeyman and managed to get 2000 hits does he get extra points because he was the fastest guy to 1000 hits? No way. Hypothetically say that Player X comes along and gets 250 hits in each of his first four season and is the fastest player by a mile to 1000 hits. However, he suffers a career ending injury in the off-season when he tragically loses his legs to a great white shark while surfing off the coast of Maui on his honeymoon. Is Player X a HOFer because he was the fastest ever to 1000 hits? I think not. ukrneal wrote:3) Stats: His lifetime batting average of .318 was the highest of any right-handed batter since Joe DiMaggio retired in 1951. (from Wikpedia). Rarefied air... To what extent was this because Puckett was able to retire in his prime instead of playing three, four or even five years past his prime? Let's say Kirby played three years past his prime and average .280 per year past his prime. He averaged about 600 ABs per year. Given those static numbers, if Puckett played three years past his prime his career average would be .310. Nice. Four years past prime? .308. Five years past his prime? .306. That's a mighty big difference. Coincidently, it would have taken Puckett about four or five more years to get to 3000 hits. ukrneal wrote:4) Charity work and community involvement. Does that mean Al Leiter is a HOFer? There might not be a better charity and community guy that ever existed in baseball than Lieter. No? Go figure. ukrneal wrote:5) More stats: Hit .300 8 times, and over .290 3 times (in fact, .296 and .298), had an OBP+ of 132 in his injury-ended season - showing he probably could have gone on for several more years, led total bases twice despite averaging about 20HR per year. More stats, I won't argue the AVG because that's what Puckett had going for him. I won't argue the 132+ OPs because, in part, I already have. Yes, he got cut down in his prime or at least toward the end of his prime. However, it's the Hall of Fame not the Hall of Could Have Been. The Hall is for actual achievements not potential achievements. I will argue total bases. Puckett finished 2nd in the league in TB in 1986 with 365. Then he finished 1st in 1988 with 348 and 1st in 1992 with 313. In 1986, Puckett walked only 41 times. In 1988 he walked only 23 times. In 1992 he walked 44 times. The total base number rewards those players who hit the ball as opposed to get on base via the walk. Adding walks to TBs. In 1986 Puckett had 406 bases. In 1988, Puckett had 371 bases. In 1992 he had 357 bases. In 1986 Boggs only had 280 something TBs, but he walked 105 times. So he had 385 bases. Barfield has 398. It's very indicative that he only finished in the top five in times on base once in his career, 1986. Again, in 1988, Boggs had 400 bases to pucks 371. Canseco had nearly 420. Greenwell had 400 on the nose. McGriff had 375. Winfield had 365. That's just in the AL. I don't have time to look at 1992, but TB is indicative of nothing. It's easy, fun, and 100% safe! joshheines wrote: To what extent was this because Puckett was able to retire in his prime instead of playing three, four or even five years past his prime? Let's say Kirby played three years past his prime and average .280 per year past his prime. He averaged about 600 ABs per year. Given those static numbers, if Puckett played three years past his prime his career average would be .310. Nice. Four years past prime? .308. Five years past his prime? .306. That's a mighty big difference. Coincidently, it would have taken Puckett about four or five more years to get to 3000 hits. Not a good argument. Your using the fact that his career was cut short to diminish his achievements but someone will say that since you are bringing his short career into the conversation that if he woudn't have been injured, he would have accumulated the stats to make hima sure fire hall of famer. joshheines wrote: ukrneal wrote:I think a few things are being missed: 1) For those that lived through it, don't you remember the irrepressible joy you had watching this guy play?!?!? I remember watching replay after replay of his various exploits with my mouth hanging open. And despite the personal-life problems we would later discover about him, at the time there was no one who could pump you up about the game like Kirby. He brought a joy to the game that was inspiring. . I never thought of Kirby as the face of baseball. I was about 10 when Kirby started playing and was a pure baseball fan. Maybe it's my east coast bias but I always thought Mattingly, Boggs, Strawberry and Ripken were the faces of baseball. I'd even say Gwynn was more of the face of baseball than Puckett was. However, that's all a completely subjective argument that can be made for or against any HOFer or borderline HOFer because you have to be phenomenal to even be considered a borderline HOFer. ukrneal wrote:2) Hits: This guy was a hitting machine. Someone mentioned quickest to 2000 hits and then someone else said it's a slippery slope since what about 1750, 1500, or 1000? Well, he was the second fastest to 1000 until Ichiro came along, but the analogy is not quite the same. That's not the point of a slippery slope argument. The point of the slippery slope argument is that we have always considered 3000 hits the magical benchmark, right? So why would it matter how fast one individual got to 2000 hits? If Puckett was the fastest person to 1000 hits, but then just trolled along for the next 15 years as a journeyman and managed to get 2000 hits does he get extra points because he was the fastest guy to 1000 hits? No way. Hypothetically say that Player X comes along and gets 250 hits in each of his first four season and is the fastest player by a mile to 1000 hits. However, he suffers a career ending injury in the off-season when he tragically loses his legs to a great white shark while surfing off the coast of Maui on his honeymoon. Is Player X a HOFer because he was the fastest ever to 1000 hits? I think not. ukrneal wrote:3) Stats: His lifetime batting average of .318 was the highest of any right-handed batter since Joe DiMaggio retired in 1951. (from Wikpedia). Rarefied air... To what extent was this because Puckett was able to retire in his prime instead of playing three, four or even five years past his prime? Let's say Kirby played three years past his prime and average .280 per year past his prime. He averaged about 600 ABs per year. Given those static numbers, if Puckett played three years past his prime his career average would be .310. Nice. Four years past prime? .308. Five years past his prime? .306. That's a mighty big difference. Coincidently, it would have taken Puckett about four or five more years to get to 3000 hits. ukrneal wrote:4) Charity work and community involvement. Does that mean Al Leiter is a HOFer? There might not be a better charity and community guy that ever existed in baseball than Lieter. No? Go figure. ukrneal wrote:5) More stats: Hit .300 8 times, and over .290 3 times (in fact, .296 and .298), had an OBP+ of 132 in his injury-ended season - showing he probably could have gone on for several more years, led total bases twice despite averaging about 20HR per year. More stats, I won't argue the AVG because that's what Puckett had going for him. I won't argue the 132+ OPs because, in part, I already have. Yes, he got cut down in his prime or at least toward the end of his prime. However, it's the Hall of Fame not the Hall of Could Have Been. The Hall is for actual achievements not potential achievements. I will argue total bases. Puckett finished 2nd in the league in TB in 1986 with 365. Then he finished 1st in 1988 with 348 and 1st in 1992 with 313. In 1986, Puckett walked only 41 times. In 1988 he walked only 23 times. In 1992 he walked 44 times. The total base number rewards those players who hit the ball as opposed to get on base via the walk. Adding walks to TBs. In 1986 Puckett had 406 bases. In 1988, Puckett had 371 bases. In 1992 he had 357 bases. In 1986 Boggs only had 280 something TBs, but he walked 105 times. So he had 385 bases. Barfield has 398. It's very indicative that he only finished in the top five in times on base once in his career, 1986. Again, in 1988, Boggs had 400 bases to pucks 371. Canseco had nearly 420. Greenwell had 400 on the nose. McGriff had 375. Winfield had 365. That's just in the AL. I don't have time to look at 1992, but TB is indicative of nothing. Perhaps you were just too young to remember or pay much attention. I am from the east coast (NY area) and I ALWAYS thought of him as this genuine superstar. He was always on those highlight reals and he was always smiling. How much this really adds to HOF credentials, I'm not sure, but it was something not being mentioned much. Retiring in his prime probably helped career numbers, but it is not inconceivable that he could have kept that pace up a few more years. He was hitting .314 in that last year. So I really don't see him losing much. Anyway, I disagree with you and would still vote Kirby in. I'm glad he's in and feel that he is not bordeline. The man was a great hitter and the numbers bear that out. And since the primary role of a hitter is to hit (some would say get on base, but they don't pay players millions for walks even if you don't like it), I think he gets in, just like Gwynn and Boggs. joshheines wrote:All that I just checked and Tony Gwynn was younger when he got to 2000. Jeter was younger too. Actually I think a few guys did it younger and faster. They threw up the same type of stats when Jeter reached 2000 hits and were claiming he was one of the fastest to reach the mark. It took a little bit of digging for me to realize that they were talking about post-expansion and in terms of games played. Boggs is the only one who fits that category who made it faster than Puckett. Wade Boggs,1515 Kirby Puckett, 1542 Tony Gwynn, 1560 Rod Carew, 1567 Derek Jeter, 1571 Pete Rose, 1600 Paul Molitor, 1635 Don Mattingly, 1637 George Brett, 1659 Guys like Cobb and LaJoie demolished those numbers. Bury me a Royal. ukrneal wrote:Perhaps you were just too young to remember or pay much attention. I am from the east coast (NY area) and I ALWAYS thought of him as this genuine superstar. He was always on those highlight reals and he was always smiling. How much this really adds to HOF credentials, I'm not sure, but it was something not being mentioned much. I was 9 years old when he won the 91 WS for the Twins and he was definitely a bonafide superstar at that time. Everybody wanted his baseball cards. I had a friend who dressed up as Puckett for Halloween for godsake...and this was in Maryland. He made highlight reels in out of market cities on the evening news before the explosion of SportsCenter. I don't think there's any disputing the popularity or charisma of the guy. Further, I don't see a superstar as big Puckett today. Maybe my view is warped because I was an impressionable kid back then and I loved baseball, but I remember Puckett as a baseball god. Let's go O's. Let's go Mets. giants! wrote: joshheines wrote: To what extent was this because Puckett was able to retire in his prime instead of playing three, four or even five years past his prime? Let's say Kirby played three years past his prime and average .280 per year past his prime. He averaged about 600 ABs per year. Given those static numbers, if Puckett played three years past his prime his career average would be .310. Nice. Four years past prime? .308. Five years past his prime? .306. That's a mighty big difference. Coincidently, it would have taken Puckett about four or five more years to get to 3000 hits. Not a good argument. Your using the fact that his career was cut short to diminish his achievements but someone will say that since you are bringing his short career into the conversation that if he woudn't have been injured, he would have accumulated the stats to make hima sure fire hall of famer. While that's true, doesn't it just bolster my argument that Puckett is not a HOFer because he doesn't have those numbers? And certainly while his .318 average might rank among the elite, I'm left to wonder how many players of his era bettered his career .360 OBP. It's easy, fun, and 100% safe! ukrneal wrote:The man was a great hitter and the numbers bear that out. And since the primary role of a hitter is to hit (some would say get on base, but they don't pay players millions for walks even if you don't like it), I think he gets in, just like Gwynn and Boggs. Nope. No way. The primary role of a hitter is to score runs. You can't score runs if you don't get on base. It's not like Puckett was a power hitter. He was in the top 5 in singles in the AL from 1984-1992, with the exception of 1990. He finished in the top 5 in extra-base hits only three times. Those three times he finished 3rd, 4th and 5th. So, it's not like a walk for Puckett was much different than a single. Sure a walk doesn't drive a guy on first to third or from second to home, but it moves forced runners over just the same. If you want to discount the walk, go ahead, but it gives a team an opportunity to score. I'll go back to my VORP argument. From 1986-89 the guy was on the HOF track. He was perenially a top 10 player. However, beginning in 1990 he began to trail off. With the exception of 1992 he was never a HOF caliber player again. Even factoring in the 1992 season, from 1990 to his retirement in 1995, he averaged out as being about the 35th best/most valuable player considering position value. That's just not a HOFer too me. There were better guys of the time and Puckett was NEVER a dominant force. It's easy, fun, and 100% safe!
{"url":"http://www.fantasybaseballcafe.com/forums/viewtopic.php?t=235352&start=10","timestamp":"2014-04-20T01:40:18Z","content_type":null,"content_length":"93800","record_id":"<urn:uuid:dfd30b44-485d-403f-b6d4-275eaf4ada5d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
A pair of sequence problems October 7th 2009, 12:36 PM #1 Oct 2009 A pair of sequence problems Does this sequence diverge or converge? If it converges, find the limit- I know the harmonic sequence diverges, but I've taken natural logs of both sides and get stuck, at (1/n)(ln 1 - ln n) = ln L where L is the original limit... can this be shown to converge to 0 using the squeeze theorem for sequence convergence? (note that you can move csc to the numerator as $\sin{\sqrt{9n+4}}$) (both sequences are from n=1 to infinity for the natural numbers) Does this sequence diverge or converge? If it converges, find the limit- I know the harmonic sequence diverges, but I've taken natural logs of both sides and get stuck, at (1/n)(ln 1 - ln n) = ln L where L is the original limit... == I hope you already studied functions and their limits: then what you did is fine, and you need the lim (1/n) lim(1/n) when n--> oo, which can be calculated from what we about lim x ln x when x--> 0 : this limit's value is zero since x --> 0 much faster than ln x --> -oo when x --> 0. you can do it also using L'Hospital's rule ==. the original limit is 1. can this be shown to converge to 0 using the squeeze theorem for sequence convergence? (note that you can move csc to the numerator as $\sin{\sqrt{9n+4}}$) If csc is cosecant, the the function is [sin(sqrt(9n+4))]/2]*1/Sqrt(n) and since 1/Sqrt(n) --> 0 when n --> oo and [sin(sqrt(9n+4))]/2] is bounded the limit is zero. (both sequences are from n=1 to infinity for the natural numbers) October 7th 2009, 01:38 PM #2 Oct 2009
{"url":"http://mathhelpforum.com/calculus/106689-pair-sequence-problems.html","timestamp":"2014-04-18T14:35:56Z","content_type":null,"content_length":"35045","record_id":"<urn:uuid:365d2e36-09a5-4673-8773-0828851c8efb>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Multi-Step Equations with Like Terms 3.4: Multi-Step Equations with Like Terms Practice Multi-Step Equations with Like Terms Suppose you and your classmate are selling raffle tickets. Before today, $96 worth of tickets had been sold, and today, you sold 25 tickets and your classmate sold 35 tickets. Currently, $576 worth of tickets have been sold. Can you write and equation representing this scenario and solve it in multiple steps, including the combining of like terms, to determine how much each raffle ticket costs? In this Concept, you'll learn how to solve these types of problems. So far, you have learned how to solve one-step equations of the form $y=ax$$y = ax+b$multi-step equations and equations involving the Distributive Property. Solving Multi-Step Equations by Combining Like Terms In the last Concept, you learned the definition of like terms and how to combine such terms. We will use the following situation to further demonstrate solving equations involving like terms. You are hosting a Halloween party. You will need to provide 3 cans of soda per person, 4 slices of pizza per person, and 37 party favors. You have a total of 79 items. How many people are coming to your party? This situation has several pieces of information: soda cans, slices of pizza, and party favors. Translate this into an algebraic equation. $3p + 4p + 37 = 79$ This equation requires three steps to solve. In general, to solve any equation you should follow this procedure. Procedure to Solve Equations: 1. Remove any parentheses by using the Distributive Property or the Multiplication Property of Equality. 2. Simplify each side of the equation by combining like terms. 3. Isolate the $ax$ 4. Isolate the variable. Use the Multiplication Property of Equality to get the variable alone on one side of the equation. 5. Check your solution. Example A Determine the number of party-goers in the opening example. Solution: $3p + 4p + 37 = 79$ Combine like terms: $7p+37=79.$ Apply the Addition Property of Equality: $7p+37-37=79-37.$ Simplify: $7p=42.$ Apply the Multiplication Property of Equality: $7p \div 7=42 \div 7.$ The solution is $p=6$ There are six people coming to the party. Example B Kashmir needs to fence in his puppy. He will fence in three sides, connecting it to his back porch. He wants the length to be 12 feet, and he has 40 feet of fencing. How wide can Kashmir make his puppy enclosure? Solution: Translate the sentence into an algebraic equation. Let $w$ $w + w + 12 = 40$ Solve for $w$ $2w + 12 & = 40 \\2w + 12 - 12 & = 40-12 \\2w & = 28 \\2w \div 2 & = 28 \div 2 \\w & = 14$ The dimensions of the enclosure are 14 feet wide by 12 feet long. Example C Solve for $v$$3v+5-7v+18=17.$ $&3v+5-7v+18=17\\&3v-7v+18+5=17\\&-4v+23=17\\&-4v+23-23=17-23\\&-4v=-6\\&-\frac{1}{4}\cdot -4v=-\frac{1}{4}\cdot-6\\&v=\frac{6}{4}\\&v=\frac{3}{2}\\$ Checking the solution: $&3\cdot \frac{3}{2}+5-7\cdot \frac{3}{2}+18=17\\&\frac{9}{2}+5-\frac{21}{2}+18=17\\&-\frac{12}{2}+23=17\\&-6+23=17\\&17=17$ Guided Practice Solve for $w$$5\left(2w-\frac{3}{5}\right)+10=w+16$ $\text{Start by distributing the 5.} && 5\left(2w-\frac{3}{5}\right)+10&=w+16\\&& \Rightarrow 10w-3+10&=w+16\\\text{Combine like terms on the left side.} && \Rightarrow 10w+7&=w+16 \\\text{Subtract 7 and }w \text{ from each side.} && \Rightarrow 10w+7-7-w&=w+16-7-w\\&& \Rightarrow 9w&=9\\\text{Isolate }w \text{ by dividing each side by 9.} && \Rightarrow \frac{9w}{9}&=\frac{9}{9}\\&& \Rightarrow Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: Multi-Step Equations (15:01) In 1 – 3, solve the equation. 1. $f-1+2f+f-3=-4$ 2. $2x+3+5x=18$ 3. $5-7y-2y+10y=30$ In 4-8, write an equation and then solve for the variable. 4. Find four consecutive even integers whose sum is 244. 5. Four more than two-thirds of a number is 22. What is the number? 6. The total cost of lunch is $3.50, consisting of a juice, a sandwich, and a pear. The juice cost 1.5 times as much as the pear. The sandwich costs $1.40 more than the pear. What is the price of the pear? 7. Camden High has five times as many desktop computers as laptops. The school has 65 desktop computers. How many laptops does it have? 8. A realtor receives a commission of $7.00 for every $100 of a home’s selling price. How much was the selling price of a home if the realtor earned $5,389.12 in commission? Mixed Review 9. Simplify $1 \frac{6}{7} \times \frac{2}{3}$ 10. Define evaluate . 11. Simplify $\sqrt{75}$ 12. Solve for $m: \frac{1}{9} m=12$ 13. Evaluate: $((-5) - (-7) - (-3)) \times (-10)$ 14. Subtract: $0.125- \frac{1}{5}$ Files can only be attached to the latest version of Modality
{"url":"http://www.ck12.org/book/CK-12-Basic-Algebra-Concepts/r13/section/3.4/","timestamp":"2014-04-19T08:11:36Z","content_type":null,"content_length":"136107","record_id":"<urn:uuid:a16a8d9e-65ea-4537-9cdd-74ce88751dd7>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
SIAM 50th Anniversary and 2002 Annual Meeting Prizes, awards, and special lectures are shown in alphabetical order I. E. Block Community Lecture The I. E. Block Community Lecture was instituted in 1995 to encourage public appreciation of the excitement and vitality of applied mathematics by reaching out as broadly as possible to students, teachers, and members of the local community, as well as to SIAM members, researchers, and practitioners in fields related to applied and computational mathematics. The lecture is open to the public and is named in honor of I. Edward Block, a founder of SIAM who served as its Managing Director for nearly 20 years. Christopher Bregler begins his I. E. Block Community Lecture. 2002 Lecturer: Christoph Bregler, Stanford University Title of Lecture: "From Muybridge to Virtual Humans, the Mathematics of Motion Pictures" Christoph Bregler is an Assistant Professor of Computer Science at Stanford University since 1998. He received his M.S. and Ph.D. in Computer Science from UC-Berkeley in 1995 and 1998 respectively, and his Diplom from Karlsrühe University in 1993. He also worked for several companies including IBM, Hewlett-Packard, Interval, and Disney Feature Animation. He is a member of the Stanford Movement Research Group, which does research in Vision and Graphics with a focus on Motion Capture, human face, speech, and body movement analysis and synthesis, and artistic aspects of animation. The I. E. Block Community Lecturer receives a $500 honorarium and an engraved clock. Julian Cole Lectureship Awarded for an outstanding contribution to the mathematical characterization and solution of a challenging problem in the physical or biological sciences, or in engineering, or for the development of mathematical methods for the solution of such problems. The initial funds for this award were contributed by the students, friends, colleagues, and family of the late Julian Cole, a long-time SIAM member and volunteer. This is the first time the award is being given. Stephen J. Chapman and Tom Manteuffel 2002 Winner: Stephen Jonathan Chapman, University of Oxford, United Kingdom Citation: In recognition of his outstanding contributions to the mathematical theory of superconductivity, for the solution of particular problems in that field which will influence the emergence of this new technology, and for his contributions to new techniques and methods in applied mathematics. Title of Lecture: "Exponential Asymptotics and Linear and Nonlinear Eigenvalue Problems" Stephen Jonathan Chapman is Professor of Mathematics and Its Applications at the Mathematical Institute at Oxford University, England. He received his B.A. in Mathematics (First Class) from Merton College, Oxford University, and his Ph.D. from St. Catherine's College, Oxford University. Chapman was a Postdoctoral Fellow in the Department of Mathematics at Stanford University (1992-1993) and a Nuclear Electric Research Fellow at St. Catherine's College, Oxford University (1993-1995). He served as a Royal Society University Research Fellow from 1995 to 1999. The Julian Cole Lecturer receives a cash prize of $1,000 and a framed, hand-calligraphed certificate. Richard C. DiPrima Prize Established in 1986, the prize is awarded to a young scientist who has done outstanding research in applied mathematics (defined as those topics covered by SIAM journals) and who has completed his/ her doctoral dissertation and completed all other requirements for his/her doctorate during the period running from three years prior to the award date to one year prior to the award date. The prize, proposed by Gene H. Golub during his term as SIAM President, is funded by contributions from students, friends, colleagues, and family of the late Richard C. DiPrima, former SIAM Gang Hu and Tom Manteuffel 2002 Winner: Gang Hu, California Institute of Technology (now employed by Lehman Brothers, New York) Citation: For his dissertation, "Singularity Formation in Three-Dimensional Vortex Sheets," in which he addresses a long-standing problem in applied mathematics, namely the characterization of singularities on a vortex sheet, and uses a combination of modeling, asymptotics, rigorous analysis, and numerical computation to obtain and validate his results. Gang Hu received his B.S. Degree in 1995 from Tsinghua University in Beijing, China. He studied under Thomas Y. Hou in the Applied Mathematics Department of California Institute of Technology, where he received his Ph.D. He is currently employed by Lehman Brothers in New York in their Fixed Income Research Department, where he is working on a team to design an automatic trader. There is no lecture associated with this prize. The winner of the Richard C. DiPrima Prize receives $1,000 and a framed, hand-calligraphed certificate. Previous Recipients and Prize Specifications Frederick A. Howes Commendation for Public Service Created by the SIAM Board of Trustees in 1998 and renamed in 2001 in memory of Fred Howes, this award recognizes individuals for outstanding contributions to the promotion of computational and applied mathematics through public service. Marc Q. Jacobs, H. T. Banks, and Philippe Tondeur 2002 Recipients: Marc Q. Jacobs and Philippe Tondeur Marc Q. Jacobs, Air Force Office of Scientific Research (retired) Citation: For his exemplary service and leadership in the development of research programs in control theory and dynamical systems at the Air Force Office of Scientific Research. For more than a decade at AFOSR, he developed new research directions to significantly broaden the program, and he created strong ties between the Air Force laboratories and the academic research community. In doing so, he enhanced the visibility of mathematics at the Department of Defense, and he championed the application of dynamics and control to Air Force problems. His leadership had a profound impact on the discipline. Marc Q. Jacobs is widely known for his contributions to optimal control theory and dynamical systems. Prior to joining AFOSR for the second time in December 1991, he was Professor of Mathematics at the University of Missouri in Columbia, Missouri. In 1979, he was named to the Defoe Distinguished Chair in Mathematics at the University of Missouri in recognition of his distinguished research and teaching. Dr. Jacobs received his B.S., M.A., and Ph.D. in Mathematics at the University of Oklahoma in Norman. Philippe Tondeur, Division of Mathematical Sciences, National Science Foundation Citation: For his inspiring and energetic leadership of the Division of Mathematical Sciences at the National Science Foundation. During his three years at DMS, he raised the visibility of the mathematical sciences within the Foundation and the scientific community. Building on a foundation laid by his predecessors, he argued successfully for the need for increased funding in the mathematical sciences and developed programs to address the identified needs. His leadership had a profound impact on all of the mathematical sciences. Philippe Tondeur earned his Ph.D. in Mathematics from the University of Zurich and subsequently was a Research Fellow and Lecturer at the University of Paris, Harvard University, the University of California-Berkeley, and Wesleyan University. He served as Chair of the Department of Mathematics at the University of Illinois at Urbana-Champaign from 1996-1999. Dr. Tondeur is completing his term as Director of the Division of Mathematical Sciences at the National Science Foundation. There is no lecture associated with this Commendation. Commendation recipients are presented with a framed, hand-calligraphed certificate. JPBM Award in Communications This (usually) annual award was established by the AMS-MAA-SIAM Joint Policy Board for Mathematics (JPBM) in 1988 to reward and encourage journalists and other communicators who, on a sustained basis, bring accurate mathematical information to non-mathematical audiences. Any person, a mathematician or non-mathematician, is eligible as long as that person is primarily a communicator with non-mathematical audiences. Claire and Helaman Ferguson 2002 Winners: Claire and Helaman Ferguson, Laurel, Maryland Citation: The JPBM Communications Award is presented to the Fergusons, who together have dazzled the mathematical community and a far wider public with exquisite sculptures embodying mathematical ideas, along with artful and accessible essays and lectures elucidating the mathematical concepts. Helaman Ferguson began his studies as an apprentice to a stone mason, then studied painting at Hamilton College and sculpture in graduate school. He received his Ph.D. in Mathematics from the University of Washington in Seattle and taught the subject for 17 years at Brigham Young University. He now lives and works in Laurel, Maryland where he has set up an extensive studio in his home. In addition to selling his works, he designs algorithms for operating machinery and for scientific visualization. He has exhibited and sold his sculptures worldwide. Claire Ferguson has worked closely with Helaman as curator, expositor, and publicist on his mathematical sculptures. She is author of the book "Helaman Ferguson, Mathematics in Stone and Bronze." She is also an artist in her own right and has won scholarships and prizes for her work. There is no lecture associated with this award. The winner of the JPBM Communications Award receives a $1,000 cash prize and a hand-calligraphed certificate. SIAM Award in the Mathematical Contest in Modeling The SIAM Award in the Mathematical Contest in Modeling (MCM), established in 1988, is awarded to two of the teams judged as "Outstanding" in the annual MCM. One winning team of students is chosen for each of the problems posed in the MCM. From left to right: James Case, David Arthur, Sam Malone, Ben Fusaro, Ernie Esser, Ryan Card, Jeff Giansiracusa, and Tom Manteuffel 2002 Winners: • Problem A, The Continuous Problem, "Wind and Waterspray." University of Washington Department of Mathematics Seattle, WA Students: Ryan Card, Ernie Esser, Jeff Giansiracusa Faculty Advisor: Professor James A. Morrow • Problem B, The Discrete Problem, "Airline Overbooking." Duke University Department of Mathematics Durham, NC Students: David Arthur, Sam Malone, Oaz Nir Faculty Advisor: Professor David P. Kraines Winning students each receive $800 (prize and travel), complimentary membership in SIAM for three years, and a framed, hand-calligraphed certificate for the students' schools. Previous Recipients and Prize Specifications The George Pólya Prize The George Pólya Prize in 2002 is given for a notable contribution in an area of interest to George Pólya such as approximation theory, complex analysis, number theory, orthogonal polynomials, probability theory, or mathematical discovery and learning. Harold Widom, Craig A. Tracy, and Tom Manteuffel 2002 Winners: Craig A. Tracy and Harold Widom Citation: The George Pólya Prize is awarded to Craig A. Tracy and Harold Widom for their remarkable work on random matrix theory, a subject with multiple connections to complex analysis, orthogonal polynomials, probability theory and integral systems. Craig A. Tracy, University of California, Davis Craig A. Tracy is Professor of Mathematics at University of California, Davis. He received his Ph.D. in Physics at SUNY at Stony Brook in 1973 under the thesis supervision of Barry McCoy. He did postdoctoral work at the University of Rochester (1973-1975), SUNY at Stony Brook (1975-1978) and was on the faculty of Dartmouth College (1978-1984) before joining UC Davis. Professor Tracy was Chair of the Department of Mathematics at UC Davis from 1994 to 1998. Harold Widom, University of California, Santa Cruz Harold Widom is Professor Emeritus in Applied Sciences at the University of California, Santa Cruz. He attended Stuyvesant High School in New York and received his B.S. Degree from City College of New York. He received his M.S. and Ph.D. in Mathematics from the University of Chicago. Professor Widom taught mathematics at Cornell University from 1955 to 1968. He has been at the University of California, Santa Cruz since 1968 and received his Emeritus status in 1994. He has been a Sloan Fellow and has won two Guggenheim Fellowships. Title of Lecture: "New Universal Limit Laws: Largest Eigenvalue Distributions of Random Matrices and Their Applications" The Pólya Prize consists of a $20,000 cash award (to be shared by this year's recipients) and an engraved medal. Previous Recipients and Prize Specifications The W. T. and Idalia Reid Prize The W. T. and Idalia Reid Prize in Mathematics was established by SIAM in 1993 to recognize outstanding work in, or other contributions to, the broadly defined areas of differential equations and control theory. The prize, given annually, may be awarded either for a single notable achievement or a collection of such achievements. The prize fund was endowed by the late Mrs. Idalia Reid to honor her husband. H. T. Banks and Tom Manteuffel 2002 Winner: H. Thomas Banks, North Carolina State University Citation: The W. T. and Idalia Reid Prize is awarded to H. Thomas Banks for his fundamental contributions to the theoretical and computational foundations in the identification and control of infinite dimensional systems. Title of Lecture: "Riccati Equations in Feedback Control and Estimation" H. Thomas Banks is a University Professor and Drexel Professor of Mathematics at North Carolina State University where he directs the Center for Research in Scientific Computation. Prior to that, he was a Professor of Mathematics at the University of Southern California where he served as the Director for the Center for Applied Mathematical Sciences from 1989 to 1992, and he served on the faculty at Brown University's Division of Applied Mathematics from 1968 to 1989. He received his Ph.D. in Applied Mathematics from Purdue University in 1967, and a B.S. in Applied Mathematics from North Carolina State University. His research interests are in estimation and control of distributed parameter systems, along with computational methods, acoustics elasticity, electromagnetics, fluid /structure interactions, mathematical biology, smart materials, and structures. The W. T. and Idalia Reid Lecturer receives $10,000 in cash and an engraved medal. Previous Recipients and Prize Specifications Pierre-Antoine Absil and Andreas Waechter Dong Eui Chang and Tom Manteuffel Atife Caglar and Tom Manteuffel Philipp Kuegler and Tom Manteuffel SIAM Student Paper Prizes The SIAM Student Paper Prizes are awarded every year to the student author(s) of the most outstanding paper(s) submitted to the SIAM Student Paper Competition. This award is based solely on the merit and content of the student's contribution to the submitted paper. The purpose of the Student Paper Prizes is to recognize outstanding scholarship by students in applied mathematics or computing. 2002 Winners: Pierre-Antoine Absil University of Liege, Belgium Title: "A Grassmann-Rayleigh Quotient Iteration for Computing Invariant Subspaces" Dong Eui Chang California Institute of Technology Title: "Controlled Lagrangian and Hamiltonian Systems" Andreas Waechter Carnegie Mellon University (now employed by IBM T.J. Watson Research Center) Title: "Global and Local Convergence of Line Search Filter Methods for Nonlinear Programming" Honorable Mention: Atife Caglar University of Pittsburgh Title: "Weak Imposition of Boundary Conditions for the Navier-Stokes Equations by a Penalty-Lagrange Multiplier Method" John Dunagan Massachusetts Institute of Technology Title: "Optimal Outlier Removal in High-Dimensional Spaces" Philipp Kuegler Johannes Kepler University Linz, Austria Title: "Identification of a Temperature Dependent Heat Conductivity from Single Boundary Measurements" SIAM Student Paper Prize winners receive $1,500 (prize and travel) and a hand-calligraphed certificate. Previous Recipients and Prize Specifications John von Neumann Lecture Established in 1959, this lecture is in the form of an honorarium for an invited lecture. The lecturer will survey and evaluate a significant and useful contribution to mathematics and its applications. It may be awarded to a mathematician or to a scientist in another field, but, in either case, the recipient should be one who has made distinguished contributions to pure and/or applied 2002 Lecturer: Eric S. Lander, Whitehead Institute, MIT Center for Genome Research Citation: One of the driving forces behind today's revolution in genomics, the study of all of the genes in an organism and how they function together in health and disease, Eric Lander is being recognized for his work as a geneticist, molecular biologist, and mathematician. Under his leadership, the Center for Genome Research has been responsible for developing most of the key tools of modern mammalian genomics. He also has pioneered the application of genomics to a wide range of medical problems, including cancer, diabetes, hypertension, inflammatory bowel disease, and asthma. Title of Lecture: "The Human Genome and Beyond" Eric Lander earned his B. A. in Mathematics from Princeton University in 1978 and his Ph.D. in Mathematics from Oxford University in 1981. He was Assistant and Associate Professor of Managerial Economics at the Harvard Business School from 1981 to 1990. Dr. Lander joined the Whitehead Institute as a Fellow in 1986 and joined the faculty of the Whitehead Institute and MIT in 1989 where he is a Full Professor. He was named a Rhodes Scholar in 1978 and received a MacArthur Foundation Fellowship in 1987 for his work in genetics. He was elected to the U.S. National Academy of Sciences in 1997, the U.S. Institute of Medicine in 1998, and the American Academy of Arts and Sciences in 1999. The John von Neumann Lecturer receives an honorarium of $2,500 and a hand-calligraphed certificate. Previous Recipients and Prize Specifications
{"url":"https://www.siam.org/prizes/2002_luncheon.php","timestamp":"2014-04-17T03:49:46Z","content_type":null,"content_length":"29389","record_id":"<urn:uuid:7914fd54-6ad2-4163-988d-a27753de1146>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Ocean Fog using Direct3D 10 The purpose of this project was to investigate how we could effectively render a realistic Ocean scene on differing graphics solutions while trying to provide good, current, working class set of data to the graphics community. Given the complexities involved with rendering an Ocean as well as fog effects we chose to use a projected grid concept as our baseline to start with (as it was very realistic). We then ported the original Direct3D 9 code to Direct3D 10 and all the additional effects that we needed to convert to Shader Model 4.0. In doing so we took advantage of a great opportunity to learn about a very interesting subject (the projected grid) while adding many more nuances to it. Again, the main goals of this project were to determine what we’d need to do to this complex system under a DirectX 10 scenario. And what would be required to achieve reasonable frame rates on both low and high-cost graphics solutions. During this endeavor we learned much about how to offload certain computations to the CPU vs. the GPU. And also when and where those compute cycles would be the most beneficial, both on high-end and low-end graphics solutions. Also we mention how on the CPU front, using the Intel compiler (version 10.1), we were able to gain an easy 10+ fps on our CPU-side computations (to generate fog and approximate wave movement). Projected Grid Ocean The basic concept behind the Projected Grid Ocean is a regular discretized xz-plane in world-space that is displayed orthogonally to the viewer in world space. The vertices of this grid are then displaced using a height field. The height field is a product of two variables which return the height value as specified by the following equation. This method proves very useul for generating a virtual large body of water. The Perlin noise computation for generating the wave motion uses 4 textures of varying granularity called “octaves” to animate the grid in 3 dimensions. This method was chosen over other functions (such as Navier-Stokes) to generate wave the noise as it is less compute intensive on the CPU. GPU-side Navier-Stokes implementation was not used, but is worth further investigation. For reflections and refractions the algorithm uses derivations of Snell’s function. To further add realism to the scene we restricted the height of the camera on the y axis so that the illusion of Ocean vastness could be maintained. For a highly detailed description of this method refer to Claes Johanson’s Master Thesis "Real-time water rendering - Introducing the projected grid concept", formerly available at http:// Perlin Fog For the Perlin fog we decided to implement the processing on the CPU. We do this by sampling points in the 3D texture space, separated by a stride associated with each octave-the longer the stride, the more heavily weighted the octave is in the total texture. Each of these sample points is mapped to a pseudo randomly chosen gradient, using a permutation table full of normalized gradients from a given point and a hash function2. The value of each pixel is then determined by the weight contributions of its surrounding gradient samples. All these separate octaves are then summed together to achieve a result which has smoothed, organic noise on both a near and far perspective. This result was successful, however we wanted to achieve an even more smoothed effect and have only subtle noise visible. We then applied a simple Gaussian blur algorithm (also during preprocessing). Our implementation blurred the pixels using factors supplied by Pascal’s triangle constants, i.e. {(1), (1,1), (1,2,1), (1,3,3,1) … } as weights and averaging these weighted sums (a type of convolution filter). We also took advantage of calculating blur in each axis direction independently to improve efficiency2. At this point the result was much closer to the desired; however, at the edges of the texture, seams were visible, since the Gaussian blur algorithm was sampling points beyond the texture’s scope, so we used a mod operator to wrap the sampling space. In the shader, we first calculate the fog coefficient f, which is a factor for the amount of light absorbed and scattered from a ray through fog volume4. We calculate this value using the equation: f = e-(ρ*d*n) ρ=density, d=camera distance, n=noise We then use this coefficient to interpolate between the surface color Coriginal at any point and the fog color Cfog, using this equation: Cfinal = Coriginal*f + Cfog*(1-f) This interpolation approximates the light absorption of a ray from any point to the camera3 at low CPU utilization cost. Finally, we apply animation to the fog by sampling the fog texture according to a linear function that progresses with time. This is a simple ray function, with the slope set as a constant vector. This method was successful, but gave fog which appeared glued to the geometry surface, rather than moving through the air. For this reason, we used a 3D texture for the blurry noise-when this texture is animated along a ray, the fog moves through world space rather than crawling along the surface’s 2D texture coordinates. This was successful from a birds-eye perspective, but unconvincing at other perspectives. To adjust for this, we applied a quadratic falloff for our noise, dependent on the height of the fog, that is, made the fog clearer at the height of the viewer to give the impression that clouds appeared above and below, rather than simply on all surfaces, with the equation: n = n*(ΔY2)/2 + 0.001 n=noise, ΔY=camera Y position – vertex Y position As a result, we mimic volumetric fog quite convincingly, although all fog is in fact projected onto the scene surfaces. Light Implementation The scene is lit entirely by two lights-one infinite (directional) light and one spotlight casting from the lighthouse. The infinite light is calculated simply by dot lighting with the light’s direction and the vertex normals. The spotlight also uses dot lighting, but also takes into account a light frustum and falloff. The stored information for the spotlight includes position, direction, and frustum. For surface lighting, we first determine whether or not a point lies within the spotlight frustum, then we calculate falloff, and finally apply the same dot lighting as used in the infinite light. For the first step, we find a vector from the vertex point to the camera, and then calculate the angle between that vector and the spotlight’s direction vector, using a dot product rule and solving for the angle: V1 • V2 = | V1 | | V2| cos θ V1 , V2 = vectors, θ = angle between If the angle between the two vectors is within the frustum angle, we know that the point is illuminated by the spotlight. We then use this same angle to apply a gradual falloff for the spotlight. The difference between the calculated angle and the frustum angle determines how far away from the center of the frustum the point lies, so the falloff can be determined by the expression: | θfrustum / θ | - 1 θ = angle between As you can see, the expression is undefined at θ = 0, and less than or equal to 0 at θ > θfrustum. The undefined value computes to an infinitely large number during runtime, so we have exactly what we want-most intensity at the center of the frustum and no intensity at the edge. With this result, we use the HLSL saturate function to clamp these values between 0 and 1, and multiply the spotlight intensity by this final number. For the volumetric spotlight effect, we used the same frustum angle as before, but instead to construct a series of cones which amplify the fog within the spotlight frustum. The cone vertices and indices are created during preprocessing and stored in appropriate buffers for the length of the application, and the cone is centered at the top. Because of this, we can translate the cone to the spotlight position, rotate to match the spotlight direction, and ensure that the cones cover the frustum completely. The shader code for the cones simply calculate their appropriate world-space fog coefficients as explained earlier, blend those with a surface color of zero alpha, and amplify that value by the spotlight intensity times the spotlight color. Because we are using zero alpha to indicate no amplification, we use an alpha blend state, with an addition blending function. The volumetric frustum falloff is approximated by using multiple cones, so the blending addition creates a higher amplification in the center of the frustum. The projected grid port provided an excellent chance to test performance on both high and low cost graphics systems. While providing a good opportunity to determine how to scale content for both. Modern low-cost graphics solutions have come quite a way in recent years. Further, it provided an excellent opportunity to contribute back to the graphics community. The top 2 areas of performance improvement that impacted low-cost graphics target were in the Perlin fog computation and the Ocean Grid computation. The later only renders in the Camera’s view frustum, was easy to control given the original algorithm. Through that we could easily reduce mesh complexity, and in doing so reduce the scene overhead. Also, by combining terrain and building meshes we gained even more performance on both integrated and discrete graphics. By pre-computing the Perlin textures on the CPU and only using the GPU for blending and animating the texture we came close to doubling our frame rates. Also tuning down the Ocean grid complexity and using only the necessary reflection and refraction computations we made additional performance gains. Lastly, the Intel Compiler was instrumental in auto-vectorizing our code which boosted our performance even further (~10%). Watch the Video Chuck DeSylva describes and demonstrates a better way to generate fog and waves by off-loading calculations from the GPU when it was overloaded. This resulted in up to a 3X improvement in the frame rate. Click to watch the video A Better Approach to Visualizing Ocean Fog and Waves. Download the Binary 1. Johanson, Claes. "Real-time water rendering – introducing the projected grid concept.” Master of Science thesis in computer graphics, March 2004. 2. Gustavson, Stefan. "Simplex Noise Demystified.” Gabrielle Zachmann Personal Homepage. 22 Mar. 2005. Linköping University, Sweden. 15 Jul. 2008. /sites/default/files/m/0/c/9/simplexnoise.pdf. 3. Waltz, Frederick M. and Miller, John W. V. "An efficient algorithm for Gaussian blur using finite-state machines.” SPIE Conf. on Machine Vision Systems for Inspection and Metrology VII. 1 Nov. 1998. ECE Department, Univ. of Michigan-Dearborn. 5 Aug. 2008. /sites/default/files/m/1/c/7/21_GBlur.pdf. 4. Zdrojewska , Dorota. "Real time rendering of heterogeneous fog based on the graphics hardware acceleration." Central European Seminar on Computer Graphics for students. 3 Mar. 2004. Technical University of Szczecin. 10 Jul. 2008. http://www.cescg.org/CESCG-2004/web/Zdrojewska-Dorota/. About the Authors Chuck Desylva Chuck is a 15 year Intel veteran who was involved in the first USB/AGP(PCI-E) drivers. He was also involved in developing early Intel graphics drivers. Since the turn of the 21st century he has worked to aid ISV application enabling in Intel’s Software Solutions Group. In so doing, he has worked to promote application acceleration/optimization on a wide array of software on Intel systems. Alfredo Gimenez A full-time Computer Science student at UC Davis. Alfredo’s internship has provided him an opportunity to learn DirectX 10 and begin applying both 3D artistry and computer science knowledge in a real world format. On his spare time, Alfredo is an avid flamenco guitar player and freestyle skier. Jeff Andrews Jeff Andrews is an Application Engineer with Intel working on optimizing code for software developers. He is currently focused on PC gaming. Jeff was lead architect for Intel’s Smoke Demo framework. Jeff has provided many key performance enhancements to this effort and was invaluable to its success.
{"url":"https://software.intel.com/ru-ru/articles/ocean-fog-using-direct3d-10","timestamp":"2014-04-17T18:31:29Z","content_type":null,"content_length":"57584","record_id":"<urn:uuid:4bd1957b-7d1f-4817-b90d-3b6bf6145bb8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
List of Basic Excel Formulas Here is the list of excel forumlas along with their function. ABS : ABS function is used to Return the absolute value of a supplied number. Absolute value means the modulus of the number we supplied. SIGN : Sign formula is used to returns the sign of a supplied number. Sign of the number can be +1, -1 or 0. GCD : GCD function is used to return the greatest Common Divisor or GCD of two or more given numbers. LCM : Want to find the lowest common multiple of two numbers? This formula or function make it much easy. It returns the Least Common Multiple of two or more given numbers. SUM : This formula can be used to return the sum of 2 or complete list of numbers. This is quite useful formula when you need to add individual entries of a column or row. PRODUCT : Want to find the product of 2 numbers or product of elements of entire row or column then this is the perfect solution to cut short your time. It returns the product of the given numbers. List of Advanced Excel Formulas POWER : Power formula or function is used to find the result of a number raised to a given power. It returns the result of a given number raised to a power. It greatly reduces your time by going certain high power calculations in seconds. SQRT : While preparing a balance sheet or a project in excel many times we need to find the square root of the elements. Now finding square root is made much easier. Just use sqrt forumla and it returns the positive square root of the given number. QUOTIENT : This formula can be used to return the integer portion of a division between two given numbers. You don’t need to calculate quotient manually anymore. MOD : When we divide a number by another number and we want to know the remainder then this is the perfect formula for that thing. It will return the remainder from a division between two numbers. AGGREGATE : This is quite useful function as it has the ability to perform more than 1 function. It can be used to calculate the sum, product, average and much more for a list or even a database. It also has the option to ignore hidden rows and error values. This feature is available in only in Excel 2010. Previous versions of excel don’t support above feature. SUBTOTAL : This formula is used to performs a specified calculation like sum, product, average, etc. for the values which we supply. CEILING : Ceiling function is used for rounding off a number away from zero. Rounding off away from zero means rounding a positive number up and a negative number down. EVEN : This function is similar to ceiling function but it round off a number to next even number. It rounds a number away from zero that is rounds a positive number up and a negative number down to the next even number. FLOOR : This formula is just opposite of ceiling function. It is sued to round of a number towards zero which means it rounds off a positive number down and a negative number up. ODD : This function is similar to ceiling function but it round off a number to next odd number. It rounds a number away from zero that is rounds a positive number up and a negative number down to the next odd number. ROUND : This function of formula is used to Round off a number up or down to a given number of digits. TRUNC : This formula or function is used to truncate a number towards zero. It rounds off a positive number down and a negative number up to the next integer. SUMIF : Sumif is a conditional sum formula in which sum is calculated if the condition or criteria is satisfied. It is used to add the cells in a supplied range for a given criteria. SERIESSUM : This formula is used to find and returns the sum of a power series. PI : This is quite handy formula used to returns the constant value of pi. SQRTPI : this formula or function calculate the square root of a number and then multiply the result by pi and return the final result. DEGREES : Degrees formula is a trigonometric formula used to convert Radians to Degrees. RADIANS : This formula is a trigonometric formula used to convert Degrees to Radians. COS : This trigonometric formula or function is used to returns the Cosine of a given angle. ACOS : This formula or function is used to returns the Arccosine of a number. COSH : This formula or function is used to returns the hyperbolic cosine of a number. ACOSH : This formula or function is used to returns the inverse hyperbolic cosine of a number. SIN : This trigonometric formula or function is used to return the Sine of a given angle. ASIN : This trigonometric formula or function is used to return the Arcsine of a number. SINH : This formula or function is used to return the Hyperbolic Sine of a number. ASINH : This formula or function is used to return the Inverse Hyperbolic Sine of a number. TAN : This trigonometric formula or function is used to return the Tangent of a given angle. ATAN : This trigonometric formula or function is used to return the Arctangent of a given number ATAN2 : This trigonometric formula or function is used to return the Arctangent of a given pair of x and y coordinates. TANH : This trigonometric formula or function is used to return the Hyperbolic Tangent of a given number. ATANH : This formula or function is used to return the Inverse Hyperbolic Tangent of a given number. How to use excel formulas These were the list of excel formulas now lets see how can we use these functions or formulas. ABS : To use Abs formula just write =(equal to) ABS followed by the number in brackets as shown in the figure. As soon as you close the brackets and hit enter the absolute value of the number is calculated and displayed in the same cell as shown in the figure. SUM : To use Sum formula just write =(equal to) SUM followed by the numbers in brackets as shown in the figure. As soon as you close the brackets and hit enter the sum of the numbers is calculated and displayed in the same cell as shown in the figure. PRODUCT : To use Product formula just write =(equal to) PRODUCT followed by the numbers in brackets as shown in the figure. As soon as you close the brackets and hit enter the product of the numbers is calculated and displayed in the same cell as shown in the figure. LCM : To use LCM formula just write =(equal to) LCM followed by the numbers in brackets as shown in the figure. As soon as you close the brackets and hit enter the LCM of the numbers is calculated and displayed in the same cell as shown in the figure. CEILING : To use Ceiling formula just write =(equal to) CEILING followed by the number in brackets as shown in the figure. As soon as you close the brackets and hit enter the Round off of the number is evaluated and displayed in the same cell as shown in the figure. FLOOR : To use Floor formula just write =(equal to) FLOOR followed by the number in brackets as shown in the figure. As soon as you close the brackets and hit enter the Round off of the number is evaluated and displayed in the same cell as shown in the figure. MOD : To use Mod formula just write =(equal to) MOD followed by the number in brackets as shown in the figure. As soon as you close the brackets and hit enter the Modulus of the number is evaluated and displayed in the same cell as shown in the figure. SQRTPI : To use Sqrtpi formula just write =(equal to) SQRTPI followed by the number in brackets as shown in the figure. As soon as you close the brackets and hit enter the result is calculated and displayed in the same cell as shown in the figure. SIN : To use Sin formula just write =(equal to) SIN followed by the angle in brackets as shown in the figure. As soon as you close the brackets and hit enter the sine of the angle evaluated and displayed in the same cell as shown in the figure. COS : To use Cos formula just write =(equal to) COS followed by the angle in brackets as shown in the figure. As soon as you close the brackets and hit enter the Cosine of the angle evaluated and displayed in the same cell as shown in the figure. Similarly you can use all of the above mentioned excel formulas. Excel Cheat Sheet Cheat sheet is not a sheet meant for cheating but it is a concise set of notes used for quick reference. They are named so because they maybe used by students in the test or exam for cheating purposes. Excel cheat sheet is used for quick reference of excel shortcuts and formulas. Visit cheat sheet to download excel cheat sheet. One thought on “List of Basic Excel Formulas” 1. Just wish to say your article is as astonishing. The clearness in your post is just spectacular and i can assume you’re an expert on this subject. Well with your permission allow me to grab your feed to keep updated with forthcoming post. Thanks a million and please keep up the gratifying work.
{"url":"http://sopx.wordpress.com/2011/05/01/list-of-basic-excel-formulas/","timestamp":"2014-04-17T18:57:39Z","content_type":null,"content_length":"67971","record_id":"<urn:uuid:6435fdc2-5582-4619-a850-2d283d47c05f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
Report in Wirtschaftsmathematik (WIMA Report) In this paper we investigate the problem offending the Nadir point for multicriteria optimization problems (MOP). The Nadir point is characterized by the component wise maximal values of efficient points for (MOP). It can be easily computed in the bicriteria case. However, in general this problem is very difficult. We review some existing methods and heuristics and propose some new ones. We propose a general method to compute Nadir values for the case of three objectives, based on theoretical results valid for any number of criteria. We also investigate the use of the Nadir point for compromise programming, when the goal is to be as far away as possible from the worst outcomes. We prove some results about (weak) Pareto optimality of the resulting solutions. The results are illustrated by examples. The balance space approach (introduced by Galperin in 1990) provides a new view on multicriteria optimization. Looking at deviations from global optimality of the different objectives, balance points and balance numbers are defined when either different or equal deviations for each objective are allowed. Apportioned balance numbers allow the specification of proportions among the deviations. Through this concept the decision maker can be involved in the decision process. In this paper we prove that the apportioned balance number can be formulated by a min-max operator. Furthermore we prove some relations between apportioned balance numbers and the balance set, and see the representation of balance numbers in the balance set. The main results are necessary and sufficient conditions for the balance set to be exhaustive, which means that by multiplying a vector of weights (proportions of deviation) with its corresponding apportioned balance number a balance point is attained. The results are used to formulate an interactive procedure for multicriteria optimization. All results are illustrated by examples. In this paper we deal with single facility location problems in a general normed space where the existing facilities are represented by sets. The criterion to be satis ed by the service facility is the minimization of an increasing function of the distances from the service to the closest point ofeach demand set. We obtain a geometrical characterization of the set of optimal solutions for this problem. Two remarkable cases - the classical Weber problem and the minmax problem with demand sets - are studied as particular instances of our problem. Finally, for the planar polyhedral case we give an algorithmic description of the solution set of the considered problems.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/series/id/16168/start/10/rows/10/yearfq/2000","timestamp":"2014-04-19T04:55:18Z","content_type":null,"content_length":"28471","record_id":"<urn:uuid:89c023e4-6d60-4ed9-b79d-a2eb42817b43>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Holliston Calculus Tutor Find a Holliston Calculus Tutor ...Let me know if I can help! -SamFor study skills, I teach the student how to learn. At times, the student does not have the structure of studying effectively and need the guidance of how to organize the information, time management, and know how of where to focus. I have from my experience as a... 38 Subjects: including calculus, reading, algebra 1, English ...I studied Arabic intensively and know the language well. After taking several courses as an undergraduate, and having taken an intensive Arabic summer course in Middlebury College, I was accepted at the Center for Arabic Studies Abroad in Cairo, Egypt. There I participated in a year long course learning Arabic at a professional level. 47 Subjects: including calculus, reading, chemistry, geometry ...I received nothing but positive feedback and recommendations. My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location unless driving is more than 30 minutes. 8 Subjects: including calculus, geometry, algebra 1, algebra 2 ...I have taught lower level mathematics classes at the high school level with many special needs, mainstreamed students, including Aspergers and learning disabled students. I have experience tutoring such a learning disabled son. I have experience with generalized learning disabled instruction involving reading interpretation and mathematics. 90 Subjects: including calculus, chemistry, English, reading ...I explain the material so the student can learn through understanding. No short cuts required - if it makes sense, the student will learn it. I have a math degree from MIT and taught math at Rutgers University for 10 years. 24 Subjects: including calculus, chemistry, physics, statistics
{"url":"http://www.purplemath.com/Holliston_Calculus_tutors.php","timestamp":"2014-04-19T17:18:41Z","content_type":null,"content_length":"23878","record_id":"<urn:uuid:b70d0de2-bf49-40d5-be2f-ab47fa0b2673>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Holliston Calculus Tutor Find a Holliston Calculus Tutor ...Let me know if I can help! -SamFor study skills, I teach the student how to learn. At times, the student does not have the structure of studying effectively and need the guidance of how to organize the information, time management, and know how of where to focus. I have from my experience as a... 38 Subjects: including calculus, reading, algebra 1, English ...I studied Arabic intensively and know the language well. After taking several courses as an undergraduate, and having taken an intensive Arabic summer course in Middlebury College, I was accepted at the Center for Arabic Studies Abroad in Cairo, Egypt. There I participated in a year long course learning Arabic at a professional level. 47 Subjects: including calculus, reading, chemistry, geometry ...I received nothing but positive feedback and recommendations. My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location unless driving is more than 30 minutes. 8 Subjects: including calculus, geometry, algebra 1, algebra 2 ...I have taught lower level mathematics classes at the high school level with many special needs, mainstreamed students, including Aspergers and learning disabled students. I have experience tutoring such a learning disabled son. I have experience with generalized learning disabled instruction involving reading interpretation and mathematics. 90 Subjects: including calculus, chemistry, English, reading ...I explain the material so the student can learn through understanding. No short cuts required - if it makes sense, the student will learn it. I have a math degree from MIT and taught math at Rutgers University for 10 years. 24 Subjects: including calculus, chemistry, physics, statistics
{"url":"http://www.purplemath.com/Holliston_Calculus_tutors.php","timestamp":"2014-04-19T17:18:41Z","content_type":null,"content_length":"23878","record_id":"<urn:uuid:b70d0de2-bf49-40d5-be2f-ab47fa0b2673>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Basis (theoretical question) May 24th 2009, 04:00 PM #1 Basis (theoretical question) Imagine I have $V=span \{(a,b,c,d),(e,f,g,h)\}$. They give me a vector $(i,j,k,l)$ and they ask me to find another vector $(m,n,o,p)$ such that $span ((i,j,k,l),(m,n,o,p))=V$. My attempt : I realized that $(i,j,k,l)$ can be written as a linear combination of $(a,b,c,d)$ and $(e,f,g,h)$. (It's not always true I know, but it is in the particular example I got assigned) So I must find a linear independent vector from $(i,j,k,l)$ but that spans $V$ with $(i,j,k,l)$. I don't know how to do it. (I'd be glad if there's a general method to find it) Another question : say I found a vector and want to test if it spans $V$ with $(i,j,k,l)$. If I see that I can write $(a,b,c,d)$ and $(e,f,g,h)$ as a combination linear of $(i,j,k,l)$ and the vector I found, does this implies that the latter 2 vectors span $V$? P.S.: I've all the vectors if you want them... but I wanted to know the general case, hence my choice of using letters instead of numbers. I reformulate and detail just in case : $(a,b,c,d)=(1,1,0,1)=\alpha$ and $(e,f,g,h)=(0,-1,1,1)=\beta$. $(i,j,k,l)=(2,1,1,3)=\gamma$. I noticed that $\gamma = \alpha + 2 \beta$. I must find a vector $\zeta$ such that $span \{ \gamma, \zeta \} = span \{ \alpha, \beta \}$. My main question is : say I found a vector $\zeta$ and I want to check if it does span $V$ along with $\gamma$. Is it enough to check that both $\alpha$ and $\beta$ are linear combination of $\gamma$ and $\zeta$? My intuition says yes because if $\alpha$ and $\beta$ can be written as a comb. linear of $\ gamma$ and $\zeta$, so does any vector spanned by $\alpha$ and $\beta$. And I also must check that $\zeta$ is a comb. linear of $\alpha$ and $\beta$. If it is then I can conclude that $span \{ \ gamma, \zeta \} = span \{ \alpha, \beta \}$. Is there a way to find $\zeta$, other than having a mathematical eye? I reformulate and detail just in case : $(a,b,c,d)=(1,1,0,1)=\alpha$ and $(e,f,g,h)=(0,-1,1,1)=\beta$. $(i,j,k,l)=(2,1,1,3)=\gamma$. I noticed that $\gamma = \alpha + 2 \beta$. I must find a vector $\zeta$ such that $span \{ \gamma, \zeta \} = span \{ \alpha, \beta \}$. My main question is : say I found a vector $\zeta$ and I want to check if it does span $V$ along with $\gamma$. Is it enough to check that both $\alpha$ and $\beta$ are linear combination of $\gamma$ and $\zeta$? My intuition says yes because if $\alpha$ and $\beta$ can be written as a comb. linear of $\ gamma$ and $\zeta$, so does any vector spanned by $\alpha$ and $\beta$. And I also must check that $\zeta$ is a comb. linear of $\alpha$ and $\beta$. If it is then I can conclude that $span \{ \ gamma, \zeta \} = span \{ \alpha, \beta \}$. Is there a way to find $\zeta$, other than having a mathematical eye? Okay, so you want to find two vectors that span your vector space and are linearly independent, and you are given two that already do. Firstly, note that any spanning set of n vectors is automatically linearly independent for a vector space of dimension n. So here, if you can find two vectors that span your set you are done. The "trick" to doing this sort of question is showing that you can construct your original basis vectors with your new vectors, as then this linear combination of your new vectors will span the space and your vectors are linearly independent by my first paragraph. Hint: there is nothing that says you cannot choose one of the original basis vectors for your new basis... Every vector in your space is a linear combination of your basis vectors. It would be silly to ask a question such as this for vectors outwith the vector space. "Find two vectors not in V that span V" would be a strange question... I reformulate and detail just in case : $(a,b,c,d)=(1,1,0,1)=\alpha$ and $(e,f,g,h)=(0,-1,1,1)=\beta$. $(i,j,k,l)=(2,1,1,3)=\gamma$. I noticed that $\gamma = \alpha + 2 \beta$. I must find a vector $\zeta$ such that $span \{ \gamma, \zeta \} = span \{ \alpha, \beta \}$. My main question is : say I found a vector $\zeta$ and I want to check if it does span $V$ along with $\gamma$. Is it enough to check that both $\alpha$ and $\beta$ are linear combination of $\gamma$ and $\zeta$? My intuition says yes because if $\alpha$ and $\beta$ can be written as a comb. linear of $\ gamma$ and $\zeta$, so does any vector spanned by $\alpha$ and $\beta$. And I also must check that $\zeta$ is a comb. linear of $\alpha$ and $\beta$. If it is then I can conclude that $span \{ \ gamma, \zeta \} = span \{ \alpha, \beta \}$. Is there a way to find $\zeta$, other than having a mathematical eye? one simple but important point: two non-zero vectors are linearly independent if and only if one is not a scalar multiple of the other. so, in your problem, $\alpha, \beta$ are linearly independent, and since they span V, we have $\dim V=2.$ now they give you another non-zero vector, say $\gamma.$ if i choose any non-zero vector in V, say $\zeta,$ which is not a scalar multiple of $\gamma,$ then, by the point i mentioned, $\beta, \zeta$ will be linearly independent. but since $\dim V = 2,$ any set of linearly independent vectors has at most 2 elements. that means $span \{\gamma, \zeta \}=V.$ for example, in your problem, neither $\alpha$ nor $\beta$ is a scalar multiple of $\gamma.$ so you can simply choose $\zeta=\alpha$ or $\zeta=\beta.$ Yeah I realized this, hehe. Wow. I thought about it, but when I checked if I could write $\alpha$ as a linear combination of $\beta$ and $\gamma$ it seems I made an error of arithmetic. I've redone it and it works now... Thanks a lot. Silly me, I now realize why it works. May 24th 2009, 08:34 PM #2 May 24th 2009, 10:47 PM #3 May 24th 2009, 10:50 PM #4 May 24th 2009, 11:00 PM #5 MHF Contributor May 2008 May 25th 2009, 07:26 AM #6
{"url":"http://mathhelpforum.com/advanced-algebra/90326-basis-theoretical-question.html","timestamp":"2014-04-19T12:59:04Z","content_type":null,"content_length":"73328","record_id":"<urn:uuid:7a6bab68-d083-4c74-b161-2ff35efe8f36>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Yearly Archives: 2012 Transcript Peter Woit: Well there is something that I was thinking about while watching these talks. I felt it was amazing hearing a couple of talks per day. One thing that was amazing was a kind of sweep of these … Continue reading What did you think of the symposium? The inaugural Fields Medal Symposium has taken a good amount of running around on my part and I did not get a chance to do what I love doing, writing about math! Well … Continue reading [Note: There will be much longer posts on each of the speakers mentioned in the videos.] Public Event Recording, October 15, 2012 http://audability.com/AudabilityAdmin/Clients/FieldsInstitute/ 101343_1015201270000PM/registrationform.aspx?Event_ID=1343 The Langlands Program: Number Theory, Geometry and the Fundamental Lemma James Arthur, University of Toronto The Fundamental … Continue So this week has been a little busy, to say the least. Blogging about every single lecture in real time was not possible, however I will be blogging each day following the conference to highlight the work of one of … Continue reading Why did Langlands call them Endoscopic Groups? I don’t have a good answer to this question, but I will do a bit of digging. This is one question I will ask Prof. Shelstad tomorrow. Since the average length of papers relating … Continue reading This October, the inaugural Fields Medal Symposium will mark the repatriation of the Fields Medal back to its Canadian origins. The Fields Medal was established by Canadian mathematician John Charles Fields in the early 1900′s. It is the international mathematical … Continue reading Why is the Fundamental Lemma, ‘fundamental’? A large class of mathematical results rely on Chau’s award winning proof of the ‘fundamental lemma’. The proof ensures that a set of essential conditions, forming the basis of other mathematicians’ work, are true. More … Continue reading Professor Edward Frenkel (http://math.berkeley.edu/~frenkel/) is one of the main scientific organizers for the inaugural Fields Medal Symposium. The main focus of his research is symmetry in mathematics and quantum physics. Many of the questions that he is currently tackling have … Continue reading 51 Fields Medals have been awarded, 43 Fields Medalists are still alive, 1 Fields Medal was declined in 2006, and 1 silver plate was awarded in 1998 in place of a Fields Medal. Since 1936, medals have been given at … Continue reading In the video Professor James Arthur explains the historical figures on the poster of the Fields Medal Symposium. He talks about a graphical representation of the Fundamental Lemma. He also describes the work of each of mathematician on the poster. … Continue reading Recent Comments Recent Comments
{"url":"http://blog.fields.utoronto.ca/symposium/2012/","timestamp":"2014-04-19T17:02:33Z","content_type":null,"content_length":"27702","record_id":"<urn:uuid:04fedaa7-db57-42e5-b9c5-e53148b6cced>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: fom - 01 - preface Date: Dec 8, 2012 11:50 PM Author: ross.finlayson@gmail.com Subject: Re: fom - 01 - preface On Dec 8, 6:06 pm, fom <fomJ...@nyms.net> wrote: > On 12/8/2012 1:49 PM, WM wrote: > > On 8 Dez., 19:16, fom <fomJ...@nyms.net> wrote: > >> On 12/8/2012 9:08 AM, WM wrote: > >> There are certain ongoing investigations > >> into the structure of mathematical proofs > >> that interpret the linguistic usage differently > >> from "mathematical logic". You would be > >> looking for various discussions of > >> context-dependent quantification where it > >> is being related to mathematical usage. > >> You will find that a statment such as > >> "Fix x" > >> followed by > >> "Let y be chosen distinct from x" > >> is interpreted relative to two > >> different domains of discourse. > >> This is just how one would imagine > >> traversing from the bottom of a > >> partition lattice. > > A question: Do you believe that there are more than countably many > > finite words? > > Do you believe that you can use infinite words (not finite > > descriptions of infinite sequences). > > Do you believe that you can put in order what you cannot distinguish? > > Regards, WM > There is a certain history here. > As set theory developed, Cantor was confronted > with the notion of "absolute infinity". > I prefer to go with Kant: > "Infinity is plurality without unity" > and interpret the objects spoken of in typical > discussions of set theory as transfinite numbers. > As for "unity", Cantor wrote the > following in his criticism of Frege: > "...to take 'the extension of a concept' as the > foundation of the number-concept. He [Frege] > overlooks the fact that in general the 'extension > of a concept' is something quantitatively completely > undetermined. Only in certain cases is the 'extension > of a concept' quantitatively determined, then > it certainly has, if it is finite, a definite > natural number, and if infinite, a definite power. > For such quantitative determination of the > 'extension of a concept' the concepts 'number' > and 'power' must previously be already given > from somewhere else, and it is a reversal of > the proper order when one undertakes to base > the latter concepts on the concept 'extension > of a concept'." > Cantor's transfinite sequences begin by simply > making precise the natural language references > to the natural numbers as a definite whole. And, > he justifies his acceptance of the transfinite > with remarks such as: > "... the potential infinite is only an > auxiliary or relative (or relational) > concept, and always indicates an underlying > transfinite without which it can neither > be nor be thought." > But the question of existence speaks precisely > to the first edition of "Principia Mathematica" > by Russell & Whitehead. I would love to have > the time to revisit what has been done there. > Russell's first version had been guided in > large part by his views on denotation. So, > the presupposition failure inherent to reference > was to be addressed by his description theory. > Given that, he ultimately would be committed > to the axiom of reducibility. > It is interesting to read what he says > concerning that axiom and set existence, > "The axiom of reducibility is even > more essential in the theory of > classes. It should be observed, > in the first place, that if we assume > the existence of classes, the axiom > of reducibility can be proved. For in > that case, given any function phi..z^ > of whatever order, there is a class A > consisting of just those objects which > satisfy phi..z^. Hence, "phi(x)" is > equivalent to "x belongs to A". But, > "x belongs to A" is a statement containing > no apparent variable, and is therefore > a predicative function of x. Hence, if > we assume the existence of classes, the > axiom of reducibility becomes unnecessary." > Personally, I do not think he should > have given it up. > As for my personal beliefs, I reject, for > the most part, the ontological presuppositions > of modern logicians so far as I can discern them > from what I read. Frege made a great achievement > in recognizing how to formulate a deductive > calculus for mathematics. But, I side with > Aristotle on the nature of what roles are played > by a deductive calculus. Scientific demonstration > is distinct from dialectical argumentation that > argues from belief. In turn, that distinction > informs that a scientific language is built up > synthetically. The objects of that language > are individually described using definitions. > The objects of that language are individually > presumed to exist. Consequently, the > names which complete the "incomplete symbols" > exist as references only by virtue of the fact > that the first names introduced for use in the > science are a well-ordered sequence. > Since I cannot possibly defend introducing > more than some finite number of names in > this fashion, the assumption of transfinite > numbers in set theory has a consequence. It > can be reconciled with this position only > if models of set theory are admissible as > such when they have a global well-ordering. > The largest transitive model of ZFC set theory > with these properties is HOD (hereditarily > ordinal definable). Aleph_0 is a cardinal, lengths are scalar. Quite erudite, and of interest, I find your post there. Wouldn't the HOD's order type as model be an ordinal and thus irregular, i.e. Burali-Forti? It's of interest to read of a space-filling curve and the general position, do you see any curve really fill space? What of a spiral real-space-filling curve? Some have geometry first, others integers first, I'd agree they're separate domains of discourse, but of the same domain of discourse. Some have points then lines, others points then space, some have zero then one, others zero then infinity, and the latters are to an appreciable extent more fundamental or primitive. You denote earlier the simple conjunctions of binary truth tables, and I'd appreciate myself a better understanding of the impredicative: what do you think of: an or the axiomless system of natural Turing machines have infinite tapes. Ross Finlayson
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7934347","timestamp":"2014-04-16T05:21:23Z","content_type":null,"content_length":"8900","record_id":"<urn:uuid:b390d4e9-219d-4e7e-ba23-5664b1268c46>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Derivative of Sine Function and Use of Chain Rule Date: 11/06/2003 at 04:05:55 From: Courtney Subject: differentiating a sine function How do I differentiate a sine function? I have learned about the product and chain rules but in my assignment I have to differentiate f(x) = 2sin(0.2(x - 5)). The sine part is the most confusing, but I'm also not sure whether to use the product or chain rule. Date: 11/06/2003 at 12:09:00 From: Doctor Riz Subject: Re: differentiating a sine function Hi Courtney - Thanks for asking your questions. Let's talk about differentiating the sine function first. Keep in mind what a derivative tells you - the slope of a tangent line to the original function at any point on the function. If you think that way, you can probably figure out what the derivative of sin(x) is. Visualize the graph of y = sin(x). As you know, it starts at 0 when x is 0 degrees, then rises to 1 at 90 degrees, falls back to 0 at 180 degrees, bottoms out at -1 at 270 degrees and returns to 0 at 360 degrees, thus completing one full period of the curve. Now think about the slope of the tangent line to the curve at each of those points. At 0 degrees, the slope of the tangent is 1 since the curve is heading upwards at a 45 degree angle. At 90 degrees, the sine has hit a maximum, so the tangent line would sit right on top of that 'hump' and have a slope of 0. At 180 degrees, the sine is again moving at an angle of 45 degrees, but it's now going downwards so the slope of the tangent is -1. At 270 degrees, the sine is at its minimum value, so again the tangent line would be horizontal and have a slope of 0. Of course, between those 'key' points the slopes of the tangents are changing smoothly. In other words, as the sine curve moves from 0 degrees to 90 degrees, the tangent slopes are descending from 1 down to 0 as the curve grows 'flatter'. Now think about the four points we just came up with for values of the slope of the tangent line to the sine curve: 0 degrees - slope of 1 90 degrees - slope of 0 180 degrees - slope of -1 270 degrees - slope of 0 360 degrees is the same as 0 degrees so again the slope is 1. Can you think of another function that contains those points? Here's a hint - it's another trig funtion, and it perfectly shows the slope of the sine curve at any point. Thus, that other trig function is the derivative of the sine function. I'll let you try to figure out what it is based on the numbers above. You also asked about the chain rule versus the product rule. Look at the function you are trying to differentiate: f(x) = 2sin[0.2(x - 5)]. What is there for multiplication in this function? The 2 out front can be treated as multiplying by a constant and can just be left out front. The 0.2 inside the sine function is again just a constant (ie, it's not 0.2x), so it doesn't need to be treated as its own function. So, there really isn't any place in this where two functions of x are being multiplied. That means we won't need the product rule. But there IS a place where there is a function INSIDE another function, which is called a composition of functions. In other words, within the main function of sin(x) there is the function 0.2(x - 5). It might be easiest to just distribute that out, which gives 0.2x - 1. So now we have: f(x) = 2sin(0.2x - 1) Let's leave the 2 out front and take the derivative of the OUTSIDE function, which is the sin(x) part. As you might have figured out from the earlier discussion, the derivative of sin(x) is cos(x). So that gives us: f'(x) = 2 * cos(0.2x - 1) Now the chain rule says to take the derivative of the INSIDE function, which is the (0.2x - 1) part. The derivative of that is 0.2, so we multiply what we had already times 0.2. All of that gives us: f'(x) = 2 * cos(0.2x - 1)*(0.2) Multiplying the 2 and the (0.2) our final answer is f'(x) = 0.4 * cos(0.2x - 1). Of course, if you want to make it look more like the original function, you could change the (0.2x - 1) back into (0.2)(x - 5) and the derivative would become: f'(x) = 0.4cos[0.2(x - 5)] Here's another example. Suppose you want to differentiate y = sin(x^2 + 3x) Start by taking the derivative of the sin to get cos(x^2 + 3x). Now the chain rule kicks in to take the derivative of x^2 + 3x, which is 2x + 3. So the final answer is y' = cos(x^2 + 3x) * (2x + 3) or (2x + 3)cos(x^2 + 3x). Here's a good question for you. Can you figure out what the derivative of cos(x) is by thinking in terms of slopes at the quarter points like we did with sin(x)? If you understood that part, you should be able to figure out the derivative of cos(x). One final comment - if you come upon something you don't know how to take the derivative of, most calculus texts have a table in the back or sometimes inside the cover which shows many common derivatives, so you might be able to look it up in a table. Hope that helps. Feel free to write back if you are still confused. - Doctor Riz, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/65002.html","timestamp":"2014-04-20T04:05:36Z","content_type":null,"content_length":"10162","record_id":"<urn:uuid:21d530fd-eb36-4b2f-a4f4-60b2c584afa5>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Triangle inequality for n complex numbers February 10th 2011, 05:38 PM Triangle inequality for n complex numbers I am trying to prove that $|z_1+z_2+...+z_n| = |z_1| + |z_2| + ... |z_n|$ iff $z_i/z_j$ is a positive real number $\forall$ integers i and j, s.t. $i,j \in$$\left\{ 1,...,n \right\}$ I really don't see how these two ideas imply each other. After looking at this for several hours the only thing I managed to come up with (which I'm sure could be extended to n variables) is $|z_1|^2 + m|z_2|^2 = |z_1+z_2|^2$ where $m \in \mathbb{R}$ however, that real number m is not always $z_i/z_j$ Can anyone offer me advice please? February 10th 2011, 06:06 PM If $\forall i,j \in {1,2,...,n \}$ is $\displaystyle \frac{z_{i}}{z_{j}} = \alpha_{i,j}$ and any $\alpha_{i,j}$ is positive real, then $\forall i$ is $z_{i}= e^{\sigma\ \theta}\ \beta_{i}$, being $\sigma= \sqrt{-1}$ , $\theta$ real and any $\beta_{i}$ positive real. In this case is... $\displaystyle |z_{1} + z_{2} + ...+ z_{n}|= |e^{\sigma \theta}|\ |\beta_{1} + \beta_{2} +...+ \beta_{n}|= |z_{1}|+|z_{2}|+...+|z_{n}|$ (1) The inverse however is not true and a symple 'counterexample' is when one of the $z_{j}$ is zero and the terms $\frac{z_{i}}{z_{j}}$ for $i e j$ don't exist... Kind regards February 10th 2011, 06:35 PM Sorry, I meant to specify that $z_j eq 0$. Thank you very much for your help though. I will try to figure out the other direction on my own. February 10th 2011, 08:02 PM Use the real part and imaginary part of the complex numbers to re-write the inequality then apply the Cauchy–Schwarz inequality.
{"url":"http://mathhelpforum.com/differential-geometry/170835-triangle-inequality-n-complex-numbers-print.html","timestamp":"2014-04-18T07:38:44Z","content_type":null,"content_length":"9501","record_id":"<urn:uuid:cbdc4a2d-8ea9-4218-805c-c5ca753ac68f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Atiyah, Michael Francis (1929 Atiyah, Michael Francis (1929–) English mathematician who has contributed to many topics in mathematics, notably dealing with the relationships between geometry and analysis. In topology, he developed K-theory. He proved the index theorem on the number of solutions of elliptic differential equations, linking differential geometry, topology, and analysis – a theorem that has been usefully applied to quantum theory. Atiyah was influential in initiating work on gauge theories and applications to nonlinear differential equations, and in making links between topology and quantum field theory. Theories of superspace and supergravity, and string theory, were all developed using ideas introduced by him. Related category
{"url":"http://www.daviddarling.info/encyclopedia/A/Atiyah.html","timestamp":"2014-04-20T23:29:34Z","content_type":null,"content_length":"6300","record_id":"<urn:uuid:3c454736-dba2-4653-b543-6f019456b810>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
Electrical Conversions ELECTRICAL UNIT CONVERSIONS The purpose of this document is to provide information, formulas and documentation to take certain electrical values and convert them into other electrical values. The formulas below are known and used universally but we use them here in association with computer, network, telecom and other IT equipment. To Find Watts To Find Volt-Amperes To Find Kilovolt-Amperes To Find Kilowatts To Convert Between kW and kVA TO Find kBTUs from Electrical Values It is often necessary to turn voltage, amperage and electrical "nameplate" values from computer, network and telecom equipment into kW, KVA and BTU information that can be used to calculate overall power and HVAC loads for IT spaces. The following describes how to take basic electrical values and convert them into other types of electrical values. • NOTE #1: The informational nameplates on most pieces of computer or network equipment usually display electrical values. These values can be expressed in volts, amperes, kilovolt-amperes, watts or some combination of the foregoing. • NOTE #2: If you are using equipment nameplate information to develop a power and cooling profile for architects and engineers, the total power and cooling values will exceed the actual output of the equipment. Reason: the nameplate value is designed to ensure that the equipment will energize and run safely. Manufacturers build in a "safety factor" when developing their nameplate data. Some nameplates display information that is higher than the equipment will ever need - often up to 20% higher. The result is that, in total, your profile will "over engineer" the power and cooling equipment. Electrical and mechanical engineers may challenge your figures citing that nameplates require more power than necessary. • NOTE #3: Our advice: Develop the power and cooling profile using the nameplate information and the formulas below and use the resultant documentation as your baseline. Reasons: (1) it's the best information available without doing extensive electrical tests on each piece of equipment. Besides, for most projects, you are being asked to predict equipment requirements 3-5 years out when much of the equipment you will need hasn't been invented yet. (2) the engineers will not duplicate your work; they do not know what goes into a data center. They will only challenge the findings if they appear to be to high. If the engineers want to challenge your figures, it's OK but have them do it in writing and let them take full responsibility for any modifications. If you must lower your estimates, do so. But, document everything. There will come a day in 3-5 years when you will need every amp of power you predicted. We've had projects where it was very evident within six months that what we predicted would come true - sometimes even earlier than we estimated. • NOTE #4 If you are designing a very high-density server room where you will have racks and racks (or cabinets and cabinets) of 1U and 2U servers tightly packed, you need to read our article entitled "IT Pros - Don't be Left in the Dust on IT Server Room Design". To Find Watts 1. When Volts and Amperes are Known POWER (WATTS) = VOLTS x AMPERES • We have a small server with a nameplate shows 2.5 amps. Given a normal 120 Volt, 60 hz power source and the ampere reading from equipment, make the following calculation: POWER (WATTS) = 120 * 2.5 ANSWER: 300 WATTS To Find Volt-Amperes (VA) 1. Same as above. VOLT-AMPERES (VA) = VOLTS x AMPERES ANS: 300 VA To Find kilovolt-Amperes (kVA) 1. SINGLE PHASE KILOVOLT-AMPERES (kVA) = VOLTS x AMPERES Using the previous example: 120 * 2.5 = 300 VA 300 VA / 1000 = .3 kVA 2. 208-240 SINGLE-PHASE (2-POLE SINGLE-PHASE) KILOVOLT-AMPERES (kVA) = VOLTS x AMPERES 220 x 4.7 = 1034 1034 / 1000 = 1.034 kVA 3. THREE-PHASE • Given: We have a large EMC Symmetrix 3930-18/-36 storage system with 192 physical volumes. EMC's website shows a requirement for a 50-amp 208 VAC receptacle. For this calculation, we will use 21 amps. Do not calculate any value for the plug or receptacle. KILOVOLT-AMPERES (kVA) = VOLTS x AMPERES x 1.73 208 x 21 x 1.73 = 7,556.64 7,556.64 / 1000 = 7.556 kVA To Find Kilowatts • Finding Kilowatts is a bit more complicated in that the formula includes a value for the "power factor". The power factor is a nebulous but required value that is different for each electrical device. It involves the efficiency in the use of of the electricity supplied to the system. This factor can vary widely from 60% to 95% and is never published on the equipment nameplate and further, is not often supplied with product information. For purposes of these calculations, we use a power factor of .85. This arbitrary number places a slight inaccuracy into the numbers. Its OK and it gets us very close for the work we need to do. 1. SINGLE PHASE Given: We have a medium-sized Compaq server that draws 6.0 amps. KILOWATT (kW) = VOLTS x AMPERES x POWER FACTOR 120 * 6.0 = 720 VA 720 VA * .85 = 612 612 / 1000 = .612 kW 2. TWO-PHASE KILOWATT (kW) = VOLTS x AMPERES x POWER FACTOR x 2 220 x 4.7 x 2 = 2068 2068 x .85 = 1757.8 1757.8 / 1000 = 1.76 kW 3. THREE-PHASE • Given: We have a large EMC Symmetrix 3930-18/-36 storage system with 192 physical volumes. EMC's website shows a requirement for a 50-amp 208 VAC receptacle. For this calculation, we will use 22 amps. Do not calculate the value of the plug or receptacle. Use the value on nameplate. KILOWATT (kW) = VOLTS x AMPERES x POWER FACTOR x 1.73 208x22x1.73 = 7,916.48 7,916.48 * .85 = 6,729.008 6,729.008/1000=6.729 kW To Convert Between kW and kVA • The only difference between kW and kVA is the power factor. Once again, the power factor, unless known, is an approximation. For purposes of our calculations, we use a power factor of .85. The kVA value is always higher than the value for kW. kW to kVA kW / .85 = SAME VALUE EXPRESSED IN kVA kVA TO kW kVA * .85 = SAME VALUE EXPRESSED IN kW To Find BTUs From Electrical Values • Known and Given: 1 kW = 3413 BTUs (or 3.413 kBTUs) • The above is a generally known value for converting electrical values to BTUs. Many manufacturers publish kW, kVA and BTU in their equipment specifications. Often, dividing the BTU value by 3413 does not equal their published kW value. So much for knowns and givens. Where the information is provided by the manufacturer, use it. Where it is not, use the above formula.
{"url":"http://www.abrconsulting.com/Conversions/elec-con.htm","timestamp":"2014-04-16T13:09:16Z","content_type":null,"content_length":"16991","record_id":"<urn:uuid:3a11af47-ebb8-4ffd-b0d6-14c1f49b64ea>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: what is the sequence diagram of electric pencil sharpener • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ffe82ee4b0426c6368541e","timestamp":"2014-04-21T07:47:51Z","content_type":null,"content_length":"29793","record_id":"<urn:uuid:06ed600c-4d84-47b4-bd54-0c732c79c6ef>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Formulated by Karl Friedrich Gauss, arguably one of the greatest mathematicians in history, who provided significant contributions to many areas of both mathematics and theoretical physics. Gauss's Law expresses the relationship between electric charge and electric field and provides an alternative to Coulomb's Law. It states that total electric flux through a closed surface enclosing a definite volume is proportional to the net charge inside the surface. It is not affected by the radius from the charge to the surface in any way. It can be expressed in its simplest form as: Electric Flux = Charge enclosed / Epsilon Nought (the Permeability of free space, constant) The surface involved can be imaginary, there need not be any material present at the location of the surface - such a closed surface can be referred to as a Gaussian Surface. Gauss's Law can be used in calculating the electric fields caused by a variety of charge distributions, or in reverse to determine charge distribution from a known electric field pattern. Without a computer, such calculations are usually feasible when both the charge distribution and the Gaussian Surface under consideration have some symmetrical property. Most calculations involving this law involve some degree of integration, but this can be avoided in simple examples with careful algebraic manipulation.
{"url":"http://everything2.com/title/Gauss%2527s+Law","timestamp":"2014-04-16T11:04:26Z","content_type":null,"content_length":"19870","record_id":"<urn:uuid:13b60fc2-2c60-4d93-b25e-9a43cc3b2aa5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Conducted Emissions testing A negative resistance element is capable of serving as an amplifier. A switchmode power supply of extremely high efficiency has a negative dynamic input impedance. ( Please see my Switchmode dynamic impedance article on LICN ) That negative impedance can have an amplifying effect which can act on whatever stray signals and noise that are coming in on the power line. Testing of a switchmode power supply for its own conducted emissions can be compromised by this. We first look at the concept of amplification by a negative resistance in its simplest form by examining a voltage divider. We then look at this concept in real world action. In the left image below, a set of frequency comb lines is seen on our spectrum analyzer when we're testing for the "conducted emissions" of a table lamp. The lamp is obviously not the source of that spectra. But then, when the lamp is replaced by a switchmode power supply, the comb lines show up very much higher on the analyzer display. This is not a good thing. Power lines can be carrying unexpected signals. For example, I have been told that these comb lines may be coming from signals deliberately placed on the power line by the utility company as part of a data carrying load management system. Whatever their cause, since they are there and cannot be removed, the question to ask is, do you have some really good power line filtering in place for their removal from your test results? If not, do you have a very clean, on-site line power generator of your own? We now look at an analysis of the effect of having a negative load impedance when using a typical line impedance stabilization network (LISN) for conducted emissions testing. We begin with this algebraic analysis of the transfer function from the power line to the spectrum analyzer: We then use this last equation to look at the transfer function from the power line to the spectrum analyzer versus the value of the load, R2 for both positive and negative values of R2. Then we also make a SPICE model and when we compare the outcomes, we find that they agree: We find that the signal attenuation that is normally expected of the LISN from the power line to the spectrum analyzer can be totally lost above some particular corner frequency. In such a case, unwanted signals coming in on the power line and appearing on the spectrum analyzer, which may very possibly be above the conducted emssion limits for the UUT, can be falsely attributed to an innocent UUT. The complexity of this effect can be intimidating. If we look at a few different load values while still assuming them to be constant over all frequencies, we see tremendous variability in the degree of harm to the LISN transfer function as follows: The dynamic impedance presented by the UUT is not at all likely to be a constant value over our entire reange of frequencies. Try now to imagine what these curves would look like if the load impedance were to vary as a function of frequency. It boggles the imagination. However, a diminished capability of the LISN to filter out power line signals would still be an issue. Just in case you'd like to play around with the above algebraic analysis, this is the GWBASIC code (Yes, I still use it.) for doing that. Run it under Windows 98SE or earlier. On some machines, it will also work with Windows XP, but not on all machines. 10 CLS:SCREEN 9:COLOR 15,1:YSTART=240:XSTART=40:PI=3.14159265# 20 PRINT "save "+CHR$(34)+"lisnrneg.bas"+CHR$(34):PRINT:ON ERROR GOTO 270 30 PRINT "save "+CHR$(34)+"a:\lisnrneg.bas"+CHR$(34):PRINT:PRINT 40 C$="###ê Load ###.# dB":D$="100 Hz to 100 MHz":FDBHOLD=1000000! 50 KK=0:F=100:FOR HX=0 TO 40:HDB=-HX:GOSUB 190:XHOLD=X:X=X+2 60 IF ABS(HX-5*INT(HX/5))<.01 THEN X=X+4 70 IF ABS(HX-10*INT(HX/10))<.01 THEN X=X+4 80 GOSUB 200:X=XHOLD:GOSUB 200:NEXT HX:KK=0 90 FOR FX=2 TO 7:FOR FY=1 TO 10:F=FY*10^FX:HDB=-39.99:GOSUB 190:YHOLD=Y:Y=Y+4 100 IF ABS((FY-1)*(FY-10))<.01 THEN Y=Y+4 110 GOSUB 200:Y=YHOLD:GOSUB 200:NEXT FY:NEXT FX:KK=0 120 PRINT D$:PRINT:PRINT FDBHOLD/1000000!;"MHz":PRINT:GOTO 240 130 REM 140 REM Transfer function subroutine 150 W=2*PI*F:DR=(R1+R2)-W^2*L1*C1*(R2+R3):DI=W*(L1+C1*(R1*R2+R2*R3+R1*R3)) 160 DEN=SQR(DR^2+DI^2):NUM=W*R2*R3*C1:H=ABS(NUM/DEN) 170 HDB=20*LOG(H)/LOG(10):IF ABS(F-FDBHOLD)<100 THEN HDBHOLD=HDB 180 RETURN 190 X=30*LOG(F):Y=HDB*5:IF HDB<-40 THEN KK=0 200 CC=XSTART+X:DD=(320-Y-YSTART):IF KK<>0 THEN LINE (AA,BB)-(CC,DD) 210 AA=CC:BB=DD:KK=1:RETURN 220 COLOR 15-CT,1:FOR FX=2 TO 7:FOR FY=1 TO 10 STEP .01:F=FY*10^FX 230 GOSUB 150:GOSUB 190:NEXT FY:NEXT FX:KK=0:RETURN 240 READ R1,R3,C1,L1:DATA .01,50,.25e-6,50e-6 250 READ R2:GOSUB 220:CT=CT+1:IF CT>5 THEN CT=1 260 PRINT USING C$;R2,HDBHOLD:GOTO 250 270 RESUME 280 280 COLOR 15,1:DATA 50,10,2,-30,-45,-49,-50:REM Load values Loading comments... Write a Comment
{"url":"http://www.edn.com/electronics-blogs/living-analog/4406289/Conducted-Emissions-testing-","timestamp":"2014-04-19T09:24:54Z","content_type":null,"content_length":"66620","record_id":"<urn:uuid:847c8e2b-1e2d-4c69-85a0-a74f221366b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
5.2 Understanding Distance, Speed, and Time Relationships Using Simulation Software The following links to an updated version of the e-example and includes activities to help students in the upper elementary grades understand ideas about functions and about representing change over time, as described in the Algebra Standard. │ │ Understanding Distance, Speed, and Time (5.2) │ │ │ │ │ │ This two-part e-example uses two runners with variable starting points and speeds in order to find the relationship between time and distance. Use the puzzle mode to manipulate speed in │ │ │ order to follow the pace of a fellow runner. │ The e-example below contains the original applet which conforms more closely to the pointers in the book. │ │ Using the Runners App (5.2.2) │ │ │ │ │ │ This example includes a software simulation of two runners along a track. Students can control the speeds and starting points of the runners, watch the race, and examine a graph of the │ │ │ time-versus-distance relationship. The computer simulation uses a context familiar to students, and the technology allows them to analyze the relationships more deeply because of the ease of │ │ │ manipulating the environment and observing the changes that occur. │ Set a starting position for the runners by dragging their icons along the tracks. Change the direction they face by clicking once on their icons. Set the length of the stride for each runner using the controls on the lower left. What do you think the race will look like? Who will go farther in 100 "seconds"? (Note: It's convenient to call the units of time "seconds" for discussion purposes, although the simulation runs much faster.) Click Go to run the simulation. How to Use the Interactive Figure • Set a starting position for each runner by dragging his or her icon along the tracks. • Change the direction the runner faces by clicking once on the runner's icon. • Set a step size for each runner by clicking on the Arrow Up or Arrow Down buttons in the lower left. • Click on an icon for either runner in the lower left to turn it on/off. Play. Runs the simulation from current position. Pause. Press Play to resume. Reset. Returns runners to their previous positions. Follow-Up Questions and Tasks Think about and discuss the following: What does the graph show? Did what happened match your prediction? If it did, how does the graph show what you predicted? If not, why do you think what happened was different from what you expected? Click on Get Ready to position the boy and the girl to start a new race. Make a change in one of your settings (e.g., the length of the girl's stride or the boy's starting position). How will this change affect the graph? Run the simulation again and see what happens. Continue making changes and predicting the result. After each run of the simulation, think about what the graph shows and think about what happened and why. This example illustrates computer software that engages students in the upper elementary grades in ideas about functions and about representing change over time. The software and examples in this activity are based on the Trips software (Clements, Nemirovsky, and Sarama 1996). This software allows students to analyze change by setting the starting positions and length of stride (speed) for two runners. Students then observe the simulated races as they happen and relate the changing positions of the two runners to dynamic representations that change as the events occur. Students can predict the effects on the graph of changing the starting position or the length of the stride of either runner. They can observe and analyze how a change in one variable, such as length of stride, relates to a change in speed. This computer simulation uses a familiar context that students understand from daily life, and the technology allows them to analyze the relationships in this context deeply because of the ease of manipulating the environment and observing the changes that occur. In this activity, students are working with functional relationships. As students work with this example, they need to be encouraged by the teacher to analyze how a change in the starting position or the length of the stride will affect the time needed to reach the finish lines. Acting out different stories about the "trips" can help students visualize the effect of, for example, increasing the length of the stride or having one runner start in a position ahead of the other runner. As students become familiar with the simulation, they can analyze each situation numerically by building a table showing the relationship between time and distance. By inspecting the track, the graph, and the table, students can become more precise in reasoning quantitatively about the relationships ("The length of the boy's stride is 2, so you know his distance by multiplying the time by 2"). Older elementary school students can relate the boy's and girl's trips proportionally ("The girl goes twice as far as the boy in the same amount of time"). Students can begin to describe rate of change informally by inspecting the slope of the line ("The girl's line is steeper because she is moving faster"). Interpreting two-variable graphs will be unfamiliar to many students in this age group. Part of the teacher's role is to help them connect what is happening on the graph to what is happening on the track: How long does it take for the boy to go the same distance as the girl has traveled in fifty "seconds"? How can you see this demonstrated on the track? On the graph? Where on the track does the girl catch up to the boy? Where is this point on the graph? Additional Tasks and Questions • Set the starting position and length of stride for both runners. Run the simulation. Now write a story that describes the trip. For example, "The girl is going really fast. She catches up to and passes the boy, who is going slow," or "The girl started way behind the boy, who was already halfway to the tree by the time she got going. She went really fast and caught up to him more and more. Finally, at 75 she passed him and kept going really fast and got to the tree first." • Three motion stories are told below. Before the students use the simulation, have them physically simulate the motion stories (with their bodies). Then develop specific instructions (starting position and length of stride for each runner) to produce the action in the stories. Try out the instructions using the computer simulation above. □ Motion Story 1. The boy and girl start from the same position. The girl gets to the tree ahead of the boy. □ Motion Story 2. The boy starts behind the girl. The boy gets to the tree before the girl. □ Motion Story 3. The boy starts at the tree and the girl starts at the house. The boy gets to the house before the girl gets to the tree. • Look at the two graphs below, which show the results of different motion stories. Develop a set of instructions to produce each trip. Take Time to Reflect • Do you think students would enjoy using this computer activity? Why or why not? What are they likely to focus on? • How can teachers help students become comfortable moving among various techniques for organizing and representing ideas about relationships and functions? • What important ideas about functions and representing change over time can students learn while working on this activity? This activity and applet were adapted with permission from the Trips software, Clements, Nemirovsky, and Sarama (1996). The activity was adapted with permission from Tierney et al. (1998). Clements, Douglas H., Nemirovsky, Ricardo, & Sarama, Julie. Trips (computer program). Palo Alto, Calif.: Dale Seymour Publications, 1996. Tierney, Cornelia, Ricardo Nemirovsky, Tracy Noble, and Doug Clements. Investigations in Number, Data, and Space: Patterns of Change. Palo Alto, Calif.: Dale Seymour Publications, 1998.
{"url":"http://www.nctm.org/standards/content.aspx?id=25037","timestamp":"2014-04-17T10:20:51Z","content_type":null,"content_length":"49164","record_id":"<urn:uuid:b3dc2372-b944-4d4e-bc78-104be2aed3cd>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Hitchhiker's Guide to Ghosts and Spooks in Particle Physics On Halloween this year the CDF collaboration at Fermilab's Tevatron the presence of ghosts in their detector. And not just one meager Poltergeist rattling his chain, but a whole hundred-thousand army. As for today, the ghosts could not be exorcised by systematical effects. While waiting for theorists to incorporate the ghosts into their favorite models of new physics it is good to know that the CDF anomaly is by no means the only puzzling experimental result in our field. There are other ghosts at large: I guess most of them are due to unknown systematical errors, but some may well be due to new physics. Below I pick up a few anomalous results in subjective order of relevance. The list is not exhaustive - you are welcome to complain about any missing item. So, off we go. In this post I restrict to collider experiments, leaving astrophysics for the subsequent post. Muon Anomalous Magnetic Moment This experimental result is very often presented as a hint to physics beyond the Standard Model. For less oriented: there is nothing anomalous in the anomalous magnetic moment itself - it is a well-understood quantum effect that is attributed to virtual particles. But in the muon case, theoretical predictions slightly disagree with experiment. The E821 experiment in Brookhaven measured $a_\mu = (11 659 208 \pm 6)\cdot 10^{-10}$. The Standard Model accounts for all but $28\cdot 10^{-10}$ of the above, which represents a 3.4 sigma discrepancy. The discrepancy can be readily explained by new physics, for example by low-energy supersymmetry or by new light gauge bosons mixing with the photon. But there is one tiny little subtlety. The Standard Model prediction depends on low-energy QCD contributions to the photon propagator that cannot be calculated from first principles. Instead, one has to use some experimental input that can be related to the photon propagator using black magic and dispersion relations. Now, the discrepancy between theory and experiment depends on whether one use the low-energy e+e- annihilation or the tau decays as the experimental input. The quoted 3.4 sigma arises when the electron data are used, whereas the discrepancy practically disappears when the tau data are used. It means that some experimental data are wrong, or some theoretical methods employed are wrong, or both. In near future, a certain measurement may help to resolve the puzzle. The troublesome QCD contribution can be extracted from a process studied in BaBar, in which a photon decays into two pions (+ initial state radiation). There are that the preliminary BaBar results point to a larger QCD contribution (consistent with the tau data). This would eradicate the long-standing discrepancy of the muon anomalous magnetic moment. But, at the same time, it would imply that there is a flaw in the e+e- annihilation data, which would affect other measurements too. Most notably, the electron data are used as an input in determining the hadronic contribution to the electromagnetic coupling, which is one of the key inputs in fitting the Standard Model parameters from electroweak observables. As pointed out in this paper , if the low-energy QCD contribution where larger than implied by the electron data, the central value of the fitted Higgs boson mass would decrease. Currently, the electroweak fit determines the Higgs boson mass as $77^{+28}{}_{-22}$, which is already uncomortable with the 114 GeV direct search limit. Larger QCD contributions consisent with the tau data would increase this tension. Interesting times ahead. Forward-Backward Asymmetry CERN's LEP experiment has been desperately successful: it beautifully confirmed all theoretical predictions of the Standard Model. The mote in the eye is called $A_{fb}^b$: the forward- backward asymmetry in decays of the Z-boson into the b-quarks. This observable measures the asymmetry in the Z boson interactions with left-handed b-quarks and right-handed ones. The results from LEP and SLD led to a determination of $A_{FB}^b$ that deviates 3 sigma from the Standard Model prediction. On the other hand, the total decay width of the Z-boson into the b-quarks (summarized in the so-called Rb) seems to be in a good agreement with theoretical predictions. One possible interpretation of these two facts is that the coupling of the Z-boson to the right-handed b-quarks deviates from the Standard Model, while the left-handed coupling (who dominates the measurement of Rb) agrees with the Standard Model. At first sight this smells like tasty new physics - the Zbb coupling is modified in many extensions of the Standard Model. In practice, it is not straightforward (though not impossible) to find a well-motivated model that fits the data. For example, typical Higgsless or Randall-Sundrum models predict large corrections to the left-handed b-quark couplings, and smaller corrections to the right-handed b-quark couplings, contrary to what is suggested by the electroweak observables. Maybe this discrepancy is just a fluke, or maybe this particular measurement suffers from some systematic error that was not taken into account by experimentalists. But the funny thing is that this measurement is usually included in the fit of the Standard Model parameters to the electroweak observables because...it saves the Standard Model. If $A_{FB}^b$ was removed from the electroweak fit, the central value of the Higgs boson would go down, leading to a large tension with the 114 GeV direct search limit. Bs Meson Mixing Phase The results from BaBar and Belle led to one Nobel prize and zero surprises. This was disappointing, because flavor-changing processes studied in these B-factories are, in principle, very sensitive to new physics. New physics in sd transitions (kaon mixing) and bd transitions is now tightly constrained. On the other hand, bs transitions are less constrained, basically because the B-factories were not producing Bs mesons. This gap is being filled by the Tevatron who has enough energy to produce Bs mesons and study its decays to J/psi. In particular, the mass difference of the two Bs eigenstates was measured and a constraint on the phase of the mixing could be obtained. The latter measurement showed some deviation from the Standard Model prediction, but by itself it was not statistically significant. Later in the day, the UTfit collaboration the Bs meson data with all other flavor data. Their claim is the Bs mixing phase deviates from the Standard Model prediction at the 3 sigma level. This could be a manifestation of new physics, though it is not straightforward to find a well-motivated model where the new physics shows up in bs transitions, but not in bd or sd transitions. NuTeV Anomaly was an experiment at Fermilab whose goal was a precise determination of the ratio of neutral current to charged current reactions in neutrino-nucleon scattering. Within the Standard Model, this ratio depends on the Weinberg angle $\sin \theta$. It turned out that the magnitude of the Weinberg angle extracted from the NuTeV measurement deviates at the 3 sigma level from other measurements. It is difficult to interpret this anomaly in terms of any new physics scenario. A mundane explanation, e.g. incomplete understanding of the structure of the nucleons, seems much more likely. The dominant approach is to ignore the Nu-TeV measurement. HyperCP Anomaly This measurement was sometimes mentioned in the context of the CDF anomaly, because the scales involved are somewhat similar. Fermilab's experiment found evidence for decays of the hyperon (a kind of proton with one s quark) into one proton and two muons. This by itself is not inconsistent with the Standard Model. However, the signal was due to three events where the invariant mass of the muon pair was very close to 214 MeV in each case, and this clustering appears very puzzling. The HyperCP collaboration that this clustering is due the fact that the hyperon first decays into a proton and some new particle with the mass 214 MeV, and the latter particle subsequently decays into a muon pair. It is very hard (though, again, not impossible) to fit this new particle into a bigger picture. Besides, who would ever care for 3 events? GSI Anomaly For dessert, something completely crazy. The accelerator GSI Darmstadt can produce beams of highly ionized heavy atoms. These ions can be stored for a long time and decays of individual ions can be observed. A really weird thing was in a study of hydrogen-like ions of praseodymium 140 and promethium 142. The time-dependent decay probability, on top of the usual exponential time-dependence, shows an oscillation with a 7s period. So far the oscillation remains unexplained. There were attempts to connect it to neutrino oscillations, but this has failed. Another crazy possibility is that the ions in question have internal excitations with a small $10^{-15}$ eV mass splitting. 13 comments: Very good post. Thanks. good one Jester, nice to see all the ghosts happy together. hadn't heard about the GSI one, it looks strange indeed... I heard a talk on the GSI at Neutrino08 and the authors' conclusion tended towards it being an as yet not understood systematic effect, but I suppose that is to be expected. Dear Jester, thank you for a nice summary. A couple of anomalies might be added: the anomalous e+e- pairs in heavy ion collisions, the ortopositronium decay rate anomaly (which both can be understood as evidence for electro-pions identified as bounds states of color excited leptons). It would be interesting to know also the status of Karmen anomaly. It would be nice to see how muon g-2 for electron and muon are affected by two-photon coupling to leptopions. I give some references in the hope that someone might get interested. Electro-pion anomaly S. Barshay (1992) , Mod. Phys. Lett. A, Vol 7, No 20, p. 1843. J.Schweppe et al.(1983), Phys. Rev. Lett. 51, 2261. M. Clemente et al. (1984), Phys. Rev. Lett. 137B, A. Chodos (1987), Comments Nucl. Part. Phys., Vol 17, No 4, pp. 211, 223. L. Kraus and M. Zeller (1986), Phys. Rev. D 34, Orto-positronium decay rate anomaly C. I. Westbrook ,D. W Kidley, R. S. Gidley, R. S Conti and A. Rich (1987), Phys. Rev. Lett. 58 , Karmen anomaly KARMEN Collaboration, B. Armbruster et al (1995) , Phys. Lett. B 348, 19. V. Barger, R. J. N. Phillips, S. Sarkar (1995), Phys. Lett. B 352,365-371. Dear Jester, as I told earlier, the leptonic color predicted by TGD promises a solution to large number of anomalies, also CDF anomaly. The predicted lifetime for charge tau-pion is same as the lifetime of the possibly existing new particle. The neutral tau-pions and their p-adically scaled up variants with masses coming as powers of two would correspond to the three states proposed by CDF collaboration: mass predictions are consistent with the proposal of CDF. The decays of these neutral pions to 3 pions almost at rest explain the jet like structure. The remaining challenge was to estimate the production cross section. A brief article summarizing the details of the calculation of the tau-pion production cross section can be found from my home page. Here is the abstract. The article summarizes the quantum model for tau-pion production. Various alternatives generalizing the earlier model for electro-pion production are discussed and a general formula for differential cross section is deduced. Three alternatives inspired by eikonal approximation generalizing the earlier model inspired by Born approximation to a perturbation series in the Coulombic interaction potential of the colliding charges. The requirement of manifest relativistic invariance for the formula of differential cross section leaves only two options, call them I and II. The production cross section for tau-pion is estimated and found to be consistent with the reported cross section of about 100 nb for option I under natural assumptions about the physical cutoff parameters (maximal energy of tau-pion center of mass system and the estimate for the maximal value of impact parameter in the collision which however turns out to be unimportant unless its value is very large). For option II the production cross section is by several orders of magnitude too small. Since the model involves only fundamental coupling constants, the result can be regarded as a further success of the tau-pion model of CDF anomaly. Analytic expressions for the production amplitude are deduced in the Appendix as a Fourier transform for the inner product of the non-orthogonal magnetic and electric fields of the colliding charges in various kinematical situations. This allows to reduce numerical integrations to an integral over the phase space of lepto-pion and gives a tight analytic control over the numerics. See also the postings in my blog. A little addition to the previous posting. The plot for differential production cross section is here. Great post Jester! Dear Jester, GSI anomaly brings in mind the nuclear decay rate anomalies which I discussed some time ago in the posting Tritium beta decay anomaly and variations in the rates of radioactive processes in my blog. These variations in decay rates are in the scale of year and decay rate variation correlates with the distance from Sun. Also solar flares seem to induce decay rate variations. The TGD based explanation relies on nuclear string model in which nuclei are connected by color flux tubes having exotic variant quark and antiquark at their ends (TGD predicts fractal hierarchy of QCD like physics). These flux tubes can be also charged: the possible charges +,-1,0. This means a rich spectrum of exotic states and a lot of new low energy nuclear physics. The energy scale corresponds to Coulomb interaction energy alpha m, where m is mass scale of exotic quark. This means energy scale of 10 keV for MeV mass scale. The well-known poorly understood X-ray bursts from Sun during solar flares in the wavelength range 1-8 A correspond to energies in the range 1.6-12.4 keV -3 octaves in good approximation- might relate to this new nuclear physics and in turn might excite nuclei from the ground state to these excited states and the small mixture of exotic nuclei with slightly different nuclear decay rates could cause the effective variation of the decay The question is whether there could be a natural varying flux of X rays in time scale of 7 seconds causing the rate fluctuation by the same mechanism also in GSI experiment. In any case, the prediction is what might be called X ray nuclear physics and artificial X ray irradiation of nuclei would be an easy manner to kill or prove the hypothesis. See my blog and the chapter Nuclear String Hypothesis of "p-Adic length scale Hypothesis and Dark Matter Hierarchy". Dear Matti Pitkanen, Please stop filling this nice blog with your spam! It gets annoying to have to scroll down through your long posts. Surely if someone is interested in your theory they can just go to your blog and discuss it there. and now particles that go bump in the ATIC ... Dear Anonymous, I have been always wondering where people like you having nothing constructive to say come from, and why they are not automatically moderated out of serious discussion. Anyone interested in 7 second time scale of GSI anomaly and able to read and understand simple ten line long argument can come and see the discussion in my blog. I suspect that Matti is God's wrath sent down upon me for all my sins. I guess I have to take it stoically, like the Plagues of Egypt, or the Bubonic Pest. Dear Jester, I suggest that you come to my blog and demonstrate that the argument deriving the oscillation period 7 seconds for the decay rate variation in GSI experiment contains a fatal mistake. We could even make a bet: 1000 euros for you if you find the mistake and vice versa. In this manner you could also prove that you are not only misusing your academic position. The structure of argument is very simple. a) Nuclear string model predicts existence of two states corresponding to charged and neutral color bonds. b) The interaction potential corresponds to W Coulombic potential predicted by induced gauge field concept (classical gauge fields as projections of CP_2 spinor curvature to space-time surface: basic difference from standard model). This induces oscillating charge entanglement between nucleon and quark at the end of color bond. Nucleon and quark oscillate between their two charge states. Total charge is of course conserved. c) Anyone worked with 2-state systems knows how to model the situation. It is simple thing to calculate the value of interaction potential energy giving oscillation frequency omega= V/hbar. If the distance of the end of quark from nucleon is fraction .61 of protons Compton radius you get 7 seconds. Please come to blog and demonstrate me were the fatal error is and you own 1000 euros.
{"url":"http://resonaances.blogspot.com/2008/11/hitchhikers-guide-to-ghosts-and-spooks.html?showComment=1227070020000","timestamp":"2014-04-16T21:51:49Z","content_type":null,"content_length":"122711","record_id":"<urn:uuid:96648925-a3e5-49f6-97f9-4132f202f2c7>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
About Mathematics Do You Like Solving Problems? • Are you fascinated by mathematical ingenuity and discovery? • Would you like to contribute to any one of such diverse fields as financial forecasting, space systems, or education? Mathematics Is the Language of Science Mathematics relies almost entirely on imagination and discovery that come from the human mind. There are no boundaries to mathematical ideas. From Ancient Times to the Present Age of Information Mathematics is rich in history and application. It is used: • to break codes during war time • to analyze documents • to design transportation systems • to compute missile trajectories and satellite orbits • to handle complex management scheduling From the ancient Egyptians to the robotics scientists of this century, mathematics has been integral to many technical advances in the history of humankind. Visit the American Mathematical Society's Mathematical Moments for a more extensive list of areas in which mathematics impacts our lives. Mathematicians Are in Demand! Career possibilities include work in finance, cryptology, computer graphics, medicine, robotics, economics, statistics, teaching, and management science. The American Mathematical Society maintains a list of jobs held by recent graduates in mathematics. You might be interested in the Mathematics Department Factsheet, which contains information about the department and our graduates. Mathematics Teachers Are in Demand! The College Board shows that 44 percent of current teachers in Pennsylvania will be eligible to retire in the next six years. The state is expecting an increase of more than four million elementary and secondary students during this same time. You might be interested in the Mathematics Education Factsheet, which contains information about our education programs and our graduates. Professional Organizations Devoted to Mathematical Sciences American Mathematical Society (mathematical research and scholarship) American Statistical Association (scientific and educational society to promote excellence in the application of statistical science) Consortium for Mathematics and Its Applications (to improve mathematics education for students of all ages) Institute for Operations Research and the Management Sciences (for operations research educators, investigators, scientists, students, managers, and consultants) Mathematical Association of America (for educators, students, professionals, and math enthusiasts) [Allegheny Mountain Section] National Council of Teachers of Mathematics (mathematics education) [Pennsylvania] Sigma Xi (scientific research society) [local chapter] Society for Industrial and Applied Mathematics (the application of mathematics and computational science to engineering, industry, science, and society)
{"url":"http://www.iup.edu/math/about/default.aspx","timestamp":"2014-04-17T09:48:55Z","content_type":null,"content_length":"25972","record_id":"<urn:uuid:34a2df3d-b5f6-4c03-9b86-a285e00efb49>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Formulated by Karl Friedrich Gauss, arguably one of the greatest mathematicians in history, who provided significant contributions to many areas of both mathematics and theoretical physics. Gauss's Law expresses the relationship between electric charge and electric field and provides an alternative to Coulomb's Law. It states that total electric flux through a closed surface enclosing a definite volume is proportional to the net charge inside the surface. It is not affected by the radius from the charge to the surface in any way. It can be expressed in its simplest form as: Electric Flux = Charge enclosed / Epsilon Nought (the Permeability of free space, constant) The surface involved can be imaginary, there need not be any material present at the location of the surface - such a closed surface can be referred to as a Gaussian Surface. Gauss's Law can be used in calculating the electric fields caused by a variety of charge distributions, or in reverse to determine charge distribution from a known electric field pattern. Without a computer, such calculations are usually feasible when both the charge distribution and the Gaussian Surface under consideration have some symmetrical property. Most calculations involving this law involve some degree of integration, but this can be avoided in simple examples with careful algebraic manipulation.
{"url":"http://everything2.com/title/Gauss%2527s+Law","timestamp":"2014-04-16T11:04:26Z","content_type":null,"content_length":"19870","record_id":"<urn:uuid:13b60fc2-2c60-4d93-b25e-9a43cc3b2aa5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Elements living in the conjugacy class and in the centralizer of an m-cycle in Am up vote 7 down vote favorite Let m>1 be an odd natural number, x a m-cycle in Am, the alternating group in m letters, C the conjugacy class of x in Am. Questiom: How can I describe the elements in the set { j | x^j in C} in terms of m? For instance, if C' is the conjugacy class of x in Sm, the symmetric group in m letters, then { j | x^j in C} = { j | (j,m)=1 }, where (j,m) = Greatest common divisor of j and m. But in Am, C' splits in two conjugacy classes of Am of the same size: C and the conjugacy class of (1 2)x(1 2) in Am. Thank you in advance. Fernando. gr.group-theory nt.number-theory co.combinatorics finite-groups Retagged to add nt, co, and finite-groups based on tags of similar questions. – Douglas Zare Feb 26 '10 at 6:23 add comment 2 Answers active oldest votes The set is the quadratic residues when $m$ is prime, but usually not when $m$ is composite. For example, $(0,1,2,3,4,5,6,7,8)$ is conjugate to $(0,2,4,6,8,1,3,5,7)$ in $A_9$ even though $2$ is not a square mod $9$, so there is no additional condition beyond $(j,9)=1$. up vote 5 For $m$ odd, the sign of the permutation on $\mathbb Z/ m\mathbb Z$ of multiplication by $j$ is the Jacobi symbol $\big(\frac jm\big)$. (This perspective on the Jacobi symbol is natural down vote from one of Gauss's proofs of quadratic reciprocity, but it's also theorem 1 here. Also see Zolotarev's lemma.) Since there are two conjugacy classes of $m$-cycles in $A_m$, $\big(\frac jm \big)=+1$ iff $x$ is conjugate to $x^j$ in $A_m$. add comment Thank you, Douglas. With the notation giving above and that giving in the paper of Marek Szyjewski (that you refered me), the following statements are equivalent: 1) x^j in C, 2) sgn(lambda_ j )=1, 3) J (j,m)=1, J the Jacobi symbol. 1) <=> 2) is easy (I have chequed). 2) <=> 3) is Theor. 1 of the paper of Marek Szyjewski. This is an unplubished article yet. I had no time to chequed all of it; I have only chequed Case 1, but I guess that Case 2 and 3 are correct.(?) up vote 0 down vote I am interested in the case m=3 p, with p>3 prime. I need to prove that there exist j, with j mod 3 =2, such that x^j in C. This amounts to prove that there exists j, 0< j < m, such that: -) ( j,m)=1, -) j mod 3 =2 (i.e. J( j,3)= -1), -) J( j,p)= -1, because J( j,m)=J( j,3) J( j,p). Do you have any clue for that? Thank you in advance. add comment Not the answer you're looking for? Browse other questions tagged gr.group-theory nt.number-theory co.combinatorics finite-groups or ask your own question.
{"url":"http://mathoverflow.net/questions/16361/elements-living-in-the-conjugacy-class-and-in-the-centralizer-of-an-m-cycle-in-a","timestamp":"2014-04-17T10:09:43Z","content_type":null,"content_length":"56497","record_id":"<urn:uuid:0193c2bc-25e3-4c27-a16f-3aca76e8bcf5>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Double integral with transformations. May 2nd 2013, 11:37 PM #1 Oct 2012 Double integral with transformations. I need help in determine the limits of the double integrals where I had to use transformations. The double integral is $\int\int_R (4x + 8y) dA$ where R is a parallelogram with vertices (-1,3), (1,-3), (3,-1) and (1,5). The transformations were given as: $x = \frac{1}{4}(u+v)$, $y=\frac{1}{4} (v-3u)$. Using the transformation equations of $u=x-y$ and $v=3x+y$ (solving simultaneoulsy from above), the points in the transformed plane $T^{-1}$ become (-4,-2), (-4,-6), (4,10) and (4,6). I calculated the Jacobian as $\mid 4 \mid$ and the integrand becomes $3v-6u$. $\int\int_R (4x+8y) dA = \int\int_r (3v-6u) \mid 4 \mid dv du$ where r is the region in the transformed plane. $= 12 \int_{-4}^{4} \int_*^* (v-2u) dv du$ Could someone please help to me with what the * * limits of the inner integral would be and correct any errors I have made in getting this far? Thanks in advance, Re: Double integral with transformations. Sorry for putting anyone out. I think I found the problem. Simple math errors have brought me undone. First off I calculated the transformed points wrong. They should of been (-4,0), (-4,8), (4,0) and (4,8) which gives a nice square. Also the integrand should of been $3v - 5u$. $\int_{-4}^{4}\int_0^8 (3v-5u) \mid4\mid dv du = 3072$ Thanks to anyone you read this. Please help with my other post about the TRIPLE INTEGRAL. May 3rd 2013, 04:40 AM #2 Oct 2012
{"url":"http://mathhelpforum.com/calculus/218508-double-integral-transformations.html","timestamp":"2014-04-23T18:14:10Z","content_type":null,"content_length":"34578","record_id":"<urn:uuid:cc4be966-62ef-41f0-81bd-ea5a5c3a3e53>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Article 12 Stephen M. Phillips Flat 3, 32 Surrey Road South. Bournemouth. BH4 9BP. England. Website: http://smphillips.8m.com “In ancient times, music was something other than mere pleasure for the ear: it was like an algebra of metaphysical abstractions, knowledge of which was given only to initiates, but by the principles of which the masses were instinctively and unconsciously influenced. This is what made music one of the most powerful instruments of moral education, as Kong-Tsee (Confucius) had said many centuries before Plato.” G. de Mengel, Voile d’Isis ┃ The tetrahedral generalisation of the Platonic Lambda discussed in Article 11 is shown to generate the tone ratios of the Pythagorean scale. Godname numbers define properties of ten octaves, ┃ ┃ which conform to the pattern of the Tree of Life. The latter is exhibited also in the 32 notes above the fundamental up to the perfect fifth of the fifth octave, which has a tone ratio of 24. ┃ ┃ Being the tenth overtone and therefore corresponding to Malkuth explains why this number is central to the physics of the superstring. The numbers in the tetractys form of Plato’s Lambda are ┃ ┃ shown to be — individually or in combination — the numbers of the various musical sounds that can be played with ten notes arranged in a tetractys. The number of melodic intervals, chords and ┃ ┃ broken chords is found to be the number of charge sources of the unified, superstring force. The 72 broken chords and 168 melodic intervals and chords correlate with the 72:168 division of such ┃ ┃ charges encoded in the inner form of the Tree of Life and manifested in the distinction between the major and minor whorls of the superstring described by Annie Besant and C.W. Leadbeater. The 90 ┃ ┃ musical sounds generated by a tetractys of ten notes correlate with the 90 edges of the five Platonic solids. Similarity between the root structure of the superstring symmetry group E[8] and the ┃ ┃ intervals and chords of the octave suggests that superstrings share with music the universal mathematical pattern of the Tree of Life, the eight zero roots of E[8] corresponding to the eight ┃ ┃ notes of the Pythagorean scale. ┃ 1. The tetrahedral Platonic Lambda In his Timaeus, Plato describes how the Demiurge measured the World Soul, or substance of the spiritual universe, according to the simple proportions of the first three powers of 2 and 3. This is represented by his ‘Lambda,’ socalled because of its resemblance to the Greek letter Λ (Fig. 1). These numbers line but two sides of a tetractys of ten numbers from whose relative values the physicists and musicians of ancient Greece worked out the frequencies of the notes of the octaves of the now defunct Pythagorean musical scale. However, it was shown in Article 11 that, if we ignore the speculative cosmological context in which this algorithm for generating the relative frequencies of the musical notes was presented and regard the Lambda and its underlying tetractys purely as a construction of Pythagorean mathematics, then it is incomplete. This is because the numbers 1, 2, 3 and 4 were the basis of Pythagorean number mysticism and its application to the study ofFig. 2). It may be argued that this three-dimensional figure is not consonant with the details of the cosmological theory that Plato presented in his Timaeus. This, indeed, is the case. Nevertheless, the value and universality of mathematics exist in their own right and do not have to be validated by the theories of any mathematician or philosopher, however renowned that person may be*. Properties of numbers are more important than how they may have been interpreted. The tetrahedron of 20 numbers has the following musical virtue: the extended Lambda * This is not intended as a criticism of Plato, who may have known about the tetrahedral generalisation. tetractys generates the tone ratios of octaves along one side and perfect fifths along another side. But the numbers starting with 6 and generating the perfects fourths have to be added by hand, so to speak, following ad hoc rules of multiplication by 2 and 3 that were not part of Plato’s cosmological theory and whose justification is simply that they create the right numbers. Furthermore, whereas the pairing of numbers separated by octaves or intervals of the perfect fifth follows the natural geometry of the array of numbers set by the extended boundary of the Lambda, the pairing of successive perfect fourths does not respect the same symmetry because it occurs in diagonal fashion across the array. Worse still, the other possible diagonal pairing of numbers whose tone ratios differ by a factor of 3 plays a relatively weak role in generating twelfths of the Pythagorean scale. The traditional construction of the tone ratios of the Pythagorean clearly lacks symmetry because the classical scheme is mathematically incomplete. On the other hand, the fourth face of the tetrahedron is a tetractys of numbers whose pairings parallel to its three sides create octaves, perfect fifths and perfect fourths with, respectively, the tone ratios, 2/1, 3/2 and 4/3 (Fig. 3). Its hexagonal symmetry means that, when extended in the traditional manner of the Lambda tetractys, every number becomes surrounded by six others that are octaves, perfect fourths or perfect fifths. The numbers may be divided by any one of them to generate the same lattice of tone ratios of the Pythagorean scale, e.g., the infinite, hexagonal lattice of numbers is invariant with respect to such division. The number 24 (=1×2×3×4) is at the centre of the fourth face*. Figure 4 displays the lattice of tone ratios, starting with 1, the fundamental, that are created by dividing every number in the tetractys and outside it by 24. Using any other number in or outside the tetractys as divisor would have created the same lattice of tone ratios. Overtones are shown in yellow circles, red lines connect octaves (×2), green lines connect perfect * 6, the centre of the Lambda tetractys, is the fourth overtone and 24 is the tenth overtone. Integers 6, 8, 12 and 24 at the centres of the four faces have the ratios 1, 3/2, 4/3, 2, 3 and 4 of the integers 1, 2, 3 and 4. Figure 4 fourths (×4/3) and blue lines connect perfect fifths (×3/2). The tone interval of 9/8 is also indicated by the orange line joining the centre of the tetractys (coloured grey) to one corner. The tone ratios 27/16 of note A and 243/128 of note B are similarly defined by, respectively, indigo and violet diagonals extending from the number 1 to corners of larger triangles. Successive notes of the scale for each octave are joined by dashed lines.They zigzag between an octave, the seventh note of the octave and its perfect fourth, i.e., Table of tone ratios of eleven octaves of the Pythagorean scale │ │ │ │ │ │ │ │ │ Number │ │ │ C │ D │ E │ F │ G │ A │ B │ Of │ │ │ │ │ │ │ │ │ │ overtones │ │ 1 │ 1 │ 9/8 │ 81/64 │ 4/3 │ 3/2 │ 27/16 │ 243/128 │ 0 │ │ 2 │ 2 │ 9/4 │ 81/32 │ 8/3 │ 3 │ 27/8 │ 243/64 │ 2 │ │ 3 │ 4 │ 9/2 │ 81/16 │ 16/3 │ 6 │ 27/4 │ 243/32 │ 4 │ │ 4 │ 8 │ 9 │ 81/8 │ 32/3 │ 12 │ 27/2 │ 243/16 │ 7 │ │ 5 │ 16 │ 18 │ 81/4 │ 64/3 │ 24 │ 27 │ 243/8 │ 11 │ │ 6 │ 32 │ 36 │ 81/2 │ 128/3 │ 48 │ 54 │ 243/4 │ 15 │ │ 7 │ 64 │ 72 │ 81 │ 256/3 │ 96 │ 108 │ 243/2 │ 20 │ │ 8 │ 128 │ 144 │ 162 │ 512/3 │ 192 │ 216 │ 243 │ 26 │ │ 9 │ 256 │ 288 │ 324 │ 1024/3 │ 384 │ 432 │ 486 │ 32 │ │ 10 │ 512 │ 576 │ 648 │ 2048/3 │ 768 │ 864 │ 972 │ 38 │ │ 11 │ 1024 │ 1152 │ 1296 │ 4096/3 │ 1536 │ 1728 │ 1944 │ 39 │ (Red cells enclose integer notes up to end of tenth octave) between the extremities of the Pythagorean scale and its midpoint. 2. The first ten octaves The tone ratios of the 71 notes in the first ten octaves are shown in the table above (red cells contain overtones and blue cells enclose notes beyond the tenth octave). The last column lists as a running total the number of overtones of the fundamental with a tone ratio of 1. 1. In the interval 243/128 between C and B, 243 is the 26th* overtone and the 55th note after 1, where 55 = 2 3 and 128 is the 21 st overtone and the 50 th note (the last note of the seventh octave). This shows how Ehyeh, Yahweh and Elohim, Godnames of the Supernal Triad, prescribe the range of pitch between the first seven notes (1). 256, which is part of the leimma 256/243 between notes E and F and between B and C of the next octave, is the 36th note, counting from the beginning of the fourth octave. This shows how Eloha, Godname of Geburah with number value 36, defines the ‘leftover’ between adjacent octaves. The 36 th note after 1 has a tone ratio of 36 ; 2. the first ten octaves span 71 notes, of whose tone ratios 40 = 4 4 4 4 4 = 4 + 8 + 12 + 16 are integers and 31 are fractions. This shows how the Godname El of Chesed with number value 31 determines the number of notes whose tone ratios are not whole numbers, whilst Eloha prescribes the number of notes because 71 is the 36 th odd integer. It also demonstrates how the Pythagorean Tetrad (4) defines the number of integer tone ratios in ten octaves. The 71st note has a tone ratio of 1024 = 2^10. This is the smallest number with ten prime factors (all 2), showing the Pythagorean character of the last note of the tenth octave — the 40th note that is an integer; 3. The 70th note is 972 = 36×27, where 27 (=3^3) is the largest integer in Plato’s Lambda and 36 (=1^3+2^3+3^3) is the sum of the integers 1, 8, and 27 located at its apex and extremities: 8 27. *The number values of the Sephiroth, their Godnames, Archangels, Angels and Mundane Chakras are written throughout the text in boldface. Hence, 972 = 3^3 + 6^3 + 9^3. This property is an example of the beautiful, mathematical properties of the first ten octaves (the reason for this will be given shortly). As 2700 = 3^3 + 6^3 + 9^3 + 12^3 and 100 = 1^3 + 2^3 + 3^3 + 4^3, the largest integer 27 can be expressed as the ratio: Once again, it is the Pythagorean Tetrad that expresses a number important to the mathematics of the Pythagorean scale, for both the numerator and the denominator in the ratio are the sum of four cubes. It was pointed out in Article 11 that Yahweh prescribes the number 243 in the leimma because it is the 26th overtone. 243 = 3^3 + 6^3, i.e., it is the sum of the first two of the three cubes summing to the value of the tone ratio of the seventh note of the tenth octave. 3 and 6 are the integers in 36, the number value of Eloha. The table indicates that 6^3 = 216 is the tone ratio next smaller than 243. This is the number of Geburah whose Godname defines the number 256, as indicated in comment (1), as well as the number 243. Because the tone ratios of corresponding notes in n successive octaves all increase by the same factor of 2, there are as many overtones in any such set of n octaves, taking their lowest tonic as the fundamental, as there are in the first n octaves; the first note of the first octave is set as 1 merely for convenience because the integers and fractions represent relative, not absolute, frequencies. The first seven octaves have 50 notes of which 21 are overtones up to 128. Counting from the first overtone with tone ratio 2, the number value of Elohim defines the next seven octaves whose last note B first becomes an integer (243). This is the 26th overtone, which is therefore also prescribed by Yahweh. It is the (7×7=49)th note and so is prescribed by El Chai, the Godname of Yesod. The 70th note (972) represents the same note relative to 4, the tonic of the third octave. It is, counting from this note, the 36th overtone. This is how these four Godnames prescribe the 70th note of the first ten octaves. Let us now represent the 70 notes of the ten octaves by what the author has called in previous articles a ‘2nd-order tetractys’ (Fig. 5). The 21 notes of the first three octaves are arranged at the corners and centres of hexagons at its three corners and the 49 notes of the next seven octaves are at the corners and centres of seven hexagons arranged at the corners and centre of a larger hexagon (2). The ten tonics C[n] (1≤n≤10) are at the centres of the hexagons. The centre of the 2nd-order tetractys denotes the tonic of the tenth octave with tone ratio 512 (C[10]). This kind of tetractys is a more differentiated version of the Pythagorean symbol of divine wholeness. It is why the 70th number, 972, exhibits arithmetic properties typical of the beautiful harmonies manifested by this pattern. Figure 6 displays the equivalence between the 2nd-order tetractys* and the Tree of Life with its 16 triangles turned into tetractyses. The ten corners of these triangles correspond to the centres of the ten tetractyses (both shown as white circles). The tonics of the ten octaves can be assigned to the positions of the ten Sephiroth and the remaining 60 notes assigned to the 60 black yods. The first seven notes of each octave formally correspond to a Sephirah. This is the reason for our considering the first ten octaves of the Pythagorean scale. The mathematical beauty of this Tree of Life pattern has already begun to show itself in the properties of the 70th note discussed above. Let us now consider the integer tones in the ten octaves. As pointed out in comment 2 above, their number 40 can be represented as a tetractys array of the number 4. The sum of those 4’s at its corners is 12, leaving 28 as the sum of the seven other 4’s. Yods at corners of a tetractys correspond to the Sephiroth of the Supernal Triad and the seven other yods correspond to the Sephiroth of Construction. The 12:28 division of the 40 integer tone ratios therefore corresponds to the Kabbalistic distinction between the * Actually, it is a slightly different version of that shown in Figure 4. The difference is immaterial. subjective and objective Sephiroth of the Tree of Life. The largest number 27 in Plato’s Lambda is the twelfth such tone ratio (A of the 5th octave). 4 is the fourth integer tone ratio and 12 is the eighth. The former is the 15th note and the latter is the 26 th. This shows how the Godnames Yah with number value 15 and Yahweh with number value 26 that are assigned to the Sephirah Chokmah mark out notes that correspond to successive members of the Supernal Triad. Yahweh also defines the twelfth integer tone ratio because 27 is the 26 th integer after 1 (for the Pythagoreans, 27 would be the 26th true integer because they regarded the number 1 not as an integer but as the source and principle of all numbers). The 40 integer tone ratios comprise 11 octaves (note C) and 29 others. 29 is the 15th odd integer after 1, showing how the Godname Yah with number value 15 defines the number of overtones in ten octaves that are not merely octaves. There are five whole tone intervals (9/8) and two leimmas (256/243) that separate the eight notes of an octave*. This means that the 70 intervals between the 71 notes of the ten octaves are made up of 20 leimmas and 50 tones. The Godname Elohim with number value 50 prescribes the number of tones spanning ten octaves. It demonstrates par excellence the Tree of Life pattern formed by ten octaves. The correspondence in the Tree of Life of this 20:50 division of intervals is the fact that, when it is constructed from tetractyses, there are 20 () yods on the faces of the tetrahedron and 50 (●) yods outside them (Fig. 7). The leimmas correspond to the yods on the tetrahedron and the tones correspond to the yods outside the tetrahedron. In general, n overlapping trees contain (50n+20) yods, of which 20n * This 5:2 division corresponds in the Tree of Life to the lowest, five Sephiroth of Construction, which are always shared with overlapping trees, and to Chesed and Geburah, which are unshared. yods lie on n tetrahedra, so that this 5:2 correspondence exists only for a single Tree of Life. Counting from the fundamental of the first octave, there are ten overtones up to the perfect fifth of the fifth octave with tone ratio 24: 2, 3, 4, 6, 8, 9, 12, 16, 18, 24. Of these, four (2, 4, 8, 16) are octaves, leaving six others. This 4:6 division corresponds in the Tree of Life to the four lowest Sephiroth at the corners of a tetrahedron and the uppermost six Sephiroth. The sum of the Godname numbers of the latter is 21 + 26 + 50 + 31 + 36 + 76 = 240 = 24 24 24 24 24 24 24, that is, they sum to the number of yods in 24 tetractyses. This suggests something special about the number 24. This might be suspected, because it is 1×2×3×4, i.e., the number of permutations of four objects, which shows its Pythagorean character. It is also the 26th note and the tenth overtone, both counting from 2, the first overtone. This corresponds in the Tree of Life to the fact that the tenth Sephirah, Malkuth, is, as its lowest point, the 26th and last geometrical element in its trunk (Fig. 8). The significance of the perfect fifth of the fifth octave of the Pythagorean scale is that it is the seventh of a set of perfect fifths, the first five being successive: (Subscripts denote the octave number) Including the tonic of the first octave, there are 33 notes up to G[5], where 33 = 1! + 2! + 3! + 4!, i.e., 33 is the total number of permutations of four rows of 1, 2, 3 and 4 objects arranged in a tetractys. Of these notes, 11 are integers and (33–11=22) are fractional, where 22 = 1^4 + 2^3 + 3^2 + 4^1. The latter notes comprise 16 notes in the first three octaves and six notes in the fourth and fifth octaves up to the last fifth. This 6:16 division corresponds in the Tree of Life to the six Paths that are edges of the tetrahedron whose corners are the lowest four Sephiroth and the 16 Paths outside it (see Fig. 7). The 32 overtones and notes up to G[5] conform to the geometrical pattern of the Tree of Life, the ten overtones corresponding to the ten Sephiroth and the 22 fractional notes corresponding to the 22 Paths connecting the (Thick lines are Paths of the trunk of the Tree of Life) Sephiroth (Fig. 9). The ordering of notes in Figure 9 follows the traditional Kabbalistic numbering of Paths. As the tenth overtone, the tone ratio 24 corresponds to the lowest Sephirah, Malkuth, which signifies the outer, physical form of anything embodying the universal blueprint of the Tree of Life. It is this correspondence that makes the number 24 significant vis-à-vis superstring theory, as will be explained in Section 6. Arranged in a tetractys: the ten overtones have 16 combinations of two or more notes selected from each row. They comprise ten harmonic intervals, five chords of three notes and one chord of four notes. These correspond in the trunk of the Tree of Life to, respectively, its ten Paths, five triangles each with three corners and the tetrahedron with four corners (see Fig. 9). Alternatively, the 16 harmonic intervals and chords correspond to the 16 triangles of the Tree of Life itself. The ten harmonic intervals correspond to the ten triangles below the level of the path joining Chesed and Geburah, and the six chords correspond to the six triangles either above or projecting beyond this line, which, according to Kabbalah, separates the subjective Supernal Triad from the objective aspect of the Tree of Life manifesting the seven Sephiroth of Construction. Of the six triangles, only one triangle is completely above this line. It corresponds to the single chord of four notes. The complete correspondence between the ten octaves and the Tree of Life is summarised below: Any set of ten successive octaves exhibits this same Tree of Life pattern because the tone ratios of corresponding notes in successive octaves differ by a factor of 2, which means that, relative to the first note of any such set, there are always ten overtones with the same set of values as that found above for the first ten octaves, starting with a tone ratio of 1 for the tonic of the first octave. The perfect fifth of the fifth octave, counting from any given octave, is still an overtone with the tone ratio of 24 relative to the tonic of that starting octave. The numbers in the table of tone ratios are not absolute pitches but frequencies defined relative to that of the fundamental, which is normally given the convenient value of 1. Their underlying Tree of Life pattern is, therefore, not dependent on a particular starting point but holds for any set of ten successive octaves. The fact that most of their notes would fall outside the audible range of the human ear is irrelevant. Counting from the tonic of the first octave, the tone ratio 24 (=1×2×3×4) is the 33rd note (33=1!+2!+3!+4!) and the perfect 5th of the fifth octave. Counting from the latter, the 33rd note is 576 = 24^2 and still the perfect 5th of the new fifth octave. This is the 65th note from the tonic of the first octave, where 65 is the 33rd odd integer. The Godname Adonai with number value 65 prescribes sequences of 33 notes whose last note has a tone ratio always 24 times that of the first note. Only the first sequence (1) has ten overtones. The second sequence (2) has 24 overtones in addition to the first overtone with tone ratio of 24. In general, the nth sequence terminates in the note with tone ratio 24^n. Notice that the Godname of Netzach, the fourth Sephirah of Construction with number value 129, determines the end of the fourth sequence with tone ratio 24^4, that is, (1×2×3×4) raised to the fourth power. This shows the principle of the Pythagorean Tetrad at work. The 1680 coils of each whorl in the UPA superstring have been shown in previous articles to be due to 24 gauge charges of the superstring gauge symmetry group E[8], the total number of 240 for all ten whorls corresponding to its 240 non-zero roots. The number of yods in the lowest n Trees of Life is given by Y(n) = 50n + 30. The lowest 33 trees have Y(33) = 1680 yods. This is the same number as 24 separate Trees of Life, each with 70 yods, because 1680 = 24×70. Just as the first 33 notes culminate with the tone ratio 24, so the first 33 overlapping Trees of Life have as many yods as 24 separate trees. This demonstrates the association of the numbers 33 and 24 in the context of the Tree of Life. Ten objects: Number of permutations 1! = 1 2! = 2 A 3! = 6 B C 4! = 24 D E F G H I J Total = arranged in a tetractys can be arranged in their separate rows in 33 ways (see above). 24 of these are permutations of the last row of four objects. In this case, the 33rd permutation is the last of these 24 arrangements. The tone ratio 24 is the perfect fifth of the fifth octave. As the 33rd note, it has its counterpart in the 33rd tree of what the author has called the ‘Cosmic Tree of Life’ — the 91 trees mapping all levels of consciousness (see Article 5). Counting upwards, the 33rd tree represents the fifth subplane of the fifth plane (33 = 4×7 + 5). The fifth plane, called in Theosophy the ‘atmic plane,’ expresses the Divine Quality of Tiphareth, so that its fifth subplane also corresponds to this Sephirah. We see that the 33rd subplane is the most characteristic of Tiphareth, namely “Beauty.” Little wonder then that it should determine the number 1680 characterising the form of a whorl component of a superstring, previous articles by the author having displayed its very beautiful properties. Confirmation that the number 33 represents a cycle of completion of a Tree of Life pattern of which the ten overtones spanning 33 notes is an example comes from the concept of tree levels. The emanation of the ten Sephiroth takes place in seven stages (Fig. 10). Each Sephirah can be represented by a Tree of Life. Ten overlapping Trees of Life have 33 tree levels. This number thus parameterizes the complete emanation of ten trees. Nine tree levels extend down to the top of the seventh tree, marking the last of the 25 dimensions of space. Below them are a further 24 tree levels representing the 24 spatial dimensions at right angles to the direction in which the 1-dimensional string extends. This 9:24 differentiation in tree levels separating the purely physical plane from superphysical subplanes corresponds to the nine permutations of 1, 2 and 3 objects in the first three rows of a tetractys and the 24 permutations of the four objects in the fourth row. This is in keeping with the four rows of the tetractys symbolising the four fundamental levels of Divine Spirit, soul, psyche and body, i.e., the last 24 tree levels of the 33 tree levels determine the physical form of a superstring because they represent geometrical degrees of freedom as the dimensions or directions of space along which its whorls can vibrate. Further confirmation of the cyclic nature of the number 33 in defining repeated Tree of Life patterns is that there are 33 corners outside the root edge of every successive set of seven polygons enfolded in overlapping Trees of Life (Fig. 11). In general, the number of corners of the 7n polygons enfolded in n trees = 35n + 1, ‘1’ denoting the highest corner of the hexagon enfolded in the nth tree (the highest and lowest corners of each hexagon are shared with its adjacent hexagons). Hence, the 70 polygons enfolded in 10 trees have 351 corners (Fig. 11). This the number value of Ashim, the Order of Angels of Malkuth. As a set of (7+7) polygons has 70 corners, there are 64 corners outside their root edge that are unshared with hexagons in adjacent sets. This is the number of Nogah, the Mundane Chakra of Netzach, the Sephirah energising the music and art of the soul. Another remarkable property of the number 33 is that the 33rd prime number is 137. This is one of the most important numbers in theoretical physics (3) because its reciprocal is almost equal to the so-called “fine-structure constant.” This determines the probability that an electron will emit or absorb a photon and measures the relative strength of the electromagnetic force compared with the nuclear force binding protons and neutrons together inside atomic nuclei. As yet theoretically undetermined by physicists, the number 137 is encoded in the inner form of the Tree of Life as the 137 tetractyses whose yod population is equal to the yod population of the (7+7) enfolded polygons when their sectors are each transformed into three tetractyses (Fig. 12). 1370 is also the number of yods in 27 overlapping Trees of Life (4), which gives another remarkable significance to the largest number in Plato’s Lambda, which is prescribed by the Godname Yahweh as the 26th number after 1. Indeed, 27 can be said to define the ten octaves of the Pythagorean scale in the sense that 972, the largest of their overtones, is the 27th overtone following the overtone 27, which is the 33rd note after the fundamental. Remarkably, 972 is also the 33rd even overtone. A tetractys of ten objects has (1!+2!+3!+4!=33) permutations of the objects in its rows. A tetractys of ten different notes would generate one note and 32 melodic intervals and broken chords formed from the other three rows of notes (see p. 22 for the definition of these musical terms). Suppose that we were to play one note, next a melodic interval, THE 36 CORNERS OF THE 7 ENFOLDED POLYGONS DENOTE THE 36 MUSICAL ELEMENTS THAT A TETRACTYS OF 10 NOTES CAN PLAY Figure 14 then a broken chord of three notes and finally a broken chord of four notes. The number of possible ways of playing ten notes in succession by following the pattern of the tetractys is 1!×2!×3!×4! = 288.* This is the number of yods lying on the boundaries of the (7+7) regular polygons constituting the inner form of the Tree of Life (Fig. 13). Supposing that the notes are arranged in the tetractys in either ascending or descending order, there are 144 arrangements of ascending notes and 144 arrangements of descending notes. They have their parallel in the two similar sets of seven polygons that are shaped by 144 yods on their 48 sides. Now suppose that we were to play one note, then either a harmonic or a melodic interval, next either a chord or a broken chord of three notes and finally a chord or broken chord of four notes. The numbers of possible musical sequences created by playing 1, 2, 3 & 4 notes would be: 1. ^1C[1] = 1 2. ^2C[2] + ^2P[2] = 1 + 2 = 3 3. ^3C[3] + ^3P[3] = 1 + 6 = 7 4. ^4C[4] + ^4P[4] = 1 + 24 = 25 The total number of musical elements is 36, which is the number value of Eloha, the Godname of Geburah (Fig. 14). Playing 10 notes in the order of the rows 1, 2, 3 & 4 would generate (1×3×7×25=525) possible sequences of notes. One of them is the sequence of one note, one harmonic interval and two chords, i.e., four musical sounds. This is the minimum number of sounds created by playing the tetractys of 10 notes, leaving 524 sequences, each with 5-10 sounds drawn from the 36 musical elements. Compare this with the facts that the seven enfolded polygons in the inner form of the Tree of Life have 36 defining corners and that the (7+7) enfolded polygons have 524 yods. Moreover, there are (1!×2!×3!×4!=288) sequences of 10 successive sounds consisting of a note, one melodic interval and two broken chords, leaving (524–288=236) sequences with 5-9 sounds. These correspond to the 288 yods inside the (7+7) polygons and to the 236 yods on their sides (Fig. 15)! We see that the Tree of Life blueprint is inherent in the musical potential of 10 notes played in four steps according to the pattern of a tetractys, the symbol of the ten-fold nature of God (6). 3. The seven notes of the Pythagorean scale The first seven notes of the musical octave: *288 = 17^2 – 1 = 3 + 5 + 7 + … + 33, where 33 is the 16th (16=4^2) odd integer after 1. have (2^7–1=127) different combinations, where 127 is the 31st prime number. This indicates how the Godname El with number value 31, which is assigned to Chesed, the first Sephirah of Construction, prescribes how many groups of notes can be played together to make basic musical sounds using the 7-fold musical scale. These combinations comprise the seven notes themselves and (127–7=120) harmonic intervals and chords, where 7 is the fourth odd integer (also the fourth prime number) and 120 = 2^2 + 4^2 + 6^2 + 8^2. This illustrates how the Pythagorean Tetrad, 4, determines both the numbers of notes and their combinations. The number of harmonic intervals is ^7C[2] = 21, which is the number value of the Godname Ehyeh assigned to Kether. The number of chords is therefore (120–21=99). This distinction between intervals and chords is arithmetically defined as follows: In other words, the Pythagorean character of the number 120 is shown by its being the sum of the first ten odd integers after 1, 21 being the tenth odd integer after 1 and 99 being the sum of the remaining 9 integers in this tetractys array. 99 is the 50th odd integer, showing how the Godname Elohim with number value 50 defines the number of chords that can be played with the first seven notes of the Pythagorean scale. Successive octaves comprise seven notes per octave and the eighth note beginning the next octave. N octaves therefore span (7N+1) notes. The number of ‘Sephirothic levels’ (SLs) in the lowest, n overlapping Trees of Life is (6n+5). For what values of N and n are the number of notes and SLs the same? The only solutions to: 7N + 1 = 6n + 5 up to N =10 can be found by inspection to be N = n = 4 or N = 10 & n = 11, i.e., four octaves have as many notes (29) as the lowest four Trees of Life, whilst ten octaves have as many notes (71) as eleven such Trees of Life have SLs. Every eighth note in successive octaves is of the same type, whilst every seventh SL in successive Trees corresponds to the same Sephirah. The Pythagorean Tetrad and Decad define analogous successions of notes of the scale and the emanations of Sephiroth in overlapping Trees of Life. Excluding the highest note belonging to the next higher octave, four and ten octaves have, respectively, 28 and 70 notes, the same as the SLs in four and eleven overlapping Trees of Life. In general, the counterpart of the last note of the Nth octave shared with the next higher octave is Daath of the nth tree, which is Yesod of the (n+1)th tree but which is not counted as an SL when the overlapping trees are considered as a separate 4. Tetractys of ten notes A harmonic interval is two notes played together. A melodic interval is two notes played one after the other. A chord is three or more notes played simultaneously and a broken chord is a set of three of more notes played in succession. The following discussion will consider only melodic intervals and broken chords where the notes are all different. Consider a tetractys array of ten different notes: B C D E F G H I J (Any notes can be considered here — the letters labelling them do not refer to the notes of the Pythagorean scale). The number of intervals and chords that notes within the same row generate when played will now be determined. A harmonic interval is a combination of two notes, whereas a melodic interval is two notes played with regard to their order in time, i.e. a permutation of two notes. A chord is a combination of three or more notes, and a broken chord is a pattern of three or more notes played in quick succession, i.e., a particular arrangement or permutation of these notes. The table below shows the numbers of harmonic intervals and chords (combinations of notes) and melodic intervals and broken chords (permutations of notes) for the notes in the four rows of the tetractys: The number value 26 of Yahweh, the Godname of Chokmah, is the number of notes, harmonic intervals and chords that can be played within the four rows of notes, the number value 15 of its older version, Yah, being the number that can be played from four * Notation: ^nC[r] = n!/r!(n-r)! and ^nP[r] = n!/(n-r)! notes. There are (26–10=16=4^2) harmonic intervals and chords (10 intervals & 6 chords). The number of notes, melodic intervals and broken chords is 84 = 1^2 + 3^2 + 5^2 + 7^2. This illustrates the defining role of the Pythagorean Tetrad because 1, 3, 5, & 7 are the first four odd integers. The number of melodic intervals and broken chords = 84 – 10 = 74, which is the 73rd integer after 1. The number value 73 of Chokmah determines the number of basic musical elements (namely, melodic intervals and broken chords) that can be played from sets of 1, 2, 3 and 4 notes. The number of harmonic and melodic intervals, chords and broken chords = 16 + 74 = 90, and the number of notes, intervals and chords of both types = 90 + 10 = 100 = 1^3 + 2^3 + 3^3 + 4^3. This shows how the Pythagorean integers 1, 2, 3, & 4 express the total number of musical sounds created by playing notes from each row of the tetractys. These results can be represented by a tetractys array of the number 10: The central number 10 represents the ten notes and the sum 90 of the remaining 10’s is the number of intervals and chords that they can create. This beautiful result demonstrates the power of the tetractys and the role of the Tetrad in defining its properties, whatever the nature of the things symbolised by its yods. The number of melodic intervals, chords and broken chords = 90 – 10 = 80, which is the number value of Yesod, the penultimate Sephirah. This is the number of yods in the lowest Tree of Life (Fig. 16 ). The meaning of Yesod is “foundation.” It is appropriate, given that this tree is the base of any set of overlapping Trees of Life. A G J B C H D F I D E F I E B C E H G H I J J F C A A B D G The three tetractyses have (3×90=270) intervals and chords. As the same notes appear in each array, the number of notes, harmonic and melodic intervals, chords and broken chords that can be played using the three orientations of a tetractys of notes = 10 + 270 = 280. The number value 280 of Sandalphon, Archangel of Malkuth, measures how many basic musical sounds composed of up to four notes can be created from ten notes arranged in a tetractys, only the notes in a row being played. The number of melodic intervals and broken chords created by each orientation of the tetractys of notes is 74. The total number of such intervals and chords = 3×74 = 222. This is the number of yods other than their 41 corners associated with either half of the inner form of the Tree of Life (Fig. 17). There are 444 such yods in both sets of seven enfolded, regular polygons, 222 of them being associated with each set. The number of harmonic intervals in each array = ^2C[2] + ^3C[2] + ^4C[2] = 10. The three arrays have (3×10=30) such intervals, where 30 = 1^2 + 2^2 + 3^2 + 4^2. The number of melodic intervals in each array = ^2P[2] + ^3P[2] + ^4P[2] = 20. The three arrays have (3×20=60) melodic intervals. The number of notes and harmonic intervals = 10 + 30 = 40 = 4 4 = 3^0 + 3^1 + 3^2 + 3^3, showing how the Tetrad determines this number, for it is the sum of the first four powers of 3. The number of notes and melodic intervals in the three arrays = 10 + 60 = 70. Compare this with Figure 18, which shows that turning the 16 triangles of the Tree of Life into tetractyses generates 60 yods in addition to those at their ten corners. The ten Sephirothic points can be assigned the notes and the 60 other yods can be assigned the melodic intervals that they generate. We have seen that the number of harmonic and melodic intervals and chords and broken chords is 270. The number of harmonic intervals, chords and broken chords = 270 – 60 = 210 = 21 21 21 21 21 21. The number value 21 of Ehyeh determines how many harmonic intervals, chords and broken chords the ten notes can create. The number of chords in each array = ^3C[3] + ^4C[3] + ^4C[4] = 6, that is, 3×6 = 18 in the three arrays. The number of broken chords in the three arrays is therefore 210 – 18 – 30 = 162 (54 per array). The number of chords and broken chords = 18 + 162 = 180 (60 per array). The number of harmonic intervals and chords = 30 + 18 = 48 (16 per array). In other words, the number of different combinations of the ten notes (i.e., new sounds) that can be played simultaneously when selected from their three possible tetractys arrays is the same as the number of corners of the seven, separate regular polygons (Fig. 19). This illustrates the character of the number 48* (the number of Kokab, the Mundane Chakra of Hod) in quantifying the most basic degrees of freedom making up a Tree of Life pattern — in this case, the corners of the seven polygons. The same number appears in the context of what the ancient Greeks called ‘tetrachords.’ * 48 shows its Pythagorean character by being the smallest integer with tenfactors, including 1 and itself. They did not experience the musical octave as one complete whole but rather as a twopart structure (5). The octave evolved through the completion of two groups of four notes, or tetrachords. For example, the sequence of notes G, A, B, C below is a tetrachord: They shared a central note that was always a perfect fourth with respect to the beginning of the first tetrachord (G here) and the endnote of the second tetrachord (here F). The number of permutations of four objects taken one, two, three and four at a time = ^4P[1] + ^4P[2] + ^4P[3] + ^4P[4] = 4 + 12 + 24 + 24 = 64 = 4^3. The number 64 is the number value of Nogah, the Mundane Chakra of Netzach (astrologically associated with the planet Venus). The number of permutations of four objects taken two, three and four at a time = 64 – 4 = 60. The number of permutations of four objects taken two at a time = ^4P[2]= 12. Hence, each of the two tetrachords in an octave has 12 melodic intervals and (60–12=48) broken chords, the latter comprising 24 (=1×2×3×4) broken chords of three notes and 24 broken chords of four notes. There are therefore two chords each of four notes and (48+48=96) possible broken chords in an octave split up into two tetrachords. Compare these divisions with the fact that the (6+6) enfolded polygons have two corners of their shared root edge and 48 corners outside their root edge, 24 on each side of it, whereas, when separated by the root edge, each set of all seven separate polygons also has 48 corners (Fig. 20). As this set of 12 polygons constitutes a Tree of Life pattern in its own right (see Article 8), we see that the ancient Greek depiction of the octave as two tetrachords conforms to the pattern of the Tree of Life. Elohim prescribes the seven polygons and root edge because, as Figure 20 shows, its number value 50 is the number of their corners (two belong to the root edge). Of the 162 broken chords generated by the three orientations of a tetractys of ten notes, six are descending and ascending tetrachords (two per orientation). Therefore, there are (162–6=156) broken chords whose notes are not all in descending or ascending sequence. 156 is the 155th integer after 1. This is how Adonai Melekh, the complete Godname of Malkuth with number value 155, measures the number of sounds that can be made by playing the [3×(3+4) = 21] notes in the rows of three and four of the three tetractyses one after the other but not in order of their pitch. 21 is the number value of Ehyeh, Godname of Kether. 5. The Platonic Lambda revisited In Article 11, we found that the tetractys form of Plato’s Lambda: is but one face of a tetrahedron whose fourth face is a tetractys that generates in a symmetric way the tone ratios of the Pythagorean musical scale. Properties of this parent tetractys are compared below with the various numbers of intervals and chords generated from a tetractys array of ten notes. 1. Sum of 10 integers = 90 = number of both types of intervals & chords; 2. Sum of 9 integers surrounding centre = 84 = number of notes, melodic intervals & broken chords; 3. Sum of 7 integers at centre and corners of hexagon = 54 = number of broken chords; 4. Central integer 6 = number of chords; 5. Sum of 6 integers at corners of hexagon = 48 = number of harmonic intervals & chords in 3 arrays or number of broken chords in set of 4 notes; 6. Sum of smallest integer (1) and largest integer (27) = 28 = number of notes & chords in 3 arrays; 7. Sum of integers 1, 3, 9, 27 on side of Lambda = 40 = number of notes & harmonic intervals in 3 arrays. We find that the numbers making up the Lambda tetractys do more than define the tone ratios of musical notes — a function known to musicians and mathematicians for more than two thousand years. They also measure the various numbers of musical elements that can be played by using the four rows of different notes arranged in a tetractys. Figure 21. The first six polygons enfolded in ten Trees of Life have 250 corners (denoted by dots) that are intrinsic to them. The topmost corner of the hexagon enfolded in the tenth Tree of Life (not denoted by a dot) is not intrinsic as it coincides with the lowest corner of the hexagon enfolded in the eleventh Tree of Life. The number 90 is ^10P2, the number of permutations of two objects taken from a set of ten objects, i.e., in this context the number of melodic intervals that can be played with ten different notes without regard to their arrangement in a tetractys. In the context of the UPA superstring, a point on each of its ten whorls has (10×9=90) coordinates with respect to the 9-dimensional space of the superstring. Noting that Besant & Leadbeater states in their book Occult Chemistry (6) that none of the whorls ever touched one another as they observed them, this means that these ten, non-touching curves require 90 independent (but not necessarily all different) numbers as free coordinate variables. As discussed in Article 12, 90 is the number of trees above the lowest one in what the author calls the ‘Cosmic Tree of Life,'* i.e., the number of levels of consciousness beyond the most physical level represented by the lowest tree. This means that a musical sound containing up to four notes can be assigned to each of these levels of consciousness, with the tetractys of ten notes itself assigned to the 91st level. The counterpart of the latter for the superstring would be the time coordinate, the number that locates it in time. Alternatively, a melodic interval generated from ten notes can be assigned to these levels. As ^7P2 = 42, there are 42 such intervals generated from seven notes sited at the centre and corners of the hexagon in the tetractys and (90–42=48) intervals generated by pairing either these notes with those at the corners of the tetractys or the latter themselves. This 48:42 division corresponds to the 48 subplanes of the cosmic physical plane above the lowest one and the 42 subplanes of the six superphysical cosmic planes. Whether or not this correlation may have deeper significance, it demonstrates a beautiful, mathematical harmony between the permutational properties of ten objects arranged in a tetractys and what the author has shown in previous articles to be the map of all levels of reality. The latter itself is a tetractys with the fractal-like quality that each of the nine yods surrounding its centre is a tetractys and that the central yod is the repetition of this on a spiritually lower but exactly analogous level. The five types of musical elements are notes of the Pythagorean scale, their harmonic intervals, melodic intervals, chords & broken chords. Their numbers are shown below: harmonic melodic broken notes intervals intervals chords chords 10 30 (3×10) 60 (3×20) 18 (3×6) 162 (3×54) (The second number in each bracket is the number of musical elements per orientation) * See Article 5 for details about this map of the spiritual cosmos. There are (2^5–1=31) combinations of these elements, where 31 is the number value of El (“God”), the Godname assigned to Chesed, the first Sephirah of Construction. Some of these were discussed above. In order of increasing size, these numbers are: 10, 18, 28, 30, 40, 48, 58, 60, 70, 78, 88, 90, 100, 108, 162, 172, 180, 190, 192, 202, 210, 220, 222, 232, 240, 250, 252, 280. Notice that the 21st number is 210 = 21 21 and that the 28th (last) number is 280 = 28 28 28 28 28 28. The 26th number is 250. Hence, the Godname Yahweh with number value 26 marks out the number of notes, melodic intervals, chords and broken chords. This significance of this is as follows: the number 250 comprises 10 notes, 60 melodic intervals and 180 chords and broken chords. This number was found in Article 11 to be the sum of the 16 (=4^2) tone numbers making up the four faces of the tetrahedron other than its corners. The significance of 250 for superstrings was discussed in this and earlier articles. * Its relevance to the Tree of Life is that the 60 polygons of the first six types enfolded in ten overlapping Trees of Life have 250 corners unshared with polygons enfolded in the next higher tree (Fig. 21). They comprise the 10 lowest and highest corners of the ten hexagons, the 60 corners of the triangle, square and pentagon that are outside their root edges and 180 corners of the hexagon, octagon and decagon. The (6+6) polygons constitute a Tree of Life pattern because they have 50 corners prescribed by the Godname Elohim with number value 50 (Fig. 20; see Article 4 for how the other Godnames prescribe their geometry). It is a remarkable demonstration of how Godnames are fundamentally connected to the tetractys that the number of Yahweh should pick out from all possible combinations of basic musical elements (themselves numbered by the Godname of the Sephirah lying below Chokmah) just that number which is also the number of geometrical degrees of freedom denoted by corners of polygons enfolded in * In particular, see Article 5. ten Trees of Life! Not five trees, nor nine trees, but precisely the Pythagorean and Kabbalistic measure of Divine Perfection, namely, the number 10! The number of melodic intervals, chords and broken chords = 250 – 10 = 240 = 4! 4! (4! = 1×2×3×4) 4! 4! 4! 4! 4! 4! 4! Suppose that we consider a tetractys of nine notes with the central one missing: The numbers of musical elements now become: harmonic melodic broken notes intervals intervals chords chords 9 (3×3) 24 (3×8) 48 (3×16) 15 (3×5) 144 (3×48) (The second number in each bracket is the number of elements per orientation) The three tetractys arrays possess 240 notes, harmonic intervals, melodic intervals, chords and broken chords — the same number as the number of melodic intervals, chords and broken chords generated by a complete tetractys of ten notes. In terms of the correspondence between the tetractys and the Tree of Life, the central yod symbolises Malkuth, the outer, physical realisation of a Tree of Life entity (superstrings, human beings, etc). What the reappearance of the number 240 in a tetractys of notes with the central one missing is saying is that 240 of the original 280 musical elements are due to those notes that formally correspond to Sephiroth above Malkuth. This accounts for why the Archangel of Malkuth has the number value 6. The musical nature of superstrings The 240 musical elements generated from the nine notes corresponding to Sephiroth above Malkuth comprise (24+48=72) intervals* and (9+15+144=168) notes, chords and melodic chords. This division is reflected in the above tetractys representation of the number 240 because its three corners sum to 72 and the seven remaining numbers 4! add up to 168. Earlier articles and the author’s book (7) have established that each of the ten whorls of the UPA superstring (Fig. 22), observed with a yogic siddhi called ‘anima’ by Annie Besant and C.W. Leadbeater, carries 24 gauge charges of the unified, superstring gauge symmetry group E[8] corresponding to the latter’s non-zero simple roots. The three ‘major’ whorls carry (3×24=72) such charges and the seven ‘minor’ whorls carry (7×24=168) charges. The former are in fact the charges corresponding to the 72 non-zero * The alternative combination of 72 notes, harmonic intervals and chords is less plausible, intuitively speaking, because it mixes musical elements of different types. simple roots of E[6], an exceptional subgroup of E[8], there being 168 non-zero roots of E[8] that do not belong to E[6]. Remarkably, the musical elements due to the notes corresponding to Sephiroth above Malkuth correspond precisely in number to the gauge charges mathematically associated with the non-zero roots of E[8] and its subgroup E[6]. Transformed into single tetractyses, the lowest Tree of Life contains 80 yods (Fig. 23). Using the numbers (shown above in brackets) of musical elements of each type generated by a tetractys of ten notes, the total number of such musical elements is (3 + 8 + 16 + 5) + 48 = 32 + 48 = 80. Each yod in the lowest Tree of Life can denote a musical element! Moreover, there are 48 yods up to the level of Chesed, the first Sephirah of Construction, illustrating once again the nature of this number in quantifying the number of formative degrees of freedom that are needed to express the part of the Tree of Life that manifests in an objective sense. The 48 yods up to Chesed denote the number of broken chords and the 32 yods above this level denote the number of notes, intervals and chords. In this remarkable way, the number of musical sounds (80) created by the notes corresponding to the nine Sephiroth above the objective one, Malkuth, and their division into melodic chords of three and four notes (the beginning of melody) reflects precisely the 48:32 pattern of the lowest Tree. The number 240 is the number of yods generated in the lowest Tree of Life when its 19 triangles are transformed into three tetractyses, that is, yods in addition to its 11 Sephirothic points ( Fig. 24). These 240 hidden or potential degrees of freedom correspond to the potential of 240 musical elements that can be played with the set of nine notes corresponding to the Sephiroth above Malkuth. Notice that the correspondence is both qualitative and quantitative because the 11 Sephirothic points of the lowest tree constitute its most basic outer form (its Malkuth aspect), so that the hidden 240 yods represent what is beyond this aspect, just as the 240 musical elements denote sounds generated by notes corresponding formally to Sephiroth above Malkuth. A 72:168 pattern similar to that found above for three tetractyses with their central note missing also exists when the latter is present. The number of melodic intervals, chords broken chords with three notes. The (3×80=240) melodic intervals, chords and broken chords generated by the three orientations of the tetractys comprise (3×24=72) broken chords of four notes and (3×56=168) melodic intervals, chords and broken chords of three notes. It mirrors the 72 E[8] gauge charges carried by the three major whorls and the 168 such charges carried by its seven minor whorls (Fig. 25). In terms of the charge sources of all the forces other than gravity that superstrings of ordinary matter can exert on one another, superstring physics conforms to the same pattern as the melodic sounds generated by playing the ten notes of a tetractys in its three orientations. 7. The Platonic solids represent the 90 musical elements In Article 3 it was shown how, when seen as constructed from the tetractys (the Pythagorean symbol of ten-fold Divine unity), the first four regular polyhedra embody numbers of significance to the mathematics of superstring forces. It will now be proved that the five Platonic solids (Fig. 26) also embody in a geometrical way the numbers of different types of musical sounds that can be generated from a tetractys of ten notes. The numbers of edges in the five Platonic solids are shown below: tetrahedron octahedron cube icosahedron dodecahedron The total number of edges is 90, which is the same as the number of musical elements generated by a tetractys array of ten notes, that is, sounds other than the notes themselves. They comprise six harmonic intervals generated from the row of four notes, 12 harmonic and melodic intervals generated from the rows of two and three notes, 12 melodic intervals generated from the row of four notes, 30 chords and broken chords of three or four notes generated from the rows of three and four notes (six chords, 24 broken chords of three notes from the row of four notes) and 30 broken chords (six from the row of three notes, 24 from the row of four notes). The correspondence between the edges of the Platonic solids and these musical elements is: (Numbers in brackets are the numbers of musical elements generated by the stated row) Notice how the complexity of the musical elements builds up correspondingly with the number of edges of the Platonic solids. They start with the harmonic intervals (the simplest musical sounds), which correspond to the edges of the tetrahedron, the simplest Platonic solid, and finish with the broken chords (the most complex sounds), which correspond to the edges of the dodecahedron, the last of the regular polyhedra. This natural progression argues against the agreement between the various numbers of edges and musical elements being merely a coincidence or a concoction. The Godname Elohim with number value 50 assigned to the Sephirah Binah prescribes both the form of the Platonic solids and the set of ten notes and their musical combinations because the former have 50 corners and the latter comprise 100 musical elements in one array, where 100 is the 50th even integer. They include 26 single sounds (notes, harmonic intervals and chords), where 26 is the number value of Yahweh, the Godname of Chokmah. The number of notes, melodic intervals and broken chords generated by a tetractys of notes is 84. As 84 = 3×28 = 3(1+2+3+4+5+6+7) = 3 + 6 + 9 + 12 + 15 + 18 + 21, this is how the number value 21 of Ehyeh, the Godname of Kether, determines the number of such musical sounds. The number of musical sounds made up of notes played in succession is 74, which is the 36th even integer after 2. This is how Eloha, the Godname of Geburah with number value 36, prescribes the number of sounds of this type. The number of broken chords of four notes generated by the fourth row of a tetractys = 4! = 24. The three possible tetractys arrays therefore have (3×24=72) such chords, where 72 is the number value of Chesed, the fourth Sephirah from the top of the Tree of Life. 72 is the 36th even integer. It is therefore also prescribed by Eloha. 8. The octave as analogue of the rank-8 gauge group E[8] The number of combinations of the eight notes of the musical scale = 2^8 – 1 = 255 = ^8C[1] + ^8C[2] + ^8C[3] + ^8C[4] + ^8C[5] + ^8C[6] + ^8C[7] + ^8C[8]. The (^8C[7] =8) combinations of seven notes comprise the combination (let us call it ‘1[7]’) of its first seven notes, C, D, E, F, G, A, B and seven combinations (call this ‘7[7]’) of the octave with six of the first seven notes. (^8C[8] =1) denotes the single group of eight notes (call this ‘1[8]’). 255 = 8 + 28 + 56 + 70 + 56 + 28 + (1[7] + 7[7]) + 1[8]. 255 – 7[7] = 248 = 8 + 28 + 56 + 70 + 56 + 28 + 1[7] + 1[8] ‘8’ denotes the first seven notes and the octave. 248 – 8 = 240 = 28 + 56 + 70 + 56 + 28 + 1[7] + 1[8]. = (56 + 56) + 28 + 70 + 28 + 1[7] + 1[8] = 112 + 128, where 112 = 56 + 56 and 128 = 28 + 70 +28 + 1[7] + 1[8]. 112 is the number value of Beni Elohim, the Order of Angels assigned to the Sephirah Hod (see ref. 1). Compare this with the fact that the superstring gauge symmetry group E[8] is defined mathematically by its 248 simple roots, which comprise eight zero roots (seven of one kind and one of another kind) and 240 non-zero roots. The latter comprise 112 non-zero roots of one kind and 128 non-zero roots of another kind. We see that the following correspondence emerges between the Pythagorean scale and the root structure of E[8]: 1. 248 combinations of 8 notes other than groups of 7 containing the octave; 248 roots of E[8]; 2. 8 notes of the octave (first 7 notes + octave itself); 8 zero roots of E[8] (7 of one kind, 1 of another);* *It is unnecessary to explain here their difference in the technical terms of group theory. 3. 240 combinations of 8 notes other than these notes & groups of 7 containing the octave; 240 non-zero roots; 4. 240 combinations comprise 112 groups of 3 & 5 notes & 128 groups of 2, 4 & 6 notes, the first 7 notes & all 8 notes. 240 non-zero roots comprise 112 of one kind & 128 of another. The number 240 can be also written as: 240 = (70 + 1[7] + 1[8]) + (28 + 56) + (56 + 28) = 72 + 168, 168 = (28 + 56) + (56 + 28) = 84 + 84. 72 (the number of Chesed, the first Sephirah of Construction) is the sum of the number (70) of chords of four notes created from the [7]) and the chord of all eight notes (1[8]). 168 is the number of harmonic intervals and chords of three, five and six notes. The point of this exercise is that the composition of musical sounds created from the eight notes of the Pythagorean scale can be viewed as analogous to the 72:168 division of the non-zero roots of the superstring symmetry group E[8] and their corresponding gauge charges, which previous articles by the author have shown to be encoded in the inner form of the Tree of Life. As discussed earlier, this difference corresponds to the distinction between the three major whorls and the seven minor whorls of the UPA superstring, that is, to the distinction in the Tree of Life between the Supernal Triad (the triple Godhead) and the seven Sephiroth of Construction. 168 is the number value of Cholem Yesodeth, the Mundane Chakra of Malkuth. The 84:84 division of the number 168 that emerges naturally above from the rearrangement of combinatorial numbers is reflected in its encoding in the first (6+6) enfolded, regular polygons of the inner form of the Tree of Life (Fig. 27) because there are 84 yods in each set of polygons outside their shared root edge*. This splitting into equal numbers is reflected in the UPA itself as the spiralling of each whorl with 1680 coils 2½ times around the axis of the UPA, returning to its top by making 2½ narrower twists (see Fig. 25). Each half of the whorl comprises 840 coils, which is the number of yods shaping the 60 polygons of the first six types enfolded on each side of the central pillar of ten overlapping Trees of Life that represent each of the ten whorls of the superstring. The division of the structural parameter 168 into two halves, as physically manifested in the inner and outer halves of the superstring, is due to the mathematical fact (referring to the equation at the beginning of this section) that ^8C[2] = ^8C[6] = 28 and ^8C[3] = ^8C[5] = 56. In terms of the octave, this means that 28 harmonic intervals and 56 chords of three notes (making 84) can be played with an octave, as can 28 chords of six notes and 56 chords of five notes (also making 84). In terms of the first six polygons, namely, the triangle, square, pentagon, hexagon, octagon and decagon, there are, respectively, 5, 8, 11, 14, 20 and 26 yods along their edges outside their root edge. The square and octagon has (8+20=28) yods and the triangle, pentagon, hexagon and dodecagon have (5+11+14+26=56) yods. For one set of polygons, the harmonic intervals can be represented by the yods of the square and octagon, whilst the chords of three notes can be represented by the yods of the triangle, pentagon, hexagon and decagon. For the other set, the square and octagon represent the chords of six notes, and the four other polygons represent the chords of five notes. No alternative combinations of polygons are possible. The musical potential of the Pythagorean octave finds its counterpart in the sacred geometry of the inner Tree of Life. 9. Conclusion The numbers of Plato’s Lambda tetractys should not be seen simply as generating the tone numbers of the Pythagorean scale and all its octaves. Their absolute, as well as their relative, values have a meaning that is both metaphysical and musical. In the former case, the central number (6), the sum (48) of the six numbers at the corners of the hexagon, the sum (84) of the numbers surrounding the centre of the tetractys and the sum (90) of all ten numbers measure, respectively, the levels of physical consciousness above the most rudimentary, the cosmic counterparts of these levels, all superphysical levels and all levels of consciousness. In the latter case, these numbers refer to the * See Article 4 for how the Godnames prescribe this set of polygons. various musical sounds that can be played with a tetractys of ten notes. They are, respectively, the chords, next, the broken chords of three or four notes played from the base set of four notes, then, the notes, melodic intervals and broken chords, and, finally, the harmonic and melodic intervals, chords and broken chords. The numbers of the Lambda refer to higher levels of consciousness ( analogous levels in the case of the above combinations) that bear a correspondence to the basic music sounds playable with a tetractys of notes. This is the real meaning of the music of the World Soul. The number value 280 of the Hebrew name of the Archangel of Malkuth is the number of musical sounds that can be played with a tetractys of notes in its three possible orientations. The Pythagorean scale conforms to the pattern of the Tree of Life, which maps each of the 91 levels of consciousness. As the perfect fifth of the fifth octave, the tenth overtone and the 22 fractional notes that precede it complete a Tree of Life pattern. Its tone ratio 24 is central to the structure and dynamics of superstrings. Music, the study of number in time, and geometry, the study of number in space, come together in the five Platonic solids. Their 90 edges correlate with the 90 harmonic and melodic elements, chords and broken chords that can be played with a tetractys of ten notes. This number also characterises the superstring because each of its ten string components extends in nine dimensions, their oscillations described by 90 independent variables. Music, the vibrations of superstrings and the spectrum of consciousness are interrelated in universal correspondence and harmonious proportion through the Pythagorean symbol of the tenfold nature of Divine “Intellectual and celestial music, finally, was the application of the principles given by speculative music, no longer to the theory or the practice of the art pure and simple, but to that sublime part of the science which had as its object the contemplation of nature and the knowledge of nature and the knowledge of the immutable laws of the universe. Having then reached its highest degree of perfection, it formed a sort of analogical bond between the sensible and the intelligible, and thus afforded a simple means of communication between the two worlds. It was an intellectual language which was applied to metaphysical abstractions, and made known their harmonic laws, in the way that algebra, as the scientific part of mathematics, is applied by us to physical abstractions, and serves to calculate relationships.” Antoine Fabre D’Olivet, 18th century Pythagorean. 1. For reference, the gematria number values of the Sephiroth, their Godnames, Archangels, Angels and Mundane Chakras are listed below: │ Sephirah │ Title │ Godname │ Archangel │ Order of │ Mundane │ │ │ │ │ │ Angels │ Chakra │ │ Kether │ 620 │ 21 │ 314 │ 833 │ 636 │ │ Chokmah │ 73 │ 15. 26 │ 248 │ 187 │ 140 │ │ Binah │ 67 │ 50 │ 311 │ 282 │ 317 │ │ Chesed │ 72 │ 31 │ 62 │ 428 │ 194 │ │ Geburah │ 216 │ 36 │ 131 │ 630 │ 95 │ │ Tiphareth │ 1081 │ 76 │ 101 │ 140 │ 640 │ │ Netzach │ 148 │ 129 │ 97 │ 1260 │ 64 │ │ Hod │ 15 │ 153 │ 311 │ 112 │ 48 │ │ Yesod │ 80 │ 49 │ 246 │ 272 │ 87 │ │ Malkuth │ 496 │ 65, 155 │ 280 │ 351 │ 168 │ 2. The reason for the way the six notes D–B are assigned in Figure 5 to the corners of hexagons is as follows: the alternate corners of each hexagon form the two intersecting triangles of a Star of David. In terms of the equivalence between the tetractys and the Tree of Life, the two triangles of yods correspond to the two triads of Chesed, Geburah, Tiphareth and Netzach, Hod and Yesod, which are located at corners of triangles. As show below, the six notes of the Pythagorean scale above the tonic form only two chords of three notes with the same relative proportions of their tone ratios: 9/8 (9/8)^2 4/3 3/2 27/16 243/128 D E F G A B 1. DEF GAB 9/8 : (9/8)^2 : 4/3 ≠3/2 : 27/16 : 243/128 2. DEG FAB 9/8 : (9/8)^2 : 3/2 ≠ 4/3 : 27/16 : 243/128 3. DEA FGB 9/8 : (9/8)^2 : 27/16 ≠ 4/3 : 3/2 : 243/128 4. DEB FGA 9/8 : (9/8)^2 : 243/128 ≠ 4/3 : 3/2 : 27/16 5. DFG EAB 9/8 : 4/3 : 3/2 ≠ (9/8)^2 : 27/16 : 243/128 6. DFA EGB 9/8 : 4/3 : 27/16 = (9/8)^2 : 3/2 : 243/128 7. DFB EGA 9/8 : 4/3 : 243/128 ≠ (9/8)^2: 3/2 : 27/16 8. DGA EFB 9/8 : 3/2 : 27/16 ≠ (9/8)^2: 24/3 : 243/128 9. DGB EFA 9/8 : 3/2 : 243/128 ≠ (9/8)^2: 4/3 : 27/16 10. DAB EFG 9/8 : 27/16 : 243/128 ≠ (9/8)^2: 4/3 : 3/2 Only the chords DFA and EGB listed in (6) made up of alternate notes and spanning the interval of a fifth exhibit the same proportions of their tone ratios, their corresponding notes being each separated by a whole tone. Musical harmony between a pair of chords exists in this sense only for one of the ten pairs. All seven notes form ^7C[3]= 35 chords of three notes, that is, (35–20=15) more chords than the other six notes. The number value 15 of the Godname Yah is therefore the number of chords of three notes containing the tonic. The chord EGB is the chord DFA lifted by a whole tone. Just as the six Sephiroth of Construction above Malkuth consist of two triads, so the six notes above the tonic, which is assigned to the central yod symbolising Malkuth, form uniquely two harmonious chords. The pairs of notes (E, D), (G, F) and (B, A) have the same relative tone interval of 9/8. Note E corresponds to note D, G corresponds to F and B corresponds to A. This is consistent with their assignment: because Netzach, which is below Chesed on the Pillar of Mercy, is its counterpart in a new cycle of differentiation, whilst Hod similarly is the counterpart of Geburah below it on the Pillar of Judgement and Yesod, the psyche, is the human reflection of the spiritual individuality represented by Tiphareth lying above it on the Pillar of Equilibrium. 3. The God Particle, Leon Lederman (Bantam Press, 1993), pp. 28-29. 4. The number of yods in n Trees of Life ≡ Y(n) = 50n + 20. Hence, Y(27) = 1370. 5. Cosmic Music, Joscelyn Godwin (ed.) (Inner Traditions, Vermont, 1989). 6. Occult Chemistry, Annie Besant and C. W. Leadbeater (Theosophical Publishing House, Adyar, Chennai, India, 1951). 7. The Mathematical Connection Between Religion and Science, Stephen M. Phillips (Antony Rowe Publishing, England, 2009).
{"url":"http://www.smphillips.8m.com/article12.htm","timestamp":"2014-04-20T16:20:11Z","content_type":null,"content_length":"295305","record_id":"<urn:uuid:6a13ea90-8101-4c29-b4c2-d5ac1742e5e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Mental FlossYou can't win it if you're not in it With memories of the media retreat rapidly, er, retreating from my mind, I thought I'd better provide the answer to the question I posed on Friday. A reader named Xander (hi, Xander!) offered his solution via the comments: People are generally loss-averse so most will choose the guaranteed loss of $3000 to avoid the likely hit of 4 grand. In the case of the winning money, nothing ventured, nothing gained, something cliche: people will roll the dice for the extra cash cause hey, if you don't win, you still lose nothing. This strategy is the one David chose. Mathematically, it's the optimal tactic -- if you take the probabilities and amounts as a whole, the expected value of your lottery win on average is $3,200 (so, $200 more than you can guarantee yourself). Meanwhile, a certain loss of $3000 is better than an expected loss of $3,200. You could also argue that Will's strategy (take the guaranteed option in both situations) makes logical sense, because the scenarios are mirror images of each other -- what holds for one ought to hold for the other. Here's the rub: people don't think "purely mathematically" or even logically, and according to the U of Chicago prof at the retreat, generally most of them choose the guaranteed $3K win and the loser lottery -- the exact opposite of the optimal strategy. I will confess to being in this group even though I had already done all the calculations of expected value and loss when I made my choice. I went on instinct, and in the process helped to annoy the living daylights out of thousands of economists, many of whom rely on models that assume rationality -- which, given the evidence, isn't the most rational thing they could do. For more on how economists are trying to account for irrational dingbats like me, check out this article from Technology Review.
{"url":"http://mentalfloss.com/node/14442/atom.xml","timestamp":"2014-04-20T21:04:11Z","content_type":null,"content_length":"4682","record_id":"<urn:uuid:d99e47ad-0f50-4463-b42b-a430d755317a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Goedel on Epsilonica I often read paraphrases of one or other of Gödel’s theorems that talk about true, unprovable statements. I’ve said before that I’m a formalist of sorts. Talk of undecidable statements in a system being true gives me headaches. And I’m an analyst so I work in ZFC. If I say “statement X is true” I’m telling you that there exists a proof of statement X in ZFC. If you ask me if I think the continuum hypothesis is true, I’ll explain to you that it’s known to be undecidable. If you tell me you know it’s undecidable but still want to know if I think it’s true, I’ll look at you as if you asked me what colour integrity is.
{"url":"http://mattheath.wordpress.com/tag/goedel/","timestamp":"2014-04-16T04:12:04Z","content_type":null,"content_length":"27871","record_id":"<urn:uuid:2c294b0e-6aa2-4510-a45e-d32b9761d5fe>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
About Gift Optimizer A tremendous amount of work has been devoted to this project. The challenge was to build a calculator that would not only mathematically perform the optimization the correct way, but also accomplish it and return the results in the shortest time possible. Simple optimizations typically use fractional numbers. They often have smooth objective functions. Essentially you can think of it as the mathematical equivalent of a hill. You keep going up until you reach the top, at which point you have reached the maximum. Our problem is that you can't have half a gift. So this smooth, continuous experience doesn't really exist. You can only be allocated a whole gift. Hence, many optimizers will perform the optimization using the fractional approach, and then they adjust the results by rounding up and/or down the solution at the end, in order to ensure that whole gifts are allocated appropriately. Whilst it makes the math simpler, this process typically yields incorrect and erroneous results. We do perform the optimization the right way. Gift Optimizer knows that you deserve the best and because of this we insist on giving it to you. Gift Optimizer only deals with whole gifts, not fractions, making the behind the scenes calculations much more complex, but accurate. Math is not the only challenge. Due to the number of possible permutations, finding that one, or a handful of optimal solutions, that satisfy all of the constraints amongst the quadrillions of distinct arrangements, within a limited amount of time can be tricky. Hence from a computational standpoint, the code must itself be optimized to perform at the maximum speed in order to find your solution. Sure, given enough time and CPU we could theoritically find any solution. However, life is full of constraints and therefore we must make the best and most effective use of the resources at We hope you enjoy using Gift Optimizer as much as we enjoyed making it!
{"url":"http://www.opteamate.com/Home/About","timestamp":"2014-04-21T04:48:25Z","content_type":null,"content_length":"4103","record_id":"<urn:uuid:7c388724-a038-4060-af22-90ba33ec6f7d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Predefined Language Attributes Ada '83 Language Reference Manual Copyright 1980, 1982, 1983 owned by the United States Government. Direct reproduction and usage requests to the Ada Information Clearinghouse. A. Predefined Language Attributes Style Guide references: 3.2.5 Constants and Named Numbers, 3.4.2 Enumeration Types, 5.3.3 Private Types, 5.5.1 Range Values, 5.5.2 Array Attributes, 6.2.3 Attributes 'Count, 'Callable and 'Terminated , 8.2.4 Subtypes in Generic Specifications This annex summarizes the definitions given elsewhere of the predefined language attributes. • P'ADDRESS For a prefix P that denotes an object, a program unit, a label, or an entry: Yields the address of the first of the storage units allocated to P. For a subprogram, package, task unit, or label, this value refers to the machine code associated with the corresponding body or statement. For an entry for which an address clause has been given, the value refers to the corresponding hardware interrupt. The value of this attribute is of the type ADDRESS defined in the package SYSTEM. (See 13.7.2.) • P'AFT For a prefix P that denotes a fixed point subtype: Yields the number of decimal digits needed after the point to accommodate the precision of the subtype P, unless the delta of the subtype P is greater than 0.1, in which case the attribute yields the value one. (P'AFT is the smallest positive integer N for which (10**N)*P'DELTA is greater than or equal to one.) The value of this attribute is of the type universal_integer. (See • P'BASE For a prefix P that denotes a type or subtype: This attribute denotes the base type of P. It is only allowed as the prefix of the name of another attribute: for example, P'BASE'FIRST. (See 3.3.3.) • P'CALLABLE For a prefix P that is appropriate for a task type: Yields the value FALSE when the execution of the task P is either completed or terminated, or when the task is abnormal; yields the value TRUE otherwise. The value of this attribute is of the predefined type BOOLEAN. (See • P'CONSTRAINED For a prefix P that denotes an object of a type with discriminants: Yields the value TRUE if a discriminant constraint applies to the object P, or if the object is a constant (including a formal parameter or generic formal parameter of mode in); yields the value FALSE otherwise. If P is a generic formal parameter of mode in out, or if P is a formal parameter of mode in out or out and the type mark given in the corresponding parameter specification denotes an unconstrained type with discriminants, then the value of this attribute is obtained from that of the corresponding actual parameter. The value of this attribute is of the predefined type BOOLEAN. (See • P'CONSTRAINED For a prefix P that denotes a private type or subtype: Yields the value FALSE if P denotes an unconstrained nonformal private type with discriminants; also yields the value FALSE if P denotes a generic formal private type and the associated actual subtype is either an unconstrained type with discriminants or an unconstrained array type; yields the value TRUE otherwise. The value of this attribute is of the predefined type BOOLEAN. (See 7.4.2.) • P'COUNT For a prefix P that denotes an entry of a task unit: Yields the number of entry calls presently queued on the entry (if the attribute is evaluated within an accept statement for the entry P, the count does not include the calling task). The value of this attribute is of the type universal_integer. (See 9.9.) • P'DELTA For a prefix P that denotes a fixed point subtype: Yields the value of the delta specified in the fixed accuracy definition for the subtype P. The value of this attribute is of the type universal_real. (See 3.5.10.) • P'DIGITS For a prefix P that denotes a floating point subtype: Yields the number of decimal digits in the decimal mantissa of model numbers of the subtype P. (This attribute yields the number D of section 3.5.7.) The value of this attribute is of the type universal_integer. (See 3.5.8.) • P'EMAX For a prefix P that denotes a floating point subtype: Yields the largest exponent value in the binary canonical form of model numbers of the subtype P. (This attribute yields the product 4*B of section 3.5.7.) The value of this attribute is of the type universal_integer. (See • P'EPSILON For a prefix P that denotes a floating point subtype: Yields the absolute value of the difference between the model number 1.0 and the next model number above, for the subtype P. The value of this attribute is of the type universal_real. (See 3.5.8.) • P'FIRST For a prefix P that denotes a scalar type, or a subtype of a scalar type: Yields the lower bound of P. The value of this attribute has the same type as P. (See 3.5.) • P'FIRST For a prefix P that is appropriate for an array type, or that denotes a constrained array subtype: Yields the lower bound of the first index range. The value of this attribute has the same type as this lower bound. (See 3.6.2 and 3.8.2.) • P'FIRST(N) For a prefix P that is appropriate for an array type, or that denotes a constrained array subtype: Yields the lower bound of the N-th index range. The value of this attribute has the same type as this lower bound. The argument N must be a static expression of type universal_integer. The value of N must be positive (nonzero) and no greater than the dimensionality of the array. (See 3.6.2 and 3.8.2.) • P'FIRST_BIT For a prefix P that denotes a component of a record object: Yields the offset, from the start of the first of the storage units occupied by the component, of the first bit occupied by the component. This offset is measured in bits. The value of this attribute is of the type universal_integer. (See 13.7.2.) • P'FORE For a prefix P that denotes a fixed point subtype: Yields the minimum number of characters needed for the integer part of the decimal representation of any value of the subtype P, assuming that the representation does not include an exponent, but includes a one-character prefix that is either a minus sign or a space. (This minimum number does not include superfluous zeros or underlines, and is at least two.) The value of this attribute is of the type universal_integer. (See • P'IMAGE For a prefix P that denotes a discrete type or subtype: This attribute is a function with a single parameter. The actual parameter X must be a value of the base type of P. The result type is the predefined type STRING. The result is the image of the value of X, that is, a sequence of characters representing the value in display form. The image of an integer value is the corresponding decimal literal; without underlines, leading zeros, exponent, or trailing spaces; but with a one character prefix that is either a minus sign or a space. The image of an enumeration value is either the corresponding identifier in upper case or the corresponding character literal (including the two apostrophes); neither leading nor trailing spaces are included. The image of a character other than a graphic character is implementation-defined. (See 3.5.5.) • P'LARGE For a prefix P that denotes a real subtype: The attribute yields the largest positive model number of the subtype P. The value of this attribute is of the type universal_real. (See 3.5.8 and 3.5.10.) • P'LAST For a prefix P that denotes a scalar type, or a subtype of a scalar type: Yields the upper bound of P. The value of this attribute has the same type as P. (See 3.5.) • P'LAST For a prefix P that is appropriate for an array type, or that denotes a constrained array subtype: Yields the upper bound of the first index range. The value of this attribute has the same type as this upper bound. (See 3.6.2 and 3.8.2.) • P'LAST(N) For a prefix P that is appropriate for an array type, or that denotes a constrained array subtype: Yields the upper bound of the N-th index range. The value of this attribute has the same type as this upper bound. The argument N must be a static expression of type universal_integer. The value of N must be positive (nonzero) and no greater than the dimensionality of the array. (See 3.6.2 and 3.8.2.) • P'LAST_BIT For a prefix P that denotes a component of a record object: Yields the offset, from the start of the first of the storage units occupied by the component, of the last bit occupied by the component. This offset is measured in bits. The value of this attribute is of the type universal_integer. (See 13.7.2.) • P'LENGTH For a prefix P that is appropriate for an array type, or that denotes a constrained array subtype: Yields the number of values of the first index range (zero for a null range). The value of this attribute is of the type universal_integer. (See • P'LENGTH(N) For a prefix P that is appropriate for an array type, or that denotes a constrained array subtype: Yields the number of values of the N-th index range (zero for a null range). The value of this attribute is of the type universal_integer. The argument N must be a static expression of type universal_integer. The value of N must be positive (nonzero) and no greater than the dimensionality of the array. (See 3.6.2 and 3.8.2.) • P'MACHINE_EMAX For a prefix P that denotes a floating point type or subtype: Yields the largest value of exponent for the machine representation of the base type of P. The value of this attribute is of the type universal_integer. (See 13.7.3.) • P'MACHINE_EMIN For a prefix P that denotes a floating point type or subtype: Yields the smallest (most negative) value of exponent for the machine representation of the base type of P. The value of this attribute is of the type universal_integer. (See 13.7.3.) • P'MACHINE_MANTISSA For a prefix P that denotes a floating point type or subtype: Yields the number of digits in the mantissa for the machine representation of the base type of P (the digits are extended digits in the range 0 to P'MACHINE_RADIX - 1). The value of this attribute is of the type universal_integer. (See 13.7.3.) • P'MACHINE_OVERFLOWS For a prefix P that denotes a real type or subtype: Yields the value TRUE if every predefined operation on values of the base type of P either provides a correct result, or raises the exception NUMERIC_ERROR in overflow situations; yields the value FALSE otherwise. The value of this attribute is of the predefined type BOOLEAN. (See 13.7.3.) • P'MACHINE_RADIX For a prefix P that denotes a floating point type or subtype: Yields the value of the radix used by the machine representation of the base type of P. The value of this attribute is of the type universal_integer. (See 13.7.3.) • P'MACHINE_ROUNDS For a prefix P that denotes a real type or subtype: Yields the value TRUE if every predefined arithmetic operation on values of the base type of P either returns an exact result or performs rounding; yields the value FALSE otherwise. The value of this attribute is of the predefined type BOOLEAN. (See • P'MANTISSA For a prefix P that denotes a real subtype: Yields the number of binary digits in the binary mantissa of model numbers of the subtype P. (This attribute yields the number B of section 3.5.7 for a floating point type, or of section 3.5.9 for a fixed point type.) The value of this attribute is of the type universal_integer. (See 3.5.8 and 3.5.10.) • P'POS For a prefix P that denotes a discrete type or subtype: This attribute is a function with a single parameter. The actual parameter X must be a value of the base type of P. The result type is the type universal_integer. The result is the position number of the value of the actual parameter. (See • P'POSITION For a prefix P that denotes a component of a record object: Yields the offset, from the start of the first storage unit occupied by the record, of the first of the storage units occupied by the component. This offset is measured in storage units. The value of this attribute is of the type universal_integer. (See 13.7.2.) • P'PRED For a prefix P that denotes a discrete type or subtype: This attribute is a function with a single parameter. The actual parameter X must be a value of the base type of P. The result type is the base type of P. The result is the value whose position number is one less than that of X. The exception CONSTRAINT_ERROR is raised if X equals P'BASE'FIRST. (See 3.5.5.) • P'RANGE For a prefix P that is appropriate for an array type, or that denotes a constrained array subtype: Yields the first index range of P, that is, the range P'FIRST .. P'LAST. (See 3.6.2.) • P'RANGE(N) For a prefix P that is appropriate for an array type, or that denotes a constrained array subtype: Yields the N-th index range of P, that is, the range P'FIRST(N) .. P'LAST(N). (See 3.6.2.) • P'SAFE_EMAX For a prefix P that denotes a floating point type or subtype: Yields the largest exponent value in the binary canonical form of safe numbers of the base type of P. (This attribute yields the number E of section 3.5.7.) The value of this attribute is of the type universal_integer. (See 3.5.8.) • P'SAFE_LARGE For a prefix P that denotes a real type or subtype: Yields the largest positive safe number of the base type of P. The value of this attribute is of the type universal_real. (See 3.5.8 and 3.5.10.) • P'SAFE_SMALL For a prefix P that denotes a real type or subtype: Yields the smallest positive (nonzero) safe number of the base type of P. The value of this attribute is of the type universal_real. (See 3.5.8 and • P'SIZE For a prefix P that denotes an object: Yields the number of bits allocated to hold the object. The value of this attribute is of the type universal_integer. (See 13.7.2.) • P'SIZE For a prefix P that denotes any type or subtype: Yields the minimum number of bits that is needed by the implementation to hold any possible object of the type or subtype P. The value of this attribute is of the type universal_integer. (See 13.7.2.) • P'SMALL For a prefix P that denotes a real subtype: Yields the smallest positive (nonzero) model number of the subtype P. The value of this attribute is of the type universal_real. (See 3.5.8 and 3.5.10.) • P'STORAGE_SIZE For a prefix P that denotes an access type or subtype: Yields the total number of storage units reserved for the collection associated with the base type of P. The value of this attribute is of the type universal_integer. (See 13.7.2.) • P'STORAGE_SIZE For a prefix P that denotes a task type or a task object: Yields the number of storage units reserved for each activation of a task of the type P or for the activation of the task object P. The value of this attribute is of the type universal_integer. (See • P'SUCC For a prefix P that denotes a discrete type or subtype: This attribute is a function with a single parameter. The actual parameter X must be a value of the base type of P. The result type is the base type of P. The result is the value whose position number is one greater than that of X. The exception CONSTRAINT_ERROR is raised if X equals P'BASE'LAST. (See 3.5.5.) • P'TERMINATED For a prefix P that is appropriate for a task type: Yields the value TRUE if the task P is terminated; yields the value FALSE otherwise. The value of this attribute is of the predefined type BOOLEAN. (See • P'VAL For a prefix P that denotes a discrete type or subtype: This attribute is a special function with a single parameter X which can be of any integer type. The result type is the base type of P. The result is the value whose position number is the universal_integer value corresponding to X. The exception CONSTRAINT_ERROR is raised if the universal_integer value corresponding to X is not in the range P'POS(P'BASE'FIRST) .. P'POS(P'BASE'LAST). (See 3.5.5.) • P'VALUE For a prefix P that denotes a discrete type or subtype: This attribute is a function with a single parameter. The actual parameter X must be a value of the predefined type STRING. The result type is the base type of P. Any leading and any trailing spaces of the sequence of characters that corresponds to X are ignored. For an enumeration type, if the sequence of characters has the syntax of an enumeration literal and if this literal exists for the base type of P, the result is the corresponding enumeration value. For an integer type, if the sequence of characters has the syntax of an integer literal, with an optional single leading character that is a plus or minus sign, and if there is a corresponding value in the base type of P, the result is this value. In any other case, the exception CONSTRAINT_ERROR is raised. (See 3.5.5.) • P'WIDTH For a prefix P that denotes a discrete subtype: Yields the maximum image length over all values of the subtype P (the image is the sequence of characters returned by the attribute IMAGE). The value of this attribute is of the type universal_integer. (See 3.5.5.) Address any questions or comments to adainfo@sw-eng.falls-church.va.us.
{"url":"http://archive.adaic.com/standards/83lrm/html/lrm-A.html","timestamp":"2014-04-21T10:17:31Z","content_type":null,"content_length":"30069","record_id":"<urn:uuid:eebf2db0-94b5-4218-bfc5-cd628a7ab1de>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
A Uniformly Distributed Statistic on a Class of Lattice Paths Let ${\cal G}_n$ denote the set of lattice paths from $(0,0)$ to $(n,n)$ with steps of the form $(i,j)$ where $i$ and $j$ are nonnegative integers, not both zero. Let ${\cal D}_n$ denote the set of paths in ${\cal G}_n$ with steps restricted to $(1,0),(0,1),(1,1)$, the so-called Delannoy paths. Stanley has shown that $| {\cal G}_n | =2^{n-1}|{\cal D}_n|$ and Sulanke has given a bijective proof. Here we give a simple statistic on ${\cal G}_n$ that is uniformly distributed over the $2^{n-1}$ subsets of $[n-1]=\{1,2,\ldots,n\}$ and takes the value $[n-1]$ precisely on the Delannoy paths. Full Text:
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v11i1r82/0","timestamp":"2014-04-18T01:12:47Z","content_type":null,"content_length":"14613","record_id":"<urn:uuid:80eb8984-aa7c-4784-9fc8-9981c7726e3b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
This is a schedule of the colloquium talks for SIMU 2002. All talks will take place at the University of Puerto Rico - Humacao in Ciencias Naturales Room 103. ┃Date │Time│Speaker │Title ┃ ┃June 21│1:45│John B. Little, │Counting the number of solutions of a system of polynomial equations ┃ ┃ │ │College of the Holy Cross │ ┃ ┃June 21│3:00│Lisa Fauci, │Biological fluid dynamics - successes and challenges ┃ ┃ │ │Tulane University │ ┃ ┃June 28│1:45│Reinhard Laubenbacher, │Oh what a tangled web we weave: The age of Networks ┃ ┃ │ │Virginia Bioinformatics Institute│ ┃ ┃June 28│3:00│Rosa C. Orellana, │On Hecke algebras ┃ ┃ │ │Dartmouth College │ ┃ ┃July 12│1:45│Irena Swanson, │Mayr-Meyer ideals ┃ ┃ │ │New Mexico State University │ ┃ ┃July 19│1:45│Louis Billera, │Geometry of the space of phylogenetic trees ┃ ┃ │ │Cornell University │ ┃ ┃July 19│3:00│Carlos Moreno, │Number theory and its applications ┃ ┃ │ │City University of New York │ ┃ John B. Little, College of the Holy Cross Counting the number of solutions of a system of polynomial equations Many problems in pure and applied mathematics lead to the basic problem of solving systems of polynomial equations with real or complex coefficients in several variables. The situation where the set of complex solutions is finite is fundamental and we will focus on that. While iterative numerical methods (for instance, the multivariable Newton's Method) are usually effective for approximating individual solutions, they give no direct information about how many solutions a system actually has. Using them, it is sometimes difficult to know when to stop looking for new solutions! Other, more algebraic, techniques for determining (or bounding) the total number of solutions have been much studied. In this talk, we will discuss some of these ideas, starting with the most basic result of this type -- the Fundamental Theorem of Algebra, and its generalization to the classical Bezout theorem. We will then turn to some quite recent work and show how a more refined estimate, called the BKK bound, comes from the combinatorics of the particular "shape" of the terms in the polynomials (encoded via the so-called Newton polytopes). Back to the top. Lisa Fauci, Tulane University Biological fluid dynamics - successes and challenges Problems in biological fluid dynamics typically involve the interaction of an elastic structure with the surrounding fluid. Examples of these coupled fluid-structure systems include blood flow in the heart, air flow through the lungs, and sperm motility in the reproductive tract. These biological processes can each be described by a system of time-dependent, coupled, nonlinear partial differential equations. While the explicit solution of these complex equations is impossible, scientists are making progress in understanding these systems using computational methods. In this talk, we will present an overview of biological fluid dynamics, and discuss the interdisciplinary nature of the research in this field. We will focus on the examples of ciliary and flagellar beating in microorganisms, as well as the swimming of nematodes and leeches. Back to the top. Reinhard Laubenbacher, New Mexico State University Oh what a tangled web we weave: The age of networks An important paradigm of science at the beginning of the twenty-first century is the view of our social, physical, and technological world as an interconnected collection of networks. It is important to understand these networks as a whole rather than as a collection of their parts. For instance, the World Wide Web is more than just a collection of linked web sites; it takes on a life of its own that cannot be explained through the functioning of its parts. Modern genetics has taught us that the processes in our body are controlled by a network of genes acting together. Computers perform vast calculations by distributing the task over a network of processors that work together as a team. This talk will describe several such networks, analysis methods, and scientific challenges we are facing in understanding the networked world around (and inside) us. Back to the top. Rosa C. Orellana, Dartmouth College On Hecke algebras The Hecke algebras of type A arise naturally in the study of knot theory, quantum groups, and Von Neumann algebras. Their relation to the symmetric and braid groups allows for their study using combinatorics and low dimensional topology. In this talk I will give an introduction to Hecke algebras of type A and B and show their relation to the symmetric group and the braid group (this groups will be defined in this talk). I will also construct a beautiful homomorphism from a specialization of the Hecke algebra of type B onto a reduced Hecke algebra of type A. This homomorphism has proven to be an useful tool to reduce questions about the Hecke algebra of type B To the Hecke algebra of type A. If time permits, I will give applications of this homomorphism. Back to the top. Irena Swanson, New Mexico State University Mayr-Meyer ideals Grete Hermann proved in 1926 that for any ideal I in an n-dimensional polynomial ring over the field of rational numbers, if I is generated by k polynomials each of which has degree at most d, then it is possible to write each element f of I as a linear combination of the given generators such that the coefficients of the generators have degree at most (kd)^2^^n^ more than the degree of f. In other words, the ideal membership problem is doubly exponential in the number of variables. There are ideals for which singly exponential degree can be easily verified. For a long time there was hope that singly exponential bound was indeed the upper bound. However, in 1982, Mayr and Meyer found (generators of) ideals for which a doubly exponential bound in n is indeed achieved. This talk will be about these Mayr-Meyer ideals, the properties they do and do not satisfy, and how one can approach the computational complexity problem from the point of view of algebra. Back to the top. Louis Billera, Cornell University Geometry of the space of phylogenetic rees We consider a continuous space that models the set of all phylogenetic trees having a fixed set of leaves. This space has a natural metric of nonpositive curvature (i.e., it is CAT(0) in the sense of Gromov), giving a way of measuring distance between phylogenetic trees and providing some procedures for averaging or otherwise doing statistical analyses on sets of trees on a common set of species. This geometric model of tree space provides a setting in which questions that have been posed by biologists and statisticians over the last decade can be approached in a systematic fashion. For example, it provides a justification for disregarding portions of a collection of trees that agree, thus simplifying the space in which comparisons are to be made. Implementing this model requires computational techniques that make use of the dual combinatorial and continuous nature of this space. Such techniques are currently being developed. This is joint work with Susan Holmes and Karen Vogtmann: http:www.math.cornell.edu/~vogtmann/papers/Trees/lap.pdf Back to the top. Carlos Moreno, City University of New York Number theory and its applications The distinguished place that the squares 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, ... hold within the set of all natural numbers is discussed in connection with Gauss' law of quadratic reciprocity. This will serve as an introduction to exponential sums, also known as trigonometric sums, and their applications in number theory (Gauss sums). The application of exponential sums in computational mathematics ("the fast Fourier transform"-FFT) and the application of Galois' theory of finite fields in cryptography (the new "Advanced Encryption Standard"-Rijndael) will also be discussed. Furthermore, the elementary level of the talk will discuss the historical relation between these developments and the rest of mathematics Back to the top.
{"url":"http://www.uprh.edu/~simu/colloquia2.htm","timestamp":"2014-04-20T03:24:59Z","content_type":null,"content_length":"21429","record_id":"<urn:uuid:9a6068b5-f416-4a97-9b7a-947c750e67fc>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about R on Rules of Reason It’s Oscars season again so why not explore how predictable (my) movie tastes are. This has literally been a million dollar problem and obviously I am not gonna solve it here, but it’s fun and slightly educational to do some number crunching, so why not. Below, I will proceed from a simple linear regression to a generalized additive model to an ordered logistic regression analysis. And I will illustrate the results with nice plots along the way. Of course, all done in R (you can get the script here). The data for this little project comes from the IMDb website and, in particular, from my personal ratings of 442 titles recorded there. IMDb keeps the movies you have rated in a nice little table which includes information on the movie title, director, duration, year of release, genre, IMDb rating, and a few other less interesting variables. Conveniently, you can export the data directly as a csv file. Outcome variable The outcome variable that I want to predict is my personal movie rating. IMDb lets you score movies with one to ten stars. Half-points and other fractions are not allowed. It is a tricky variable to work with. It is obviously not a continuous one; at the same time ten ordered categories are a bit too many to treat as a regular categorical variable. Figure 1 plots the frequency distribution (black bars) and density (red area) of my ratings and the density of the IMDb scores (in blue) for the 442 observations in the data. The mean of my ratings is a good 0.9 points lower than the IMDb scores, which are also less dispersed and have a higher peak (can you say ‘kurtosis’). Data-generating process Some reflection on how the data is generated can highlight its potential shortcomings. First, life is short and I try not to waste my time watching bad movies. Second, even if I get fooled to start watching a bad movie, usually I would not bother rating it on IMDb.There are occasional two- and three-star scores, but these are usually movies that were terrible and annoyed me for some reason or another (like, for example, getting a Cannes award or featuring Bill Murray). The data-generating process leads to a selection bias with two important implications. First, the effective range of variation of both the outcome and the main predictor variables is restricted, giving the models less information to work with. Second, because movies with a decent IMDb ratings which I disliked have a lower chance of being recorded in the dataset, the relationship we find in the sample will overestimate the real link between my ratings and the IMDb ones. Take one: linear regression Enough preliminaries, let’s get to business. An ordinary linear regression model is a common starting point for analysis and its results can serve as a baseline. Here are the estimates that lm provides for regressing my ratings on IMDb scores: summary(lm(mine~imdb, data=d)) Estimate Std. Error t value Pr(>|t|) (Intercept) -0.6387 0.6669 -0.958 0.339 imdb 0.9686 0.0884 10.957 *** Residual standard error: 1.254 on 420 degrees of freedom Multiple R-squared: 0.2223, Adjusted R-squared: 0.2205 The intercept indicates that on average my ratings are more than half a point lower. The positive coefficient of IMDb score is positive and very close to one which implies that one point higher (lower) IMDb rating would predict, on average, one point higher (lower) personal rating. Figure 2 plots the relationship between the two variables (for an interactive version of the scatter plot, click here): The solid black line is the regression fit, the blue one shows a non-parametric loess smoothing which suggests some non-linearity in the relationship that we will explore later. Although the IMDb score coefficient is highly statistically significant that should not fool us that we have gained much predictive capacity. The model fit is rather poor. The root mean squared error is 1.25 which is large given the variation in the data. But the inadequate fit is most clearly visible if we plot the actual data versus the predictions. Figure 3 below does just that. The grey bars show the prediction plus/minus two predictive standard errors. If the predictions derived from the model were good, the dots (observations) would be very close to the diagonal (indicated by the dotted line). In this case, they are not. The model does a particularly bad job in predicting very low and very high ratings. We can also see how little information IMDb scores contain about (my) personal scores by going back to the raw data. Figure 4 plots to density of my ratings for two sets of values of IMDb scores – from 6.5 to 7.5 (blue) and from 7.5- to 8.5 (red). The means for the two sets differ somewhat, but the overlap in the density is great. In sum, knowing the IMDb rating provides some information but on its own doesn’t get us very far in predicting what my score would be. Take two: adding predictors Let’s add more variables to see if things improve. Some playing around shows that among the available candidates only the year of release of the movie and dummies for a few genres and directors (selected only from those with more than four movies in the data) give any leverage. summary(lm(mine~imdb+d$comedy +d$romance+d$mystery+d$"Stanley Kubrick"+d$"Lars Von Trier"+d$"Darren Aronofsky"+year.c, data=d)) Estimate Std. Error t value Pr(>|t|) (Intercept) 1.074930 0.651223 1.651 . imdb 0.727829 0.087238 8.343 *** d$comedy -0.598040 0.133533 -4.479 *** d$romance -0.411929 0.141274 -2.916 ** d$mystery 0.315991 0.185906 1.700 . d$"Stanley Kubrick" 1.066991 0.450826 2.367 * d$"Lars Von Trier" 2.117281 0.582790 3.633 *** d$"Darren Aronofsky" 1.357664 0.584179 2.324 * year.c 0.016578 0.003693 4.488 *** Residual standard error: 1.156 on 413 degrees of freedom Multiple R-squared: 0.3508, Adjusted R-squared: 0.3382 The fit improves somewhat. The root mean squared error of this model is 1.14. Moreover, looking again at the actual versus predicted ratings, the fit is better, especially for highly rated movies – no surprise given that the director dummies pick these up. The last variable in the regression above is the year of release of the movie. It is coded as the difference from 2014, so the positive coefficient implies that older movies get higher ratings. The statistically significant effect, however, has no straightforward predictive interpretation. The reason is again selection bias. I have only watched movies released before the 1990s that have withstood the test of time. So even though in the sample older films have higher scores, it is highly unlikely that if I pick a random film made in the 1970s I would like it more than a random film made after 2010. In any case, Figure 6 below plots the year of release versus the residuals from the regression of my ratings on IMDb scores (for the subset of films after 1960). We can see that the relationship is likely nonlinear (and that I really dislike comedies from the 1980s). So far both regressions assumed that the relationship between the predictors and the outcome is linear. Needless to say, there is no compelling reason why this should be the case. Maybe our predictions will improve if we allow the relationships to take any form. This calls for a generalized additive model. Take three: generalized additive model (GAM) In R, we can use the mgcv library to fit a GAM. It doesn’t make sense to hypothesize non-linear effects for binary variables, so we only smooth the effects of IMDb rating and year of release. But why stop there, perhaps the non-linear effects of IMDb rating and release year are not independent, why not allow them to interact! summary(gam(mine ~ te(imdb,year.c)+d$"comedy " +d$"romance "+d$"mystery "+d$"Stanley Kubrick"+d$"Lars Von Trier"+d$"Darren Aronofsky", data = d)) PParametric coefficients: Estimate Std. Error t value Pr(|t|) (Intercept) 6.80394 0.07541 90.225 *** d$"comedy " -0.60742 0.13254 -4.583 *** d$"romance " -0.43808 0.14133 -3.100 ** d$"mystery " 0.32299 0.18331 1.762 . d$"Stanley Kubrick" 0.83139 0.45208 1.839 . d$"Lars Von Trier" 2.00522 0.57873 3.465 *** d$"Darren Aronofsky" 1.26903 0.57525 2.206 * Approximate significance of smooth terms: edf Ref.df F p-value te(imdb,year.c) 10.85 13.42 11.09 Well, the root mean squared error drops to 1.11 and the jointly smoothed (with a full tensor product smooth) variables are significant, but the added predictive value is minimal in this case. Nevertheless, the plot below shows the smoothed terms are more appropriate than the linear ones, and that there is a complex interaction between the two: Take four: models for categorical data So far we treated personal movie ratings as if they were a continuous variable, but they are not – taking into account that they are essentially an ordered categorical variable might help. But ten categories, while possible to model, would make the analysis rather unwieldy, so we recode the personal ratings into five categories without much loss of information: 5 and less, 6,7,8,9 and more. We can first see a nonparametric conditional destiny plot of the newly created categorical variable as a function of IMDb scores: The plot shows the observed density for each category of the outcome variable along the range of the predictor. For example, for a film with an IMDb rating of ’6′, about 35% of the personal scores are ’5′, a further 50% are ’6′, and the remaining 15% are ’7′. Remember that the plot is based on the observed conditional frequencies only (with some smoothing), not on the projections of a model. But the small ups and downs seem pretty idiosyncratic. We can also fit an ordered logistic regression model, which would be appropriated for the categorical outcome variable we have, and plot its predicted probabilities given the model. First, here is the output of the model: summary(polr(as.factor(mine.c) ~ imdb+year.c, Hess=TRUE, data = d) Value Std. Error t value imdb 1.4103 0.149921 9.407 year.c 0.0283 0.006023 4.699 Value Std. Error t value 5|6 9.0487 1.0795 8.3822 6|7 10.6143 1.1075 9.5840 7|8 12.1539 1.1435 10.6289 8|9 14.0234 1.1876 11.8079 Residual Deviance: 1148.665 AIC: 1160.665 The coefficients of the two predictors are significant. The plot below shows the predicted probability of the outcome variable – personal movie rating – being in each of the five categories as a function of IMDb rating and illustrates the substantive scale of the effect. Compared to the non-parametric conditional density plot above, these model-based predictions are much smoother and have ‘disciplined’ the effect of the predictor to follow a systematic pattern. It is interesting to ponder which of the two would be more useful for out-of-sample predictions. Despite the fact that the non-parametric one is more faithful to the current data, I think I would go for the parametric model projections. After all, is it really plausible that a random film with an IMDb rating of 5 would have lower chance a getting a 5 from me than a film with an IMDb rating of 6, as the non-parametric conditional density plot suggests? I don’t think so. Interestingly, in this case the parametric model has actually corrected for some of the selection bias and made for more plausible out-of-sample predictions. In sum, whatever the method, it is not very fruitful to try to predict how much a person (or at least, the particular person writing this) would like a movie based on the average rating the movie gets and covariates like the genre or the director. Non-linear regressions and other modeling tricks offer only marginal predictive improvements over a simple linear regression approach, but bring plenty of insight about the data itself. What is the way ahead? Obviously, one would want to get more relevant predictors, but, unfortunately, IMDb seems to have a policy against web-scrapping from its database, so one would either have to ask for permission or look at a different website with a more liberal policy (like Rotten Tomatoes perhaps). For me, the purpose of this exercise has been mostly in its methodological educational value, so I think I will leave it at that. Finally, don’t forget to check out the interactive scatterplot of the data used here which shows a user’s entire movie rating history at a glance. As you would have noted, the IMDb ratings come at a greater level of precision (like 7.3) than the one available for individual users (like 7). So a user who really thinks that a film is worth 7.5 has to pick 7 or 8, but its average IMDb score could well be 7.5. If the rating categories available to the user are indeed too coarse, this would show up in the relationship with the IMDb score: movies with an average score of 7.5 would be less predictable that movies with an average score of either 7 or 8. To test this conjecture, a rerun the linear regression models on two subsets of the data: one comprising the movies with an average IMDb rating between 5.9 and 6.1, 6.9 an 7.1, etc., and a second one comprising those with an average IMDb rating between 5.4 and 5.6, 6.4 and 6.6, etc. The fit of the regression for the first group was better than for the second (RMSE of 1.07 vs. 1.11), but, frankly, I expected a more dramatic difference. So maybe ten categories are just Swimming in a sea of code If you are looking for code here, move on. > In the beginning, there was only the relentless blinking of the cursor. With the maddening regularity of waves splashing on the shore: blink, blink, blink, blink…Beyond the cursor, the white wasteland of the empty page: vast, featureless, and terrifying as the sea. You stare at the empty page and primordial fear engulfs you: you are never gonna venture into this wasteland, you are never gonna leave the stable, solid, familiar world of menus and shortcuts, icons and buttons. And then you take the first cautious steps. print ‘Hello world’ > Hello world, the sea obliges. > 2 > 4 You are still scared, but your curiosity is aroused. The playful responsiveness of the sea is tempting, and quickly becomes irresistible. Soon, you are jumpting around like a child, rolling upside-down and around and around: > a=2 > b=3 > a+b > for (x in 1:60) print (x) The sense of freedom is exhilarating. You take a deep breath and dive: > for (i in 1:10) ifelse (i>5, print ('ha'), print ('ho')) [1] "ho" [1] "ho" [1] "ho" [1] "ho" [1] "ho" [1] "ha" [1] "ha" [1] "ha" [1] "ha" [1] "ha" Your old fear seems so silly now. Code is your friend. The sea is your friend. The white page is just a playground with endless possibilities. Your confidence grows. You start venturing further into the deep. You write your first function. You let code scrape the web for you. You generate your first random variable. You run your first statistical models. Your code grows in length and takes you deeper and deeper into unexplored space. Then suddenly you are lost. Panic sets in. The code stops to obey; you search for the problem but you cannot find it. Panic grows. Instinctively, you grasp for help for the icons, but there are none. You look for support by the menus but they are gone. You are all alone in the middle of this long string of code which seems so alien right now. Clouds gather. Who tempted you in? How do you get back? What to do next? You want to turn these lists into vectors, but you can’t. You need to decompose your strings into characters but you don’t know how. Out of nowhere encoding problems appear and your entire code is defunct. You are lost…. Eventually, you give up and get back to the shore. The world of menus and icons and shortcuts is limited but safe. Your short flirt with code is over forever, you think. Sometimes you dare to dream about the freedom it gave you but then you remember the feelings of helplessness and entrapment, of being all alone in the open sea. No, getting into code was a childish mistake. But as time goes by you learn to control your fear and approach the sea again. This time without headless enthusiasm but slowly, with humility and respect for its unfathomable depths. You never stray too far away from the shore in one go. You learn to avoid nested loops and keep your regular expressions to a minimum. You always leave signposts if you need to retrace your path. Code will never be your friend. The sea will never be your lover. But maybe you can learn to get along just enough as to harness part of its limitless power… without losing yourself into it forever. > The evolution of EU legislation (graphed with ggplot2 and R) During the last half century the European Union has adopted more than 100 000 pieces of legislation. In this presentation I look into the patterns of legislative adoption over time. I tried to create clear and engaging graphs that provide some insight into the evolution of law-making activity: not an easy task given the byzantine nature of policy making in the EU and the complex nomenclature of types of legal acts possible. The main plot showing the number of adopted directives, regulations and decisions since 1967 is pasted below. There is much more in the presentation. The time series data is available here, as well as the R script used to generate the plots (using ggplot2). Some of the graphs are also available as interactive visualizations via ManyEyes here, here, and here (requires Java). Enjoy. Music Network Visualization Note: probably of interest only to the intersection of the readers who are into niche music genres and those interested in network visualization. My music interests have always been rather, hmm…, eclectic. Somehow IDM, ambient, darkwave, triphop, acid jazz, bossa nova, qawali, Mali blues and other more or less obscure genres have managed to happily co-exist in my music collection. The sheer diversity always invited the question whether there is some structure to the collection, or each genre is an island of its own. Sounds like a job for network visualization! Now, there are plenty of music network viz applications on the web. But they don’t show my collection, and just seem unsatisfactory for various reasons. So I decided to craft my own visualization using R and igraph. As a first step I collected for all artists in my last.fm library the artists that the site classifies as similar. So I piggyback on last.fm for the network similarity measures. I also get info on the most-often used tag for the artist and the number of plays it has on the site. The rest is pretty straightforward as can be seen from the code. # Load the igraph and foreign packages (install if needed) lastfm<-read.csv("http://www.dimiter.eu/Data_files/lastfm_network_ad.csv", header=T, encoding="UTF-8") #Load the dataset lastfm$include<-ifelse(lastfm$Similar %in% lastfm$Artist==T,1,0) #Index the links between artists in the library lastfm.network<-graph.data.frame(lastfm, directed=F) #Import as a graph last.attr<-lastfm[-which(duplicated(lastfm$Artist)),c(5,3,4) ] #Create some attributes V(lastfm.network)[107:length(V(lastfm.network))]$tag<-NA #Attach the attributes to the artist from the library (only) V(lastfm.network)$label.cex$tag<-ifelse(V(lastfm.network)$listeners>1200000, 1.4, (ifelse(V(lastfm.network)$listeners>500000, 1.2, (ifelse(V(lastfm.network)$listeners>100000, 1.1, (ifelse(V(lastfm.network)$listeners>50000, 1, 0.8))))))) #Scale the size of labels by the relative popularity V(lastfm.network)$color<-"white" #Set the color of the dots V(lastfm.network)$size<-0.1 #Set the size of the dots V(lastfm.network)[1:106]$label.color<-"white" #Only the artists from the library should be in white, the rest are not needed E(lastfm.network)[ include==0 ]$color<-"black" E(lastfm.network)[ include==1 ]$color<-"red" #Color edges between artists in the library red, the rest are not needed fix(tkplot) #Add manually to the function an argument for the background color of the canvas and set it to black (bg=black) tkplot(lastfm.network, vertex.label=V(lastfm.network)$name, layout=layout.fruchterman.reingold, canvas.width=1200, canvas.height=800) #Plot the graph and adjust as needed I plot the network with the tkplot command which allows for the manual adjustments necessary because many artist names get on top of each other in the initial plot. Because the export options of tkplot are limited I just took a print screen ( I know, I know, that’s kind of cheating ;-)), added the tittle in Photoshop and, voila, it’s done! [click to enlarge and explore] Knowing intimately the artists in the graph, I can certify that the network definitely makes a lot of sense. I love the small clusters (Flying Louts, Andy Stott, Extrawelt and Claro Intelecto [minimal/dub], or Anouar Brahem and Rabih Abou-Khalil [ethno jazz]) loosely connected to the rest of the network. And I love the fact that the boundary spanners are immediately obvious (e.g. Pink Martini between acid jazz and world music [what a stupid label by the way!], or Cesaria Evora between African and Caribbean music, or Portishead between brit-pop, trip-hop and darkwave, or Amon Tobin between trip-hop, electro and IDM). Even the different world music genres are close to each other but still unconnected. And somehow Banco De Gaya, the most ethno of all electronica in the library, ended up closest to the world/ethno clusters. There are a few problems, like Depeche Mode, which get to be pulled from the opposite sides of the graph, but these are very few. Altogether, I have to admit I feel like a teenage dream of mine has finally been realized. But I realize the network is a rather personal thing (as it was meant to be) so I don’t expect many to get overly excited about it. Still, I would be glad to hear your comments or suggestions for extensions and improvements. And, if you were a good boy/girl during the year, I could also consider visualizing your last.fm network as a present for the new year! Network visualization in R with the igraph package In this post I showed a visualization of the organizational network of my department. Since several people asked for details how the plot has been produced, I will provide the code and some extensions below. The plot has been done entirely in R (2.14.01) with the help of the igraph package. It is a great package but I found the documentation somewhat difficult to use, so hopefully this post can be a helpful introduction to network visualization with R. Here we go: # Load the igraph package (install if needed) # Data format. The data is in 'edges' format meaning that each row records a relationship (edge) between two people (vertices). # Additional attributes can be included. Here is an example: # Supervisor Examiner Grade Spec(ialization) # AA BD 6 X # BD CA 8 Y # AA DE 7 Y # ... ... ... ... # In this anonymized example, we have data on co-supervision with additional information about grades and specialization. # It is also possible to have the data in a matrix form (see the igraph documentation for details) # Load the data. The data needs to be loaded as a table first: bsk<-read.table("http://www.dimiter.eu/Data_files/edgesdata3.txt", sep='\t', dec=',', header=T)#specify the path, separator(tab, comma, ...), decimal point symbol, etc. # Transform the table into the required graph format: bsk.network<-graph.data.frame(bsk, directed=F) #the 'directed' attribute specifies whether the edges are directed # or equivelent irrespective of the position (1st vs 2nd column). For directed graphs use 'directed=T' # Inspect the data: V(bsk.network) #prints the list of vertices (people) E(bsk.network) #prints the list of edges (relationships) degree(bsk.network) #print the number of edges per vertex (relationships per people) # First try. We can plot the graph right away but the results will usually be unsatisfactory: Not very informative indeed. Let’s go on: #Subset the data. If we want to exclude people who are in the network only tangentially (participate in one or two relationships only) # we can exclude the by subsetting the graph on the basis of the 'degree': bad.vs<-V(bsk.network)[degree(bsk.network)<3] #identify those vertices part of less than three edges bsk.network<-delete.vertices(bsk.network, bad.vs) #exclude them from the graph # Plot the data.Some details about the graph can be specified in advance. # For example we can separate some vertices (people) by color: V(bsk.network)$color<-ifelse(V(bsk.network)$name=='CA', 'blue', 'red') #useful for highlighting certain people. Works by matching the name attribute of the vertex to the one specified in the 'ifelse' expression # We can also color the connecting edges differently depending on the 'grade': E(bsk.network)$color<-ifelse(E(bsk.network)$grade==9, "red", "grey") # or depending on the different specialization ('spec'): E(bsk.network)$color<-ifelse(E(bsk.network)$spec=='X', "red", ifelse(E(bsk.network)$spec=='Y', "blue", "grey")) # Note: the example uses nested ifelse expressions which is in general a bad idea but does the job in this case # Additional attributes like size can be further specified in an analogous manner, either in advance or when the plot function is called: V(bsk.network)$size<-degree(bsk.network)/10#here the size of the vertices is specified by the degree of the vertex, so that people supervising more have get proportionally bigger dots. Getting the right scale gets some playing around with the parameters of the scale function (from the 'base' package) # Note that if the same attribute is specified beforehand and inside the function, the former will be overridden. # And finally the plot itself: par(mai=c(0,0,1,0)) #this specifies the size of the margins. the default settings leave too much free space on all sides (if no axes are printed) plot(bsk.network, #the graph to be plotted layout=layout.fruchterman.reingold, # the layout method. see the igraph documentation for details main='Organizational network example', #specifies the title vertex.label.dist=0.5, #puts the name labels slightly off the dots vertex.frame.color='blue', #the color of the border of the dots vertex.label.color='black', #the color of the name labels vertex.label.font=2, #the font of the name labels vertex.label=V(bsk.network)$name, #specifies the lables of the vertices. in this case the 'name' attribute is used vertex.label.cex=1 #specifies the size of the font of the labels. can also be made to vary # Save and export the plot. The plot can be copied as a metafile to the clipboard, or it can be saved as a pdf or png (and other formats). # For example, we can save it as a png: png(filename="org_network.png", height=800, width=600) #call the png writer #run the plot dev.off() #dont forget to close the device #And that's the end for now. Still not perfect, but much more informative and aesthetically pleasing. Additional information can be found on this guide to igraph which is in development, the examples here, and the official CRAN documentation of the package. Especially useful is this list of the plot attributes that can be tweaked. The plots can also be adjusted interactively using the tkplot function instead of plot, but the options for saving the resulting figure are limited. Have fun with your networks! Visualizing left-right government positions How does the political landscape of Europe change over time? One way to approach this question is to map the socio-economic left-right positions of the governments in power. So let’s plot the changing ideological positions of the governments using data from the Manifesto project! As you will see below, this proved to be a more challenging task than I imagined, but the preliminary results are worth sharing nonetheless. First, we need to extract the left-right positions from the Manifesto dataset. Using the function described here, this is straightforward: lr2000<-manifesto.position('rile', start=2000, end=2000) This compiles the (weighted) cabinet positions for the European countries for the year 2000. Next, let’s generate a static map. We can use the new package rworldmap for this purpose. Let’s also build a custom palette that maps colors to left-right values. Since in Europe red traditionally is the color of the political left (the socialists), the palette ranges from dark red to gray to dark blue (for the right-wing governments). library (rworldmap) op <- palette(c('red4','red3','red2','red1','grey','blue1', 'blue2','blue3', 'blue4')) After recoding the name of the UK, we are ready to bind our data and plot the map. You can save the map as a png file. lr2000$State<-recode(lr$State, "'Great Britain'='United Kingdom'") lrmapdata <- joinCountryData2Map( lr2000,joinCode = "NAME", nameJoinColumn = "State", mapResolution='medium') png(file='LR2000map.png', width=640,height=480) mapCountryData( lrmapdata, nameColumnToPlot="position",colourPalette=op, xlim=c(-9,31), ylim=c(36,68), mapTitle='2000', aspect=1.25,addLegend=T ) The limits on on the x- and y-axes center the map on Europe. It is a process of trial and error till you get it right, and the limits need to be co-ordinated with the aspect and the width and height of the png file so that the map looks reasonably well-proportioned. Here is the result (click to see in full resolution): It looks a bit chunky but not too bad. Next, we have to find a way to show developments over time. We could show several plots for different years on one page, but this is not very effective: A much better way would be to make the maps dynamic, or, in other words, to animate them. But this is easier said than done. After searching for a few days for tools that can accomplish the job, I settled for producing individual maps for each month, importing the series into Adobe Flash, and exporting a simple animation movie. The R code to produce the individual maps: lr<-manifesto.position('rile', start=1948, end=2008, period='month') lr$State<-recode(lr$State, "'Great Britain'='United Kingdom'") for (i in 1:length(u.c)){ lr.temp<-subset(lr, lr$Year.month==u.c[i]) lrmapdata <- joinCountryData2Map( lr.temp,joinCode = "NAME", nameJoinColumn = "State", mapResolution='medium') plot.name<-paste('./maps/map',i,'.png', sep='') png(file=plot.name, width=640,height=480) mapCountryData( lrmapdata, nameColumnToPlot="position",colourPalette=op, xlim=c(-9,31), ylim=c(36,68), mapTitle=u.c[i], aspect=1.25,addLegend=T ) dev.off() } And here is the result (opens outside the post): Flash video of Left-Right positions (slow) It kind of works, it has buttons for navigation, but it has one major flow – it is damn slow. It should be 12 frames (maps) per second, and it is 12 fps inside Flash, but once exported, the frame rate goes down (probably because my laptop’s processor is too slow). In fact, I can export a fast version, but only if I get rid of the control buttons. Here it is (right-click and press play to Flash video of Left-Right positions (fast) You can also play the animation as an AVI video (uploaded on YouTube), but somehow, through the mysteries of video-processing, a crisp slideshow of 8mb ended up as a low-res movie of 600mb. The results resemble my initial idea, although none is perfect. Ideally, I would want a fast movie with controls and a time-slider, but my Flash programming skills (and my computer) need to be upgraded for that. Meanwhile, the Manifesto project could also update their data on which the animation is based. Altogether, the experience of creating the visualization has been much more painful than I anticipated. First, there doesn’t seem to be an easy way to get a map of Europe (or, more precisely, of the European Union territories) for use in R. The available options are either too low resolution, or too outdated (e.g. featuring Czechoslovakia), or require centering a world-map using ylim and xlim which is a problem because these coordinates are connected to the dimensions and the resolution of the output plot. For the US, and for individual European states, there are tons of slick and easy-to-find maps (shapefiles), but for Europe I couldn’t find anything that doesn’t feature huge tracts of land east to the Urals, which are irrelevant and remain empty with political data (which is usually available for the EU+ states only). Any pointers to good, relatively high-res maps (shapefiles) of the EU will be much appreciated. Second, producing an animation out of the individual maps is rather difficult. Currently, Google Charts offer dynamic plots and static maps, I hope in the future they include dynamic maps as well. Especially because the googleVis package makes it possible to build Google charts from within R. I also found a new tool called StatPlanet which seems relevant and rather cool, but still relies on Adobe Flash and has no packaged Europe/EU maps. The big guns in visualization software are most probably up to the task but Tableau is prohibitively expensive and Processing is said to have a steep learning curve. Again, any help in identifying solutions that do not require proprietary software to produce animated maps would be much appreciated. I hope to be able to post an update on the project soon. Compiling government positions from the Manifesto Project data with R ****N.B. I have updated the function in February 2014 to makes use of the latest Manifesto data. See for details here.*** The Manifesto Project (former Manifesto Research Group, Comparative Manifestos Project) has assembled a database of ‘quantitative content analyses of parties’ election programs from more than 50 countries covering all free, democratic elections since 1945′ and is freely accessible online. The data, however, is available only at the party, and not at the government (cabinet) level. In order to automate the process of extracting government positions from the Manifesto data, I wrote a simple R function which combines the party-level Manifesto data with the data on government compositions from the ParlGov database. The function manifesto.position() produces a data frame with the country, the time period, the government position of interest, and an index (id) variable. You can get the data either at a monthly or yearly period of aggregation, specify the start and the end dates, and get the data in ‘long’ or ‘wide’ format. Here is how it works: First, you would need R up and running (with the ‘ggplot2‘ library installed). Second, you need the original data on party positions and on government compositions, and this script to merge them. Alternatively, you can download (or source) directly the resulting merged dataset here. Third, you need to source the file containing the functions. Here are a few examples of the function in action: ### 1. Load the data file from the working directory or from the URL (default) #cabinets<-read.table ('cabinets.txt', as.is=TRUE) cabinets<-read.table ('http://www.dimiter.eu/Data_files/cabinets/cabinets.txt', as.is=TRUE) ### 2. Load the functions from the working directory or from the URL (default) #source('government position extraction functions.R') ### Use of manifesto.position(x, weighted=TRUE, long=TRUE, period='year', start=1945, end=2010) ### Inputs: ### x [the name of the Manifesto item] ### weighted [weighted mean of the government position or a simple unweighted mean] ### period [year (default) or month - time period for which the position is extracted] ### long [long (default) or wide version of the output data] ### start [starting year for the extraction; 1945 is default] ### end [end year of the extraction; 2010 is default] ### Output: A data frame with four columns - State, Year (Year.month), position [the actual position], id [Year.State(Year.month.State)] ### For details see the sourced file above ### Examples ## 1. Extract the left/right positions ## 2. Exatract the unweighted International peace position from 1980 until 1999 intp<-manifesto.position('intpeace', weighted=F, start=1980, end=1999) ## 3. Exatract the weighted Welfare position from 1980 until 1999 in a wide, rather than long shape - states are rows and years are colunms welfare<-manifesto.position('welfare', long=F, start=1980, end=1999) welfareT<-t(welfare) ##this would make the countries columns and the years rows. ## 4. Left/right on a monthly basis from 1980 till 1990 lrm<-manifesto.position('rile', period='month', start=1980, end=1990) I hope you find the function useful. Feel free to e-mail any suggestions, remarks, reports on bugs, etc. If you use the function and the data, don’t forget to acknowledge the work of the people who collected the Manifestos and who compiled the ParlGov database.
{"url":"http://rulesofreason.wordpress.com/category/r/","timestamp":"2014-04-21T05:05:21Z","content_type":null,"content_length":"106911","record_id":"<urn:uuid:5ae5e542-0dde-4e1f-ae49-2fe442a23f63>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
First Order ODE September 29th 2011, 05:49 PM #1 Jan 2011 First Order ODE $\frac{dz}{dt} + 3e^{t+z} = 0$ Here's what I have done: $\frac{dz}{dt} = -3e^{t+z}$ $\ln(dz) - \ln(dt) = \ln(-3) + ln(e^{t+z})$ Am I approaching this correctly? Something is obviously wrong since the ln(-3) is undefined, which makes me think that I am doing this wrong. re: First Order ODE I see it like this $\frac{dz}{dt} = -3e^{t+z}$ $\frac{dz}{dt} = -3e^{t}e^{z}$ $e^{-z}~dz= -3e^{t}~dt$ $\int e^{-z}~dz= -3 \int e^{t}~dt$ re: First Order ODE Thanks for that. This is my very first differential equation (my deflowering, so to speak), so this is all new to me. re: First Order ODE But wait... Carrying on from where you left off: $e^{-z} = -3e^t + C$ $-z = \ln(-3e^t + C)$ I'm back to my same dilemma. I have the logarithm of a negative number. How do I handle that? Thanks again. re: First Order ODE re: First Order ODE I thought of that, too, that the constant can always push the equation positive. Where I am confused is that when I run this problem through Wolfram (or my calculator) to check my work, it shows the answer as $\ln(3e^t + C)$. I'm just trying to understand whether I am getting the right answer or not, and I have two sources (that I trust more than my own work) that have a slightly different answer. Any thoughts? Thanks again. Re: First Order ODE I don't quite get the same result. I have $-e^{-z}=-3e^{t}+C,$ and so $e^{-z}=3e^{t}+C,$ where I've redefined my constant. Then $-z=\ln(3e^{t}+C),$ and hence September 29th 2011, 06:00 PM #2 September 29th 2011, 06:13 PM #3 Jan 2011 September 29th 2011, 06:19 PM #4 Jan 2011 September 29th 2011, 06:44 PM #5 September 29th 2011, 06:50 PM #6 Jan 2011 September 30th 2011, 06:45 AM #7
{"url":"http://mathhelpforum.com/differential-equations/189159-first-order-ode.html","timestamp":"2014-04-20T06:14:05Z","content_type":null,"content_length":"52954","record_id":"<urn:uuid:257a4c8e-d55a-4817-8c0a-898afcb36acb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Is there a faster the way to solve polynomials then to use the long division method? If yes, how do you do it? • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. Synthetic Division Best Response You've already chosen the best response. can you please show me how to do synthetic division? @Hero Best Response You've already chosen the best response. Factor by inspection. For example, say we had x^3 + 6x^2 + 11x +6. You could spot that (x+1) is a factor (substitute x=-1 to see this) and then write the semi-factorised verision by inspection: (x+1)(x^2+5x+6) And continue reducing and taking factors out until you get to (x+1)(x+2)(x+3). It's a bit odd to describe the method, but I'll explain the first step: I took out (x+1). We have x^ 3 so I know the first term in the second set of bracekets will have to be x^2. When this is multiplied by x+1 we get the x^3, but only get x^2 when we need 6x^2. So the next term in the second set of brackets will be 5x (since it will multiply with the x from the first set of brackets to give a total of 6x^2). So up till now we have (x+1)(x^2+5x+...). The 5x completed the x^2 terms, but only gives us 5x, when we need 11x. So our next term will be +6, to give the extra 6x and the +6. So at the end of this step we have (x+1)(x^2+5x+6). You can then factorise x^2+5x+6 easily. It seems long winded in the explanation but it really isn't, it's basically just shorthand long division. Please let me know if I can make something more clear. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50489f44e4b003bc12041a85","timestamp":"2014-04-20T11:05:56Z","content_type":null,"content_length":"35768","record_id":"<urn:uuid:9bfb0982-efb0-464c-bcfc-b8ada40d4a8b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Lesson 1-7 Inductive Reasoning Inductive reasoning is when you use clues and patterns to make a conclusion. A conjecture is the conclusion that you make. You can look at patterns whether they are pictures or numbers and use inductive reasoning to determine the rule that they follow. Here are a couple of examples you can try. 2, 4, 6, 8,...... What is the rule? What are the next 2 terms? 4, 3, 5, 4, 6, 5, 7....... What is the rule? What are the next 2 terms? Not all conjectures are true. Sometimes we make a conjecture about something and later find out it is incorrect. If it is incorrect, there is a counter example to prove it. Here is an example. All birds can fly. This is incorrect. A counter example of this conjecture could be an ostrich or a penguin. Both are birds, but neither of them can fly. The homework for this section is pg. 38 2-30 evens.
{"url":"http://geigerprealgebra.blogspot.com/2010/09/lesson-1-7-inductive-reasoning.html","timestamp":"2014-04-16T16:49:39Z","content_type":null,"content_length":"53879","record_id":"<urn:uuid:5029a398-ee77-4040-9cb3-4f6ffc1051bc>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Heathcote, NY Math Tutor Find a Heathcote, NY Math Tutor ...I attended MIT and Colorado College for my undergraduate degree, which was in Mathematics with an emphasis in Computer Science. I have my Master of Arts in Teaching Secondary Mathematics from Teachers College, Columbia University, as well as my NYS secondary math teaching certification. For the past seven years I have been teaching math at a private school in Riverdale. 9 Subjects: including algebra 1, algebra 2, calculus, geometry I am flexible, friendly and use professional teaching skills to help you. I am originally from the UK and have taught high school physics for over three years, including at the prestigious Gordounstoun School in Elgin, Scotland. I also have experience in personal tutoring. 8 Subjects: including algebra 1, algebra 2, geometry, linear algebra ...My background is in Electrical Engineering. I am working in an Electronics company as a manufacturing engineer. In the last 4 years I lived in Miami, California and New York following the opportunities I get in life, which taught me that dedication, hard work and some help is the key for success. 17 Subjects: including statistics, linear algebra, logic, probability ...Fast Results Almost Guaranteed. I am an algebra wizard. I specialize in breaking down the subject and building it back up at the level my scholars are engaging. 26 Subjects: including algebra 2, GED, grammar, prealgebra ...I am now having 7 students as private tutoring. My students include those from Hunter College High School, Stuyvesant, Bronx Science, Brooklyn Tech, etc., all referred by parents. I helped many students got into their dream schools or honor classes. 12 Subjects: including linear algebra, algebra 1, algebra 2, calculus Related Heathcote, NY Tutors Heathcote, NY Accounting Tutors Heathcote, NY ACT Tutors Heathcote, NY Algebra Tutors Heathcote, NY Algebra 2 Tutors Heathcote, NY Calculus Tutors Heathcote, NY Geometry Tutors Heathcote, NY Math Tutors Heathcote, NY Prealgebra Tutors Heathcote, NY Precalculus Tutors Heathcote, NY SAT Tutors Heathcote, NY SAT Math Tutors Heathcote, NY Science Tutors Heathcote, NY Statistics Tutors Heathcote, NY Trigonometry Tutors Nearby Cities With Math Tutor Allerton, NY Math Tutors Bardonia, NY Math Tutors East White Plains, NY Math Tutors Fleetwood, NY Math Tutors Glenville, CT Math Tutors Hartsdale Math Tutors Inwood Finance, NY Math Tutors Mamaroneck Math Tutors Maplewood, NY Math Tutors North White Plains, NY Math Tutors Oyster Bay Cove, NY Math Tutors Scarsdale Math Tutors Scarsdale Park, NY Math Tutors Throggs Neck, NY Math Tutors Wykagyl, NY Math Tutors
{"url":"http://www.purplemath.com/Heathcote_NY_Math_tutors.php","timestamp":"2014-04-16T16:13:17Z","content_type":null,"content_length":"23867","record_id":"<urn:uuid:d2f28db1-7944-4627-bb61-60b49376fb66>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Regression I_ Introduction Introduction to Regression Lecture 18 Learn key terms and uses of regression. Describe the assumptions needed for simple Ordinary Least Squares regression. Estimate the parameters for a simple linear probabilistic model. Text: 9.1, 9.2 & 9.3 MSIT3000 2 What is regression? To ‘regress’ one variable on another is to ‘fit’ a The simplest function to fit is: Y=A (Not very useful). The second simplest function to fit is: Y = A + Bx (Remarkably useful!) ‘Regression’ refers to finding values for A & B from values of X & Y. MSIT3000 3 Fitting a line to data: Example from p 455 MSIT3000 4 Data with “regression line” 'Fitted' Line y = 0.7x - 0.1 y Predicted y Linear (Predicted y) MSIT3000 5 What is regression useful for? Marketing: advertising & sales models. Real estate: estimating the value of property and property attributes. Finance: Valuing assets. Modeling default risk. Establishing benchmarks. Accounting: Measuring financial performance – what is an appropriate benchmark? Organization Behavior: Relating performance to different kinds of pay or responsibilities. MSIT3000 6 Dependent variable. This is what you wish to model, explain and predict. In a sales-advertising model, you would want to predict sales based on how much you advertise. Independent variable (a.k.a. explanatory variable or This is the input to the model (advertising, in the sales advertising model). MSIT3000 7 Probabilistic vs deterministic models. Deterministic models have no room for ‘error’. I.e. if y = a + bx then that must be exactly true for all pairs of y and x. Probabilistic models recognize that there may be some ‘disturbance’ in our data. We therefore add noise to the model: y = a + bx + The noise term is denoted with and a.k.a. Random error MSIT3000 8 Ordinary Least Squares regression: Ordinary refers to the deterministic part of the model being linear. We will expand on what “linear” means further when we get to multiple ‘Least Squares’ refers to how we find the regression line. More on that shortly. MSIT3000 9 Where are we? We have a few terms and definitions. We have a set of problems in business that regression is useful for. We have found that it is possible to ‘fit’ a regression line by sight. The main problem with this method: it is subjective. This was terminology & motivation; now we will examine a method to find the regression line MSIT3000 10 In order to fit a linear regression line, we need the following assumptions (cfr text): 1. Y = 0 + 1 x + (implied in text). 2. N(0,2) 3. i & j are independent if i j MSIT3000 11 Fitting an OLS The ‘fitted line’: ˆ ˆ Yhat = b0 + b1 x y 0 1 x ˆ We can find ‘errors’ [a.k.a. ‘prediction errors’ or ‘residuals’] for each pair of x & y: e = y- yhat How can we use the errors to find a “best” line through our data? MSIT3000 12 'Fitted' Line y = 0.7x - 0.1 y Predicted y Linear (Predicted y) x Residual Plot Using the error terms In order to minimize the error in some meaningful way, we must first measure the overall error. How? we square each error to make sure each component of the overall error term is positive. then we sum all the squared error terms in order to get a measure for all of the data. finally we minimize that function; based on which the parameter-estimates MSIT3000 14 When we minimize SSE using the parameter estimates, we find that: the slope 1hat = SS(xy)/SS(xx) the intercept 0hat = ybar - 1hat*xbar this is another way of saying that the OLS line passes through the pair of sample means, xbar and ybar. Where: SS ( x x)( y y ) x y x y i i xy i i i i SS xx ( xi x) 2 x i MSIT3000 15 Objectives addressed: Terminology and some uses of regression. Assumptions needed OLS regression. Estimating the OLS parameters. Problem: Example on page 479. MSIT3000 16
{"url":"http://www.docstoc.com/docs/134155709/Regression-I_-Introduction","timestamp":"2014-04-23T09:39:22Z","content_type":null,"content_length":"59201","record_id":"<urn:uuid:cecb682f-4286-4a4a-a531-a52e34c27331>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Réunion d'été 2002 de la SMC Graph Theory / Théorie des graphes (B. Aslpach, Organizer) The Alspach conjecture proposes necessary and sufficient conditions for a complete graph K[n] (n odd) or a complete graph minus a 1-factor (n even) to be decomposable as an edge-disjoint union of cycles of prescribed lengths. We discuss various results on this and some related conjectures. In particular we describe some recent partial results on the corresponding questions for complete bipartite graphs and complete digraphs. C. H. Li recently made the following conjecture: Let G be a circulant digraph of order n = n[1]n[2] and degree m, where gcd(n[1],n[2]) = 1, n[1] divides 4k, where k is odd and square-free, and every prime divisor of n[2] is greater than m, or, if G is a circulant graph, every prime divisor of n[2] is greater than 2m. Then if G¢ is any circulant (di)graph of order n, then G and G¢ are isomorphic if and only if they are isomorphic by a group automorphism of Z[n]. We verify that this conjecture is true. The classical dual relation between flows and colourings in plane graphs breaks down for maps on higher surfaces. Specifically, if S is a surface different from the sphere, then the circular chromatic number of a map G on S may be strictly greater than the circular flow number of the surface dual map G^S*. We show that these two parameters are nearly equal provided that G has edge width at least f(S) for some (astronomical) function f. Conversely, we use orientations to derive lower bounds on the circular flow numbers of certain maps, paying special attention to a certain exceptional class of nonorientable maps. Together, the results imply that there are gaps in the range of possible circular chromatic numbers for certain maps of high edge width. For example any triangulation G has circular chromatic number either at least 4 or at most 3+e where e depends only on S and on the edge width of G. This is joint work with M. DeVos, B. Mohar, D. Vertigan and X. Zhu. This talk will introduce a new family of intersection graphs which generalizes interval graphs, interval bigraphs, and circular arc graphs of clique covering number two. A structural characterization will be given, akin to the Lekkerkerker-Boland characterization of interval graphs. This family arose in the classification of the complexity of list homomorphism problems, as the largest family for which polynomial algorithms are possible (as long as P ¹ NP); this connection will also be briefly discussed. This is joint work with Tomas Feder and Huang Jing. Most work on list colourings has focused on determining the list chromatic number: the minimum size of the lists so that a valid list colouring can always be found, regardless of the content of the lists. However, a list colouring may exist, even when the lists do not have the size prescribed by the list chromatic number (for example, the lists could be disjoint). This leads to the alternative question: given a graph and a particular assignment of lists to its vertices, is it possible to determine easily whether a list colouring exists? We present an algorithm that answers this question for series-parallel graphs. Given a series-parallel graph, the algorithm will either find a list colouring, or establish that no such colouring exists. The algorithm has complexity O(d^2m), where d is the maximum degree and m is the number of edges of the graph. This is joint work with Philippe Maheux. We discuss cubic semisymmetric, that is, regular edge- but not vertex-transitive graphs. A recently obtained list of all such graphs of order up to 768 will be presented. These graphs can be arranged in a lattice displaying, for each member, all of its direct normal quotients (some of which are arc-transitive) and all of its direct covers in the list. The major part of the list is comprised of graphs with a solvable automorphism group. These graphs can be obtained as successive regular elementary abelian covers of K[3,3] or K[4]. As for the graphs with a nonsolvable automorphism group, the list includes biquasiprimitive examples as well as graphs which have K[1,3] as a normal quotient. Moreover, a brief summary of all known infinite families of cubic semisymmetric graphs together with respective members which appear in the list, and explicit rules for their construction, will also be given. This is a joint work with Marston Conder, Aleksander Malnic and Primoz Potocnik. An isomorphism problem for circulant graphs (Cayley graphs over the cyclic group) is known since 1967 when \' Ad\' am conjectured (wrongly) that two circulants are isomorphic if and only if their generating sets are conjugate by a group automorphism. In the talk a polynomial time algorithm which solves the above problem will be presented. It consists of two steps. First, a special combinatorial invariant of a graph, called the key of a graph, is computed. The circulants with distinct keys are not isomorphic. For circulants with a given key k there exists a small set of permutations P[k] with the following property: two circulants with the same key k are isomorphic if and only if an isomorphism between them may be found inside P[k]. The cardinality of P[k] is bounded by j(n) where n is the order of a circulant and j(n) is the Euler function. A graph is called self-complementary if it is isomorphic to its own complement. A circulant graph is a Cayley graph on a cyclic group. Froncek, Rosa, and Siran, and independently, Alspach, Morris, and Vilfred have shown that a self-complementary circulant graph with n vertices exists if and only if every prime divisor of n is congruent to 1 modulo 4. In this talk we present an extension of the above result to circulant graphs of even order. A graph is called almost self-complementary if it is isomorphic to its complement minus a 1-factor. We give necessary and sufficient conditions for the existence of almost self-complementary circulants that share a regular cyclic subgroup of the automorphism group with their isomorphic almost complement, and describe their structure. We also discuss recent progress in our search for almost self-complementary circulants that have no regular cyclic subgroup of the automorphism group in common with their isomorphic almost complement. Let G be a finite group with generating set W, where W is closed under inverses, and let p be a cyclic permutation of W. The Cayley map M=CM(G,W,p) is an oriented 2-cell embedding of the Cayley graph Cay(G,W) such that the rotation of arcs emanating from each vertex is determined by p. A Cayley map is regular if its automorphism group is as large as possible. Special types of regular Cayley maps have already been characterized. For example, using automorphisms (antiautomorphisms) of G, Sirán and Skoveria provide necessary and sufficient conditions for a Cayley map to be balanced (antibalanced) and regular. More recently mappings of a group G called skew-morphisms have been introduced by Jajcay and Sirán and these provide a unified theory of regular Cayley maps. We define a Cayley map M=CM(G,W,p) to be e-balanced if p(x^-1)=(p^e(x))^-1 for every x Î W. This concept naturally generalizes the concepts of balanced and antibalanced. We then investigate a particular kind of skew-morphism, which we call an e-morphism. Using e-morphisms we establish necessary and sufficient conditions for a Cayley map to be e-balanced and regular. Several related results are also presented. This is joint work with J. Martino and M. Skoviera. If X is any connected Cayley graph on any finite abelian group, we determine precisely which flows on X can be written as a sum of hamiltonian cycles. In particular, if the degree of X is at least 5, and X has an even number of vertices, then it is precisely the even flows, that is, the flows f, such that å[a Î E(X)] f(a) is divisible by 2. On the other hand, there are infinitely many examples of degree 4 in which not all even flows can be written as a sum of hamiltonian cycles. Analogous results were already known 10 years ago, from work of Brian Alspach, Stephen Locke, and Dave Witte, for the case where X is cubic, or has an odd number of vertices. The circumference of a graph G, denoted by c(G), is the length of a longest cycle in G. In this talk, I will present several results (and problems) about c(G) for 4-connected and 3-connected Let G be a (k+2)-connected graph where k ³ 5. We proved that if G contains three complete graphs of order k, say L[1], L[2], L[3] such that |L[1] ÈL[2] ÈL[3]| ³ 3k-3, then G contains a K[k+2] -minor. This result generalizes some early results by Robertson, Seymour and Thomas (Combinatorica, 1993) for k=4, and Kawarabayashi and Toft for k=5. (Joint work with Kawarabayashi, Luo,
{"url":"http://cms.math.ca/Events/summer02/abs/gt-f.html","timestamp":"2014-04-16T07:26:16Z","content_type":null,"content_length":"21616","record_id":"<urn:uuid:ad133347-965e-4cae-88cb-bbbb614d1785>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Astronomy Online - Pulsars Detecting Pulsars - by Alex Nervosa: Back to High Mass Stellar Evolution Rossi X-ray Timing Explorer (RXTE) RXTE and Pulsar Observations Technology Components Fourier Analysis Fast Fourier Transform Sampling Rate & Nyquist Rate Frequency Determination Results and Commentary Test Signal Pulsar Frequencies & Spin Periods Pulsar Period Derivative & Pulsar "Characteristic Age" Pulsar Magnetic Field & Energy Generation Rate Pulsar Analysis Summary & Conclusion Credits and Comments Pulsar Source Code Period Derivative Source Code for Each Pulsar (PD Plot) Test Signals Source Code (sine & sawtooth functions) Back to High Mass Stellar Evolution Pulsars are rotating neutron stars that generate regular electromagnetic (EM) pulses at their spin rate. They were first discovered in the radio region of the EM spectrum by Jocelyn Bell and Anthony Hewish in 1967 [1]. Since this pioneering discovery, pulsars have also been detected in other regions of the EM spectrum such as the optical, x-ray and gamma ray regions. This study will seek to detect pulsars from RXTE [2] x-ray observations, determine their intrinsic properties such as spin period, spin rate and period derivative^1. Using this information we will attempt to derive pulsar properties such as ‘characteristic age’, magnetic field strength, energy loss rate and contrast results with the ATNF [3] pulsar catalogue and published research in addition to performing a pulsar analysis. We’ll conclude with a summary of key points. Back to Top | Back to High Mass Stellar Evolution The following will provide background theory on supernovae, neutron stars and pulsars accompanied by information relating to the data source used for this study, namely the Rossi X-ray Timing Explorer (RXTE). A neutron star is an 16 15 extremely dense, degenerate stellar corpse composed mostly of densely packed neutrons. Neutron stars are Outer created by the collapse of core stars more massive than 8 solar masses when they become supernovae. They have an estimated radius between 10 to 16 kilometres, a mass around 1.4 solar masses and a density in the order of 10^14 times higher that the Sun’s [4]. Neutron stars are one of three known endpoints of stellar evolution, the other two being white dwarfs (formed by stars with masses less than 8 solar masses) and black holes (formed by stars with masses greater than 15-20 solar masses). Figure 1 shows a theoretical neutron star structure and composition. During stellar nuclear fusion processes governed by gravity and pressure leading to a supernova explosion, the Chandrasekhar mass limit [5] is reached whereby electron degeneracy pressure at the stellar core can no longer support a gravitationally collapsing star. At this point of extreme density, relativistic electrons combine with protons in and around the stellar core via a process called neutronization (also known as inverse beta decay) resulting in neutrons and electron neutrinos being created i.e. e-+ p to n + v_e. During neutronization many protons are converted to neutrons and vast amounts of neutrino energy is released. Neutron degeneracy pressure [6] in a similar way as electron degeneracy pressure halts further core collapse. A neutron star is created at this point; also described as a supernova by-product or stellar corpse. The supernova is powered by neutrino energy released from inverse beta decay and by explosive fission nucleosynthesis processes around the neutron star as part of the SNR^2. The newly created neutron star is extremely small, highly magnetised (magnetic field approximately 10^12 Gauss) and fast spinning (spin period is generally between 0.25 and 2 seconds) [7] compared to its pre-supernova stellar progenitor. The neutron star may also be ejected from the SNR due to explosion asymmetries resulting from the supernova. There are three generic classes of pulsars arising from neutron stars, based on the energy source which powers them as follows: 1) Rotationally-powered pulsars 2) Accretion-powered pulsars 3) Magnetars Rotationally-powered pulsars emit electromagnetic radiation from their magnetic poles (shown in blue in figure 2) resulting from their inherent rotation energy. The electromagnetic dipole radiation emitted can be across a large portion of the EM spectrum, generally from the x-ray region down to the radio region. Typically the radiation is seen in the radio region of the EM spectrum. Accretion-powered pulsars emit electromagnetic radiation via magnetic dipole radiation as well as by collecting material on their accretion disk typically from a binary companion in close proximity, or even by their closely associated SNR. Disk accretion creates x-ray ‘hot-spots’ which are responsible for periodic intensity variations (and non-periodic intensity variations) in these pulsars. Magnetars are thought to be the sources of Soft Gamma ray Repeaters (SGR’s) and Anomalous X-ray Pulsars (AXP’s) [8]. Magnetars are highly magnetised neutron stars in fact much more than conventional neutron stars by a factor of up to 100 or more, with magnetic fields in the order of 10^14 Gauss and capable of emitting both x-rays and gamma rays by decay of their very strong magnetic field. The very strong magnetic field of a magnetar is thought to be inherited when the neutron star is first created during a supernovae [9]. Back to Top | Back to High Mass Stellar Evolution Rossi X-ray Timing Explorer (RXTE) The Ross X-ray Timing Explorer (RXTE) is a NASA mission which was launched in December of 1995. Originally designed as 2 year mission with a maximum lifespan of 5 years, RXTE is still in service today (November 2005) collecting x-ray data from galactic sources such as pulsars, galaxies and binary star systems. RXTE carries three detection instruments two of which, PCA and HEXTE [10, 11] are ‘pointed’ instruments for point-source x-ray detection. The third instrument called ASM [12] is designed to perform x-ray detection at large angles across the sky. The Proportional Counter Array (PCA) instrument detects x-rays in the lower part of the x-ray energy spectrum (2-60 KeV), whilst the High Energy X-ray Timing Experiment (HEXTE) detects x-rays in the upper part of the x-ray energy spectrum (15-250 KeV). Both detectors are designed such that they are able to overlap a substantial portion of their respective EM spectrums. Both the PCA and ASM are proportional detectors [13] whilst HEXTE is a scintillation detector [14]. Both PCA and HEXTE have been designed with microsecond time resolution capability i.e. the PCA instrument of our pulsar study is capable of detecting a range of pulsar periods down to 1 microsecond spin period accuracy. The EDS (Experiment Data System) onboard RXTE is responsible for capturing PCA pulsar data, processing it in ‘time binned mode’ [15], and inserting the results in the RXTE telemetry system for delivery to Earth based data collection systems. The PCA data used for our pulsar study relates to the detection of two young pulsars, namely PSR B0540-69 and PSR B1509-58. The former is associated with the LMC (Large Magellanic Cloud) at a distance from Earth of approximately 49.4 kpc and is associated with SNR 0540-693, whilst the latter is at a distance 4.4 kpc in the constellation Circinus as part of SNR G320.4-1.2 (shown on the cover page) [3]. Back to Top | Back to High Mass Stellar Evolution RXTE and Pulsar Observations RXTE’s primary objective has been to actively monitor galactic & extra galactic x-ray sources. Some observation time has been dedicated specifically to pulsars as many of these are of both galactic and extra galactic nature. the table below summarises key pulsar observations as well as milestones and discoveries made by RXTE in the last 10 years: The following describes technology and techniques required to analyse RXTE pulsar data. The method behind the chosen approach for data analysis will be described along with relevant signal processing theory required to place the subsequent sections of this study into context. Back to Top | Back to High Mass Stellar Evolution Technology Components The following components were used. Following subsections will elaborate how some of these were applied in achieving the objectives of this study: Back to Top | Back to High Mass Stellar Evolution Fourier Analysis Fourier analysis is widely used in signal theory. It takes representations of a signal from a ‘real world’ analog system such as a pulsar and performs a Fourier Transform – i.e. takes a signal from a time representation to it’s frequency equivalent or conversely from the frequency ‘domain’ to the time domain using operations such as: 1) Signal decomposition – taking a real world signal e.g. pulsar signal and separating this into its corresponding sine and cosine components. ) Signal processing – perform mathematical calculations on the corresponding sine and cosine components in a meaningful way for subsequent signal synthesis. ) Signal synthesis -re-construct the signal to produce relevant results in the corresponding domain e.g. convert from the time domain to frequency domain, or vice-versa. One of the objectives of this study is to detect pulsations at given frequencies for each pulsar data set. To do so we create pulsar analysis code in the ANSI C programming language (table 2). We perform Fourier analysis of a pulsar’s time domain signal (RXTE data) provided as photon counts as a function of time. The use of a Fourier Transform -more specifically an optimised variant of the Discrete Fourier Transform^4 called the Fast Fourier Transform (FFT) is used in our pulsar code on RXTE data. We typically fold or ‘bin’ the pulsar data at regular time intervals as a required preparatory step for the FFT operation. Using a DFT stems from requiring to capture a continuous signal from a real world system such as a pulsar and discretise it in it’s time domain digital equivalent (performed by the RXTE PCA and EDS systems). Having the digitised representation of a pulsar signal in a computer system allows us to perform signal decomposition, processing and synthesis to produce the required frequency domain equivalent. The DFT is arguably the only type of Fourier transform that may be used to operate on such representation of the real world given it’s ability to render a continuous and periodic pulsar signal into a discrete and periodic representation. Computer algorithms implementing FFT’s are very efficient. Generally the time taken to calculate a transform on a data set via an FFT algorithm is of the order N log2 N, where N is the number of data samples required to be a power of 2. DFT’s using methods other than FFT’s typically take much longer to compute, in the order of N^2. Note however that computational time comparisons are impacted by algorithm efficiencies, operating systems and hardware used or combination of these. Ignoring these aspects for the purpose of making a simple comparison between FFT’s and traditional DFT’s^5 computations, FFT’s are faster -in the order of 100 times or more than traditional DFT methods i.e. compare O(N log2 N) versus O(N^2) [16] computations required to achieve a transformed result. Back to Top | Back to High Mass Stellar Evolution Fast Fourier Transform FFT’s may be used to calculate a real DFT required in our study of pulsar data by way of using a complex DFT [17]. Our objective was to use the FFT as a tool without being deeply concerned in the implementation specifics. We will note however that it’s advantageous to use complex DFT’s over using direct implementations of real DFT’s (which FFTW supports also) for the following reasons [18]: 1. Complex DFT’s can make use of complex numbers via substitution, thus making mathematical transformations^6 easier to work with. 2. Complex DFT’s can handle negative spectral frequencies, generally required to deal with digital signal processing problems such as aliasing, circular convolution and amplitude modulation – whereas real DFT’s have difficulty handling these situations. 3. Complex DFT’s can satisfactorily handle corner cases, for instance special mathematical handling of the first and last points of a frequency spectrum – typically required when performing an inverse Fourier transform i.e. from the frequency domain to the time domain. Real DFT implementations take a single N point time domain signal and create two N/2 + 1 point frequency domain signals containing only positive frequencies. The complex DFT takes two N point time domain signals and creates two N point frequency domain signals containing both positive and negative frequencies. The shaded boxes in figure 3 indicate pulsar data values common to both types of DFT transforms in both the time domain and frequency domain. One can then see how we might use a FFT to calculate a complex DFT to produce a real DFT transform of pulsar data (from the time domain to the frequency domain) – we simply compute the FFT (complex DFT) by: 1. Zeroing the imaginary input data of the complex DFT in the time domain. 2. Inserting the provided ‘binned’ RXTE pulsar data in the real part of the complex DFT in the time domain. 3. Perform the transform. 4. Discard the negative frequencies of both real and imaginary part in the frequency domain. 5. Compute the power spectrum in the frequency domain of the remaining positive frequencies. Programmatically the FFT was implemented via the FFTW software library as indicated in table 2. Although FFTW does supports real DFT algorithms to perform a Fourier transform, we opted to perform a complex DFT via FFTW and reduce the transformed output to mimic the computation of a real DFT (as shown in figure 3). In doing so we avoided additional programming complexity in our pulsar code otherwise required for a real DFT i.e. avoided managing different size input and output arrays and array "padding" [19], resulting in simpler pulsar code. Back to Top | Back to High Mass Stellar Evolution Sampling Rate & Nyquist Rate The sampling rate also known as the sampling frequency is defined as the number of samples per second (Hz) taken from a continuous time domain signal to convert this into a ‘proper’ discrete signal. The inverse of the sampling rate also known as the sampling time (or sampling period) sets the length of the time bins for data folding so as to prepare the data in an evenly spaced manner required for the FFT. The sampling rate is related also to the Nyquist rate as per the Nyquist-Shannon sampling theorem [20]. The Nyquist-Shannon sampling theorem simply states that the sampling rate has to be greater than twice the Nyquist rate (or Nyquist frequency, which is representative of the bandwidth in the frequency domain of the pulsar signal) or equivalently at least twice the bandwidth of the time domain signal being sampled so as to avoid alaising^7. This consideration was taken into account as unwanted spectrum artifacts due to aliasing would have adversely altered the pulsar frequency domain representation. The relationship between the sampling rate and the Nyquist rate is given as: Nf=1/2*Ts ; where: Sf=1/Ts We define Nf as the Nyquist rate (Hz), Ts as the sampling time (in seconds, also know as the sampling period) and Sf as the sampling rate (Hz). The sampling period serves two main purposes: 1. It ensures we avoid aliasing in the frequency domain equivalent of the time domain pulsar signal. 2. It ensures that we evenly space the data (fold data) as prerequisite for the FFT operation performed on the time domain data in our pulsar code. We ensured selection of an appropriate sampling period to satisfy the Nyquist-Shannon sampling theorem to avoid aliasing, for each ‘test’ signal (sine & sawtooth trigonometric functions) and RXTE pulsar signals based on: 1. Knowing the value of the ‘test’ signal sine peaks in the frequency domain before the FFT. This assisted in understanding the likely Nyquist rate and sampling rate to use on time domain ‘test’ 2. Understanding the likely characteristics of pulsar spin periods at ‘worst case’ scenarios e.g. millisecond pulsars which would require a very high sampling rate, hence determining the Nyquist rate and sampling rates required for RXTE pulsar data to be sampled and subsequently transformed into the frequency domain spectrum. Back to Top | Back to High Mass Stellar Evolution Frequency Determination One of the fundamental objectives of this study is to find the spin period and pulsar frequency from RXTE observations and use this result to derive other important pulsar properties such as period derivative, age, magnetic field and energy loss rate. To calculate the pulsar frequency, firstly ‘test’ signals were sampled and transformed using FFTW to ensure that the approach of sampling, transformation and power spectrum generation in the frequency domain was correct and consistent for subsequent use on RXTE pulsar data. The ‘test’ time domain signals used were sine and sawtooth functions of an array of integers, which have known inherent fundamental frequencies and/or associated harmonics that transform to either/or a single/multiple frequency ‘spikes’ in the computed frequency spectrum. The approach common to both the test signals and the RXTE pulsar data used in our pulsar code was as follows: Back to Top | Back to High Mass Stellar Evolution Results & Commentary Test Signals The code implemented the algorithm in table 3, generating a ‘test’ sine signal in the time domain as per figure 4a, with a single frequency pre-set at 10 Hz (see Appendix). The results are shown in figure 4b: Figure 4b shows a single peak with fundamental frequency 10Hz corresponding to the oscillations observed in the time domain. This provided adequate confidence and evidence that the algorithm to be applied to RXTE pulsar data was appropriate. As further confirmation before transforming RXTE pulsar data, a sawtooth function was transformed itself composed of a fundamental frequency accompanied by a number of overtones^8 producing the FFT results in figure 5b. This further demonstrates the validity of the algorithm in table 3 as it clearly shows what is considered an appropriate FFT for a sawtooth function. The fundamental frequency occurs at 42Hz, the first overtone (2nd harmonic) at 84Hz, the second overtone (3rd harmonic) at 126 Hz. Both the sine and sawtooth functions contained 512 data points which were sampled at a rate producing 256 ‘time bins’ (samples or intervals) containing time domain data. This sampling rate used was high enough so as to avoid any possible aliasing effects. As a further test for both test signals (not shown) decreasing the sampling rate shifted both spectrums in the frequency domain to the right, implying the detected fundamental frequency and associated harmonics were moved closer to the Nyquist rate set by the sampling frequency chosen. Back to Top | Back to High Mass Stellar Evolution Pulsar Frequencies & Spin Periods Whilst the data for both test signals was generated by incrementing a number set (see pulsar code in Appendix) and applying a trigonometric function to each number in a number set, the RXTE pulsar data was provided from a text file read by the pulsar code. The following table provides a sample of the RXTE pulsar data representative of the time domain, which was ‘time binned’ or folded via appropriate sampling period in our pulsar code: There were a number of sampling rates chosen for the pulsar data for two reasons: 1. To compare and contrast FFT results thus ensuring sampling correctness and 2. To ensure differing sampling rates above the Nyquist rate of each pulsar produced the same results – thus proving no aliasing artefacts would interfere with the results. Sampling rates, sampling periods and corresponding Nyquist rates used for each pulsar (PSR B0540-69 and PSR B1509-58), where each had 3 data sets respectively are shown below^9. The last 3 periods produced identical results and served as confirmation of appropriate choice of sampling rate(s) for RXTE data sets. Given the fastest spinning pulsar was found to have a spin rate below 20Hz (figure 7), a sampling period up to 0.05 seconds could have been employed and still satisfy the Nyquist-Shannon sampling criterion. The FFT results produced for PSR B0540-69 and PSR B1509-58 are as follows: The pulsar frequency and pulsar period results for all data sets analysed via FFTW (and compared to ATNF catalogue results and literature) is as follows: Results from table 6 and 7 demonstrate an increase in spin period, accompanied by a decrease in pulsar frequency with increasing MET (mission elapsed time) as follows: Back to Top | Back to High Mass Stellar Evolution Pulsar Period Derivative & Pulsar ‘Characteristic Age’ A pulsar’s period derivative is deemed as the rate at which the pulsar period changes. The period derivative is derived from the pulsar spin period and the MET. The period derivative is unit-less (seconds per second), given by: and is related to the pulsar age by: where ‘dP’ is the change in pulsar period, ‘dt’ is the change in time related to dP (MET), and ‘P’ is the spin period of the pulsar in seconds. It should be noted that the ‘characteristic age’ calculation shown here assumes that pulsar spin period (and by consequence period derivative) is the key determinant for the ‘characteristic age’ calculation. The period derivative for each pulsar was determined by subtracting the highest and lowest values of the spin period and MET respectively to determine ‘dP’ and ‘dt’ based on RXTE observations. The ratio of the two was taken to determine the period derivative for each pulsar. The resulting period derivative was used in the age calculation along with the pulsar period (from the last RXTE observation^10 of each pulsar). ATNF catalogue and literature results are also shown: Back to Top | Back to High Mass Stellar Evolution Pulsar Magnetic Field & Energy Generation Rate Assuming both pulsars are rotationally powered (as will be discussed in the upcoming "Pulsar Analysis" section), further assumptions can be made that allow estimation of the magnetic field and energy generation rate of each pulsar. The following equations describe how the magnetic field and energy generation rate (respectively) of pulsars relate to the pulsar period and period derivative previously calculated: where ‘I’ is the moment of inertia, ‘c’ is the speed of light, ‘R’ is the radius of the neutron star/pulsar, ‘P’ is the pulsar period and ‘dP/dt’ is the pulsar’s period derivative. Assuming the mass of each pulsar is 1.4 solar masses and the radius is 10^6 cm [7] an estimate for the moment of inertia ‘I’ can be obtained via: I = 2 / 5 * M * R^2, which gives I = 1.1e45 g.cm^2 [7]. In addition using a value of c equal to 2.99792458e10 cm s-1 [7] both magnetic field and energy generation rate can be calculated from previously calculated spin period and period derivative: Back to Top | Back to High Mass Stellar Evolution Pulsar Analysis The frequency domain spectrum in figure 6 and 7 (PSR B0540-69 and PSR B150958 respectively) demonstrate PSR B1509-58 having multiple harmonics (at least 4) whereas PSR B0540-69 shows only 2 harmonics. A possible explanation may be the number of harmonics is related to the pulse width of the source i.e. the higher the number of harmonics the narrower the pulse frequency [25]. In addition given PSR B1509-58 is much closer to Earth than PSR B1509-58, the intensity of the pulse emitted from the pulsar is likely to be stronger and not as dispersed (hence narrower) than PSR 0540-69. Possible confirmation appears from the ordinate value in the amplitude power spectrum of both pulsars in figure 6 and 7 indicating a much higher amplitude value for the fundamental frequency and associated harmonics of PSR 1509-58 compared with PSR B0540-69. Pulsar spin periods increased (accompanied by frequency decreases) in line with observations taken at different times as shown in figure 8 and 9. These results are an expected consequence of pulsar spin-down as energy from magnetic dipole radiation is emanated from the pulsars. It’s possible that the energy emanated from both these pulsars interacts via non-thermal processes with the respective SNR each pulsar is associated with [26, 27]. Pulsar spin down may be associated with accretion torques retarding a pulsar’s spin period (where an accretion disk may be present around a pulsar typically via presence of a binary companion) or more commonly from rotational energy losses causing magnetic dipole radiation as describe in the introduction [28]. Typically a braking index is associated with pulsars which is a measure of the slope of a spin down curve where the rotation speed of a pulsar is plotted as a function of time. The braking index can be used to show how close a pulsar is to fitting the rotational model commonly associated with energy losses via magnetic dipole radiation. A braking index equal to 3 conforms to a model rotationally powered pulsar where all energy is radiated away via magnetic dipole Literature for PSR 0540-69 and PSR 1509-58 indicate braking indices of 2.0 and 2.8 respectively [29, 22] indicating close fit to the model for PSR 1509-58 and divergence from the model for PSR 0540-69. Consequently for PSR 0540-69 other energy loss mechanisms such as: 1. Pulsar wind, which may affect the associated SNR 2. Distortion in magnetic field lines 3. A time-varying magnetic field strength or 4. A combination of all or some of the above may contribute to the energy loss rate of this pulsar [21, 29]. Analysis of the ATNF Pulsar Database revealed that neither of the two pulsars, each being part of a SNR is part of a known binary system. Although there is divergence in braking indices as indicated earlier, calculations in the "Results & Commentary" section assumed both pulsars to be model rotational pulsars implying a braking index of 3 for each, thus assuming magnetic dipole radiation as being the only energy source emanating from each pulsar. This simplistic presumption allowed calculations relating age, magnetic field and energy generation (energy loss) rates to be made simply based on the pulsar period and corresponding period derivative. Table 6 and 7 indicate overall minor differences in the calculated values of the pulsars frequencies and spin periods when contrasted with the ATNF pulsar catalogue and literature. On further inspection it can be shown that all three sources display differences in values of frequency and spin period i.e. minor differences are also apparent between the ATNF pulsar catalogue and referenced literature. The differences across all three sources however don’t meaningfully change the subsequent calculations of pulsar age, magnetic field and energy generation rate in table 9. Possible reasons for the discrepancies though between the values calculated in this study when contrasted with other sources are given: 1. The binning algorithm used and rounding of double integer values in the pulsar code, or 2. The epochs at which the observations where taken from RXTE compared with other sources. At various points in time as shown intrinsically by the RXTE results, there are changes in pulse frequency and spin period due to pulsar spin down, hence when we compare the year in RXTE observations were taken (between 1996 and 1998) with other sources such as ATNF (1984) and literature (various years before 1994) we find our calculated values for spin period and pulsar frequency respectively higher and lower than catalogue and literature values, consistent with measurements being performed at a later epoch by RXTE. Both pulsars may have also suffered glitches [7] between observations which typically spin-up pulsars causing the spin period to decrease (spin rate to increase) which would affect the period derivative also. Non-rotational processes described earlier involved in energy generation are also not accounted for in the ‘characteristic age’ approach which relies on pulsar spin period to determine pulsar age. Hence the ‘characteristic age’ approach described in this study should not be considered entirely fool proof. There is in fact evidence indicating that the ‘characteristic age’ method may not be entirely accurate given different estimates of pulsar ages put forward by pulsar observations of various research groups. In order to confirm pulsar age estimates based on the ‘characteristic age’ approach, other techniques may be required such as radial velocity measurements in conjunction with proper motion measurements of a pulsar and it’s associated SNR [30] if these can be reasonably Given relations for a pulsar’s magnetic field: and energy generation (loss) rate: there are a number of ‘fixed’ parameters and two ‘variable’ parameters namely ‘P’ being the spin period and ‘dP/dt’ the first period derivative of a pulsar. Changes in these two variable parameters will affect the magnetic field and energy generation rate of the pulsar. These relations shows that the product of spin period and period derivative for the magnetic field are both responsible for changes in a pulsar’s magnetic field and that the ratio of these same two (which is inversely proportional to pulsar age) given: will in a similar way determine the energy generation rate at a given point in time for a pulsar. When the results of table 6,7,8,9 were analysed the following relative comparison was drawn: Given the above relations for magnetic field and energy loss rate of a pulsar and table 10 analysis, the spin period appears to be the key determinant in both pulsar’s magnetic field and energy loss rate i.e. the spin period is directly proportional to the magnetic field and inversely proportional to the energy loss rate as shown in table 10. In addition one can see that faster spinning pulsars (small spin period) lose energy more quickly than their counterparts (slower spinning pulsars with larger spin periods). This is analogous to the behaviour observed by a ‘spinning top’ which loses a large amount of kinetic energy early in it’s life before gradually spinning down via surface friction. Consequently as a pulsar ages and spins down its ability to lose energy via magnetic dipole radiation is typically reduced. Table 10 suggests also that if indeed PSR B1509-58 began life as a fast spinning pulsar (it has a larger spin period than it counterpart PSR B0540-69) in line with the accepted pulsar model whereby the spin period starts small and increases over time, it cannot be excluded that its magnetic field may have been in the high order of 10^13 Gauss or higher at ‘birth’. If this was the case in it’s early past, PSR B1509-58 may have been a very highly magnetised neutron star capable of emitting EM energy well into the x-ray region. We exclude the possibility that PSR B1509-58 may have been a magnetar based purely on characteristic age results (Magnetar models suggest ages of up to 10^4 years from birth) [8]. The results suggest also that small spin periods are typically associated with small period derivatives and vice versa, however in order to make this assertion with confidence one would need to determine both these characteristics across a larger population of pulsars. An argument in support of this assertion would be in the characteristics of millisecond pulsars and ordinary pulsars as shown in the following diagram: Millisecond pulsars shown have small periods accompanied by small period derivatives whilst the same also appears to be true for ordinary pulsars. The characteristics described in table 10 and discussed earlier can be re-interpreted in figure 10 as follows: • Both pulsars are young (order of 10^3 years) and belong to SNR’s • Both pulsars have reasonably high magnetic fields (range 10^12 to 10^13 Gauss) • Both pulsars have moderately high period derivatives and (with the exception of millisecond pulsars) and are reasonably fast spinning objects in line with their calculated ‘characteristic age’. These observations place both PSR B0540-69 and PSR 1509-58 in the white circular region depicted in figure 10, in agreement with their SNR associations. Back to Top | Back to High Mass Stellar Evolution Summary & Conclusion The characteristics of two rotationally powered pulsars namely PSR B0540-69 and PSR 1509-58 were determined via data folding and Fourier analysis using a Fast Fourier Transform (FFT) to implemented a complex DFT. In sampling the time domain data, appropriate sampling rates were used on ‘test’ data and pulsar data so as to satisfy the Nyquist-Shannon sampling theorem and avoid potential aliasing effects in the sampled spectra. The ‘real’ data resulting from our pulsar code employing the FFT enabled subsequent calculations providing the spin period and corresponding period derivative for each pulsar. These pulsar baseline characteristics were then used to calculate individual pulsar ages using a ‘characteristic age’ approach as well as associated magnetic field strength and energy generation (energy loss) rates for each pulsar. Commentary and analysis of results indicated possible reasons for the differing harmonics and signal strength detected from each pulsar. Reasons were provided for the spin-down nature of pulsars such as accretion torques and rotational energy losses accompanied by a discussion on pulsar braking indices and how these indices related to the ‘characteristic age’ of pulsars. Discussions on discrepancies in calculated results for pulsar characteristics centred around the accuracy of the calculations and the epoch in which the data collection was performed. The pulsar ‘characteristic age’ model was critiqued and other methods of determining pulsar age were referenced such as proper motion measurements from SNR’s and radial velocity measurements. Although far from perfect the ‘characteristic age’ model is currently the best method to estimate pulsar characteristics based on spin period and period derivative measurements. Results and analysis suggest that spin periods are key determinants in the magnitudes of magnetic fields and energy generation rates of pulsars. From the analysis it’s suggested that faster spinning pulsars lose energy more quickly than slower spinning pulsars in agreement with calculations and literature. The observed proportionality between spin period and period derivative although requiring additional analysis was supported by the existence of millisecond pulsars and characteristics of the general pulsar population. We also excluded purely on age and calculated characteristics of magnetic field that PSR B1509-58 was once a magnetar. Lastly an attempt was made to categorise and place both PSR B0540-69 and PSR B1509-58 on a pulsar period derivate versus pulsar period diagram so as to compare and contrast their characteristics across a larger pulsar population sample. Back to Top | Back to High Mass Stellar Evolution [1] Hewish A., 1970, Pulsars, ARA&A 8, 265H [2] Swank J., Newman P., 2002, RXTE Mission [3] Australian Telescope National Facility (ATNF) Pulsar Database [4] Astronomy 201 Course, Neutron Star, Cornell University, 2005 [5] Chandrasekhar S., 1934, Stellar Configurations With Degenerate Cores, OBS 57, 373C [6] Pasachoff J., Contemporary Astronomy, Saunders, 1977 [7] Ostlie D. A., Carroll B. W., 1996, Modern Stellar Astrophysics [8] Duncan R. C., 2003, ‘Magnetars’, Soft Gamma Ray Repeaters & Very Energetic Magnetic Fields [9] Duncan R. C., 1998, Magnetar Models for Sof Gamma Repeaters & Anomalous X-ray Pulsars, AAS 193, 5640D [10] Swank J., Newman P., 2002, RXTE Proportional Counter Array [11] Swank J., Newman P., 2002, RXTE High Energy X-Ray Timing Experiment [12] Swank J., Newman P., 2002, RXTE All-Sky Counter [13] Wikipedia, 2005, Proportional Counter [14] Wikipedia, 2005, Scintillation Counter [15] Swank J., Newman P., 2002, RXTE Experiment Data System [16] Earlevel Engineering, 2002, The Fast Fourier Transform [17] Wikipedia, 2005, Fast Fourier Transform [18] Smith S. W., 1999, The Scientist’s & Engineers Guide To Digital Signal Processing, Chapter 31 [19] Frigo M., 2004, FFTW 3.1 Reference, Chapter 2, Section 2.3 [20] Smith S. W., 1999, The Scientist’s & Engineers Guide To Digital Signal Processing, Chapter 3 [21] Kaaret et. al., 2001, Chandra Observations of the Young Pulsar PSR B0540-69, ApJ 546:1159-1167 [22] Kaspri et. al., 1994, On the Spin-down of PSR B1509-58, ApJ 422L, 83K [23] Seward F. D., Harnden F. R. Jr., 1984, Discovery of a 50 Millisecond Pulsar in the LMC, ApJ 287L, 19S [24] Gvaramadze V. V., 2001, On the Age of PSR B1509-59, A&A 374, 259-263 [25] Hulse R. A., 1993, The Discovery of the Binary Pulsar (Nobel Lecture, Princeton University) [26] Kanbach, 2003, Spectral and Timing Studies of PSR B0540-69 [27] Camilo F. et. al., 2002, PSR J1124 5916: Discovery of a Yound Energetic Pulsar in SNR G292.0.1.8, AJ 567:L71-L75 [28] Marsden D., Lingenfelter R.E., Rothschild R.E., 2000, The Cause of the Age Discrepancy in Pulsar B1757-24, [29] Manchester R. N., Peterson B. A., 1989, A Braking Index for PSR B0540-69, AJ, volume 342, part 2, page L23 [30] MIT News Office, 2002, Age Discrepancy Throws Existing Pulsar Theories Into Turmoil Back to Top | Back to High Mass Stellar Evolution Credits and Comments: Cover Image: PSR B1509-58 in SNR G320.4-1.2, NASA/MIT/B.Gaensler et al. (http://chandra.harvard.edu) Figure 1: Adapted from Swinburne Astronomy Online (SAO) Figure 2: Adapted from Swinburne Astronomy Online (SAO) Figure 3: Courtesy Smith S. W., 1999, The Scientist’s & Engineers Guide To Digital Signal Processing Figure 10: Courtesy Lorimer D. R., 1998, Binary and Millisecond Pulsars Back to Top | Back to High Mass Stellar Evolution ^1 ‘Period derivative’ is deemed the rate a which the period of a pulsar changes (‘first period derivative’ is also a term used) and is unit-less (sec sec-1). SNR – Supernovae Remnant. This is the remnant of the supernova explosion which typically extends to a distance light years away from the source explosion (see front cover). ^3 Given the limitations imposed by the choice of operating system hence Cygwin choice, many tools listed were compiled directly from source code. ^4 Discrete Fourier Transform hereafter is abbreviated to ‘DFT’. ^5 Conventional DFT’s (DFT’s O(N^2)) can typically be implemented using simultaneous linear equations or correlation methods. ^6 A transformation is defined as taking multiple inputs, performing an operation on these inputs based on a set of rules, and producing multiple outputs (many in, many out). ^7 Aliasing causes reflection of high and low frequencies in the corresponding low and high frequency spectral regions computed in the FFT resulting in distortion of the sampled signal, hence causing a pulsar signal not be properly reconstructed from the sampled signal. ^8 ‘Overtone’ is used in this instance, however the term implies additional sinusoidal frequencies which are not necessarily multiples of the fundamental frequency (unlike harmonics) – in the sawtooth example ‘harmonic’ is an appropriate description also, as well as for pulsar analysis. ^9 Software limitations with Cygwin were encountered with small sampling periods (below 0.007 seconds) and large RXTE data sets. To mitigate this initially, some large pulsar data sets were reduced however the sampling periods were ultimately increased to avoid stack overflow issues caused by large arrays required for data folding eliminating the need to reduce pulsar data sets. The latter approach did not affect results as the corresponding Nyquist rates shown in table 5 (in bold italic) were still well above the Nyquist rate of each individual pulsar. A sampling period of 0.008 seconds was ultimately used in our code for both PSR B0540-69 and PSR B1509-58 to reduce computational time for all data sets. ^10 The choice of observation was in the end arbitrary as the average values of all observations or other individual observations used in the age calculations did not adversely affect the age of the pulsar in calculated and shown in table 8. Back to Top | Back to High Mass Stellar Evolution Pulsar Source Code #include <stdio.h> #include <math.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <f tw3.h> #include <gnuplot_i.h> int main (int argc,char*argv[]) fftw_plan p; gnuplot_ctrl *gh = gnuplot_init(); int i; //generic counter int bin; //bin numbercounter int N; //numberofsamples int count; //photon count (from file) double mt; //mission time (from file) double real2,imag2; //real and imagcomponents squared double magn,max_magn; //magnitude and maximum magnitude double start_mt; //mission start time double end_mt; //mission end time double bin_time; //'binned'time (mission time incremented) //double incr= 0.002; //500Hzsamplingrate (250HzNyquist rate) //double incr= 0.005; //200Hzsamplingrate (100HzNyquist rate) //double incr= 0.02; //50Hzsamplingrate (25HzNyquist rate) double incr= 0.008; //125Hzsamplingrate (62.5HzNyquist rate) if(argc == 2){ if((fp = fopen(argv[1],"r"))== NULL){ printf("Cannot open pulsardatafile"); else { //get the observation start and end times fscanf(fp,"% d % lf",&count,&mt); start_mt = mt; fscanf(fp,"% d % lf",&count,&mt); end_mt = mt; else { printf("Nothingto do!\n"); exit(0); } //create array 'S' with 'N' bins - set all bins in 'S' to zero N = (end_mt -start_mt)/incr; //get numberofsamples int S[N]; //create arrayofbins memset((void *)S,0,N *sizeof(int)); //set all bins to zero printf("Start time:% .8lf\t End time:% .8lf\n",start_mt,end_mt); printf("MET:% .8lfdays since UTC1/1/94\n",start_mt/86400); printf("Obs. time range:% .8lfseconds\n",end_mt -start_mt); printf("Obs. time range:% .8lfhours\n",(end_mt -start_mt)/3600); printf("Samplingperiod:% .8lfseconds\n",incr); printf("Samplingfrequency:% .8lfHz\n",1/incr); printf("Nyquist frequency:% .8lfHz\n",1/(2*incr)); printf("Numberofsamples (bins):% d\n",N); //re-open pulsardatafile if((fp = fopen(argv[1],"r"))== NULL){ printf("Cannot open pulsardatafile"); //file now open,read first record fscanf(fp,"% d % lf",&count,&mt); bin_time = mt; bin = 0; fscanf(fp,"% d % lf",&count,&mt); while (bin_time < mt){ bin_time = bin_time + incr; S[bin]++; //increment photon count in the bin //write binned datato file forgnuplot if((fp = fopen("TD.data","w+"))== NULL){ printf("Cannot open TD.datafile"); } else for(i=0; i<N; i++) fprintf(fp,"% d % d\n",i,S[i]); //'in' has input data, 'out' has data to post-process in = fftw_malloc(sizeof(fftw_complex)*N); out = fftw_malloc(sizeof(fftw_complex)*N); p = fftw_plan_dft_1d(N,in,out,FFTW_FORWARD,FFTW_ESTIMATE); //ensure that in and out are clean memset((void *)in,0,N *sizeof(double)); memset((void *)out,0,N *sizeof(double)); //insert datainto real part ofcomplexDFT for(i=0; i<N; i++){ in[i][0]= (double)S[i]; in[i][1]= 0.0; //not really required due to ”memset‘ //printf("Completed FFT\n"); //post-process real and imaginary data //onlyN/2values required for real and imaginary components //as these only represent the _real_DFT we are computing //save FFT's real dataforre-use e.g. gnuplot //printf("Post-process dataforgnuplot..\n"); if((fp = fopen("FD.data","w+"))== NULL){ printf("Cannot open FD.data"); max_magn = 0.0; for(i=0; i < N/2; i++){ real2= out[i][0]*out[i][0]; imag2= out[i][1]*out[i][1]; magn = sqrt(real2+ imag2); if(i <= 100) //gets rid of any crud at the start magn = 0.0; if((i > 100)&& (magn > max_magn)){ max_magn = magn; bin = i; fprintf(fp,"% d % .8f\n",i,magn); gnuplot_cmd(gh,"set terminal png"); gnuplot_cmd(gh,"set output \"TD.png\""); gnuplot_cmd(gh,"set title \"Time Domain\""); gnuplot_cmd(gh,"set xlabel \"Time Bins\""); gnuplot_cmd(gh,"set ylabel \"Counts perBin\""); gnuplot_cmd(gh,"set grid xtics ytics"); gnuplot_cmd(gh,"plot \"TD.data\"with lines"); gnuplot_cmd(gh,"set output \"FD.png\""); gnuplot_cmd(gh,"set title \"FrequencyDomain\""); gnuplot_cmd(gh,"set xlabel \"FrequencyBins\""); gnuplot_cmd(gh,"set ylabel \"Magnitude\""); gnuplot_cmd(gh,"set grid xtics ytics"); gnuplot_cmd(gh,"plot \"FD.data\"with lines"); printf("NumberofFFTBins:% d (+ve frequencies)\n",i); printf("Maxmagn:% .8lffound at FFTbin number:% d\n",max_magn,bin); printf("PSRFreq:% .8lfHz\n",((double)bin/(double)i)*(1/(2*incr))); Back to Top | Back to High Mass Stellar Evolution Period Derivative Source Code For Each Pulsar (PD Plot) PSR B0540-69 #include <stdio.h> #include <math.h> #include <gnuplot_i.h> int main (int argc,char*argv[]) gnuplot_ctrl *gh = gnuplot_init(); double start_epoch,end_epoch,start_period,end_period; double epoch,period,Pd; printf("Usingdatafiles 0540sd-hz.wri & 0540sd-sec.wri\n"); gnuplot_cmd(gh,"set terminal png"); gnuplot_cmd(gh,"set output \"0540sd-hz.png\""); gnuplot_cmd(gh,"set title \"PulsarSpindown (Freq. vs Epoch)\""); gnuplot_cmd(gh,"set xlabel \"Epoch\""); gnuplot_cmd(gh,"set ylabel \"Frequency\""); gnuplot_cmd(gh,"set grid xtics ytics"); gnuplot_cmd(gh,"plot \"0540sd-hz.wri\"with lines"); gnuplot_cmd(gh,"set output \"0540sd-sec.png\""); gnuplot_cmd(gh,"set title \"PulsarSpindown (Period vs Epoch)\""); gnuplot_cmd(gh,"set xlabel \"Epoch\""); gnuplot_cmd(gh,"set ylabel \"Period\""); gnuplot_cmd(gh,"set grid xtics ytics"); gnuplot_cmd(gh,"plot \"0540sd-sec.wri\"with lines"); if((fp1= fopen("0540sd-sec.wri","r"))== NULL){ printf("Cannot open 0540sd-sec.wri\n"); else { fscanf(fp1,"% lf% lf",&epoch,&period); start_epoch = epoch; start_period = period; while (!feof(fp1)){ fscanf(fp1,"% lf% lf",&epoch,&period); end_epoch = epoch; end_period = period; printf("st_epoch:% .8lf\tst_period:% .8lf\n",start_epoch,start_period); printf("end_epoch:% .8lf\tend_period:% .8lf\n",end_epoch,end_period); Pd = (end_period -start_period)/(end_epoch -start_epoch); printf("Period derivative:% .8e\n",Pd); printf("Done\n"); } PSR B1509-58 #include <stdio.h> #include <math.h> #include <gnuplot_i.h> int main (int argc,char*argv[]) gnuplot_ctrl *gh = gnuplot_init(); double start_epoch,end_epoch,start_period,end_period; double epoch,period,Pd; printf("Usingdatafiles 1509sd-hz.wri & 1509sd-sec.wri\n"); gnuplot_cmd(gh,"set terminal png"); gnuplot_cmd(gh,"set output \"1509sd-hz.png\""); gnuplot_cmd(gh,"set title \"PulsarSpindown (Freq. vs Epoch)\""); gnuplot_cmd(gh,"set xlabel \"Epoch\""); gnuplot_cmd(gh,"set ylabel \"Frequency\""); gnuplot_cmd(gh,"set grid xtics ytics"); gnuplot_cmd(gh,"plot \"1509sd-hz.wri\"with lines"); gnuplot_cmd(gh,"set output \"1509sd-sec.png\""); gnuplot_cmd(gh,"set title \"PulsarSpindown (Period vs Epoch)\""); gnuplot_cmd(gh,"set xlabel \"Epoch\""); gnuplot_cmd(gh,"set ylabel \"Period\""); gnuplot_cmd(gh,"set grid xtics ytics"); gnuplot_cmd(gh,"plot \"1509sd-sec.wri\"with lines"); if((fp1= fopen("1509sd-sec.wri","r"))== NULL){ printf("Cannot open 1509sd-sec.wri\n"); else { fscanf(fp1,"% lf% lf",&epoch,&period); start_epoch = epoch; start_period = period; while (!feof(fp1)){ fscanf(fp1,"% lf% lf",&epoch,&period); end_epoch = epoch; end_period = period; printf("st_epoch:% .8lf\tst_period:% .8lf\n",start_epoch,start_period); printf("end_epoch:% .8lf\t end_period:% .8lf\n",end_epoch,end_period); Pd = (end_period -start_period)/(end_epoch -start_epoch); printf("Period derivative:% .8e\n",Pd); printf("Done\n"); } Back to Top | Back to High Mass Stellar Evolution Test Signals Source Code (sine & sawtooth functions) #include <stdio.h> #include <math.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <f tw3.h> #include <gnuplot_i.h> #define PHASE64.0 int main (int argc,char*argv[]) fftw_plan p; FILE*fp1,*fp2,*fp3; gnuplot_ctrl *gh = gnuplot_init(); int N = 512; //#samples of input signal int Bins = 256; //total #bins used double Ts = 1/(double)Bins; //sampling rate double Fn = 1/(2*Ts); //Nyquist rate double incr; //binning increment double Ct; double x[N]; double y[N]; double real2,imag2,magn[N],max_magn = 0; double s_time = 0.0; double r_time = 0.0; double y_val; int i; int max_i = 0; for(i=0; i<N; i++){ x[i]= (double)i; //y[i]= sin(x[i]/N *PHASE); y[i]= 0.08*i -(int)(0.08*i); //save dataforre-use e.g. gnuplot if((fp1= fopen("sawtooth.data","w+"))== NULL){ printf("Cannot open sine.data"); else { for(i=0; i<N; i++){ fprintf(fp1,"% lf% lf\n",x[i],y[i]); Ct = x[i]; printf("Sine conversion complete\n"); incr= Ct /(double)Bins; //binning increment printf("Input samples = % d\n",(int)Ct); printf("incr:% lf\n",incr); //open output signal file -forbucketed data if((fp2= fopen("sampled.data","w+"))== NULL){ printf("Cannot open sampled datafile"); //open input file if((fp1= fopen("sawtooth.data","r"))== NULL){ printf("Cannot open sine.data"); exit(1); else { i = 0; printf("Start bucketing..\n"); while (!feof(fp1)){ fscanf(fp1,"% lf% lf",&r_time,&y_val); if(r_time > s_time+incr){ fprintf(fp2,"% d % lf\n",i,y_val); s_time = r_time; N = i; i++; printf("Finished bucketing..\n"); printf("N=%d **numberofsamples forFFT\n",N); printf("Bins = % d\n",Bins); printf("Samplingperiod Ts=1\\Bins = % lf\n",Ts); printf("Nyquist rate Fn=1\\2*Ts = % lf\n",Fn); //'in'has input data,'out'has datato post-process in = fftw_malloc(sizeof(fftw_complex)*N); out = fftw_malloc(sizeof(fftw_complex)*N); p = fftw_plan_dft_1d(N,in,out,FFTW_FORWARD,FFTW_ESTIMATE); //ensure that in and out are clean memset((void *)in,0,N *sizeof(double)); memset((void *)out,0,N *sizeof(double)); //open signal file -bucketed data if((fp2= fopen("sampled.data","r"))== NULL){ printf("Cannot open sampled datafile"); exit(1); } printf("Insert datain arrays forFFT\n"); fscanf(fp2,"% d % lf",&i,&y_val); in[i][0]= y_val; in[i][1]= 0.0; printf("Done datainsertion\n"); printf("Done FFT!\n"); //post-process real and imaginary data //only Bins/2values are required for real and imaginary components //as these only represent the _real_DFT we are computing for(i=0; i < N/2; i++){ real2= out[i][0]*out[i][0]; imag2= out[i][1]*out[i][1]; magn[i]= sqrt(real2+ imag2); //save FFT's real dataforre-use e.g. gnuplot if((fp3= fopen("fft.data","w+"))== NULL){ printf("Cannot open fft.data"); else { for(i=0; i < N/2; i++){ if(i != 0){ fprintf(fp3,"% i % lf\n",i,magn[i]); if(magn[i]> max_magn){ max_magn = magn[i]; max_i = i; printf("Maxmagn:% lfxvalue:% d\n",max_magn,max_i); gnuplot_cmd(gh,"set terminal png"); gnuplot_cmd(gh,"set output \"sawtooth.png\""); gnuplot_cmd(gh,"plot \"sawtooth.data\"with lines"); gnuplot_cmd(gh,"set output \"fft.png\""); gnuplot_cmd(gh,"plot \"fft.data\"with lines"); Back to Top | Back to High Mass Stellar Evolution
{"url":"http://astronomyonline.org/Stars/Pulsars.asp","timestamp":"2014-04-20T22:09:28Z","content_type":null,"content_length":"92819","record_id":"<urn:uuid:b13e5cbc-ea90-47fc-8151-57692599b48b>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: High School Puzzles This page: Dr. Math See also the Dr. Math FAQ: classic problems About Math basic algebra linear algebra linear equations Complex Numbers Discrete Math Fibonacci Sequence/ Golden Ratio conic sections/ coordinate plane practical geometry Negative Numbers Number Theory Square/Cube Roots Browse High School Puzzles Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to frequently posed puzzles: 1000 lockers. Letter+number puzzles. Getting across the river. How many handshakes? Last one at the table. Monkeys dividing coconuts. Remainder/divisibility puzzles. Squares in a checkerboard. Weighing a counterfeit coin. What color is my hat? How is it possible to cut a hole through a solid cube so that a cube, larger than the original, can be passed in one end and out the other? Every day a ship leaves San Francisco for Tokyo... How many Tokyo ships will each San Francisco ship meet? Weigh the balls to find the one that's different... As he lay dying, the Pharaoh proclaimed: "I bequeath 1/3 of my estate to my oldest child; 1/4 of my estate to the next oldest child; and to each succeeding child, except the youngest, the next unit fraction of my estate; and to the youngest the remainder." How many 20-cent coins can you put around a 20-cent coin so that all of them touch? I have to plant 10 trees in 5 rows with 4 trees in each row. The y-axis, x-axis, x = 6, and y = 12 determine the sides of a pool table. Follow the path of a ball starting at the point (3,8). Find five different positive unit fractions whose sum is 1. (A unit fraction is a fraction whose numerator is 1. All denominators must also be natural numbers.) Prove that every power of two has a multiple whose decimal expansion has only digits 1 and 2. Does the orthocenter of a triangle have any practical uses? I pick 5 digits, and write them down. My friend tells me a sum. Then I pick 5 more digits, he picks 5, I pick 5, and he picks 5. The sum he told me is the sum of all 5 lines. How did he know what it would be? I was wondering if you could give me some specific advice on how to study for the Putnam Exam because I will be taking it for my first time. In Store 88 they sell exactly ten items, some items with the same price as others, but the only the digit on the price tags of each item is 8... What is the smallest number that you can multiply by 540 to make a square number? A prime number riddle. I'm looking for a paper - or some material - about "the prisoners' problem." How many two-digit numbers exist such that when the products of their digits are added to the sums of their digits, the result is equal to the original two-digit number? I have to find a 10 digit number which uses each of the digits 0-9 such that the first digit is divisible by 1, the first two digits make a number divisible by 2, the first three digits make a number divisible by 3, and so on up to all ten digits making a number divisible by 10. I figured it out using mostly guess and check, but it took a long time. Is there a quicker way? Today is November 14, 2000, a Tuesday. What day of the week was November 14, 1901? A pyramid-building puzzle. A rational number greater than one and its reciprocal have a sum of 2 1/6. What is this number? Express your answer as an improper fraction in lowest terms. How many rectangles are there on a chessboard? A teacher puts one hat on each of three students' heads and then discards the remaining two hats so they cannot be seen. Then the first child is told he can look at the other two children and from the color of their hats, he can guess what color he is wearing... Can you explain some of the math behind the Rubik's Cube? Sequences of turns for solving Rubik's cube. I read that a rubics cube has 4 quintillion different possible combinations. Is this number correct? How can I calculate this value on my own? Two dogs run around a circular track at different speeds. How long will it take for them to return to the starting point at the same time? Strategies for winning at Russian Nim (the "20" game). What is the highest score that is impossible to make? Find the missing number in the sequence 11 > ? > 1045 > 10445. I have a machine that only types out ones (no spaces or tricks involved). What procedure must you do to this machine to get any given finite set? For example [2,85,11,5,60]. For a different set, the number of ones that the machine types out will vary. A building has 7 elevators, each stopping on at most 6 floors. If you take the right elevator you can get to any floor from any other floor without changing elevators. What is the greatest number of floors the building can have? How many 'perfect shuffles' does it take to get the cards back in the order you started? What kind of a math project could I do with magic squares? A Volunteer writes different numbers from 1 to 125 on six cards, and keeps one. The Host arranges the others in some order and gives them to the Partner, who then says the number on the missing card. How? A snail is climbing a window-pane, beginning in the evening at a height of e minus 1 meter from the base. It loses 1 meter each night. On the second day, it doubles its altitude of the morning. On the third day, it triples the altitude of the morning, and so on. What will be its altitude on the 51st day at dawn? Take five times which plus half of what, and make the square of what you've got... In a poll of 34 students, 16 felt confident solving quantitative comparison questions, 20 felt confident solving multiple choice questions.... How many students felt confident solving only multiple choice questions and no others? I have tried logical reasoning and can't get it. Create a ten-digit number that meets some special conditions... Page: [<prev] 1 2 3 4 5 6 7 8 9 [next>]
{"url":"http://mathforum.org/library/drmath/sets/high_puzzles.html?start_at=241&num_to_see=40&s_keyid=39101349&f_keyid=39101350","timestamp":"2014-04-20T06:04:31Z","content_type":null,"content_length":"24842","record_id":"<urn:uuid:cf2c3a71-880a-4f87-8863-bdac4009d4eb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Orderings and maximal ideals of rings of analytic functions. Díaz-Cano Ocaña, Antonio (2005) Orderings and maximal ideals of rings of analytic functions. Proceedings of the American Mathematical Society, 133 (10). pp. 2821-2828. ISSN 1088-6826 Restricted to Repository staff only until 2020. Official URL: http://www.ams.org/journals/proc/2005-133-10/S0002-9939-05-07848-2/S0002-9939-05-07848-2.pdf We prove that there is a natural injective correspondence between the maximal ideals of the ring of analytic functions on a real analytic set X and those of its subring of bounded analytic functions. By describing the maximal ideals in terms of ultrafilters we see that this correspondence is surjective if and only if X is compact. This approach is also useful for studying the orderings of the field of meromorphic functions on X. Item Type: Article Uncontrolled Keywords: Real analytic sets; Analytic functions; Maximal ideal; Ultrafilters; orderings. Subjects: Sciences > Mathematics > Algebraic geometry ID Code: 15040 References: C. Andradas, E. Becker. A note on the real spectrum of analytic functions on an analytic manifold of dimension one. Lect. Notes Math. 1420 (1990), 1–21. C. Andradas, L. Br¨ocker, J. M. Ruiz. Constructible sets in real geometry. Springer- Verlag, Berlin, 1996. J. Bochnak, M. Coste, M.-F. Roy. Real Algebraic Geometry. Springer-Verlag, Berlin, 1998. A. Castilla. Sums of 2n-th powers of meromorphic functions with compact zero set. Lect. Notes Math. 1524 (1991), 174–177. A. Castilla. Artin-Lang property for analytic manifolds of dimension two. Math. Z. 217 (1994), 5–14. A. Castilla. Propiedad de Artin-Lang para variedades anal´ıticas de dimensi´on dos. Ph. D. Thesis, Universidad Complutense de Madrid (1994). A. D´ıaz-Cano, C. Andradas. Complexity of global semianalytic sets in a real analytic manifold of dimension 2. J. reine angew. Math. 534 (2001), 195–208. L. Gillman, M. Jerison. Rings of continuous functions. Van Nostrand, Princeton, 1960. M. Hirsch. Differential Topology. Springer-Verlag, 1976. P. Jaworski. The 17-th Hilbert problem for noncompact real analytic manifolds. Lecture Notes Math. 1524 (1991), 289–295. K. Kurdyka, G. Raby. Densit´e des ensembles sous-analytiques. Ann. Inst. Fourier 39(1989), 753–771. Deposited On: 27 Apr 2012 09:05 Last Modified: 06 Feb 2014 10:14 Repository Staff Only: item control page
{"url":"http://eprints.ucm.es/15040/","timestamp":"2014-04-18T08:26:45Z","content_type":null,"content_length":"28257","record_id":"<urn:uuid:481f738e-5a14-4571-b1d2-6c9f4f8e9943>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Kensington, MD ACT Tutor Find a Kensington, MD ACT Tutor ...Subjects that are typically covered include structural properties (atoms, molecules,and the resulting chemical reactions), balancing equations, stoichiometry, gas laws, and many others. Chemistry is a building block subject where previously covered topics combine in more complex ways as newer su... 17 Subjects: including ACT Math, chemistry, algebra 2, calculus ...I enjoy finding innovative ways to communicate math concepts to students so that they grasp material they thought they'd never "get." A strong use of visuals/manipulatives is helpful for younger students so that they can see math in action. Playing games that use math eases children into a higher comfort level with manipulating numbers. Soon they realize that math can actually be 17 Subjects: including ACT Math, reading, geometry, algebra 1 ...I scored a 790 on my SAT Math test. I have received examination preparation training from Kaplan. I have been a peer reviewer for scientific journals as well as an editor for scientific and popular manuscripts published by my office at NASA. 39 Subjects: including ACT Math, chemistry, physics, writing ...In addition to test prep, I enjoy tutoring biology, math, history, and English. I have experience tutoring at all grade levels and at a collegiate level. My schedule is extremely flexible, and I can tutor during the day, as well as on evenings and weekends. 31 Subjects: including ACT Math, English, writing, geometry ...I have worked with students to teach them how to take notes, test taking strategies, manage their time, become more organized, and use their class materials to their fullest advantage. By learning these study skills and more, students are much more likely to achieve their goals. In addition to ... 26 Subjects: including ACT Math, English, reading, ESL/ESOL
{"url":"http://www.purplemath.com/kensington_md_act_tutors.php","timestamp":"2014-04-16T13:28:11Z","content_type":null,"content_length":"24039","record_id":"<urn:uuid:49a0322a-e681-4004-9d58-2bf048d7c1bb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof analysis : a contribution to Hilbert's last problem / Sara Negri, Jan von Plato. Publication date: Cambridge ; New York : Cambridge University Press, 2011. Includes bibliographical references (p. 254-261) and indexes. □ Machine generated contents note: Prologue: Hilbert's Last Problem; 1. Introduction; Part I. Proof Systems Based on Natural Deduction: 2. Rules of proof: natural deduction; 3. Axiomatic systems; 4. Order and lattice theory; 5. Theories with existence axioms; Part II. Proof Systems Based on Sequent Calculus: 6. Rules of proof: sequent calculus; 7. Linear order; Part III. Proof Systems for Geometric Theories: 8. Geometric theories; 9. Classical and intuitionistic axiomatics; 10. Proof analysis in elementary geometry; Part IV. Proof Systems for Nonclassical Logics: 11. Modal logic; 12. Quantified modal logic, provability logic, and so on; Bibliography; Index of names; Index of subjects. "This book continues from where the authors' previous book, Structural Proof Theory, ended. It presents an extension of the methods of analysis of proofs in pure logic to elementary axiomatic systems and to what is known as philosophical logic. A self-contained brief introduction to the proof theory of pure logic is included that serves both the mathematically and philosophically oriented reader. The method is built up gradually, with examples drawn from theories of order, lattice theory and elementary geometry. The aim is, in each of the examples, to help the reader grasp the combinatorial behaviour of an axiom system, which typically leads to decidability results. The last part presents, as an application and extension of all that precedes it, a proof-theoretical approach to the Kripke semantics of modal and related logics, with a great number of new results, providing essential reading for mathematical and philosophical logicians"-- Provided by publisher. "We shall discuss the notion of proof and then present an introductory example of the analysis of the structure of proofs. The contents of the book are outlined in the third and last section of this chapter. 1.1 The idea of a proof A proof in logic and mathematics is, traditionally, a deductive argument from some given assumptions to a conclusion. Proofs are meant to present conclusive evidence in the sense that the truth of the conclusion should follow necessarily from the truth of the assumptions. Proofs must be, in principle, communicable in every detail, so that their correctness can be checked. Detailed proofs are a means of presentation that need not follow in anyway the steps in finding things out. Still, it would be useful if there was a natural way from the latter steps to a proof, and equally useful if proofs also suggested the way the truths behind them were discovered. The presentation of proofs as deductive arguments began in ancient Greek axiomatic geometry. It took Gottlob Frege in 1879 to realize that mere axioms and definitions are not enough, but that also the logical steps that combine axioms into a proof have to be made, and indeed can be made, explicit. To this purpose, Frege formulated logic itself as an axiomatic discipline, completed with just two rules of inference for combining logical axioms. Axiomatic logic of the Fregean sort was studied and developed by Bert-rand Russell, and later by David Hilbert and Paul Bernays and their students, in the first three decades of the twentieth century. Gradually logic came to be seen as a formal calculus instead of a system of reasoning: the language of logic was formalized and its rules of inference taken as part of an inductive definition of the class of formally provable formulas in the calculus"-- Provided by publisher. jump to top
{"url":"http://searchworks.stanford.edu/view/9432682","timestamp":"2014-04-20T08:59:31Z","content_type":null,"content_length":"28679","record_id":"<urn:uuid:fec77047-6485-40a2-a45f-37fd4ec701d9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
How many relations of length $n$ can exists in a group without enforcing shorter relations? up vote 52 down vote favorite Let $G$ be a group with two generators. Suppose that all non-trivial words of length less or equal $n$ in the generators and their inverses define non-trivial elements in $G$. Question: How many of the $4\cdot 3^{n}$ words of length $n+1$ in the generators and their inverses can at most be trivial in $G$? I am interested in the growth of this number as $n$ grows. From what I understand from the Gromov's theory of random groups, a choice of relations of length $n+1$ will (almost surely as $n \to \ infty$) not enforce shorter relations if one chooses $$3^{\left(\frac{1}2 - \varepsilon\right) \cdot n}$$ relations of length $n+1$ at random. (This result is related to small cancellation theory which applies to randomly choosen relations. The exponent $1/2$ which appears is related to the birthday paradox. It ensures that with high probability one does not chose relations which have large overlap.) However, I would not know how to prove it or even locate it in the literature. Can someone confirm this? Question: Can one do better than $3^{\left(\frac{1}2 - \varepsilon\right)\cdot n}$ (as $n \to \infty$) with a concrete sequence of groups rather than using random groups? For $n=1$ one can map both generators to the non-trivial element of $Z_2$ getting all $12$ elements of length $2$ trivial. For $n=2$ one can map one generator to a generator of $Z_5$ and the other to its square getting all $36$ elements of length $3$ trivial. Does anyone know the optimum for $n=3$? – Someone Aug 19 '11 at 16:05 @Someone: that doesn't work. If the generators are $a$ and $b$ then $a^3$ would be non-trivial. Indeed, if $a^3$ and $a^2b$ are both trivial in the group, then $a=b$, and so the word $ab^{-1}$ is trivial. – Steve D Sep 30 '11 at 17:32 add comment 2 Answers active oldest votes Let w be a word (on the alphabet of the two letters plus two more symbols for the formal inverse) of length n+1. If this word is trivial, then cyclic permutations of this word are trivial and one also gets relations of the form letter = word in length n by taking a different letter out of the word and formally inverting, and rearranging the word appropriately. When you do this you can group the cyclic permutations intp four groups (or two if you do the proper inversions). This allows you to build up lists of which words are equal. You can then do cancellation to build up shorter relations. Once you have two words of length n/2 being equal (let me assume n even for simplicity), you can now form a trivial word of length n contradictory to your premise. up vote Starting with K words of length n+1 no two of which are cyclically similar, one can develop K'(n+1) distinct words into two different groups. (There may be conflation and K' may be less than 1 down K; for the moment assume we are lucky and that K' = K.) If one of those groups has two words beginning (say) with the same string of n/2 letters, then you get a contradictory word, so the vote groups must each be smaller than 4^(n/2), giving a rough estimate of K'(n+1) <= 2 * 2^n, and more work may show that K might be of the same order as K'. Not a full answer, and some combinatorics left to be done (for example ruling out the cases that two words have the same prefix and suffix with combined length of n/2), but I hope this line of thought helps. Gerhard "Ask Me About System Design" Paseman, 2011.02.17 "some combinatorics left to be done" seems to be the difficult part. Note that there is no algorithm to decide whether or not a given representation is a representation of the trivial group. – Andreas Thom Feb 18 '11 at 8:06 Indeed, there is more to be done. However, if you are enforcing the relations, you can decide (for every set of n+1 words deemed trivial) what relations you want to hold, and (unless you are seeking lower bounds) may be able to get an upper bound on the number of trivial words of length n+1 that will force (contradict) one of the shorter words to be provably trivial. I would be surprised if one had a method for an exact upper bound. Also, if one just wants within order of magnitude, some combinatorics can be overlooked. Gerhard "What's a Googleplex Between Friends" Paseman, 2011.02.18 – Gerhard Paseman Feb 18 '11 at 8:39 add comment The following trick inspired by Burnside groups can perhaps be made into something interesting (but probably smaller then $3^{1/2-\epsilon}$): Suppose $n+1$ has a fairly small divisor $k$ (I ignore if $k=2$ already works) and choose a subset $S$ of words of length $(n+1)/k$ in $\langle a,b\rangle$. If no word of $S$ can be cyclically reduced and if the words of $S$ satisfy a suitable no-small-cancellation property (in particular, $\langle S\rangle$ should be the free group on $S$), then the free Burnside group up vote on $S$ has no relation shorter than $n+1$ (with respect to the length induced as a subgroup of $\langle a,b\rangle$). The quotient of $\langle a,b\rangle$ by the Burnside-relations induced by 1 down $S$ should now have no relations smaller than $n+1$. Indeed, cyclically reducedness of the elements in $S$ shows that the relations $g^{(n+1)/k},g\in S$ are cyclically reduced and the small vote cancellation property shows that all other relations are larger. The problem amounts thus to finding a suitable large set $S$. (I guess that this is also more or less the idea underlying Paseman's answer.) add comment Not the answer you're looking for? Browse other questions tagged gr.group-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/55737/how-many-relations-of-length-n-can-exists-in-a-group-without-enforcing-shorter","timestamp":"2014-04-19T20:08:25Z","content_type":null,"content_length":"61551","record_id":"<urn:uuid:dbbba79c-5b06-4a35-81df-7d0fde70a151>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
The French physicist and mathematician, Andre Marie Ampère is mainly credited for laying down the basis of electrodynamics (now known as electromagnetism). He was the first person to demonstrate that a magnetic field is generated when two parallel wires are charged with electricity and is also known for inventing the astatic needle, a significant component of the contemporary astatis Education and Career: Andre Marie was born in Lyon, France on 20 January 1775. He grew up at the family property at Poleymieux-au-Mont-d’Or near Lyon. His father, Jean-Jacques Ampère was an affluent businessman and local government official. Young Ampère spent most of his time reading in the library of his family home, and developed a great interest in history, geography, literature, philosophy and the natural sciences. His father gave him Latin lessons and encouraged him to pursue his passion for mathematics. At a very young age he rapidly began to develop his own mathematical ideas and also started to write a thesis on conic sections. When he was just thirteen, Ampère presented his first paper to the Academie de Lyon. This paper consisted of the solution to the problem of constructing a line of the same length as an arc of a circle. His method involved the use of infinitesimals, but unfortunately his paper was not published because he had no knowledge of calculus then. After some time Ampère came across d’Alembert’s article on the differential calculus in the Encyclopedia and felt the urge to learn more about mathematics. Ampère took few lessons in the differential and integral calculus from a monk in Lyon, after which he began to study the works of Euler and Bernoulli. He also acquired a copy of the 1788 edition of Lagrange’s Mecanique analytique, which he studied very seriously. From 1797 to 1802 Ampère earned his living as a mathematics tutor and later he was employed as the professor of physics and chemistry at Bourg Ecole Centrale. In 1809 he got appointed as the professor of mathematics at the Ecole Polytechnique, a post he held until 1828. He was also appointed to a chair at Universite de France in 1826 which he held until his death. In 1796 Ampère met Julie Carron, and got married in 1799. During 1820, the Danish physicist, H.C Ørsted accidentally discovered that a magnetic needle is acted on by a voltaic current – a phenomenon establishing a relationship between electricity and magnetism. Ampère on becoming influenced by Ørsted’s discovery performed a series of experiments to clarify the exact nature of the relationship between electric current-flow and magnetism, as well as the relationships governing the behavior of electric currents in various types of conductors. Moreover he demonstrated that two parallel wires carrying electric currents magnetically attract each other if the currents are in the same direction and repel if the currents are in opposite directions. On the basis of these experiments, Ampère formulated his famous law of electromagnetism known as Ampère’s law. This law is mathematical description of the magnetic force between two electrical His findings were reported in the Académie des Sciences a week after Ørsted’s discovery. This laid the foundation of electrodynamics. Ampère died at Marseille on 10 June, 1836 and was buried in the Cimetière de Montmartre, Paris. The SI unit of measurement of electric current, the ampere, is named after him.
{"url":"http://www.famousscientists.org/andre-marie-ampere/","timestamp":"2014-04-19T13:18:10Z","content_type":null,"content_length":"53234","record_id":"<urn:uuid:4fadce71-d17a-4860-88d9-fd5299a4e825>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry: Enhanced with Graphing Utilities, Fifth Edition Back to skip links Trigonometry: Enhanced with Graphing Utilities, Fifth Edition | 978-0-13-602896-3 ISBN-13: 9780136028963 See more Author(s): Michael Sullivan; Michael Sullivan, Price Information Rental OptionsExpiration Date eTextbook Digital Rental:180 days Our price: $80.99 Regular price:$203.00 You save:$122.01 Additional product details ISBN-10 0135024676, ISBN-13 9780135024676 ISBN-10 0-13-602896-9, ISBN-13 978-0-13-602896-3 Author(s): Michael Sullivan; Michael Sullivan, Publisher: Pearson Copyright year: © 2009 Pages: 752 Marketing Promotion Three Ways to Study with eTextbooks! • Read online from your computer or mobile device. • Read offline on select browsers and devices when the internet won't be available. • Print pages to fit your needs. CourseSmart eTextbooks let you study the best way – your way.
{"url":"http://www.coursesmart.com/9780135024676","timestamp":"2014-04-19T07:00:02Z","content_type":null,"content_length":"52278","record_id":"<urn:uuid:c1cb4898-0e4a-4ab9-ac14-3d5b4b121f48>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Pharmaceutical Mathematics May 30th 2012, 04:29 PM #1 May 2012 Christmas Island Pharmaceutical Mathematics Hi, I just need verification of these questions to see if i am doing them correctly. They are mostly about pharmaceutical maths that I found off the internet. When reconstituted with 20 mL of water, a 1,000,000 units vial of penicillin G potassium used in the nursery contains 50,000 units/mL. How many milliliters would be in a dose of 30,000 units? What I did: Units: mL :: Units: mL 50,000:1 :: 30,000: x 50,000(x) = 30,000 x = 0.6mL The vial contains 600,000 units of penicillin procaine. How much diluent should be added to give a solution added to give a solution containing 150,000 units/mL? What I did: 600,000 units/ 150,000 units/mL = 4mL These ones I need help on: You have to find the total input of the following: The IV bag is at 4 and you have injected the patient with 10 cc. 4 = 400mL 10cc = 10mL The patient is now on their 2nd 1000mL IV bag and it is reading at 8.5mL The patient has had 500 mL of water twice today and has been injected a second time with 8cc's Thank you for reading. Last edited by Fooz; May 30th 2012 at 07:00 PM. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/algebra/199475-pharmaceutical-mathematics.html","timestamp":"2014-04-21T13:53:47Z","content_type":null,"content_length":"29884","record_id":"<urn:uuid:cd1c9db0-075a-4920-8e55-630d7683b5e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Formatting number to add leading zeros - SQL Server Geeks With Blogs Formatting numbers to add leading zeros can be done in SQL Server. It is just simple. Lets create a new table and see how it works: CREATE TABLE Numbers(Num INT); Table Created. Lets insert few values and see: INSERT Numbers VALUES('12'); INSERT Numbers VALUES('112'); INSERT Numbers VALUES('12'); INSERT Numbers VALUES('122'); INSERT Numbers VALUES('122'); 1 row(s) affected. 1 row(s) affected. 1 row(s) affected. 1 row(s) affected. 1 row(s) affected. Now we can see how the numbers are formatted with 6 digits, if it has less than 6 digits it will add leading zeros. SELECT * FROM Numbers; 5 row(s) affected. SELECT RIGHT('00000'+ CONVERT(VARCHAR,Num),6) AS NUM FROM Numbers; 5 row(s) affected. Comments on this post: Formatting number to add leading zeros - SQL Server # re: Formatting number to add leading zeros - SQL Server Sweet ! Thanks for the tip. Left by on Jul 10, 2009 10:33 AM # re: Formatting number to add leading zeros - SQL Server thanks a lot, good one Left by on Jul 27, 2009 12:32 PM # re: Formatting number to add leading zeros - SQL Server Very, very clever! Left by on Oct 06, 2009 12:52 PM # re: Formatting number to add leading zeros - SQL Server Thanks for the tip Left by Gregorio Reyes on Oct 08, 2009 2:39 PM # re: Formatting number to add leading zeros - SQL Server Cool, and for negative numbers we have: SELECT RIGHT('00000'+ CONVERT(VARCHAR,-2),6); Really COOL! Left by on Nov 20, 2009 10:52 AM # re: Formatting number to add leading zeros - SQL Server Awesome little solution. Simple and effective. Left by on Dec 03, 2009 4:34 PM # re: Formatting number to add leading zeros - SQL Server thank you Left by Nam Nguyen Thanh on Dec 30, 2009 11:11 PM # re: Formatting number to add leading zeros - SQL Server It saved ton of my time. Thanks Left by on Jan 11, 2010 5:14 PM # re: Formatting number to add leading zeros - SQL Server thanks for the Code. Very Good . Shrinivas Technologies Left by Arun B M on Feb 15, 2010 10:07 PM # re: Formatting number to add leading zeros - SQL Server wonderful tip Left by on Mar 04, 2010 11:26 AM # re: Formatting number to add leading zeros - SQL Server this is sooo sweet tips........... Left by on Mar 23, 2010 8:16 PM # re: Formatting number to add leading zeros - SQL Server Your article is really very interesting. This is the information, I’ve been looking for… Thanks Left by on Apr 26, 2010 12:24 PM # re: Formatting number to add leading zeros - SQL Server Thanks alot. Left by on May 04, 2010 7:04 AM # re: Formatting number to add leading zeros - SQL Server Little.. but great !! Very helpful after searching lots of sites for solutions... thanks Left by on May 09, 2010 12:06 PM # re: Formatting number to add leading zeros - SQL Server Awesome application of mind... Left by on Jun 01, 2010 8:23 AM # re: Formatting number to add leading zeros - SQL Server Thanks ! Helpful ! Left by Narinder Jit Singh on Jun 25, 2010 4:59 AM # re: Formatting number to add leading zeros - SQL Server Wow, Genius, Intelligent, Thanks A lot! Left by on Jul 12, 2010 12:05 AM # re: Formatting number to add leading zeros - SQL Server great solution, thx Left by on Jul 19, 2010 7:46 AM # re: Formatting number to add leading zeros - SQL Server thank you so much Left by on Jul 26, 2010 6:30 AM # re: Formatting number to add leading zeros - SQL Server Thx for the instrucruion. Look if I've done right or not here Left by on Oct 23, 2010 7:54 AM # re: Formatting number to add leading zeros - SQL Server Thanks mate, is really very clever!! Left by on Nov 04, 2010 5:36 AM # re: Formatting number to add leading zeros - SQL Server Thanks a bunch - saved me time figuring it out :) Left by on Nov 24, 2010 1:37 PM # re: Formatting number to add leading zeros - SQL Server Nice . i search lot of website . but i get the solution here only. thaks alot Left by on Dec 10, 2010 2:43 AM # re: Formatting number to add leading zeros - SQL Server Check out the value of NUM with more than 6 digit -- Its dangerous :-) Left by on Dec 14, 2010 5:47 AM # re: Formatting number to add leading zeros - SQL Server wow, that is useful info! generic paxil buy cheap Zyrtec Left by on Dec 14, 2010 7:23 PM # re: Formatting number to add leading zeros - SQL Server Thanks for this, but I have another question regarding this : My database fields looks like : 1.00000000 (a 1 with 8 decimals) which I want to convert and export to: So 10 character with always 3 decimals. Can somebody help me with that? Left by Joost van der Meer on Jan 18, 2011 12:28 PM # re: Formatting number to add leading zeros - SQL Server this is very help full for me Thanks a lots Left by on Feb 10, 2011 5:25 AM # re: Formatting number to add leading zeros - SQL Server This is an awesome tip! Thank you so very much for sharing it! Left by Jaes W Overley on Feb 24, 2011 11:30 AM # Good one Smooth. Big help. Thanks! FYI I'm working in ColdFusion and had issues w/a query returning anything with leading zeros into an AJAX binded object; cfselect in my case. It passes the query as JSON which automatically drops leading zeros. I forced a linefeed character, #Chr(10)#, in the select to make it work. Dirty, but effective. :) Happy coding! Left by John M on Feb 25, 2011 12:59 PM # re: Formatting number to add leading zeros - SQL Server good, you helped me.... thanks. Left by on Mar 03, 2011 3:57 AM # re: Formatting number to add leading zeros - SQL Server really thank dear. Left by on Mar 23, 2011 8:24 AM # re: Formatting number to add leading zeros - SQL Server You are my hero. I wish I'd thought of that! Left by on Mar 31, 2011 1:51 PM # re: Formatting number to add leading zeros - SQL Server Can someting similar be done with a char field? Left by on Apr 01, 2011 12:29 AM # re: Formatting number to add leading zeros - SQL Server Great solution! Left by on Apr 05, 2011 9:04 AM # re: Formatting number to add leading zeros - SQL Server My problem is slightly different - I join two systems in personid - one system uses leading zeros , the other doesn't. So persin with ID 0001 in system A is matched with person with ID 1 in system B leading to a cartenian product. Any suggestions? Left by on Apr 08, 2011 4:48 AM # re: Formatting number to add leading zeros - SQL Server Great tip, thanks for posting. Left by on Jun 10, 2011 2:18 AM # re: Formatting number to add leading zeros - SQL Server Simple but bailed me out today - thanks! Left by on Jul 12, 2011 1:13 AM # re: Formatting number to add leading zeros - SQL Server This doesnt work, as the results would be : your just concatenating the string '00000' with the number ¬¬ Left by on Jul 14, 2011 11:23 AM # re: Formatting number to add leading zeros - SQL Server Thanks man. But how do I do it in access ?? Left by on Aug 01, 2011 3:39 AM # re: Formatting number to add leading zeros - SQL Server Crap, only works with positive numbers Left by Fred Flintstone on Aug 11, 2011 9:57 PM # re: Formatting number to add leading zeros - SQL Server Perfect, exactly what I was looking for. Left by on Aug 30, 2011 4:00 AM # re: Formatting number to add leading zeros - SQL Server Thank you very much Left by on Oct 02, 2011 5:49 PM # re: Formatting number to add leading zeros - SQL Server Thank you very much for your help, cheers :) Left by on Oct 02, 2011 5:49 PM # re: Formatting number to add leading zeros - SQL Server That worked perfectly. Left by Mike Klaarhamer on Oct 04, 2011 2:04 AM # re: Formatting number to add leading zeros - SQL Server hah, very clever. thanks! Left by on Oct 13, 2011 5:55 AM # re: Formatting number to add leading zeros - SQL Server Muy bueno, gracias. Left by on Nov 05, 2011 3:32 AM # re: Formatting number to add leading zeros - SQL Server This is a great idea to solve this situation. Left by Chen Noam on Nov 09, 2011 8:02 PM # re: Formatting number to add leading zeros - SQL Server Thank you so much for this. Left by on Dec 13, 2011 3:15 AM # re: Formatting number to add leading zeros - SQL Server Awesome, thanks a lot. Left by on Jan 03, 2012 6:19 AM # re: Formatting number to add leading zeros - SQL Server thanks for the code Left by on Jan 07, 2012 12:53 PM # re: Formatting number to add leading zeros - SQL Server great! works good Left by on Jan 07, 2012 12:54 PM # re: Formatting number to add leading zeros - SQL Server thanks again Left by on Jan 07, 2012 12:55 PM # re: Formatting number to add leading zeros - SQL Server Thanks for sharing Left by on Jan 29, 2012 7:28 PM # re: Formatting number to add leading zeros - SQL Server Thanks a lot. Nice trick. Left by Don Smith on Feb 08, 2012 5:10 PM # re: Formatting number to add leading zeros - SQL Server too tricky Left by on Feb 10, 2012 7:59 PM # re: Formatting number to add leading zeros - SQL Server Nice solution Left by on Apr 16, 2012 8:49 PM # re: Formatting number to add leading zeros - SQL Server Very nice solution Left by on Apr 19, 2012 7:09 PM # re: Formatting number to add leading zeros - SQL Server Thanks!! it was simple , smart solution thanks Left by on Apr 30, 2012 10:47 PM # re: Formatting number to add leading zeros - SQL Server very nice solution. it's very simple to format. Thx. Left by on Jul 12, 2012 10:01 PM # re: Formatting number to add leading zeros - SQL Server thanks a looot :) simple and its what i was looking for Left by on Aug 28, 2012 6:31 PM # re: Formatting number to add leading zeros - SQL Server finally i found the solution. thks Left by on Sep 07, 2012 7:50 PM # re: Formatting number to add leading zeros - SQL Server Good work. Left by on Dec 03, 2012 9:59 AM # re: Formatting number to add leading zeros - SQL Server So simply beautiful it brings tears to the eyes. Many thanks! Left by on Feb 28, 2013 3:17 PM # re: Formatting number to add leading zeros - SQL Server Nice! Short and sweet! Left by Vivek Todi on Mar 03, 2013 7:33 PM # re: Formatting number to add leading zeros - SQL Server Perfect. Works for my Scenario.. Thanks Left by on May 14, 2013 3:45 AM # re: Formatting number to add leading zeros - SQL Server Excellent code Left by Soumen Ghosh on May 22, 2013 12:02 AM # re: Formatting number to add leading zeros - SQL Server But how can i set it permanently using update? Please help thx Left by on Aug 15, 2013 4:35 PM # re: Formatting number to add leading zeros - SQL Server Something like this? update tbl set column1 = right('000000'+cast(column1 as varchar),6) Left by on Sep 03, 2013 5:33 AM # re: Formatting number to add leading zeros - SQL Server The method I usually employ for this is as follows: SELECT REPLACE(STR(Num, 6), ' ', '0') AS NUM FROM Numbers Left by Celtic Gizmo on Sep 25, 2013 5:26 AM # re: Formatting number to add leading zeros - SQL Server Very helpful. Thanks so much! Left by Eric H on Oct 01, 2013 2:51 AM # re: Formatting number to add leading zeros - SQL Server thinks for the tip Left by on Oct 20, 2013 7:31 PM # re: Formatting number to add leading zeros - SQL Server To cope with negatives too, you could use something like this.. DECLARE @Nums TABLE (Num INT) -- Populate Table Variable with data INSERT @Nums SELECT 1223 UNION SELECT 712 UNION SELECT 12 UNION SELECT 6 UNION SELECT 0 UNION SELECT -6 UNION SELECT -12 UNION SELECT -712 UNION SELECT -1223 -- Define width of number DECLARE @Digits Integer SET @Digits = 5 SELECT CASE WHEN Num < 0 THEN '-' ELSE ' ' END + RIGHT(Replicate('0',@Digits) + CONVERT(VARCHAR,Abs(Num)), @Digits) FROM @Nums Left by Adrian Parker on Dec 10, 2013 12:01 AM # re: Formatting number to add leading zeros - SQL Server How to add zero to the left to the float number like '.3'. Left by Suman Ganguly on Feb 26, 2014 6:43 PM Your comment: (will show your
{"url":"http://geekswithblogs.net/nagendraprasad/archive/2009/03/19/formatting-number-to-add-leading-zeros---sql-server.aspx","timestamp":"2014-04-20T08:56:55Z","content_type":null,"content_length":"120710","record_id":"<urn:uuid:150f8408-28d0-4569-9007-32df939b2674>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Atl, GA Algebra 2 Tutor Find an Atl, GA Algebra 2 Tutor I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery. 13 Subjects: including algebra 2, statistics, SAT math, GRE ...Tutored trigonometry topics during high school and college. Completed coursework through Multivariable Calculus. Worked as a Statistics teaching assistant for several years during college. 28 Subjects: including algebra 2, physics, calculus, economics ...I learned PERL at the University of the Virgin Islands during my Bioinformatics class. Perl is a dynamic programming language and I learned to manipulate proteins and codon sequences in PERL and transcribe DNA to RNA. In addition to that, I also learned scalar variables, array variables, string... 21 Subjects: including algebra 2, calculus, Java, algebra 1 I love tutoring because I enjoy working with individual students and helping them achieve their goals. I worked with students for 14 years as a tutor, a teacher, and a librarian. One of the things that I have enjoyed most about my work as a teacher and a librarian is working with a wide variety of students. 33 Subjects: including algebra 2, English, reading, writing ...While tutoring at the college for well over ten years I learned many different techniques and approaches to help people learn the same type of problems, because everyone learns differently. After earning my B.S. in Mathematics at Georgia State University I was offered the position of Mathematics... 15 Subjects: including algebra 2, chemistry, calculus, geometry Related Atl, GA Tutors Atl, GA Accounting Tutors Atl, GA ACT Tutors Atl, GA Algebra Tutors Atl, GA Algebra 2 Tutors Atl, GA Calculus Tutors Atl, GA Geometry Tutors Atl, GA Math Tutors Atl, GA Prealgebra Tutors Atl, GA Precalculus Tutors Atl, GA SAT Tutors Atl, GA SAT Math Tutors Atl, GA Science Tutors Atl, GA Statistics Tutors Atl, GA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Atlanta algebra 2 Tutors Belvedere, GA algebra 2 Tutors College Park, GA algebra 2 Tutors Decatur, GA algebra 2 Tutors East Point, GA algebra 2 Tutors Forest Park, GA algebra 2 Tutors Fort Gillem, GA algebra 2 Tutors Hapeville, GA algebra 2 Tutors Lake Spivey, GA algebra 2 Tutors Marietta, GA algebra 2 Tutors North Atlanta, GA algebra 2 Tutors North Decatur, GA algebra 2 Tutors Riverdale, GA algebra 2 Tutors Sandy Springs, GA algebra 2 Tutors Union City, GA algebra 2 Tutors
{"url":"http://www.purplemath.com/Atl_GA_Algebra_2_tutors.php","timestamp":"2014-04-18T13:49:09Z","content_type":null,"content_length":"23945","record_id":"<urn:uuid:4ff315ed-d595-4ea6-b077-860eb6f84357>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
To which automorphic forms/rep's over a function field can we associate a Galois representation? up vote 4 down vote favorite As far as I understand it, by the work of Lafforgue (cf. Laumon, "Cohom. of Drinfeld ... II", Thm 12.4.1) there is a Galois representation associated to an irreducible cuspidal automorphic representation $\pi$ , if $\pi$ is Steinberg at some place $\infty$. Do we expect Galois representations also if some of these conditions do not hold? I might be totally off here, but R. Taylor constructed in his thesis (using results from Brylinki-Labesse) Galois rep's to Hilbert modular forms by congruence methods. Has this be studied anywhere for function fields? Any hint, where such things are discussed, would be greatly appreciated! nt.number-theory reference-request automorphic-forms function-fields I'm definitely no expert, and I may be wrong, but you might expect L-functions associated to irreducible cuspidal automorphic representations to correspond exactly to primitive elements of the Selberg class. I don't know whether one can associate to any such primitive element a Galois representation though. – Sylvain JULIEN Oct 23 '13 at 16:43 add comment 1 Answer active oldest votes Let $X$ be a smooth projective geometrically irreducible curve over $\mathbb F_{q}$ a finite field and $F$ its global field. Let $\mathbf G/F$ be a split connected reductive group and $\ widehat{\mathbf G}$ its Langlands dual. Let $\pi$ be an irreducible cuspidal automorphic representation of $\mathbf G(\mathbb A_{F})$. Then for all $\ell\nmid q$ there exists a $G_{F} $-representation $\sigma_{\pi,\ell}$ with values in $\widehat{\mathbf G}(\bar{\mathbb Q}_\ell)$ attached to $\pi$ in the usual sense. This result is essentially optimal, so that the answer to the question in the title seems to be "all of them". up vote 4 down vote This is a result of Laurent Lafforgue (Invent. Math. 147) when $\mathbf G=GL_{n}$ and of Vincent Lafforgue (preprint available from his webpage) in the other cases. That said, these results are far beyond my own expertise, so I hope real experts will chime in. Regarding your second question, there exists unpublished work on congruences for Drinfeld modular varieties, but my impression is that not much work has been done in this direction. add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory reference-request automorphic-forms function-fields or ask your own question.
{"url":"http://mathoverflow.net/questions/145635/to-which-automorphic-forms-reps-over-a-function-field-can-we-associate-a-galois/145652","timestamp":"2014-04-21T12:45:36Z","content_type":null,"content_length":"53747","record_id":"<urn:uuid:3828e46b-6f5a-43f0-af23-223393c22b8c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
For which composite $N$ does $X_0(N)$ possess a non-cuspidal rational point? up vote 2 down vote favorite According the the introduction to Mazur's Rational Isogenies of Prime Degree the following question was open in 1978: Let $N$ be one of the integers 39, 65, 91, 125, or 169. Does the modular curve $X_0(N)$ possess noncuspidal rational points? It seems likely that this should have been resolved in the past 32 years. Does anyone know of a reference for this? moduli-spaces elliptic-curves reference-request add comment 2 Answers active oldest votes Let me make David Brown's answer more explicit. Theorem (Kenku, 1979): $X_0(39)(\mathbb{Q})$ consists entirely of the ($4$) cuspidal points. Theorem (Kenku, 1980): $X_0(65)(\mathbb{Q})$ and $X_0(91)(\mathbb{Q})$ each consist entirely of the ($4$) cuspidal points. up vote 6 down vote accepted Theorem (Kenku, 1980, see also Kenku, 1980): $X_0(169)(\mathbb{Q})$ consists entirely of $2$ rational cuspidal points. Theorem (Kenku, 1981): $X_0(125)(\mathbb{Q})$ consists entirely of $2$ rational cuspidal points. add comment These are all due to Kenku; see mathscinet. up vote 8 down vote add comment Not the answer you're looking for? Browse other questions tagged moduli-spaces elliptic-curves reference-request or ask your own question.
{"url":"http://mathoverflow.net/questions/39645/for-which-composite-n-does-x-0n-possess-a-non-cuspidal-rational-point?sort=newest","timestamp":"2014-04-21T07:27:57Z","content_type":null,"content_length":"54011","record_id":"<urn:uuid:0f699366-b5c4-4532-8def-2767f9814b4b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Schererville Math Tutor Find a Schererville Math Tutor I've taught Algebra 1, Algebra 2, Geometry, and Pre-Calculus at the high school level for 6 years. In addition, I've completed a BS in Electrical Engineering and I am quite knowledgeable of advance mathematical concepts. (Linear Algebra, Calculus, Differential Equations) I create an individualized... 12 Subjects: including calculus, general computer, precalculus, trigonometry ...In these troubled economic times, prospects for financial aid are an important consideration. For this reason, school endowment and scholarship prospects should be examined also. The college application itself can seem quite intimidating. 28 Subjects: including SAT math, algebra 2, ACT Math, linear algebra ...I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from North Central College. I have assisted in Pre-Algebra, Algebra, and Pre-Calculus 7 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I have definitely had success in helping students to acquire these skills. The SAT writing test is unusual because it tests students rhetorical skills considerably more than the ACT writing (English) test does. Rhetorical questions can be tricky, and even subjective. 20 Subjects: including algebra 1, algebra 2, vocabulary, grammar ...I have taught Social Studies/U.S. History for six years. I've also taught Geography, World History, Reading, Spelling and Language Arts. 12 Subjects: including prealgebra, geometry, probability, algebra 1
{"url":"http://www.purplemath.com/Schererville_Math_tutors.php","timestamp":"2014-04-18T13:30:47Z","content_type":null,"content_length":"23492","record_id":"<urn:uuid:351bb6ff-e3cf-4523-b098-a0a8b75c0fe8>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
Compile Time log2 Calculation The class template in <boost/integer/static_log2.hpp> determines the position of the highest bit in a given value. This facility is useful for solving generic programming problems. namespace boost typedef implementation-defined static_log2_argument_type; typedef implementation-defined static_log2_result_type; template <static_log2_argument_type arg> struct static_log2 static const static_log2_result_type value = implementation-defined; template < > struct static_log2< 0 > // The logarithm of zero is undefined. } // namespace boost The boost::static_log2 class template takes one template parameter, a value of type static_log2_argument_type. The template only defines one member, value, which gives the truncated, base-two logarithm of the template argument. Since the logarithm of zero, for any base, is undefined, there is a specialization of static_log2 for a template argument of zero. This specialization has no members, so an attempt to use the base-two logarithm of zero results in a compile-time error. ● static_log2_argument_type is an unsigned integer type (C++ standard, 3.9.1p3). ● static_log2_result_type is an integer type (C++ standard, 3.9.1p7). The program static_log2_test.cpp is a simplistic demonstration of the results from instantiating various examples of the binary logarithm class template. The base-two (binary) logarithm, abbreviated lb, function is occasionally used to give order-estimates of computer algorithms. The truncated logarithm can be considered the highest power-of-two in a value, which corresponds to the value's highest set bit (for binary integers). Sometimes the highest-bit position could be used in generic programming, which requires the position to be available statically (i.e. at compile-time). The original version of the Boost binary logarithm class template was written by Daryle Walker and then enhanced by Giovanni Bajo with support for compilers without partial template specialization. The current version was suggested, together with a reference implementation, by Vesa Karvonen. Gennaro Prota wrote the actual source file.
{"url":"http://www.boost.org/doc/libs/1_49_0/libs/integer/doc/html/boost_integer/log2.html","timestamp":"2014-04-17T13:17:18Z","content_type":null,"content_length":"10511","record_id":"<urn:uuid:0ed4e690-3d8f-40ab-97c7-b9cbb6521642>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Courant Part of Team to Resolve Ancient Mathematics Problem Mathematicians from North America, Europe, Australia, and South America have resolved the first one trillion cases of an ancient mathematics problem on congruent numbers. The advance, which included work by David Harvey, an assistant professor at New York University’s Courant Institute of Mathematical Sciences, was achieved through a complex technique for multiplying large numbers. The problem, first posed more than 1000 years ago, concerns the areas of right-angled triangles. A congruent number is a whole number equal to the area of a right triangle. The surprisingly difficult problem is to determine which whole numbers can be the area of a right-angled triangle whose sides are either whole numbers or fractions. For example, the 3-4-5 right triangle has area 1/2 × 3 × 4 = 6, so 6 is a congruent number. The smallest congruent number is 5, which is the area of the right triangle with sides 3/2, 20/3, and 41/6. The first few congruent numbers are 5, 6, 7, 13, 14, 15, 20, and 21. Many congruent numbers were known prior to this new calculation. For example, every number in the sequence 5, 13, 21, 29, 37, …, is a congruent number. But other similar looking sequences, like 3, 11, 19, 27, 35, …, are more mysterious and each number has to be checked individually. The new calculation found 3,148,379,694 new congruent numbers up to a trillion. The quantity of numbers involved in this calculation is significant-if their digits were written out by hand, they would stretch to the moon and back. The congruent number problem was first stated by the Persian mathematician al-Karaji in the 10th century. His version did not involve triangles, but instead was stated in terms of the square numbers. In the 13th century, Italian mathematician Fibonacci showed that 5 and 7 were congruent numbers, and he stated, but didn’t prove, that 1 is not a congruent number. That proof was supplied by France’s Pierre de Fermat in 1659. By 1915, the congruent numbers less than 100 had been determined, but by 1980 there were still cases smaller than 1000 that had not been resolved. In 1982, Rutgers University mathematician Jerrold Tunnell found a simple formula for determining whether or not a number is a congruent number. This allowed the first several thousand cases to be resolved very The research team also included mathematicians from Warwick University (England), Universidad de la Republica (Uruguay), the University of Sydney (Australia), and the University of Washington in Seattle. The work was supported by the American Institute of Mathematics through a Focused Research Group grant from the National Science Foundation.
{"url":"http://www.nyu.edu/about/news-publications/news/2009/09/23/courant_part_of_team_to.html","timestamp":"2014-04-18T08:47:08Z","content_type":null,"content_length":"40847","record_id":"<urn:uuid:1616cb7f-e96e-45ea-85b9-4a7668e1849f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
edr {edrGraphicalTools} Main function for estimation of the EDR space It creates objects of class edr to estimate the effective dimension regression (EDR) space. Several helper functions that require an edr object can then be applied to the output from this function. edr(Y, X, H, K, method, submethod="SIR-QZ", ...) A numeric vector representing the dependent variable (a response vector). A matrix representing the quantitative explanatory variables (bind by column). The chosen number of slices. The chosen dimension K. This character string specifies the method of fitting. The option includes "SIR-I", "SIR-II", and "SAVE". This character string specifies the method of fitting when the number of lines of X is greater than its number of columns. It should be either "SIR-QZ", "RSIR" or "SR-SIR". Arguments to be passed to edrUnderdet when the number of lines of X is greater than its number of columns. We are interested in the following semiparametric dimension reduction model proposed by Li (1991) y=f(b1'x,b2'x,...,bK'x,e) where the univariate response variable y is associated with the p-dimensional regressor p only through the reduced K-dimensional variable (b1'x,b2'x,...,bK'x) with K < p. The error term e is independent of x. The link function f and the b-vectors are unknown. We are interested in finding the linear subspace spanned by the K unknown b-vector, called the effective dimension reduction (EDR) space. We focus on the SIR, SIR-II and SAVE methods to estimate the EDR space. The slicing step of these methods depends on the number H of slices. We propose with the function criterionRkh a naive bootstrap estimation of the square trace correlation criterion to allow selection of an “optimal” number H of slices and simultaneously the corresponding suitable dimension K (number of the linear combination of x). After choosing an optimal couple (H,K) for the best estimation method (the square trace correlation criterion closest to one), the EDR space could be estimate with this function. Each method consists in a spectral decomposition of a matrix of interest. The eigenvectors of this matrix associated of the K largest eigenvalues are EDR directions. edr returns an object of class edr, with attributes: A matrix corresponding of the eigenvectors of the interest matrix The eigenvalues of the matrix of interest The chosen dimension. The chosen number of slices. Sample size. The dimension reduction method used. The matrix of the quantitative explanatory variables (bind by column). The numeric vector of the dependent variable (a response vector). Liquet, B. and Saracco, J. (2012). A graphical tool for selecting the number of slices and the dimension of the model in SIR and SAVE approaches. Computational Statistics, 27(1), 103-125. Li, K.C. (1991). Sliced inverse regression for dimension reduction, with discussions. Journal of the American Statistical Association 86, 316-342. Cook, R. D. and Weisberg, S. (1991). Discussion of “Sliced inverse regression”. Journal of the American Statistical Association, 86, 328-332. See Also criterionRkh, summary.edr, plot.edr n <- 500 beta1 <- c(1,1,rep(0,8)) beta2 <- c(0,0,1,1,rep(0,6)) X <- rmvnorm(n,sigma=diag(1,10)) eps <- rnorm(n) Y <- (X%*%beta1)**2+(X%*%beta2)**2+eps ## Estimation of the trace square criterion ## grid.H <- c(2,5,10,15,20,30) ## res2 <- criterionRkh(Y,X,H=grid.H,B=50,method="SIR-II") ## summary(res2) ## plot(res2) ## Estimation of the EDR direction for K=2 and H=2 and SIR-II method edr2 <- edr(Y,X,H=2,K=2,method="SIR-II") Documentation reproduced from package edrGraphicalTools, version 2.1. License: GPL (>= 2.0)
{"url":"http://www.inside-r.org/packages/cran/edrGraphicalTools/docs/edr","timestamp":"2014-04-20T00:45:59Z","content_type":null,"content_length":"21060","record_id":"<urn:uuid:f62d7c0c-9fc7-4171-9cc2-6aeadee31997>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Choose the correct answer So he has gone to London! When ........ there? a. did he go b . has he gone c. has he been d. he has gone • one year ago • one year ago Best Response You've already chosen the best response. What do you think is the answer @Devilish Best Response You've already chosen the best response. i think is d Best Response You've already chosen the best response. ^ wrong. @Devilish your turn. Best Response You've already chosen the best response. so is a Best Response You've already chosen the best response. I think it's (a ) because the present perfect is not used with specific time expressions and "when " asks for a specif time but all the teachers of English including my father (who is a teacher also) says that the answer is (b) but giving no reason for it I'm stuck on my answer and I really need your help Best Response You've already chosen the best response. Correct answer^ You chose A? Its correct! Best Response You've already chosen the best response. It cannot be B according to me. Best Response You've already chosen the best response. Hba..have you seen her reasoning? Best Response You've already chosen the best response. Yes i did and i messaged her. Best Response You've already chosen the best response. now time for reasons can you give me the reasons Best Response You've already chosen the best response. He has gone is already used in the first sentence we cannot repeat it. Best Response You've already chosen the best response. Ok. I would say A because it kind of talks about the present and when like what time did he go there. Example like 2 days ago or 3 days.. B I would say no because it is asking about the past times when he has been there.. I am always asked when did I go no when have I been Best Response You've already chosen the best response. So the answer is indeed "a" Best Response You've already chosen the best response. Sure it is. Best Response You've already chosen the best response. Best Response You've already chosen the best response. hello @hero you can give us your hint Best Response You've already chosen the best response. Its A. Best Response You've already chosen the best response. Can you give me the reason for your answer @uri ?? Best Response You've already chosen the best response. The first sentence says *he has gone to London*,The second one says *When____ there?,It will be *did he go* If you actually read it,and put each option in the blank,Option A is the most suitable. Best Response You've already chosen the best response. thanx a lot my friend @uri Best Response You've already chosen the best response. Yw :) Best Response You've already chosen the best response. Its a Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50fab040e4b022b322701e37","timestamp":"2014-04-19T22:48:24Z","content_type":null,"content_length":"77761","record_id":"<urn:uuid:b5bcace1-9db6-4791-817c-9dabd58fb497>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Evolution's Imperative Bill Payne (bpayne15@juno.com) Wed, 24 Mar 1999 00:11:14 -0600 On Tue, 23 Mar 1999 18:09:22 -0700 "Kevin O'Brien" <Cuchulaine@worldnet.att.net> writes: >So I guess that when God says that Solomon had a circular tub built that >thirty cubits in circumference and 10 cubits in diameter, the value of >must really be 3 (otherwise the circumference of a 10 cubit-diameter >would have been 31.5 cubits, or the diameter of a 30 cubit-circumference >circle would have been 9.5 cubits). How can you possibly question this? I think if the 10-cubit measurement was an inside diameter and the circumference was measured around the outside, and if the wall was ~ an inch thick, then the value of pi would work out. >So I guess that rabbits must chew the cud, ... cud. They munch fecal pellets. As Gothard said, "The critics were looking at the wrong end of the rabbit." :-) You don't need to buy Internet access to use free Internet e-mail. Get completely free e-mail from Juno at http://www.juno.com/getjuno.html or call Juno at (800) 654-JUNO [654-5866]
{"url":"http://www2.asa3.org/archive/evolution/199903/0221.html","timestamp":"2014-04-20T10:49:05Z","content_type":null,"content_length":"2740","record_id":"<urn:uuid:81b01ed4-5a66-4267-91a0-a66633de9762>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
How to correct your altimeter data As with quite a few things on this website, you will see that for me "the devil is in the details". My overall philosophy of life is that if you take care of all of the details, everything else will work out. Anyway, this document gives you a method of making a fairly subtle correction to altimeter data you take during a hike to produce a truer altitude profile as a function of time. The Standard Atmosphere By and large, the pressure profile of the atmosphere as a function of height remains constant. If a low pressure system moves overhead, the surface pressure may be lower, but the pressures above that will also be lower. This overall profile is governed by basic laws of physics. Put most simply, the higher you go in the atmosphere, the less atmosphere is pushing down from above and thus the pressure is less. For many purposes, it is useful to define an overall average profile, including not only defining pressure as a function of altitude, but fixing the temperature at each altitude as The Standard Atmosphere begins at sea level with a pressure of 1013 millibars (or 29.92 inches of mercury), and a temperature of 59F (or 15C). It is not a concidence that these values are similar to the global annual average values. By 7000 feet above sea level, the standard pressure is down to 782 mb and the standard temperature is down to 34F. At 14,500 feet, corresponding to the highest peaks of the lower 48, the pressure is down to 583 mb and the temperature is just 8F! The peak bagger's problem Naturally, the temperature at a given altitude is certainly not the same all the time! The chances that the temperature is identical to the Standard Atmosphere during your hike is pretty small. If the temperatures at the top and bottom of your climb are not the same as the Standard Atmosphere, the pressure difference between those two points will not yield the expected altitude difference. For typical peak bagging conditions, temperatures are quite a bit above those for the Standard Atmosphere. At higher temperatures, a change from one particular pressure to another corresponds to a larger change in altitude than predicted by the Standard Atmosphere. This is what I will call the "Standard Atmosphere Effect" (SAE). Virtually all altimeter watches use the Standard Atmosphere to convert pressure changes into altitude changes, and are subject to this error in "warm" conditions. In effect, if you set your altimeter correctly at the trailhead and could somehow teleport to the top of a mountain rapidly enough to avoid weather changes, your altimeter will be registering a lower altitude than expected. The result of all of this is that one can't exactly trust their altimeter. One either needs to frequently reset their altimeter, which can be difficult as there are often few unambiguous reference points between a trailhead and a summit, or one needs to estimate the correction factor to apply to their altimeter. Further problems are that if you want to track the total elevation gain for some sort of hike (summit or not), your altimeter will usually be short, and if you track things like your rate of climb or descent it will also be short. The latter problems are really only a big deal to dataheads like me, but if you've read this far, I think it is fair to say "dataheads like us"! Strictly speaking, the correction I'm recommending here is not exactly for the SAE. It is really just a correction that sort of looks like the one that would be used. Furthermore, the correction is applied linearly with height, when that's not quite right either. This is just a reasonable, easily calculated correction for the SAE and other things that sort of behave like the SAE. It is certainly a lot better than doing nothing if you get tired of your altimeter being chronically short by amounts that may be as much as 200 feet or more. The solution If you are relatively comfortable using a spreadsheet, it is possible to apply this correction to an entire hike's worth of data in a handful of minutes. You need to gather a little bit of information, namely the true altitude of the trailhead and the summit (or at least some higher point on your hike), and of course you have to get the altitude information into the spreadsheet in the first place. The great thing about a spreadsheet is that once you figure out the proper correction for one data point, you can copy that cell and paste it down the rest of a column to correct the rest of the data. There are two corrections you must apply, a baseline correction to correct for pressure changes during the hike and incorrect trailhead elevation data, and then the SAE correction. A baseline If the atmosphere did not change during your hike, you should end with the same altitude as you recorded at the start and it should match the true altitude off of a map. If so, you can skip this step. Otherwise, we need to apply an interpolative correction given the initial error and the final error. If you do things right, your initial error should be zero; i.e., you should start the altimeter with exactly the right answer. In reality, this doesn't always work out either because of a small change in pressure while you are getting ready, or uncertainty in the actual starting elevation when you are in the field. To illustrate this correction, let's consider the following data for a brisk hike up a 13,000-foot peak starting from a 10,000-foot trailhead (don't worry about the right-hand columns quite yet): Time Elapsed Altimeter Actual Baseline Cor1 SAE Final 06:00 0 9980 10000 06:10 10 10300 06:20 20 10620 06:30 30 10940 06:40 40 11180 06:50 50 11450 07:00 60 11600 07:10 70 11840 07:20 80 12080 07:30 90 12290 07:40 100 12550 07:50 110 12700 08:00 120 12860 13000 08:10 130 12510 08:20 140 12210 08:30 150 11840 08:40 160 11490 08:50 170 11130 09:00 180 10860 09:10 190 10490 09:20 200 10200 09:30 210 9950 10000 You can see that we were 20 feet low at the trailhead and ended up 50 feet low. That means that the atmospheric pressure rose during the climb, hopefully associated with fair weather! We can easily see the proper correction factor for the beginning and the end. In between, we assume that the pressure changed uniformly throughout the climb, and thus we need to apply a correction that changes uniformly from +20 to +50 over the 210 minutes of the climb. Thus, the correction is of the form: Cor = 20 + Elapsed*(50-20)/Total_Time, or Cor = 20 + Elapsed*(30/210). As you can see below, this generates the correction for each step given in the table. The general formula for each cell is then: Cor = Initial_error + Elapsed*(Final_error-Initial_error)/Total_Time. Note that the initial and/ or final errors can be negative and the forumla will still work. It also works no matter how often you took data, or if it was taken uniformly, as long as you plug in the proper elapsed time since the beginning of the hike. Here is the revised table after we have applied the baseline correction, with all altitudes rounded to the nearest foot: Time Elapsed Altimeter Actual Baseline Cor1 SAE Final 06:00 0 9980 10000 20 10000 06:10 10 10300 21 10321 06:20 20 10640 23 10663 06:30 30 10920 24 10944 06:40 40 11180 26 11206 06:50 50 11450 27 11477 07:00 60 11600 29 11629 07:10 70 11840 30 11870 07:20 80 12080 31 12111 07:30 90 12290 33 12323 07:40 100 12550 34 12584 07:50 110 12700 36 12736 08:00 120 12860 13000 37 12897 08:10 130 12510 39 12549 08:20 140 12210 40 12250 08:30 150 11840 41 11881 08:40 160 11490 43 11533 08:50 170 11130 44 11174 09:00 180 10860 46 10906 09:10 190 10490 47 10537 09:20 200 10200 49 10257 09:30 210 9950 10000 50 10000 The "Standard Atmosphere Effect" Once we apply the baseline correction, we can then do the main correction for the deviation from the Standard Atmosphere. In reality, this correction may not be completely due to the Standard Atmosphere Effect, but we can still assume a correction of this form. Since we presumably know the summit elevation, we use it to make the correction. In the current example, we only have one data point at the summit. In general you may have several and what I do is average together the altimeter readings for all data points taken on the summit. As a dedicated datahead, I usually set my watch to take data every minute, so that generates quite a few summit altitude measurements. In this example, our measured summit elevation was just over 100 feet below the true elevation. In other words, the true trailhead-to-summit elevation change was 3000 feet, but we only measured 2897 feet. (Strictly speaking, we only "measured" a 2880-foot difference, and the baseline correction makes up the other 17 feet.) The correction factor is very easy to calculate; it is simply the true elevation difference divided by the measured difference. In our case, it is 3000/2897, or 1.03555. That means that to correct our data, we have to scale the data by an amount based on this factor times the altitude gain above the trailhead implied by the altimeter. To make that last sentence comprehsible, consider a simple example. If we think we are 1000 feet above the trailhead based on the altimeter, we are actually 1000*1.03555, or 1036 feet above the trailhead and have to add 36 feet to the altimeter reading. Because we have already corrected the data so that the trailhead altitudes are equal at the start and finish, the amount of the correction applied at a given altitude on the way up and on the way down will be the same. The maximum correction will be at the summit, because that is the point where we measure the greatest difference between the current position and the trailhead. The actual formula for each point is then: (Cor1 - TH_alt)*(1-factor), where factor is the correction factor we discussed above and TH_alt is simply the correct trailhead altitude. The (1-factor) term just means that we want the difference beyond 1.000000 to give us the amount of the correction. If we then add this correction to Cor1, we get the final corrected altitude corresponding to each data point that we took on the trip: Time Elapsed Altimeter Actual Baseline Cor1 SAE Final 06:00 0 9980 10000 20 10000 0 10000 06:10 10 10300 21 10321 11 10333 06:20 20 10640 23 10663 24 10686 06:30 30 10920 24 10944 34 10978 06:40 40 11180 26 11206 43 11249 06:50 50 11450 27 11477 53 11530 07:00 60 11600 29 11629 58 11686 07:10 70 11840 30 11870 66 11936 07:20 80 12080 31 12111 75 12186 07:30 90 12290 33 12323 83 12405 07:40 100 12550 34 12584 92 12676 07:50 110 12700 36 12736 97 12833 08:00 120 12860 13000 37 12897 103 13000 08:10 130 12510 39 12549 91 12639 08:20 140 12210 40 12250 80 12330 08:30 150 11840 41 11881 67 11948 08:40 160 11490 43 11533 54 11587 08:50 170 11130 44 11174 42 11216 09:00 180 10860 46 10906 32 10938 09:10 190 10490 47 10537 19 10556 09:20 200 10200 49 10257 9 10266 09:30 210 9950 10000 50 10000 0 10000 And that's it! You now have a reasonably accurate true elevation profile of your hike as a function of time. Other points Once you have determined the altitude correction factor, you can use that information in other ways. For example, my altimeter watch also stores a climb rate, so you can get a corrected climb rate by simply multiplying by the correction factor. Similarly, if you did an up and down hike and your altimeter gives a total cumulative gain, you can scale that up by the same factor. Correction factors for high peaks under non-winter conditions are usually about 3-8% and if you are normally a summer hiker, you will find that the correction factor is fairly consistent from hike-to-hike. So, if you are somewhat good at doing arithmetic in your head, you can apply an approximate correction in real-time during a hike. I.e., if you typically find a 5% correction factor, you need to add about 50 feet to your altimeter reading for every 1000 feet that you gain. If you are doing a hike with many thousands of feet of gain, this correction really adds up. Thus, you can get a more realistic idea of your progress when you are trying to figure out whether you have the time and energy to make it or need to turn back. It is very important to understand that this is only an approximation to the actual correction that should be applied. We have more or less assumed that air temperatures do not change very much during the hike, which is unrealistic. In effect, we are taking an average correction for the SAE and applying it uniformly to all data, when in fact the correction factor should change throughout the day. (Most likely increasing as the temperature increases.) Furthermore, I have found that the correction factor does not perfectly correlate with the temperature, which it should if the SAE is the only correction needed. However, clearly this is a major component of the required correction. One can see this in cases where accurate intermediate altitudes are available and the difference between actual altitudes and altimeter reading increase as one ascends and decrease as one descends. Finally, a lot of you reading this might be thinking, "when the hell is he going to mention GPS?!?!" The problem with GPS altitudes is that you really have no way of knowing when the altitudes are truly reliable. In many cases they are quite good; i.e., on a high ridge with good satellite coverage if you let the unit accumulate data for a while. On the other hand, GPS altitudes can be 10x as inaccurate as the horizontal positional accuracy and the altitude reading seems to be much more sensitive to the relative positions of the satellites. One satellite "winking" on or off can cause an abrupt change in the reported altitude. Back to the mountaineering essays index File last modified: 14 September 2009
{"url":"http://www.eskimo.com/~rachford/mountaineering/essays/altimeter_corr.html","timestamp":"2014-04-24T10:04:15Z","content_type":null,"content_length":"20733","record_id":"<urn:uuid:cba0b18c-79d7-41ca-a1e0-b51a2c1cb03d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
What is modulo and Binomial? July 5th 2011, 10:02 PM #1 Junior Member Nov 2009 What is modulo and Binomial? I saw this: binomial(n+3, n) mod n = 1 How can I solve for a few of the values of n... I have a basic understanding of mod such that if something were written: (242%9) I would know the answer to be 8... But what is this binomial form.. I've never seen this notation? Can someone give me an example of working this problem: binomial(n+3, n) mod n = 1 For say the first 3 numbers that work in this equation as "n" Re: What is modulo and Binomial? I saw this: binomial(n+3, n) mod n = 1 How can I solve for a few of the values of n... I have a basic understanding of mod such that if something were written: (242%9) I would know the answer to be 8... But what is this binomial form.. I've never seen this notation? Can someone give me an example of working this problem: binomial(n+3, n) mod n = 1 For say the first 3 numbers that work in this equation as "n" This is actually an identity of some kind. To prove it, I would recommend writing this out in factorial form. However, this is not true in all cases -- there is a very important condition that is missing from this statement that I state in the verification process. So it follows that \begin{aligned}\binom{n+3}{n}\pmod{n} &\equiv \frac{(n+3)!}{n!(n+3-n)!}\pmod{n}\\&\equiv \frac{(n+3)(n+2)(n+1)n!}{n!\cdot 6}\pmod{n}\\ &\equiv \frac{(n+3)(n+2)(n+1)}{6}\pmod{n}\end{aligned} Now when we're working with mods, $\frac{1}{6}\equiv 6^{-1}\pmod{n}$, and this inverse exists iff $\gcd(6,n)=1$ (i.e. $2mid n$ and $3mid n$). Supposing that $6^{-1}$ exists, then it follows that $6^{-1}(n+3)(n+2)(n+1)\pmod{n}&\equiv \ldots$ Thus, I leave it for you to finish verifying that $\binom{n+3}{n}\pmod{n}\equiv 1$ given that $\gcd(6,n)=1$. I hope this makes sense! Re: What is modulo and Binomial? That seems to give me these numbers: 1 5 7 11 13 17 19 23 25 29 31 35 37 41 43... But the OEIS that showed me that formula gave this list as the answers: A133633 - OEIS Re: What is modulo and Binomial? That seems to give me these numbers: 1 5 7 11 13 17 19 23 25 29 31 35 37 41 43... But the OEIS that showed me that formula gave this list as the answers: A133633 - OEIS Look up what n mod 1 is for a natural number n. Try re-reading the definition of the sequence on OEIS. July 5th 2011, 10:31 PM #2 July 5th 2011, 11:25 PM #3 Junior Member Nov 2009 July 7th 2011, 10:59 PM #4 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/number-theory/184151-what-modulo-binomial.html","timestamp":"2014-04-19T19:58:39Z","content_type":null,"content_length":"43137","record_id":"<urn:uuid:5f8923f5-ad75-4dcf-8546-d5aa13598a08>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: An error exists in the logic that says the following. Explain where and why the error occurs and provide graphical support for your explanation. • one year ago • one year ago Best Response You've already chosen the best response. \[\int\limits \tan x dx = \int\limits \frac{ \sin x }{ \cos x } dx\] Best Response You've already chosen the best response. Set u =cos x, then du = -sinx dx, so \(\int \dfrac{\sin x}{\cos x}dx=-\int \dfrac{-\sin xdx}{\cos x}=-\int \dfrac{du}{u}\). Do you recognise what to do now? Best Response You've already chosen the best response. i know i even get the answer \[\ln (\sec) + C\] But I can't find any error Best Response You've already chosen the best response. read the question it says whts the error in converting tan to sin/cos Best Response You've already chosen the best response. Well, if I go on with what I started, I get -ln|u| +C. If u > 0, this becomes \(-\ln u + C = \ln u^{-1} + C = \ln \frac{1}{u}+C=\ln(\sec x) + C\). If u < 0, you have \(-\ln(-u)+C=...=\ln(-\sec x) +C\) So it is necessary to take into cosideration what sign tan x has. Best Response You've already chosen the best response. we need calculus teachers here...not the correct answer Best Response You've already chosen the best response. no brackets should be integral (sinx/cosx)dx Best Response You've already chosen the best response. \[\int\limits tanx dx = \int\limits (\frac{ sinx }{ cosx } )dx\] Best Response You've already chosen the best response. cmon this is not the answer this doesn't even look like an error dude Best Response You've already chosen the best response. lol just guessing maybe its this Best Response You've already chosen the best response. dude i m frikin out lol Best Response You've already chosen the best response. I am unable to find a flaw in the logic only the math, you should end up with \[ln|cosx|+C\] which you do after integration, I would assume this method approprite Best Response You've already chosen the best response. correction \[-ln|cosx|+C\] Best Response You've already chosen the best response. ya so whats the error Best Response You've already chosen the best response. Whenever you do the u substitution integration technique it follows: let u=cosx du=-sinx we then have\[-\int\limits_{}^{}\frac{ 1 }{ u }du\] simply integrate from here the back-substitute yielding \[-ln|u|+C\] then \[-ln|cos x|+C\] Best Response You've already chosen the best response. i know i know...i got this answer too...but i can't understand what is the error...read the question Best Response You've already chosen the best response. I did and there does not exist an error if it supplies a correct answer we use this trick a lot in higher exponential tangent cases-I think though i may be wrong Best Response You've already chosen the best response. it asks for the error in converting tan to sin/cos Best Response You've already chosen the best response. yes I understand but as I said I use this alot, I do not see the flaw in the logic provided the cosine function does not equal zero on your bounds Best Response You've already chosen the best response. ok i guess...i said the same thing to my teacher but she said no there is one Best Response You've already chosen the best response. ha, really? that is odd. did you google it? Best Response You've already chosen the best response. ya no help from there...google told me to come here haha Best Response You've already chosen the best response. lol ok well the only issue would be the graph contains asymptotes and DNE at multiple points aka cos x=0 but your domain should be restricted to allow for this. That would be my best guess as to the answer they are fishing for Best Response You've already chosen the best response. what lol...say it again confused Best Response You've already chosen the best response. h/o a minute Best Response You've already chosen the best response. so if you look at the graph of the tangent function, it has horizontal asymptotes at every point where cos x=0 (shocking I know). Because cosine has a continuous domain and tangent does not, this must be accounted for whenever you evaluate a definite integral;however, your domain in a tangent function integration, usually accounts for this to start with-unless they are being...not nice Best Response You've already chosen the best response. lol...no idea to this man..... Best Response You've already chosen the best response. haha...i am never gonna figure it out XD Best Response You've already chosen the best response. I will copy it and show it to my lecturer and will get back to you Best Response You've already chosen the best response. sweet...dont forget to get the equation from comment box Best Response You've already chosen the best response. yeah, I copied it Best Response You've already chosen the best response. thx tell me ASAP plz Best Response You've already chosen the best response. @amistre64 can u solve this Best Response You've already chosen the best response. i cannot see a reason why there would be an error in that. By definition: \[\tan x=\frac{\sin x}{\cos x}\] is a trig identity. They have the same domains so reiterating the domain as Fibonacci pointed out ... seems moot. There is no error that can determine Best Response You've already chosen the best response. .... that i can determine Best Response You've already chosen the best response. haha...but there is an error idk wht Best Response You've already chosen the best response. @Best_Mathematician: A wide variety of Openstudy members have answered your question. The error you mention cannot be pointed out by any of them. I think it's your turn now. Just saying: there is an error is not enough. Ask your teacher what he/she means with the error. BTW: I am speaking out of experience: sometimes even teachers are wrong :) I know, because I'm a teacher myself... Best Response You've already chosen the best response. are you sure it does not say int tan x dx=int sin x dx/int cos x dx? Because that would be obviously flawed. Best Response You've already chosen the best response. @inkyvoyd ...that would make so much more sense Best Response You've already chosen the best response. i m heck sure...the question i wrote is the exact same copy...and if it wud be like tht it wud be easy not challenging...and here i m asking challenging questions Best Response You've already chosen the best response. did you ask your instructor? Best Response You've already chosen the best response. The error is that "an error exists" itself then, I believe. Please ask your instructor :) Best Response You've already chosen the best response. no worry... i will post the answer after my instructor replies...its spring break over here....but still i will reply the answer as soon as i can Best Response You've already chosen the best response. just watch this question Best Response You've already chosen the best response. There is no error in the OP. int(tanx)=int(sinx/cosx)=lnabs(secx) Best Response You've already chosen the best response. maby your getting -ln(abs(cosx)) but using the properties of logarithms this equals ln(abs(cos^(-1)x)) = ln(abs(secx)) Best Response You've already chosen the best response. will reply the correct one asap Best Response You've already chosen the best response. if anything, an error might exist in the assumption that an indefinite integration has a definite answer. \[\int tan(x)~dx=xxx+K\]\[\int \frac{sin(x)}{cos(x)}~dx=xxx+C\] but that distinction would be made pointless by the fact that an indefinite integral represents a family, or set, of functions anyway. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/513cda23e4b01c4790d272d5","timestamp":"2014-04-17T19:20:34Z","content_type":null,"content_length":"152995","record_id":"<urn:uuid:648d55ae-e3f4-4afb-b739-9e4e8c88e91e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: cumulative average moving through time [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: cumulative average moving through time From David Kantor <dkantor@jhu.edu> To statalist@hsphsun2.harvard.edu Subject Re: st: cumulative average moving through time Date Wed, 06 Oct 2004 16:33:51 -0400 At 03:59 PM 10/6/2004 -0400, Dan Egan wrote, among other things: by sort pid (ob):gen cave = sum(calc)/ob 1) Where/When did -sum()- become an acceptable argument to -generate-!?!? I have only ever seen it in the context of -egen-. Looking at the help for -generate-, there are no arguments that are explicitly stated to be useable. It is only at the very bottom of the examples that one sees an function -uniform- and then -sum- used with gen. Are the others? I know that using many egen arguments with -gen- will return errors (e.g. count). (I suppose you meant "bysort pid...".) sum() has always been a basic function. It is not a matter of a "argument" to -generate-. It's a function, and any function can appear in an expression (subject to type compatibility). Thus it can appear anywhere an expression is accepted, such as in -generate-, among others. See -help mathfun- for details. The egen sum() program makes use of it. See (in the appropriate ado directory) _gsum.ado to see how it works. Finally, the "functions" that -egen- accepts are just those that have programs written for them. (They all have "_g" as a prefix to their names. And you can write your own egen functions if you desire.) Some of these happen to have the same name as ordinary functions such as sum(). But there is no explicit connection between the two. In case you wanted to know, egen is a program that calls another program. If you type egen myvar = somefunction blah blah blah then egen will call _gsomefunction with blah blah blah as arguments. I hope this is useful. David Kantor Institute for Policy Studies Johns Hopkins University * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2004-10/msg00120.html","timestamp":"2014-04-20T16:10:55Z","content_type":null,"content_length":"7126","record_id":"<urn:uuid:2555d0dc-fbcd-4fb5-b979-009acd80609a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Midpoint Calculator | Midpoint Formula Calculator Midpoint Calculator Not what you're looking for? Get your own custom-built calculator starting at just $199: Contact Us Get this calculator for your site: Default customizations. Required link back. Easy to customize. No coding. Midpoint Formula: Midpoint = [(X1 + X2)/2 , (Y1 + Y2)/2] Midpoint Definition The exact middle of a line is found by using the midpoint formula. This formula is frequently used in math problems and in the real world. Once you learn how to find the midpoint with this calculator, you can use the information for several applications. How Knowing Midpoint is Helpful If you are trying to find the center point along a line, you need the midpoint formula. This can be useful for: • Calculating the center of a playing field line • Finding the center point on a map between two places • Locating the middle of a line on a graph What This Midpoint Calculator Needs to Find the Midpoint When you use this calculator, you need to insert four values based on the graph coordinates of the ends of the line. Be sure to use the correct coordinate for each point in each of the spots required on the calculator. Doing so will ensure that you get the correct answer. A common error is to put the second x value into the Y1 box. Double check your numbers before choosing the calculate button. The values needed for this midpoint calculator are: Formula for Midpoint The information you insert into this midpoint calculator is used in the following formula: Midpoint = [(X1 + X2)/2 , (Y1 + Y2)/2] This formula basically finds the average of the two x-coordinates and the average of the two y-coordinates to give you the location of the midpoint along that line. For instance, if you have the points (1,3) and (3,1), the midpoint would be (2,2). This comes from averaging the two x-parts: 1 and 3 to find 2. This is the x-coordinate of the midpoint. The two y-coordinates are averaged to also give you 2, which is the y-coordinate of the midpoint. How to Calculate Midpoint Let's be honest - sometimes the best midpoint calculator is the one that is easy to use and doesn't require us to even know what the midpoint formula is in the first place! But if you want to know the exact formula for calculating midpoint then please check out the "Formula" box above. Add a Free Midpoint Calculator Widget to Your Site! You can get a free online midpoint calculator for your website and you don't even have to download the midpoint calculator - you can just copy and paste! The midpoint calculator exactly as you see it above is 100% free for you to use. If you want to customize the colors, size, and more to better fit your site then pricing starts at just $29.99 for a one time purchase or $19.99/month to get access to all of our 100's of calculators. Click the "Get Started" button above to learn more!
{"url":"http://www.calculatorpro.com/calculator/midpoint-calculator/","timestamp":"2014-04-16T10:26:29Z","content_type":null,"content_length":"82439","record_id":"<urn:uuid:d5e5b989-a316-427d-ae13-06b58323cf72>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
The Euler Line of a Triangle It's amazing how much geometry there is in the lowly triangle! Consider the triangle ABC below (colored magenta). Several interesting points and lines can be constructed, and they all move as you drag about a vertex A, B, or C of the triangle. The centroid G of the triangle Along the sides of the triangle, you see the midpoints labelled A', B', and C'. (By the way, you can place your mouse cursor over the diagram and press the return key to lift the diagram off the page. You can move and resize the window that appears.) The midpoint of the side BC is A', the midpoint of the side CA is B', and the midpoint of the side AB is C'. A line connecting a vertex of the triangle to the midpoint of the opposite side is called a median of the triangle. The medians of this triangle are AA', BB', CC', and they’re colored green. Notice that they all meet at one point G in the triangle, also colored green. This point is called the centroid of the triangle. Other names for the centroid are the barycenter and the center of gravity of the triangle. If you make a real triangle out of cardboard, you can balance the triangle at this point. It can be shown that the centroid trisects the medians, that is to say, the distance from a vertex to the centroid G is twice the distance from the centroid to the opposite side of the triangle. So, for instance, AG is twice A'G. Incidentally, you can drag around other points besides the vertices A, B, and C. If you drag any other point, the figure is designed to swirl around the centroid G. An exception is G itself, and if you move it, the figure will slide along with it. Only moving A, B, or C will actually change the shape of the triangle. The circumcircle and the circumcenter O You might have first noticed the circle on which the vertices of the triangle A, B, and C all lie. It is called the circumcircle of the triangle. Any three points, unless they lie on a straight line, determine a unique circle, this circumcircle. The center of this circle is called the circumcenter, and it’s denoted O in the figure. For acute triangles, the circumcenter O lies inside the triangle; for obtuse triangles, it lies outside the triangle; but for right triangles, it coincides with the midpoint of the hypotenuse. As Euclid proved in Propsition IV.3 of his Elements, the circumcenter can be found as the intersection of the three perpendicular bisectors of the sides of the triangle. These are the lines perpendicular to the sides of the triangle passing through the midpoints of the sides. They’re labelled A'OD', B'OE', and C'OF', and they're colored black, as are the lines connecting the midpoints of the sides, A'B', B'C', and C'A'. The altitudes and the orthocenter H There's yet another interesting “center” of the triangle, the orthocenter. An altitude of the triangle is a line drawn through a vertex perpendicular to the side of the triangle opposite the vertex. There are three altitudes: one is AD perpendicular to the side BC, the second is BE perpendicular to the side CA, and the third is CF perpendicular to the side AB. They're colored blue here. Note that when the triangle is obtuse, two of the altitudes lie outside the triangle, so they actually connect a vertex to the opposite side extended. In the case of a right triangle, two of the altitudes are actually sides of the triangle. The altitudes of a triangle meet at a point, called the orthocenter, denoted here by H. For an acute triangle, the orthocenter lies inside the triangle; for an obtuse triangle, it lies outside the triangle; and for a right triangle, it coincides with the vertex at the right angle. For fun, see what points and lines coincide for special triangles: isosceles triangles, right triangles, equilateral triangles, and right isosceles triangles. The Euler line OGH of the triangle These three “centers” of the triangle lie on one straight line, called the Euler line. (“Euler” is pronounced something like “Oiler” in English.) Leonhard Euler (1707–1783) was a very prolific mathematician known for his discoveries in many branches of mathematics ranging from number theory to analysis to geometry. It's surprising that these three points lie on a straight line. But you might see why from the picture. Focus your attention on the centroid G. For each point, like A on one side of it, there is another, like A' on the other side of it but half as far away. On one side is B, the other B'; on one side C , the other C'. In fact, this correspondence sends the whole triangle ABC to the smaller, but similar, triangle A'B'C', called the medial triangle. The sides of the medial triangle A'B'C' are parallel and half the length of the sides of the original triangle ABC. You can see from the figure that this correspondence sends the altitudes of the original triangle, which are AD, BE, and CF, to the altitudes of the medial triangle, which are A'D', B'E', and C'F'. Since the altitudes of the original triangle meet at the orthocenter H of the original triangle, the altitudes of the medial triangle will meet at its orthocenter H' which you can see in the figure is labelled O. Behold! This orthocenter O of the medial triangle is the circumcenter of the original triangle! Thus, this correspondence sends H to O, that is, H and O are on the opposite sides of the centroid G, and O is half as far away from G as H is. This figure utilizes the Geometry Applet. March, 1996. David E. Joyce Department of Mathematics and Computer Science Clark University Worcester, MA 01610 The address of this file is http://aleph0.clarku.edu/~djoyce/java/Geometry/eulerline.html
{"url":"http://aleph0.clarku.edu/~djoyce/java/Geometry/eulerline.html","timestamp":"2014-04-21T12:16:48Z","content_type":null,"content_length":"9840","record_id":"<urn:uuid:836f2891-6f16-419f-9731-cecc4c3aacad>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
It’s a NON ARISTOTELIAN WORLD: Tyranosopher: Finite Logic should be called Non Aristotelian Logic. As I will show. Simplicius Maximus, a contradictor: I have two objections to your finite math madness. First it makes no sense, and, secondly, even if it did, it would be pointless. Tyranosopher: I love contradictions. I squash them, then drink their juicy parts. OK, bring it on. Let’s start with the contradiction you found. A French contributor, Paul de Foucault, already made the objection that m/0 = infinity. Sounds good. However, it violates Peano Arithmetic (PA). PA is the arithmetic common to all metamathematics. But for mine, of course. (I violate much, with glee, including the pairing axiom!) In PA, a.0 = 0 is one of the two axioms defining multiplication. So we see that if x = m/0, we would have x.0 = m. In other words, m = 0. That’s not surprising: a number called “infinity” is not defined in PA. Simplicius Maximus: OK, fine. Here is my objection. It’s well known that the square root of two is irrational. Even Aristotle knew this, but you apparently don’t. And then you give the world lessons about everything. You are a charlatan. T: What do you mean by irrational? SM: Ah, you see? It means square root of two cannot be equal to m/n, where m and n are integers. Let’s abbreviate square root two by sqrt(2). Irrational means the expansion of sqrt(2) never ends. T: Why? SM: Here is the proof. Suppose sqrt (2) were rational. That means: m/n = sqrt (2). Let’s suppose the terms m and n are as small as possible. That’s crucial to get the contradiction. T: Fair enough. SM: Now, square both sides. T: That means, more exactly, that you contrive to multiply the left hand side of the equation by m/n and the right hand side by sqrt(2). SM: Happy that you can follow that trivial trick. That gives us the equation: mm/nn = 2. T: As sqrt (2) sqrt (2) = 2. Indeed. By the way, you made an unwarranted assumption, so I view your reasoning as already faulty, at this point. SM: Faulty? Are you going mad? T: I will dissect your naïve error later. But please finish, Mr. Aristotle. SM: Call me Aristotelian if you wish. Multiplying both sides of the equation by nn, we get: mm = 2 nn. That implies that m is even. Because if m were odd, m = 2u + 1, then mm = 4uu + 4u + 1 , the sum of an even number (4uu + 4u) plus 1… And that, the sum of an even number with one, is odd. Hence m = 2a. But then 2a2a = 2 nn, or: 2 aa = nn. Thus n is even (same reasoning as before: the square of an odd number cannot be even). So we see that both m and n are even, a contradiction, as we assumed m and n were the smallest integers with a ratio equal to sqrt (2). T: This proof is indeed alluded to in Aristotle, and was interpolated much later into Euclid’s elements. The official Greek mathematicians did not like algebra. SM: I see that, although you don’t know math, you know historiography. Tyranosopher: I do know math, I’m just more rigorous than you, august parrot. Simplicius Maximus: Me, a parrot? Me, and 25 centuries of elite mathematicians who are household names, dozens of Field Medalists are also of the avian persuasion? How can you be so vain and smug? Tyranosopher: Because I’m smarter. SM: Really? Smarter than Aristotle? T: That’s an easy one. People like Aristotle spent a lot of time, all too much time, with politics, not enough with thinking. OK, let’s go back to your very first naive mathematical manipulation. You took the square of both sides. SM: Of course I did. Tyranosopher: You can’t do that. SM: Of course I can. Tyranosopher: No. In FINITE math, a = b does not imply that aa = bb. SM: Why? T: Because aa could be meaningless. It could be too big to have meaning. It’s a added to itself a times. If, as we compute aa, we hit the greatest number, #, we must stay silent, as Wittgenstein would have said. In FINITE math, the infinite set of integers N does not exist. Only what can be finitely constructed exist. Because there is no way to construct the set N, as it would be infinite (if it existed; that’s a huge difference between what I propose, and what David Hilbert proposed). In my system, integers and rational numbers are constructed, according to the principles I exposed in META, layer by layer, like an onion. SM: Wait. There are other proofs of the irrationality of square root of two. T: Yes, but it’s always the same story: at some point, multiplication is involved, so my objection resurfaces. SM: OK, all right. Let me go philosophical. What’s the point of all this madness? Trying to look smarter because you are so vain, at the cost of looking mad? Do you realize that you are throwing out of the window much of modern mathematics? T: Calm down. Entire parts of math are left untouched, such as topology, category theory, etc. My goal is to refocus all of math according to physics, and deny any worth to the areas that rest on All too many mathematicians have engaged in a science as alluring as the counting of angels on a pinhead in the Middle-Ages. SM: Dedekind said: “God created the integers, and the rest was man’s creation.” T: Precisely, God does not exist, so nor does the infinite set of the integers, N. This will allow mathematicians to refocus on what they can do, and remember that there is a smallest scale, and it would, assuredly change the methods of proof, in many parts. SM: Such as? T: Take the Navier Stokes fluid equation: one has to realize that, ultimately, the math have got to get grainy. This would help physics too, including all computations having to do with infinities. SM: You are asking for a mad jump into lala land. T: We are already in lala land. Finding the correct definitions is even more important than finding the correct theorems (as the latter can’t exist without the former). The reigning axiomatic theory, ZFC (Zermelo Fraenkel Choice) requires an infinite number of axioms. What’s more reasonable? An infinite number of axioms, or my finite onion? The answer is obvious. It’s a NON ARISTOTELIAN WORLD. In my not so humble opinion, the consequences are far reaching. Patrice Ayme Tags: Finite Logic, Metamathematics, Non Aristotelian. Finite math, Peano Arithmetic, ZFC thadroberts Says: November 5, 2013 at 1:08 am | Reply Great job writing this up!! • Patrice Ayme Says: November 5, 2013 at 1:15 am | Reply Thanks Thad! And welcome to the comments! The NON A world is opening up… Paul Handover Says: November 5, 2013 at 5:29 am | Reply I clicked the ‘Like’ button not because I understood what I read, far from it, but because you have given me an incentive, once November is behind me, to crave your patience in explaining this. It seems like a profound thing to understand before my very finite life comes to an end! • Patrice Ayme Says: November 5, 2013 at 2:54 pm | Reply Thanks Paul. Several people told me directly it’s hard to use the “like” button, or to comment (wordpress registration can be tricky). Although I like basically all your posts, none of the “likes” has gone through, for weeks… (It takes long to “load”, weirdly; a bit like the New York Times I have subscribed to for decades, and which tells the public I am NOT “verified”) You made me think about how one could put Non Aritotelian Logic, let’s say, poetically. In a way, it’s the logic of limits. It’s much more realistic than what’s reigning now. Ecologically, it’s what we need. Good luck with the novel. • Patrice Ayme Says: November 6, 2013 at 4:21 am | Reply My patience will be at your disposal. I do things that going to NON A is a major mental change. In a way, it introduces the Quantum where it belongs, at the core of the mind.PA Dominique Deux Says: November 6, 2013 at 2:24 am | Reply I hope you won’t mind very much if I keep my pet infinite’s pelt to hang over my fireplace. I rather liked the creature. It never harmed anyone and was happy to provide an airy perspective to my clogged mind. • Patrice Ayme Says: November 6, 2013 at 4:09 am | Reply Dear Dominique! Glad to see you manifest again. I am delighted to see you disagree. Because you make an interesting point: you regret the infinite beast. I should rather say, the infinity beasts. Because there is a whole hierarchy of them. However, there is nothing to regret. You see, with this, well, master stroke, matters are getting enriched, not impoverished. On the surface, it looks like much of mathematics will implode. And, indeed, it will. However, on second thought, all i did was to make the logic of mathematics MORE COMPLEX. So getting rid of the infinities jungles, is actually opening new logical dimensions. It unclogs minds by gaining dimensions. “Never harmed anyone”? I am sure of the opposite. Many generations of mathematicians have struggled with problems related to ever more byzantine versions of infinities that have proven more or less intractable. Many traditional problems may fall easily from the new approach. Others will look insignificant. After all, insignificance is what happened to most of Euclidean geometry. The Onion apprach, what I call META (which is used for the construction of my version of the naturals and rationals), also solves all the logical paradoxes (as it blocks, per force, all self This has an effect even on some people’s theory of consciousness, as they use, rather naively, self reference as the description/root of consciousness. Paul Handover Says: November 6, 2013 at 5:09 am | Reply Must stop reading this post and the comments last thing before turning the bedside light out. Find myself thinking about what’s written for an infinitely long time before going to sleep. ;-) • Patrice Ayme Says: November 6, 2013 at 6:16 am | Reply That’s the whole problem with infinity: never ends, sucks down everything into the hole… the primordial cave? I will be coming on something that will touch on Sagittarius A* tomorrow, BTW, speaking of black holes…;-)! Dominique Deux Says: November 6, 2013 at 3:18 pm | Reply Dear Patrice, I did not say I disagreed. I am in no position to agree or disagree. My math level never went beyond that of a good French engineering school – that is, early nineteenth century. I enjoyed it a lot but was well aware that much wider venues were being explored. I would never presume to cast an opinion on such issues – “sutor, nec ultra crepidam”. Knowing what you don’t know is better that thinking you know all. My point, as you well perceived, was about the discomfort that may come from having the carpet of common assumptions whipped out from under mankind’s feet, as already happened several times. When it has to be done, so be it, but it carries a price. And on that price, I can comment, even if I have no way of knowing if it buys me a new Grail or a vial of snake oil. Man’s mind is unique among animals in its abilities, but also in its needs. As an example: man’s abitity to store and process knowledge is several orders of magnitude ahead of other primates, however evolved – and with it goes man’s addiction to cramming himself full of knowledge. Be it Homer’s works (compulsory knowledge for educated Ancient Greeks, from start to end) or the Bible or baseball scores, or the fifty nuances of green some Amazonian tribes can name, or all visible stars. An African traditional healer has to spend seven hard years memorizing before graduating and setting up shop – like a surgeon. Another need, I think, is that of an immediate, personal link to the perception of transcendence. Religion once fulfilled this need, and it lost much of its appeal when the infinite provided a cleaner, less gory conduit. The urge is strong; when physicians state the Universe is finite, they hasten to add that there may be an infinity of universes along – to the great relief of most, despite the lack of any tangible consequences. By and large, the human mind is claustrophobic and dislikes boundaries, even though some are agoraphobic and will crowd into sects to fulfill their own craving for narrow mental venues. Once again, I judge neither. My own contribution, therefore, limits itself to an intuition that the exploration of an infinite-less world (as different from a finite one) need not rule out the continued use of the infinite as a notion. You say that Euclidian geometry faded into irrelevance. There I heartily disagree. What it lost is its claim to exclusivity – which it never seriously held, since nobody ever claimed Euclid’s postulate to be an axiom, and it was only a matter of time before other venues would be explored by Riemann et al. Within its better-defined field of validity, it remains valid and relevant. Wouldn’t it be a pity if Pascal’s definition of the infinite (observe of “de-finition of the in-finite” is in itself a fascinating expression) was to fade into nothingness? “A sphere, the center of which is everywhere, and the circumference, nowhere”. Elegance is never irrelevant. PS Blame WordPress for my absence. It now seems to work OK. Crossing fingers!) • Patrice Ayme Says: November 6, 2013 at 6:17 pm | Reply Dear Dominique: A very well thought out and interesting comment. I will start at this point with technical answers, the more advanced stuff will come later, with more time attributed. The French “physicien” translate as “physicist”, not “physician”, the later meaning MD, Medical Doctor. The universe thing in present day mainstream physics is OBVIOUSLY COMPLETELY FALSE. It’s just a collective hallucination, a herd phenomenon. Caused by LACK of imagination considering the Quantum, the Universe, and their relationship. Among other things, the so called “multiverse” or “cosmological inflation” don’t even respect energy conservation. Actually they don’t respect UNIVERSE CONSERVATION. It’s a mistake worthy of Yes, I have had big problems with WordPress too. I actually lost my finished version of Obamascare, thanks to WP (the one I published was just from a few scarps put together). People have complained to me they could not comment, and I myself have been unable to “like” any post from anybody. OK, more later on more substantial stuff. Axiom of Choice: Crazy Math | Patrice Ayme's Thoughts Says: March 31, 2014 at 12:07 am | Reply […] is part of my general, Non-Aristotelian campaign against infinity in mathematics and beyond. The nature of mathematics, long pondered, is […] What do you think? Please join the debate! The simplest questions are often the deepest!
{"url":"http://patriceayme.wordpress.com/2013/11/05/non-aristotelian/","timestamp":"2014-04-16T15:58:52Z","content_type":null,"content_length":"78245","record_id":"<urn:uuid:40ec09ba-742c-48c2-b351-fcd10c17695f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
SE Circles - TI Chapter 10: SE Circles - TI Created by: CK-12 The activities below are intended to supplement our Geometry flexbooks. Chapter Outline Chapter Summary You can only attach files to None which belong to you If you would like to associate files with this None, please make a copy first.
{"url":"http://www.ck12.org/book/Texas-Instruments-Geometry-Student-Edition/r1/section/10.0/SE-Circles---TI-%253A%253Aof%253A%253A-Texas-Instruments-Geometry-Student-Edition/","timestamp":"2014-04-17T02:18:25Z","content_type":null,"content_length":"94800","record_id":"<urn:uuid:a63a2128-75f5-4c4b-88f1-57f60e659c55>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
e S An Eulerian surface hopping method for the Schrödinger equation with conical crossings Seminar Room 1, Newton Institute In a nucleonic propagation through conical crossings of electronic energy levels, the codimension two conical crossings are the simplest energy level crossings, which affect the Born-Oppenheimer approximation in the zeroth order term. The purpose of this paper is to develop the surface hopping method for the Schrödinger equation with conical crossings in the Eulerian formulation. The approach is based on the semiclassical approximation governed by the Liouville equations, which are valid away from the conical crossing manifold. At the crossing manifold, electrons hop to another energy level with the probability determined by the Landau-Zener formula. This hopping mechanics is formulated as an interface condition, which is then built into the numerical flux for solving the underlying Liouville equation for each energy level. While a Lagrangian particle method requires the increase in time of the particle numbers, or a large number of statistical samples in a Monte Carlo setting, the advantage of an Eulerian method is that it relies on fixed number of partial differential equations with a uniform in time computational accuracy. We prove the positivity and $l^ {1}$-stability and illustrate by several numerical examples the validity and accuracy of the proposed method. The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
{"url":"http://www.newton.ac.uk/programmes/KIT/seminars/2010121310001.html","timestamp":"2014-04-19T20:05:57Z","content_type":null,"content_length":"6796","record_id":"<urn:uuid:a7d9ea8b-50bc-4d9d-b96d-c10abc94fa47>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Blocks in the finite set A when acted on by G. January 31st 2011, 08:55 AM Blocks in the finite set A when acted on by G. Let G be a transitive permutation group on the finite set A. a 'block' is a non-empty subset B of A such that for all $\sigma \text \in G$ either $\displaystyle \sigma (B)=B$ or $\sigma(B) \cap B =\phi$. (here $\sigma(B)$ is the set $\left\{\sigma(b)|b \in B \right\}$). It can be easily shown that if B is a block and ${\sigma}_1 (B),\text{} {\sigma}_2 (B), \ldots, \text{} {\sigma}_n (B)$ are all distinct images of B under the elements of G, then these form a partition of A. Can the same be proved if G is not transitive on the finite set A. January 31st 2011, 12:17 PM Let G be a transitive permutation group on the finite set A. a 'block' is a non-empty subset B of A such that for all $\sigma \text \in G$ either $\displaystyle \sigma (B)=B$ or $\sigma(B) \cap B =\phi$. (here $\sigma(B)$ is the set $\left\{\sigma(b)|b \in B \right\}$). It can be easily shown that if B is a block and ${\sigma}_1 (B),\text{} {\sigma}_2 (B), \ldots, \text{} {\sigma}_n (B)$ are all distinct images of B under the elements of G, then these form a partition of A. Can the same be proved if G is not transitive on the finite set A. Your terminology is unfamiliar to me. Are you saying that $G$ acts on $A$ transitively? Is that what your first sentence means? February 1st 2011, 12:07 AM Let G be a transitive permutation group on the finite set A. a 'block' is a non-empty subset B of A such that for all $\sigma \text \in G$ either $\displaystyle \sigma (B)=B$ or $\sigma(B) \cap B =\phi$. (here $\sigma(B)$ is the set $\left\{\sigma(b)|b \in B \right\}$). It can be easily shown that if B is a block and ${\sigma}_1 (B),\text{} {\sigma}_2 (B), \ldots, \text{} {\sigma}_n (B)$ are all distinct images of B under the elements of G, then these form a partition of A. Can the same be proved if G is not transitive on the finite set A. Yes, if I understand your definition of a `block' correctly. I mean, let $A_0 = A\setminus Gx$, where $Gx$ is the orbit of $G$. Then clearly $\sigma(A_0)\cap A_0 = \emptyset$, because $A_0 \cap Gx=\emptyset$...does that make sense?...Basically, every element of G takes everything in A and sticks it somewhere within the orbit. So the orbit is partitioned nicely (because G acts transitively on the orbit), and we can just take `everything else' to be another partition... February 1st 2011, 05:29 AM February 1st 2011, 05:51 AM Yes, if I understand your definition of a `block' correctly. I mean, let $A_0 = A\setminus Gx$, where $Gx$ is the orbit of $G$. Then clearly $\sigma(A_0)\cap A_0 = \emptyset$, because $A_0 \cap Gx=\emptyset$...does that make sense?...Basically, every element of G takes everything in A and sticks it somewhere within the orbit. So the orbit is partitioned nicely (because G acts transitively on the orbit), and we can just take `everything else' to be another partition... Although in the question it was given that G acts transitively on a, i still am assuming that G does not act transitively on A. Then $A_0$ is non-empty. is this correct?? Then $\sigma (A_0) \cap A_0 eq \phi$. is this correct too?? If these two are correct then i will be able to understand your post. THANKS IN ADVANCE. February 1st 2011, 09:00 AM Although in the question it was given that G acts transitively on a, i still am assuming that G does not act transitively on A. Then $A_0$ is non-empty. is this correct?? Then $\sigma (A_0) \cap A_0 eq \phi$. is this correct too?? If these two are correct then i will be able to understand your post. THANKS IN ADVANCE. Yes, these are correct. Can you see why they are? February 1st 2011, 08:16 PM yes swlabr. i can see why they are correct. I infer that your solution uses the fact that G acts transitively on A. does this mean that the distinct images of the block B ${\sigma}_1 (B),\text{} {\sigma}_2 (B), \ldots, \text{} {\sigma}_n (B)$ under the elements of G do not necessarily form a partition of A if G does not act transitively on A. February 1st 2011, 11:56 PM yes swlabr. i can see why they are correct. I infer that your solution uses the fact that G acts transitively on A. does this mean that the distinct images of the block B ${\sigma}_1 (B),\text{} {\sigma}_2 (B), \ldots, \text{} {\sigma}_n (B)$ under the elements of G do not necessarily form a partition of A if G does not act transitively on A. No no, I use the fact that G acts transitively on $G_x = A\setminus A_0$. So your blocks are the partitions of G_x along with A_0. February 2nd 2011, 02:54 AM Yes, if I understand your definition of a `block' correctly. I mean, let $A_0 = A\setminus Gx$, where $Gx$ is the orbit of $G$. Then clearly $\sigma(A_0)\cap A_0 = \emptyset$, because $A_0 \cap Gx=\emptyset$...does that make sense?...Basically, every element of G takes everything in A and sticks it somewhere within the orbit. So the orbit is partitioned nicely (because G acts transitively on the orbit), and we can just take `everything else' to be another partition... I have quoted the solution you have given. you have defined $A_0 = A \setminus Gx$. I agree with the fact that $A_0 \cap Gx=\emptyset$, because this follows from the definition of $A_0$. But i don't agree with $\sigma(A_0) \cap A_0=\emptyset$. According to me this is only true if G acts transitively on A. i say this because if G does not act transitively on A then $A_0$ is non-empty( as in this case $Gx \subset A$), and there must be some element $a \in A_0$ such that $\sigma(a) otin Gx \text{ } \forall \text{ } \sigma \in G$. May be i am just being a 'block'head here but i am only a newbie at group theory so please help this poor guy.(Doh)(Happy) February 2nd 2011, 04:38 AM I have quoted the solution you have given. you have defined $A_0 = A \setminus Gx$. I agree with the fact that $A_0 \cap Gx=\emptyset$, because this follows from the definition of $A_0$. But i don't agree with $\sigma(A_0) \cap A_0=\emptyset$. According to me this is only true if G acts transitively on A. i say this because if G does not act transitively on A then $A_0$ is non-empty( as in this case $Gx \subset A$), and there must be some element $a \in A_0$ such that $\sigma(a) otin Gx \text{ } \forall \text{ } \sigma \in G$. May be i am just being a 'block'head here but i am only a newbie at group theory so please help this poor guy.(Doh)(Happy) What do you mean by $\sigma(a)$? Do you mean $\sigma(a)=\{a\cdot g: g \in G\}$. As in, $\sigma(a)$ is the orbit of $a$. This is, I believe, what is happening. Then by definition, $\sigma(a) \in Gx$ for all $a \in A$, and so $\sigma(a) ot\in A_0$ for all $a \in A$, as required... February 2nd 2011, 09:23 AM What do you mean by $\sigma(a)$? Do you mean $\sigma(a)=\{a\cdot g: g \in G\}$. As in, $\sigma(a)$ is the orbit of $a$. This is, I believe, what is happening. Then by definition, $\sigma(a) \in Gx$ for all $a \in A$, and so $\sigma(a) ot\in A_0$ for all $a \in A$, as required... no no.. when i write $\sigma(a)$ i mean $\sigma \cdot a$. the orbit of a will be represented by $Ga$. I wrote $\sigma \cdot a$ as $\sigma(a)$ because in the question i had mentioned that G is permutation group on the finite set A. now do you understand what i am trying to say?? for example let $S_4$ act on {1,2,3,4}. then for $\sigma=\left\{2,3,1,4\right\} \in S_4$ we write $\sigma(2)=\sigma \cdot 2=3$ February 2nd 2011, 11:55 PM no no.. when i write $\sigma(a)$ i mean $\sigma \cdot a$. the orbit of a will be represented by $Ga$. I wrote $\sigma \cdot a$ as $\sigma(a)$ because in the question i had mentioned that G is permutation group on the finite set A. now do you understand what i am trying to say?? for example let $S_4$ act on {1,2,3,4}. then for $\sigma=\left\{2,3,1,4\right\} \in S_4$ we write $\sigma(2)=\sigma \cdot 2=3$ Oh, I think I understand what you mean now. So just take your blocks to be the individual orbits. These partition your set in the desired fashion. (e.g. $S_4 = \langle (1\:2\:3\:4),(1\:2)\rangle$ acts on $\{1, 2, 3, 4, 5, 6\}$ and your blocks will be $\{1, 2, 3, 4\}, \{5\}, \{6\}$. February 3rd 2011, 12:39 AM Oh, I think I understand what you mean now. So just take your blocks to be the individual orbits. These partition your set in the desired fashion. (e.g. $S_4 = \langle (1\:2\:3\:4),(1\:2)\rangle$ acts on $\{1, 2, 3, 4, 5, 6\}$ and your blocks will be $\{1, 2, 3, 4\}, \{5\}, \{6\}$. $B=\{1,2,3,4\}$ is a block--agreed. but $\sigma_1 \cdot \{1,2,3,4\}, \sigma_2 \cdot \{1,2,3,4\},\ldots,\sigma_{24} \cdot \{1,2,3,4\}$ do not form a partition of $\{1,2,3,4,5,6\}$. here $\sigma_i, 1 \leq i \leq 24$ are elements of $S_4$ More over the trivial blocks in this case are {1},{2},{3},{4},{5},{6},{1,2,3,4,5,6}.
{"url":"http://mathhelpforum.com/advanced-algebra/169810-blocks-finite-set-when-acted-g-print.html","timestamp":"2014-04-16T20:56:55Z","content_type":null,"content_length":"39037","record_id":"<urn:uuid:27886405-1178-4eb5-bb97-c4058b697e84>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Goodies is a free math help portal for students, teachers, and parents. Home | About Us | Contact Us | Privacy | Advertise | Share | Interactive Math Goodies Software | More Free Math Lessons Worksheets Generator Games Homework Articles Forums Glossary Puzzles Calculators Standards Practice Exercises: Unit 11 > Lesson 8 of 10 Data and Graphs Directions: For each exercise below, click once in the ANSWER BOX, type in your answer and then click ENTER. Your answer should be given as a whole number or as a decimal. If your answer is a percent, do NOT enter the percent symbol. Just enter the number. After you click ENTER, a message will appear in the RESULTS BOX to indicate whether your answer is correct or incorrect. To start over, click CLEAR. Refer to the line graph below for Exercises 1 to 3. 1. What is the largest number on the vertical scale? 2. What pulse rate was recorded at 2 minutes? 3. A pulse rate of 121 beats per minute was recorded at how many minutes? Refer to the bar graph below for Exercises 4 to 6. 4. How many items are being compared in the graph? 5. What was the average height in cm for Granny's Bloomers? 6. What was the average height in cm for No Fertilizer? Refer to the circle graph below for Exercises 7 to 10. 7. How many sectors are in this circle graph? 8. What percentage of people in Shrub Oak preferred chocolate ice cream? 9. What percentage of people in Shrub Oak preferred butter pecan ice cream? 10. If a total of 50 people were surveyed, then how many people preferred vanilla ice cream? This lesson is by Gisele Glosser. You can find me on Google. ┃ Lessons on Data and Graphs ┃ ┃ Data and Line Graphs ┃ ┃ Constructing Line Graphs ┃ ┃ Data and Bar Graphs ┃ ┃ Constructing Bar Graphs ┃ ┃ Data and Circle Graphs ┃ ┃ Constructing Circle Graphs ┃ ┃ Comparing Graphs ┃ ┃ Practice Exercises ┃ ┃ Challenge Exercises ┃ ┃ Solutions ┃ ┃ Related Activities ┃ ┃ Interactive Puzzles ┃ ┃ Printable Worksheets ┃ ┃ Math and Sports WebQuests ┃ We also recommend this Online Math Help Video. Lessons Worksheets WebQuests Games Homework Articles Forums Glossary Puzzles Newsletter Standards Buy the Goodies Now! Home | About Us | Contact Us | Privacy | Advertise | Share | Interactive Math Goodies Software | More Free Math Lessons Worksheets Generator Games Homework Articles Forums Glossary Puzzles Calculators Standards Practice Exercises: Unit 11 > Lesson 8 of 10 Data and Graphs Directions: For each exercise below, click once in the ANSWER BOX, type in your answer and then click ENTER. Your answer should be given as a whole number or as a decimal. If your answer is a percent, do NOT enter the percent symbol. Just enter the number. After you click ENTER, a message will appear in the RESULTS BOX to indicate whether your answer is correct or incorrect. To start over, click CLEAR. 1. What is the largest number on the vertical scale? 3. A pulse rate of 121 beats per minute was recorded at how many minutes? 4. How many items are being compared in the graph? 5. What was the average height in cm for Granny's Bloomers? 6. What was the average height in cm for No Fertilizer? 8. What percentage of people in Shrub Oak preferred chocolate ice cream? 9. What percentage of people in Shrub Oak preferred butter pecan ice cream? 10. If a total of 50 people were surveyed, then how many people preferred vanilla ice cream? This lesson is by Gisele Glosser. You can find me on Google. ┃ Lessons on Data and Graphs ┃ ┃ Data and Line Graphs ┃ ┃ Constructing Line Graphs ┃ ┃ Data and Bar Graphs ┃ ┃ Constructing Bar Graphs ┃ ┃ Data and Circle Graphs ┃ ┃ Constructing Circle Graphs ┃ ┃ Comparing Graphs ┃ ┃ Practice Exercises ┃ ┃ Challenge Exercises ┃ ┃ Solutions ┃ ┃ Related Activities ┃ ┃ Interactive Puzzles ┃ ┃ Printable Worksheets ┃ ┃ Math and Sports WebQuests ┃ Lessons Worksheets WebQuests Games Homework Articles Forums Glossary Puzzles Newsletter Standards Buy the Goodies Now!
{"url":"http://mathgoodies.com/lessons/graphs/practice_unit11.html","timestamp":"2014-04-17T10:03:27Z","content_type":null,"content_length":"37216","record_id":"<urn:uuid:4e58d312-4ccc-42b8-b273-8dcbc8a8c363>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Line Graph line graph Definition of Line Graph • Line graph is a graph that uses line segments to connect data points and shows changes in data over time. More about Line Graph Examples of Line Graph Solved Example on Line Graph Irena recorded her test scores in a line graph as shown. What is Irena's score on the Math test? A. 50 B. 80 C. 60 D. 40 Correct Answer: C Step 1: The height of each point in the line graph represents Irena's score on the particular test. Step 2: Note the height of the point representing Irena's score on the Math test. Step 3: The height of the point is 60. Step 4: So, Irena scored 60 on the Math test. Related Terms for Line Graph
{"url":"http://www.icoachmath.com/math_dictionary/Line_Graph.html","timestamp":"2014-04-19T19:53:39Z","content_type":null,"content_length":"12387","record_id":"<urn:uuid:863f7efe-ef28-441f-b03a-4f7c98448d3f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
The family of short-run cost curves consisting of average total cost, average variable cost, and marginal cost, all of which have U-shapes. Each is U-shaped because it begins with relatively high but falling cost for small quantities of output, reaches a minimum value, then has rising cost at large quantities of output. Although the average fixed cost curve is not U-shaped, it is occasionally included with the other three just for sake of completeness. The U-shapes of the average total cost, average variable cost, and marginal cost curves are directly or indirectly the result of increasing marginal returns for small quantities of output (production Stage I) followed by decreasing marginal returns for larger quantities of output (production Stage II). The decreasing marginal returns in Stage II result from the law of diminishing marginal returns The U-shaped cost curves form the foundation for the analysis of short-run, profit-maximizing production by a firm. These three curves can provide all of the information needed about the cost side of a firm's operation. Bring on the Curves The diagram to the right displays the three U-shaped cost curves--average total cost curve (ATC), average variable cost curve (AVC), and marginal cost curve (MC)--for the production of Wacky Willy Stuffed Amigos (those cute and cuddly snakes, armadillos, and turtles). All three curves presented in this diagram are U-shaped. In particular, the production of Wacky Willy Stuffed Amigos, like other goods, is guided by increasing marginal returns for relatively small output quantities, then decreasing marginal returns for larger quantities. Consider a few reference points: • The marginal cost curve reaches its minimum value at 4 Stuffed Amigos. • The average variable cost curve reaches its minimum at 6 Stuffed Amigos. • The average total cost curve reaches its minimum at 6.5 Stuffed Amigos. The marginal cost curve for Stuffed Amigos production is the only one of these three curves that is DIRECTLY affected by the law of diminishing marginal returns. Up to a production of 4 Stuffed Amigos, increasing marginal returns is in effect. From the 5th Stuffed Amigo on, decreasing marginal returns (and the law of diminishing marginal returns) takes over. The U-shaped pattern for the marginal cost curve that results from increasing and decreasing marginal returns is then indirectly responsible for creating the U-shape of the average variable cost and average total cost curves. The Average-Marginal Relation The average total cost, average variable cost, and marginal cost curves depict the basic mathematical relation that exists between any average and the corresponding marginal. • Average Variable Cost: First, note the relation between the average variable cost curve and the marginal cost curve. The marginal cost curve intersects the average variable cost curve at its minimum value. Moreover, when average variable cost is declining (the average variable cost curve is negatively sloped), marginal cost is less than average variable cost. And when average variable cost is rising (the average variable cost curve is positively sloped), marginal cost is greater than average variable cost. • Average Total Cost: Second, the average-marginal relation is also seen with the average total cost curve. The marginal cost curve intersects the average total cost curve at its minimum value, as well. When average total cost is declining (the average total cost curve is negatively sloped), marginal cost is less than average total cost. When average total cost is rising (the average total cost curve is positively sloped), marginal cost is greater than average total cost. Note that the minimum values of the average total cost curve and the average variable cost curve occur at different quantities. This results because: (1) marginal cost intersects each average curve at its minimum value, (2) the marginal cost curve has a positive slope, and (3) there is a gap between the two average curves, which is average fixed cost. As such, the marginal cost intersects the minimum of the average variable cost curve at 6 Stuffed Amigos, then rises a bit before intersecting the minimum of the average total cost curve at 6.5 Stuffed Amigos. What About Average Fixed Cost? Although the average fixed cost curve is not displayed in this exhibit, average fixed cost can be derived from the average total cost and the average variable cost curves. First, note that the distance separating the average total cost curve and the average variable cost curve is relatively wide for small quantities of output, but narrows with greater production. The reason for the narrowing gap is that the difference between the two curves is average fixed cost. Because average fixed cost declines with greater production, so too does the gap between these As such, average fixed cost can be derived from this diagram by calculating the vertical distance between the average total cost and the average variable cost curves. While an average fixed cost curve is sometimes included in a diagram such as this one, it is not really needed. So long the average total cost and the average variable cost curves are available, average fixed cost can be And What About the Totals? All total cost values--total cost, total variable cost, and total fixed cost--can also be derived from this diagram. If the output quantity, average total cost, and average fixed cost, are known, then the total cost measures can be derived. Total cost is quantity times average total cost. Total variable cost is quantity times average variable cost. And total fixed cost is quantity times average fixed cost. As such, this diagram of the three U-shaped cost curves provides all of the information available about the cost incurred by a firm for short-run production. Recommended Citation: U-SHAPED COST CURVES, AmosWEB Encyclonomic WEB*pedia, http://www.AmosWEB.com, AmosWEB LLC, 2000-2014. [Accessed: April 16, 2014]. Check Out These Related Terms... | | | | | | Or For A Little Background... | | | | | | | | | | And For Further Study... | | | | | | | | | | | | | | | | | | Search Again? Back to the WEB*pedia
{"url":"http://www.amosweb.com/cgi-bin/awb_nav.pl?s=wpd&c=dsp&k=U-shaped+cost+curves","timestamp":"2014-04-16T22:26:42Z","content_type":null,"content_length":"44288","record_id":"<urn:uuid:005731c2-9e57-47cb-8d0e-20835fdf331b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Show that all non integral solutions lie on exactly two lines May 20th 2009, 09:08 PM #1 Show that all non integral solutions lie on exactly two lines Consider the equation $\lfloor x \rfloor \lfloor y \rfloor = x + y$. Show that all non-integral solutions of this equation lie on exactly two lines. The left hand side is an integer whether x and y are or not. If x and y are not integer, then the "fraction parts" must cancel. Either x= a+ r, y= b- r or x= a- r, y= b+ r, where a and b are integers, 0< r< 1. Those give the two lines. (And, of course, ab= a+b.) $x+y=0$ and $x+y=6$ [x][y]=[x]+[y]+{x}+{y} ({x} and {y} denote the fractional parts of x and y respectively) 0 $\leq${x}+{y}<2 0 $\leq$([x][y]-[x]-[y])<2 1 $\leq$(1+[x][y]-[x]-[y])<3 1 $\leq$([x]-1)([y]-1)<3 Case 1 [x]-1=1 and [y]-1=1 {x}={y}=0,but then $x$ and $y$ will be integers Case 2 $<br /> [x]-1=-1 and [y]-1=-2<br />$ $[x]=0 and [y]=-1$ {x}+{y}=0-0-(-1)=1 and thus $x+y=0$ Case 3 $[x]-1=-1 and [y]-1=-1$ $[x]=0 and [y]=0$ {x}={y}=0,but then $x$ and $y$ are integers Case 4 $[x]-1=1 and [y]-1=2$ $[x]=2 and [y]=3$ Therefore, $x+y=6$ Last edited by pankaj; May 21st 2009 at 06:35 PM. May 21st 2009, 04:17 AM #2 MHF Contributor Apr 2005 May 21st 2009, 05:47 AM #3 May 21st 2009, 05:52 AM #4 May 21st 2009, 06:01 AM #5 May 21st 2009, 11:31 AM #6
{"url":"http://mathhelpforum.com/calculus/89883-show-all-non-integral-solutions-lie-exactly-two-lines.html","timestamp":"2014-04-19T20:00:06Z","content_type":null,"content_length":"53224","record_id":"<urn:uuid:7f34be9c-8025-4219-83fe-f5e7ac6db46c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
A reference for geometric class field theory? up vote 13 down vote favorite The classic reference of this topic is Serre's Algebraic Groups and Class Fields. However, many parts of this book use Weil's language, which I find quite hard to follow. Is there another reference to the topic, using a more modern language (schemes etc.)? add comment 3 Answers active oldest votes Have you looked at up vote 15 down vote accepted There are many other good references, but hope this can help. I just saw Elencwajg's reply; sorry for the duplicates in my reply. @Elenvwajg: glad to find myself often in your good company! – SGP Aug 17 '11 at 15:08 @SGP: thanks, likewise! – Georges Elencwajg Aug 17 '11 at 15:30 Many many thanks for the very good recommendations. – QcH Aug 17 '11 at 23:35 add comment 1) Our (slightly pseudonymous!) friend, Brian Conrad, has written this beautiful introduction to geometric class field theory in his characteristically lucid style. 2) Another friend, Péter Tóth, has just written a Master Thesis Geometric Abelian Class Field Theory (click on [full text]) which seems to be what you are looking for: it is geometric and contains all necessary prerequisites . up vote 16 down vote And the author writes in his abstract that he wants "...to remedy the unfortunate situation that the literature on this topic is very deficient, partial and sketchy written..." 3) David Ben-Zvi, a well-known specialist and another friend of ours, gave talks on Geometric Langlands at MSRI in 2002, and part I is on geometric class field theory. Here is a I am very happy and proud that all the specialists mentioned above are members of and active contributors to our site. 1 These are very valuable. Thanks a lot! – QcH Aug 17 '11 at 23:34 add comment I have seen Section e) of the letter from Deligne to Serre available at http://www.math.uni-bonn.de/people/richarz/DeligneAnSerreFeb74.pdf mentioned as a reference. I have not read it up vote 3 down myself and it is hand written (in French), but it might be what you are looking for. 1 Thanks for the recommendation. Unfortunately, I don't know any French, which makes reading handwriting pretty much impossible. – QcH Aug 17 '11 at 14:13 add comment Not the answer you're looking for? Browse other questions tagged reference-request class-field-theory ag.algebraic-geometry algebraic-groups textbook-recommendation or ask your own question.
{"url":"http://mathoverflow.net/questions/73054/a-reference-for-geometric-class-field-theory","timestamp":"2014-04-18T13:40:12Z","content_type":null,"content_length":"67143","record_id":"<urn:uuid:c4af4001-4a2d-49b8-89a5-640af6148223>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Mysteries of the Egyptian Pyramids The infinite, uniform sea of sand, infrequent dried bushes of plants, hardly noticeable tracks from an elapsed camel are swept with a wind. The incandescent sun of wasteland ... And it seems by dull, as if is covered with a fine sand. And suddenly, as if a mirage, before the amazed look there arise pyramids (Fig.1), fancy rock figures directed toward the Sun. By their vast sizes, perfection of the geometric form they strike our imagination. According to many descriptions, these gigantic monoliths had earlier other view than presently. They shined on the Sun by a white glaze of the polished calcareous tables on the background of many-pillar adjacent temples. Near to pharaoh's pyramids there were the pyramids of the wives and members of pharaoh's family. Figure 1. Complex of pyramids in Gisa. Pharaoh's authority in the Ancient Egypt was huge, the divine's honors were given to him, the pharaoh is called the "Great God". The God-Pharaoh was the Promoter of country, the Judge of people fates. The cult of the died pharaoh gained a huge importance in the Egyptian religion. The gigantic pyramids are constructed for preservation of the pharaoh body and his spirit and for extolling his authority. And not without reason these works of human hands fall into one of seven miracles of the World. The assigning of pyramids was multifunction. They served not only burial vault of pharaohs, but also were attributes of majesty, power and riches of country, monuments of culture, storehouses of the country history and items of information on life of the pharaoh and people. It is clear, that the pyramids had steep "scientific contents " embodied in their forms, sizes and orientation on terrain. Each part of a pyramid, each element of the form was selected carefully and should demonstrate a high level of knowledge of the creators of pyramids. They were constructed on millennia, "for all time". And not without reason the Arabian proverb claims: "All in World frights of time. The time frights of pyramids". Among the gigantic Egyptian pyramids the Great Pyramid of the pharaoh Cheops is of special interest. Before to begin the analysis of the form and sizes of Cheops pyramid it is necessary to remind the Egyptian measure system. The Ancient Egyptians used three measure units: "elbow" (466 mm) equaling to 7 "palms" (66,5 mm), which, in turn, was equal to 4 "fingers" (16,6 mm). Let's make the analysis of the sizes of Cheops pyramid (Fig.2), following to reason given in the remarkable book of the Ukrainian scientist Nickolai Vasutinski "The Golden Proportion" (1990). Figure 2. Geometric model of Cheops pyramid. The majority of the researchers believe that the length of the side of the pyramid basis for example, GF is equal to L = 233,16 m. This value corresponds almost precisely to 500 "elbows". The full fit to 500 "elbows" will be, if the length of "elbow" to consider as equal to 0,4663 m. The altitude of the pyramid (H) is estimated by the researchers variously from 146,6 m up to 148,2 m. And depending on an adopted pyramid altitude all ratios of its geometric elements change considerably. In what is the cause of distinctions in an estimation of the pyramid altitude? Strictly speaking, Cheops pyramid is truncated. Its topic platform today has the size approximately 10 ´ 10 m, but one century back it was equal 6 ´ 6 m. Apparently, that the top of the pyramid was dismantled and it does not fit to the initial pyramid. Estimating the pyramid altitude, it is necessary to take into consideration such physical factor, as "shrinkage" of construction. For a long time under effect of enormous pressure (reaching 500 tons on 1 ì^2 of a undersurface) the pyramid altitude decreased in comparison to its initial altitude. What was the initial altitude of the pyramid? This altitude can be reconstructed if to find the main "geometrical idea" of the pyramid. In 1837 the English colonel G. Vaise measured the inclination angle of the pyramid faces: it appeared equal a = 51°51'. The majority of the researchers recognizes this value and today. The indicated value of the inclination angle corresponds to the tangent equal to 1,27306. This value corresponds to the ratio of the pyramid altitude ÀÑ to the half of its basis CB (Fig.2), that is, AC / CB = H / (L / 2) = 2H / L. And here the researchers expected a large surprise! If we take the square root of the "golden" proportion a = 1,27306, we can see that these values are very close among themselves. If to take the angle a = 51°50', that is, to decrease it by one arc minute the value of tga will become equal to 1,272, that is, will be equal to the value of a =51°50'. These measurements resulted to the following rather interesting hypothesis: the ratio AC / CB = If to designate now the lengths of the triangle ABC sides through x, y, z, and also to take into consideration that the ratio y / x = z can be computed as the following: If to take x = 1, y = Figure 3. "Golden" right triangle. The right triangle, in which the sides are in the ratio t : Then if to accept for the basis the hypothesis, that the "golden" right triangle is the main "geometrical idea" of Cheops pyramid, from here it is possible to compute the "design" altitude of the Cheops pyramid. It is equal: H = (L/2) ´ Let's deduce now some other relations for the Chops pyramid resulting from the "golden" hypothesis. In particular let's find the ratio of the external area of the pyramid to the area of its basis. For this purpose we take the length of the leg CB for the unit, that is: CB = 1. But then the length of the side of the pyramid basis GF = 2 and the area of the basis EFGH will be equal to S[EFGH] = Let's calculate now the area of the lateral face of Cheops pyramid. As the altitude of AB of the triangle AEF is equal to t, the area of each lateral face will be equal to S[D] = t. Then the common area of all four lateral faces of the pyramid will be equal to 4t, and the ratio of the common external area of the pyramid to the area of its basis will be equal to the "golden" proportion! This also is the main geometrical secret of Cheops pyramid! The analysis of other Egyptian pyramids demonstrates, that the Egyptians were always aimed to embody in the pyramids some relevant mathematical knowledge. In this respect the Chefre pyramid is rather interesting. The measurements of the pyramid shown that the inclination angle of the lateral faces is equal to 53°12', that corresponds to the leg ratio of the right triangle: 4:3. Such leg ratio corresponds to the well-known right triangle with the side ratio: 3:4:5; this one is called "perfect", "sacred" or "Egyptian" triangle. According to historians testimony the "Egyptian" triangle had a magic sense. Plutarch wrote, that the Egyptians compared the nature of the Universe to the "sacred" triangle; they symbolically assimilated the vertical leg to the husband, the basis to the wife, and the hypotenuse to the child who is born from both. According to the Pythagorean theorem for the triangle 3:4:5 we have: 3^2 + 4^2 = 5^2, Possibly just this theorem the Egyptian priests wanted to perpetuate, carrying up the pyramid based on the triangle 3:4:5? It is difficult to find a more successful example for demonstration of the Pythagorean theorem, which was well known for the Egyptians long before its discovering by Pythagor. Thus, the ingenious designers of the Egyptian pyramids were aimed to strike of their far offsets by depth of their mathematical knowledge, and they have reached this by selecting of the "golden" right triangle as the main "geometrical idea" of Cheops pyramid and of the "sacred" or "Egyptian" triangle as the main "geometrical idea" of Chefre pyramid.
{"url":"http://www.goldenmuseum.com/0302Pyramids_engl.html","timestamp":"2014-04-17T04:02:43Z","content_type":null,"content_length":"11469","record_id":"<urn:uuid:84b0269c-226c-457c-9b26-5865bf438930>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
Rounding Algorithms 1475209 story Posted by from the now-that's-just-wierd dept. dtmos writes "Clive Maxfield has an interesting article up on PL DesignLine cataloging most (all?) of the known rounding algorithms used in computer math. As he states, "...the mind soon boggles at the variety and intricacies of the rounding algorithms that may be used for different applications ... round-up, round-down, round-toward-nearest, arithmetic rounding, round-half-up, round-half-down, round-half-even, round-half-odd, round-toward-zero, round-away-from-zero, round-ceiling, round-floor, truncation (chopping), round-alternate, and round-random (stochastic rounding), to name but a few." It's a good read, especially if you *think* you know what your programs are doing." This discussion has been archived. No new comments can be posted. Rounding Algorithms Comments Filter: • Round this! (Score:5, Funny) by Anonymous Coward on Thursday January 05, 2006 @07:12PM (#14405070) 1.44th post! □ Re:Round this! (Score:3, Funny) by baadger (764884) shouldn't that be '1.44st post'...the mind is torn between math and grammar. □ Re:Round this! (Score:3, Interesting) by igny (716218) I and my friend once talked about how to own half of a penny. We came to a solution to use randomness, you flip a penny, and depending on the outcome my friend has the penny or I have it. The problem was that uncertainty is removed after the flip, and I either have it or not, and it is not quite the same as owning 1/2 penny. So for a while we flipped the penny every time we discussed the penny to keep the uncertainty. • Most important... (Score:5, Funny) by T-Ranger (10520) <jeffw AT chebucto DOT ns DOT ca> on Thursday January 05, 2006 @07:12PM (#14405072) Homepage Round down and put the extra aside. Say, in your own account. Like the have-a-penny-need-a-penny jar at the local Gulp-n-Blow. □ by Reziac (43301) * Meant as a joke, but... I vaguely remember being taught a system in grade school, where cumulative roundoff errors were reduced somewhat by combining the total number of roundoffs, and rounding the main product up or down one or more units depending on whether you'd rounded up or down more in the sub-units. I don't recall the exact system but it does work as a method in everyday life. ☆ Re:Most important... (Score:5, Informative) by slothman32 (629113) <.pjackso5. .at. .rochester.rr.com.> on Thursday January 05, 2006 @07:27PM (#14405192) Homepage Journal There's some straight line algorithm that uses a similar method. It keeps adding the slope value for every x increment and when it overloads it also makes the y position go up one. Or something like that. Bresenham's I believe. To get on topic I would use the usual "(x).5 to (x+1).499~9 goes to (x+1)" way. For negative, just ignore the sign when doing it, e.g. -1.6 -> -2 ☆ Re:Most important... (Score:2, Informative) by poopdeville (841677) This is what significant digits are for, though significant digit arithmetic uses probability to lower rounding error instead of forcing the user to do it himself. ☆ Re:Most important... (Score:4, Informative) by Mr Z (6791) on Friday January 06, 2006 @02:17AM (#14407288) Homepage Journal That's the basis behind delta-sigma modulation and Floyd-Steinberg dithering. You carry forward the cumulative error from previous quantization, adding it to the current term. Then you quantize as desired. Over multiple samples, the error gets spread out, such that the local average is very close to the original signal. □ Re:Most important... (Score:4, Interesting) by PapayaSF (721268) on Thursday January 05, 2006 @07:32PM (#14405235) Journal Round down and put the extra aside. Say, in your own account. It's a classic computer crime/urban legend [snopes.com], and has been used in various films. ☆ Re:Most important... (Score:5, Interesting) by CRCulver (715279) <crculver@christopherculver.com> on Thursday January 05, 2006 @07:43PM (#14405318) Homepage Is it really an "urban legend"? As much of a trope it has become in movies with technological whizzes, Snopes says it was reported in Whiteside's Computer Capers: Tales of Electronic Thievery, Embezzlement, and Fraud [amazon.com] , which should have the necessary citation on the case. □ by Khashishi (775369) yeah but you might get caught and end up in pound-you-in-the-ass penitentiary □ Re:Most important... (Score:2, Funny) by ClamIAm (926466) There has to be a joke here, I know there does. • Think you know.... (Score:5, Insightful) by thewiz (24994) * on Thursday January 05, 2006 @07:12PM (#14405073) especially if you *think* you know what your programs are doing. Pfft... I've been writing programs and working with computers for over 25 years. I *STILL* haven't figured out what they are doing. Come to think of it, I could say the samething about my wife. □ Come to think of it, I could say the samething about my wife. If your wife is getting rounded, I'd say she's either pregnant or bulimic. Either way, you have a problem. □ by gardyloo (512791) □ Slide Rules and precision (Score:5, Interesting) by goombah99 (560566) on Thursday January 05, 2006 @08:03PM (#14405448) These days kids are not taught to round. Instead you just do the compuations at absurdly large precision then on the last step round off. This way you don't accumulate systematic round-off error. It's good as long as you have the luxury of doing that. It used to be that C-programmers had a cavalier attitude of always writing the double-precision libraries first. Which is why Scientific programmers were initially slow to migrate from fortran. These days it's not so true any more. First there's lots of good scientific C programmers now so the problem of parcimonius computation is well appreciated. Moreover the creation of math co-processing, vector calcualtions, and math co-processors often makes it counter-intuitive what to do. For example it's quite likely that brute forcing a stiff calculation is double precision using a numeric co-processor will beat doing it in single precision with a few extra steps added to keep the precision in range. So being clever is not always helpful. people used to create math libraries that even cheated on using the full precision of the avialable floating point word size (sub-single precision accuracy) since it was fast (e.g. the radius libs for macintoshes) Pipelining adds more confusion, since the processor can be doing other stuff during those wait states for the higher precision. Vector code reverse this: if you are clever maybe shaving precision willlet you double the number of simultanoeus calcualtions. In any case, what was once intuitive: minimal precision and clever rounding to avoid systematic errors means faster computation is no longer true. Of course in the old days people learned to round early in life: no one wanted to use a 5 digit precision slide rule if you could use a 2 digit precision slide rule. ☆ Re:Slide Rules and precision (Score:3, Interesting) by Detritus (11846) From what I recall about C on the PDP-11, single precision didn't buy you much extra speed on the FP-11 (hardware floating point unit), so why not use double precision for all floating point operations? See the PDP-11 Handbook (1979) [bitsavers.org] for instruction timings. □ by birge (866103) Pfft... I've been writing programs and working with computers for over 25 years. I *STILL* haven't figured out what they are doing. Come to think of it, I could say the samething about my If you've been trying to program your wife for 25 years, I think I may see your problem... • my favorite rounding algorithm (Score:5, Funny) by User 956 (568564) on Thursday January 05, 2006 @07:12PM (#14405074) Homepage my favorite rounding algorithm is pi(r)^2. • My personal rounding program (Score:5, Funny) by charlesbakerharris (623282) on Thursday January 05, 2006 @07:16PM (#14405105) For rounding, I use the following: □ Mountain Dew □ Couch □ Lack of willpower □ Utter disdain for annual resolutions I made less than a week ago □ DiGiorno's pizzas. Seems to work. □ Re:My personal rounding program (Score:4, Funny) by Spy der Mann (805235) <spydermann@slashdot.gmail@com> on Thursday January 05, 2006 @07:31PM (#14405225) Homepage Journal * DiGiorno's pizzas. Seems to work. So that's rounding towards positive infinity, right? :P • Ugh (Score:3, Funny) by zzen (190880) on Thursday January 05, 2006 @07:19PM (#14405135) I don't think I know what my programs are doing all the time... I just hope they play nice when I'm not watching. :) • I read the first half of the article... (Score:5, Interesting) by under_score (65824) <mishkin-slashdot@berteig . c om> on Thursday January 05, 2006 @07:22PM (#14405161) Homepage ...where it discusses the various rounding methods. I had actually thought of/used most of them. The one that was new to me was the round-half-even (banker's rounding). Very cool idea, and I had no idea it was commonly used. This is a great reference article! If you are programmer working with numerical algorithms, keep this article handy. □ Re:I read the first half of the article... (Score:2, Informative) by poopdeville (841677) Banker's rounding is just a naive implementation of significant digit arithmetic. http://en.wikipedia.org/wiki/Significance_arithmet ic [wikipedia.org] □ Re:I read the first half of the article... (Score:3, Informative) by pjt33 (739471) My grandad tells me that he was taught to use round-half-even in the Royal Engineers back in WWII. One I've used which isn't in the list is stochastic rounding for all values (rather than just n+0.5) such that n+f (0 = f 1) rounds to n with probability (1-f) and (n+1) with probability f. □ Re:I read the first half of the article... (Score:3, Informative) by bill_mcgonigle (4333) * This is a great reference article! If you are programmer working with numerical algorithms, keep this article handy. This one too: What Every Computer Scientist Should Know About Floating-Point Arithmetic [sun.com]. • by Irvu (248207) on Thursday January 05, 2006 @07:23PM (#14405167) Rounding to the nearest square? □ by sld126 (667783) Or the nearest prime... • by RedLaggedTeut (216304) Computer games should round randomly. This means that every little bonus that the player gets might have an impact on the integer result. Example: phaos. The "other" neat thing is that the expected value of floor(x+rand()) == x with 0.0=0.0 • Office Space (Score:5, Funny) by TubeSteak (669689) on Thursday January 05, 2006 @07:30PM (#14405218) Journal So when the subroutine compounds the interest, right, it uses all these extra decimals places that just get rounded off. So we just simplify the whole thing and we just round it down and drop the remainder into an account that we own. So you're stealing. Ah, no. No. You don't understand. It's, uh, very complicated. It's, uh, it's, it's aggregate so I'm talking about fractions of a cent that, uh, over time, they add up to a lot. □ by Claire-plus-plus (786407) the same trick was done in Superman 2 or course. By that trained-on-the-job programmer played by Richard Prior (I think). I am still amazed that a programmer trained on the job would think of • by FirstTimeCaller (521493) I mostly program using fixed integer arithmetic, so I don't do much plain rounding. But I do frequently need to do division with rounding up to nearest integer value. For that I use: #define DIV_ROUNDUP(n, d) ((n)+((d)-1))/(d)) • floating point (Score:3, Interesting) by BigMFC (592465) on Thursday January 05, 2006 @07:38PM (#14405283) I'm currently working with floating point accumulation and I've come to realize that rounding is unbelievably important when it comes to floating point. For long accumulations or a series of operations you need round to nearest functionality, but even this can be insufficient depending on the nature of the numbers your adding. If truncation is used however, although the easiest to implement in hardware, the error can add up so fast that it'll stun you. It's good to see a fairly comprehensive summary of techniques out there that doesn't require wading through the □ by Doctor Memory (6336) ...for all things floating-point: What Every Computer Sceintist Should Know About Floating-Point Arithmetic [sun.com]. I keep a copy of this handy whenever I have to play with floats and doubles (except for the odd game of Water Tennis, of course). ☆ by Atzanteol (99067) Floats and doubles... Water tennis... <fry>I get it!</fry> • Interval arithmetic (Score:5, Interesting) by Diomidis Spinellis (661697) on Thursday January 05, 2006 @07:38PM (#14405287) Homepage Rounding towards the nearest neighbour is the default and ubiquitously used rounding mode. The complementary rounding modes (round toward -+ infinity or 0) are useful for doing calculations with interval arithmetic: a calculation can be performed twice with opposing rounding modes to derive an interval value for the result. If all operations are performed in this way, the final result of a complex calculation is expressed as an interval providing the range in which the real value will be (remember, often floating point numbers only approximate the real number). Using such a package [sun.com] can save you the trouble of performing error analysis. An article [acm.org] in the Journal of the ACM provides the details for implementing this feature. □ Re:Interval arithmetic (Score:3, Insightful) by Doug Merritt (3550) Using such a package can save you the trouble of performing error analysis Absolutely false. Let me explain that in simpler terms: wrong, wrong, wrong, wrong wrong. A good package will help a programmer avoid the worst problems in the simplest situations. The worst situations are not solvable even by human experts backed by state of the art theory. The bottom line is that numerical analysis is a very deep speciality, roughly like compiler design is a very deep specialty. In neither area is there some • IEEE Standard (Score:4, Informative) by Anonymous Coward on Thursday January 05, 2006 @07:42PM (#14405307) And the IEEE standard for rounding is Banker's Rounding, or Even Rounding, plus whatever other names it goes by. When rounding to the nearest whole number, when the value is exactly halfway between, i.e. 2.5, the rounding algorithm chooses the nearest even number. This allows the distribution of rounding to happen in a more even distributed manner. Always rounding up, which is what US kids are taught in school, will eventually create a bias and throw the aggregates off. 2.5 = 2 3.5 = 4 □ by The_Dougster (308194) And the IEEE standard for rounding is Banker's Rounding That's what I have always used for programs that calculate a cost from some other factors; for instance labor costs are sometimes calculated by multiplying minutes by some factor. The result is often a decimal value that needs to be rounded to dollars and cents. If you use banker's rounding, and you should, then the bean counter upstairs will probably get the same result as your program does. This is a good thing. □ by bcrowell (177657) Just because something is an IEEE standard, that doesn't mean it's approproate in all cases. TFA discusses the relative merits of the different methods in different applications. □ Only with money in fractions (Score:5, Informative) by MarkusQ (450076) on Thursday January 05, 2006 @09:37PM (#14406025) Journal "Bankers" rounding is only appropriate in a rather restricted range of problems; specifically, where you are more worried about "fairness" than about accuracy, and have a data set that is already biased towards containing exact halves (generally because you've already rounded it previously). For pretty much all other cases it is broken, wrong, bad, very bad, and misguided. It is a kludge cut from the same cloth as using red and black ink, parenthesis, or location on the page (and all the permutations thereof) to indicate the sign of a number. Do not try to do any sort of scientific calculations, or engineering, or anything else that matters and round in this way. Why? Because contrary to what some people think, there is no systematic bias in always rounding up. There are exactly as many values that will be rounded down as will be rounded up if you always round exact halves up. I think the trap that people fall into is forgetting that x.000... rounds down (they think of it as somehow "not rounding"). ☆ Re:Only with money in fractions (Score:2, Insightful) by LnxAddct (679316) I think you might be mistaken. Round to the nearest even is statisticly significantly more accurate. Rounding halves up does nothing for accuracy as you seem to imply. Large data sets of any type of data will be biased if rounding halves up, whereas rounding to the nearest even is ever more accurate with each datapoint. Your statement about rounding to even being bad makes me think you haven't fully grasped the underlying concept, I've never seen rounding halves up used for anything in a major environment s ○ Re:Only with money in fractions (Score:4, Informative) by MarkusQ (450076) on Friday January 06, 2006 @01:12AM (#14407023) Journal I think you might be mistaken. Round to the nearest even is statisticly significantly more accurate. Rounding halves up does nothing for accuracy as you seem to imply. Large data sets of any type of data will be biased if rounding halves up, whereas rounding to the nearest even is ever more accurate with each datapoint. Your statement about rounding to even being bad makes me think you haven't fully grasped the underlying concept, I've never seen rounding halves up used for anything in a major environment simply because it is almost always the wrong thing to use. On the contrary, I understand and have worked with this sort of thing for years. I know whereof I speak, and the situation is exactly opposite of what you claim. Specifically: ■ Round to the nearest even is statistically ssignificantly less accurate. ■ Rounding halves up is significantly more accurate. ■ Large data sets of almost any type of data will be biased if rounding to the nearest even, whereas rounding halves up is ever more accurate with each data point. Note that this is basically your list, with the claims reversed. So we disagree totally. Now let me explain why I am correct. First, let's review what you do when you round to the nearest integer (without loss of generality; rounding to the nearest 1/10th, or even 1/137th, is isomorphic). 1. You start with a number which has a (potentially infinite) string of digits to the right of the decimal place 2. You drop (truncate) all but one of the unwanted digits. 3. You conditionally change the lowest order digit you intend to keep ★ For "round up" you add one to it if the remaining unwanted digit is 5,6,7,8, or 9 ★ For "round to nearest even" you add one to it if the remaining unwanted digit is 6,7,8, or 9, or if it is odd and the remaining unwanted digit is five. 4. You drop the remaining unwanted digit Note that the only difference in results between these rules comes from numbers where: ■ The last digit to be kept is even and ■ The first of the digits to be disposed of is 5 For example, the number 4.56106531 would be rounded to 4 in the "nearest even" case or to 5 in the "round up" case But clearly, the "nearest even" result is less accurate, and introduces a significant bias. 4.56106531 is closer to 5 than to 4, and should be rounded up. Always. At this point, you may object that you aren't planning on truncating before you apply the rule (or, equivalently, you only do the even odd dance on "exact" halves). But how did you get an "exact" half? Unless you have infinite precision floating point hardware, less significant bits fell off the bottom of your number; unless they were all zero your "exact half" is the result of truncation and the above logic still applies. The only common case where it doesn't apply is (as I stated originally) when dealing with money, where 1) your sample is biased to contain "exact halves" and 2) it is more important to be "fair" than it is to be accurate. This, in any case, is more of a convention than a fact of mathematics; we agree that money is tracted to a certain point and ignore the half pennies owed and the $0.00004537531 of interest due on them; if we didn't even money would not be an exception to the logic above. -- MarkusQ ■ Re:Only with money in fractions (Score:3, Insightful) by mesterha (110796) For example, the number 4.56106531 would be rounded to 4 in the "nearest even" case or to 5 in the "round up" case But clearly, the "nearest even" result is less accurate, and introduces a significant bias. 4.56106531 is closer to 5 than to 4, and should be rounded up. Always. This is the wrong rule. The number 4.56106531 would be rounded to 5 with both techniques. The "nearest even" technique for rounding to the nearest integer only applies to 4.50000000. In this case, you would round the number to ☆ Why? Because contrary to what some people think, there is no systematic bias in always rounding up. There are exactly as many values that will be rounded down as will be rounded up if you always round exact halves up. I think the trap that people fall into is forgetting that x.000... rounds down (they think of it as somehow "not rounding"). This is incorrect. For illustration, suppose we use a floating-point format with a 10-bit mantissa. For a fixed exponent (say 0), this can represent values from 1.0 ○ Re:Only with money in fractions (Score:5, Insightful) by MarkusQ (450076) on Friday January 06, 2006 @12:38AM (#14406895) Journal For illustration, suppose we use a floating-point format with a 10-bit mantissa. For a fixed exponent (say 0), this can represent values from 1.0 to 1 1023/1024, in 1/1024 increments. The AVERAGE of these UNROUNDED values is 1 1023/2048, which is LESS THAN 1.5. However, if all these values are rounded (with 0.5 rounding up), the AVERAGE of the ROUNDED values will be EQUAL TO 1.5, an average increase of 1/2048. Thus, this type of rounding introduces a measurable positive bias into the arithmetic. No, the "bias" came from your choice of data (and your unrealistic expectation that the average of a set of rounded values would equal the average of the unrounded set). Such examples are as easy to construct as they are misleading. Suppose we instead take the values {0.2, 0.3, and 0.5}. Their average is 1/3, and if we round them ".5 up" we wind up with {0,0,1} with exactly the same average. On the other hand, if we round them with ".5 to even" we wind up with {0,0,0} with the average of zero and an "error" (by your metric) of ■ Re:Only with money in fractions (Score:3, Informative) No, the "bias" came from your choice of data (and your unrealistic expectation that the average of a set of rounded values would equal the average of the unrounded set). Such examples are as easy to construct as they are misleading. Suppose we instead take the values {0.2, 0.3, and 0.5}. Their average is 1/3, and if we round them ".5 up" we wind up with {0,0,1} with exactly the same average. On the other hand, if we round them with ".5 to even" we wind up with {0,0,0} with the average of zero and an "error • by jemenake (595948) As he states, "...the mind soon boggles at the variety and intricacies of the rounding algorithms" Probably because the one that everyone would *like* to use is patented. • Social Applications (Score:5, Funny) by Kesch (943326) on Thursday January 05, 2006 @07:51PM (#14405367) So it turns out instead of 2, there are more like 9 different types of people. The classics: Those who round a glass of water up (Has been filled) Those who round it down (Has been emptied) The oddballs: The round-half-up crowd(Half or greater is filled) The round-half-down crowd(Half or less is empty) The round toward zero types(Always empty) The round away from zero groupies(Always Full) The round alternate weirdos(They get interesting when you give them two glasses) The round random subset(Carry around a coin or die to decide such problems) And finally... The truncate ones who cannot handle such a problem and smash the glass to make sure it is empty. • by creimer (824291) ... nothing in comparison to trying to figure out what the compiler is doing. My beautiful code goes into one end and comes out as an executable file that segfaults. Of course, some twit always say the problem is with my beautiful code and not the stupid compiler. □ by Sparr0 (451780) You will never forget the day you find your first bug in gcc. I never did :) □ How many times do I have to explain this (Score:2, Insightful) by idonthack (883680) Compilers do not understand code if you write it as a haiku! Actually, that seems like an interesting concept. I always felt that my computer science class needed to be more challenging, and now I know how to do it! "Should I count puncutation?" ); // terrible haiku • by Boronx (228853) What's the difference between round-toward-zero and truncate is? Or floor round and round down? Or ceiling round and round up? • by autopr0n (534291) between round to zero and floor, and round to infinity and ceiling? • I was expecting something more detailed than this (Score:3, Insightful) by Zork the Almighty (599344) on Thursday January 05, 2006 @08:14PM (#14405532) Journal I was expecting something a little better than this, like maybe some fast code to study and use [aggregate.org]. • Precision (Score:4, Informative) by Repton (60818) on Thursday January 05, 2006 @09:19PM (#14405919) Homepage for example, rounding a reasonably precise value like $3.21 to the nearest dollar would result in $3.00, which is a less precise entity. I would say that $3.00 is just as precise as $3.21. If you want less precision, you have to go to $3... □ Re:Precision (Score:2) by bill_mcgonigle (4333) * I would say that $3.00 is just as precise as $3.21. If you want less precision, you have to go to $3... Me too, but I always got those marked wrong in Science class - purposely because nobody could explain to me why 3.00 was less precise than 3.21, but according to the textbooks it is. I don't get it. ☆ Re:Precision (Score:3, Informative) by thebigmacd (545973) It's not less prescise, it's less accurate. If you are retaining 3 sig digs, rounding 3.21 to 3.00 is inaccurate. You'd be prescisely inaccurate by 0.21. Prescision != Accuracy • learned from Ratt: Out on the streets, that's where we'll meet You make the night, I always cross the line Tightened our belts, abuse ourselves Get in our way, we'll put you on your shelf Another day, some other way We're gonna go, but then we'll see you again I've had enough, we've had enough Cold in vain, she said I knew right from the beginning That you would end up winnin' I knew right from the start You'd put an arrow through my heart Round and round With love we'll find a way just give it time • He missed some (Score:3, Interesting) by Nightlight3 (248096) on Thursday January 05, 2006 @09:27PM (#14405964) There is also a delayed rounding [arxiv.org] (see page 7-8) used in combinatorial compression (enumerative coding). • Old school (Score:2, Insightful) by Trevin (570491) I already got into these types of rounding a decade ago. For a really good read on an FPU implementation, try to find a copy of the Motorola 68881/2 Programmer's Reference Manual. • This stuff is still matters (Score:4, Insightful) by ChrisA90278 (905188) on Thursday January 05, 2006 @09:42PM (#14406055) This stuff is still important. Yes the big computers we have on our desks can do high precision floating point. but there are still some very small 4-bit and 8-bit micro controllers that controll battery chargers, control motors that move antenna on spacecraft and the control fins on air to air missles. And then there are those low-end DSP chips inside TV sets and digital cameras and camcorders.... These controllers need to do complex math using short integers and how round off errors accumulate still does matter. Remember: Not all software runs on PCs in fact _most_ software does not. At least this Slashdot poster appears to be well-rounded. • Round Toward Mean? (Score:4, Informative) by miyako (632510) <miyako@@@gmail...com> on Thursday January 05, 2006 @10:59PM (#14406436) Homepage Journal They left off one that I've used a few times when dealing with graphics, which using their naming convention would be something like "Round Toward Mean". You basically take the mean of the surrounding values in an array or matrix and then round up if the value is below the mean, and round down if it's above the mean. It's useful for smoothing out images if you use this for each color channel (RGB, CMYK, HSV, etc.). • I never thought I knew what was happening... (Score:3, Interesting) by nick_davison (217681) on Friday January 06, 2006 @06:35AM (#14407956) It's a good read, especially if you *think* you know what your programs are doing. I gave up assuming I knew exactly what my programs were doing right around the time I gave up writing assembly code. Actually, I gave up a little prior to that when I realised I wasn't very good as assembly code but that kind of clouds the point. For any given high level language, the moment concepts start getting abstracted out, all kinds of false assumptions start getting made based on those assumptions. Here's one: public class massiveadd { public static void main(String[] args) { double a=0.001; double b=0; for (int i=0; i<1000; i++) { ..or the equivalent in your language of choice. Before you run it, what do you figure you'll get? Please tell me you didn't honestly think you'd get 1? If you can't even rely on floating point numbers being accurate when well within their perceived range (+/- 2^1023 to 2^-52 is not actually the same as every possible number to that degree of accuracy, despite most assumptions) then, odds are, rounding isn't going to matter that much either. That said, at least 0.5 has the decency to fit really nicely in to binary as 2^-1 and so you can argue, with certainty, that the number you have is 0.5 before getting in to arguments about whether to round such a number up or down. Here's one for you though... double a=0.001; double b=0; for (int i=0; i Except what should be 0.5 has now wandered off to 0.5000000000000003 and even if you do try rounding point fives down, it's now greater than 0.5 anyway - so you'll get one instead of zero... Which raises the argument - just because you happen to be passing 0.5 in to your rounding function and are arguing which way it should go, what on earth makes you think 0.5 is the correct value of whatever you were working with anyway? The point of all of this being that these things are cool concepts to know to show off at nerd cocktail parties (OK, over a D&D game is more likely) but open a whole can of worms if you actually want to get authoratative on the subject as one assumption getting questioned leads to another and another. For a very few, it's worth discovering all of the many variants - which likely requires an indepth knowledge of how the compiler works and you're back at assembler anyway. For everyone else, beyond the nerd show off, it's about degrees of comfort and, in most cases, that degree is... leave the lid on the can and back away slowly. And that leaves me where I am. I'm aware that there're concepts like which way rounding goes, what happens with small decimals, etc. and, luckily for me, I'm in a field where thorough testing is considered accurate enough. Actually freaking out about such issues just feels like coder OCD. • That is nothing... (Score:3, Interesting) by McSnarf (676600) * on Friday January 06, 2006 @08:42AM (#14408245) I remember a project ages ago (before the Pentium rounding bug). The customer (a state railway company) wanted to calculate fare tables. For that, they needed to be able to round up (2.1 -> 3), down (3.9 -> 3) and commercial (2.5 -> 3). Nothing too fancy so far. However, they also needed this operation to be carried out on PARTS of a currency unit - as in $0.20. Rounding up here would mean $3.04 -> $3.20. A typical scenario would look something like this : From 1-20km, the fare is number of kilometers multiplied by .32, rounded up to the next $0.10, then multiplied with a "class factor" of 1 for second and 1.5 for first class. (And so on, and so on...) Calculating a complete fare table at that time would take about 12 hours on a serious size Tandem computer. (And of course, the program was written in a mix of COBOL and FORTRAN...) Related Links Top of the: day, week, month.
{"url":"http://developers.slashdot.org/story/06/01/05/1838214/rounding-algorithms?sdsrc=prev","timestamp":"2014-04-21T00:05:27Z","content_type":null,"content_length":"316359","record_id":"<urn:uuid:1a3052eb-ef6c-4cd1-90c2-c998de1cc914>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Jen on Friday, October 20, 2006 at 1:22pm. If f(x)=cosx + 3 how do I find f inverse(1)? y = cos(x) + 3 the inverse of this is x = cos(y) + 3 solve for y and you have your inverse The cos function only has a range of [-1,1], so the range of f(x) is [2,4]. this means f inverse of 1 doesn't exist. I didn't understand this. I have to find f inverse(1) and the derivative of f inverse(1) Answers are 0 and 1/3 respectively. This was your question If f(x)=cosx + 3 how do I find f inverse(1)? Is this cos(x+3) or cos(x) + 3, there is a difference. The first one is a shift up of the cosine function. The second is a shift to the right by 3 units. When you want to know the derivate of f inverse calculate f' , take the reciprocal and evaluate at the point. I'm also assuming you're using radians, not degrees. If f(x)=cos(x) + 3 then f'(x)=-sin(x) so f' (x)= -1/sin(x) I am really really sorry. It is f(x) = cosx + 3x I have to find f inverse(1) and derivative of f inverse(1) Now this is a completely different function altogether, and it's defined for all x. As x goes from (-infty,+infty) f(x) goes from (-infty,+infty). To find f inverse 1 you want 1 = cos(x) +3x you need some kind of root algroithm to solve this, but if you graph it you'll find x=0 then f(x)=1 f'(x)=-sin(x) + 3 so f' (x) = 1/(-sin(x) + 3) You can also see f'(0)=3 so (x) = 1/3 So f inverse(x) will be 1/(-sinx+3) The derivative of f inverse(x) will be That means f inverse(1) is 1/(-sin(1)+3) ? (should get 0) And derivative of f inverse(1) is cos(1)/(-sin(1)+3)²? (ans: 1/3) Am I doing right? No, the derivative of the inverse function is the reciprocal of the derivative. f inverse for this function doesn't have an elementary inverse function because of the cosine function. When I reviewed my post I suspected there could be problems reading the text due to the font style. Let's use an uppercase F for the function. Then we want F'(x) and F'<su>-1(x), the derivative of the inverse function. You want F<su>-1(1) which I said was x=0. You also wanted F'<su>-1(0), so I said to calculate F'<su>1(x) and reciprocate it. F'<su>1(0)=3 so you should be able to see how the answer was obtained now. I see my tags are incorrect. This is what it should be. First, that is not how to find the inverse of a function. No, the derivative of the inverse function is the reciprocal of the derivative. f inverse for this function has an elementary inverse function, but it requires results you haven't had yet. When I reviewed my post I suspected there could be problems reading the text due to the font style. Let's use an uppercase F for the function. Then we want F'(x) and F' (x), the derivative of the inverse function. You want F (1) which I said was x=0. You also wanted F' (0), so I said to calculate F' (x) and reciprocate it. (0)=3 so you should be able to see how the answer was obtained now. Once again I see an error. Calculate F'(x) and take the reciprocal of that to find the derivative of the inverse function. Related Questions Math - Determine whether each equation is true for all x for which both sides of... Trigonometry - Write equivalent equations in the form of inverse functions for a... Calculus - Does f(x)= 1 + cos(x), have an inverse or not, if so, what is the ... inverse - find f^-1 (x). (this is asking me to find the inverse) f(x) = -(x-2)^2... inverse functions - 1-3cos=sin squared You need to define the variables, such as... Trig - Compute inverse functions to four significant digits. cos^2x=3-5cosx ... trig - inverse tan x = inverse cos 4/5 find dy/ds - y = s*square root of(1-s^2) + cos inverse(s) Just give me some ... trig - inverse cos (-rad2/2)=135 degrees inverse sin (-rad2/2)=-45 degrees I ... Algebra - what is the inverse of the linear parent function? How would you graph...
{"url":"http://www.jiskha.com/display.cgi?id=1161364943","timestamp":"2014-04-18T23:42:20Z","content_type":null,"content_length":"11372","record_id":"<urn:uuid:47caef44-2d90-4608-9bc8-40e2a80b569d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Derived categories of coherent sheaves: suggested references? up vote 8 down vote favorite I am interested in learning about the derived categories of coherent sheaves, the work of Bondal/Orlov and T. Bridgeland. Can someone suggest a reference for this, very introductory one with least prerequisites. As I was looking through the papers of Bridgeland, I realized that much of the theorems are stated for Projective varieties (not schemes), I've just started learning Scheme theory in my Algebraic Geometry course,my background in schemes is not very good but I am fine with Sheaves. It would be better if you suggest some reference where everything is developed in terms of Projective varieties. 13 "Fourier-Mukai transforms in algebraic geometry" by D. Huybrechts – Francesco Polizzi Oct 17 '10 at 7:38 1 If you want to learn about derived categories in the modern way, take a look at M. Hovey's Model Categories chapters 1 and 2 (don't worry, they're pretty short and not too bad a read, although you might want to skip the parts on the small object argument). After that, there are two ways you can go: The computationally more useful but substantially less natural approach of working with triangulated categories, or by working with simplicial localizations and resolutions. – Harry Gindi Oct 17 '10 at 13:39 1 Hi,Verma,you might want to take Rosenberg's course now, it is of great help – Shizhuo Zhang Oct 17 '10 at 16:58 8 I disagree with Harry's implicit statement that there is a unique "modern way" to approach derived categories. Also, I was under the impression that Hovey's book does not say much about coherent sheaves on projective varieties. – S. Carnahan♦ Oct 18 '10 at 0:33 3 @Scott: Certainly, but I think it's been evident from the beginning that triangulated categories were not the canonical "right" notion. – Harry Gindi Oct 18 '10 at 1:13 add comment 4 Answers active oldest votes Kapustin-Orlov'a survey of derived categories of coherent sheaves is pretty good, • A. N. Kapustin, D. O. Orlov, Lectures on mirror symmetry, derived categories, and D-branes, Uspehi Mat. Nauk 59 (2004), no. 5(359), 101--134; translation in Russian Math. Surveys 59 (2004), no. 5, 907--940, math.AG/0308173 but more slow/elementary exposition starting with fundamentals of derived categories is in an earlier survey of Orlov • D. O. Orlov, Derived categories of coherent sheaves and equivalences between them, Uspekhi Mat. Nauk, 2003, Vol. 58, issue 3(351), pp. 89–172, Russian pdf, English transl. in Russian Mathematical Surveys (2003),58(3):511, doi link, pdf at Orlov's webpage (not on arXiv!) up vote 10 down vote There are also Orlov's handwritten slides in djvu from a 5-lecture course in Bonn • djvu, but the link is temporary For derived categories per se, apart from Gelfand-Manin methods book and Weibel's hoological algebra remember that a really good expositor is Bernhard Keller. E.g. his text • Bernhard Keller, Introduction to abelian and derived categories, pdf ...and also his Handbook of Algebra entry on derived categories: pdf Thank you very much. – Barbara Oct 18 '10 at 13:03 thank you very much,Zoran. – J Verma Oct 18 '10 at 19:01 add comment A good introduction to derived categories of coherent sheaves are Caldaru's notes Derived categories of sheaves: a skimming. up vote 11 As for Bridgeland's work, I would recommend reading his papers directly, using Huybrecht's book as a reference (as mentioned by Francesco above, "Fourier-Mukai transforms in algebraic down vote geometry"). Specifically for Bridgeland stability conditions, I also have some short notes on my homepage, but again, you should also read his original papers. Thanks for the link to the notes Arend! – babubba Oct 18 '10 at 2:24 add comment Let me suggest a perhaps longer program to master derived categories of coherent sheaves. In a comment you said "I don't know much Homological Algebra", so first you have to master the basics of abelian categories and ext and tor of modules. The first chapters of Weibel's "Homological Algebra" may be a useful reference. This said, I agree with Polizzi's suggestion of Huybrechts' book. The only issue I would point out is that the book concentrates on the bounded derived category of coherent sheaves. It is enough for its application to the classification of non-singular varieties. But if you have varieties with singularities in mind then things may get more complicated. First there is a up vote difference between perfect complexes and bounded complexes of coherent sheaves. Second one sometimes need to use infinite methods, therefore one is forced to consider quasi-coherent sheaves 6 down and unbounded derived categories. For this beautiful theory, a good introduction is (as I have suggested here before) the first chapters of Lipman's "Notes on Derived Functors and vote Grothendieck Duality", (in Foundations of Grothendieck Duality for Diagrams of Schemes, Lecture Notes in Mathematics, no. 1960, Springer, 2009). This is not a shortest path, but in my experience getting into general methods is that they pay back later. Your mileage may vary. Thanks Leo, time is not a constraint for me since I am a first year grad student. Of course I want to learn stuff quickly, but better understanding is my priority. – J Verma Oct 18 '10 at add comment Scheme theory won't be that important (as you said, most of the time you are concerned with smooth projective varieties over $\mathbb C$). However, you didn't mention about your experience in homological algebra. Huybrechts' book is a pleasure to read, but it can't serve as a textbook in homological algebra. I would recommend e.g. reading first first few chapters of Weibel's book (1, 2, 3, 5, 10). Gelfand-Manin "Methods of Homological Algebra" is also a good reference here. up vote 2 down vote Moreover, there is a short introduction by R. P. Thomas "Derived categories for the working mathematician". I don't know much Homological Algebra, could you please tell me what topics to look at. I'll highly appreciate if you could outline a roadmap to Bridgeland Stability starting from Projective varieties.Thanks. – J Verma Oct 17 '10 at 21:06 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry ct.category-theory sheaf-theory mirror-symmetry reference-request or ask your own question.
{"url":"http://mathoverflow.net/questions/42463/derived-categories-of-coherent-sheaves-suggested-references?sort=votes","timestamp":"2014-04-21T05:19:39Z","content_type":null,"content_length":"78012","record_id":"<urn:uuid:0c33c59c-7263-4cae-bc08-eebb0b5707cb>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics 241 > Nakanishi > Notes > Lec05-22009.ppt | StudyBlue Simply amazing. The flashcards are smooth, there are many different types of studying tools, and there is a great search engine. I praise you on the awesomeness. - Dennis I have been getting MUCH better grades on all my tests for school. Flash cards, notes, and quizzes are great on here. Thanks! - Kathy I was destroying whole rain forests with my flashcard production, but YOU, StudyBlue, have saved the ozone layer. The earth thanks you. - Lindsey This is the greatest app on my phone!! Thanks so much for making it easier to study. This has helped me a lot! - Tyson
{"url":"http://www.studyblue.com/notes/note/n/lec05-22009ppt/file/390348","timestamp":"2014-04-17T19:32:10Z","content_type":null,"content_length":"35879","record_id":"<urn:uuid:bf99722a-5441-437e-82a6-94c7a37239ed>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler Lagrange Equation A selection of articles related to euler lagrange equation. Original articles from our library related to the Euler Lagrange Equation. See Table of Contents for further available material (downloadable resources) on Euler Lagrange Equation. Euler Lagrange Equation is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Euler Lagrange Equation books and related discussion. Suggested Pdf Resources Suggested News Resources When one then applies the Euler-Lagrange equation to the Lagrangian, one takes the derivatives with respect to a current density, leaving one current density standing in the subsequent field Suggested Web Resources Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We appreciate your suggestions and comments on further improvements of the site.
{"url":"http://www.realmagick.com/euler-lagrange-equation/","timestamp":"2014-04-17T00:54:27Z","content_type":null,"content_length":"30803","record_id":"<urn:uuid:a4d00780-3c8a-4001-8ce7-0c8f1e2db8b3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: lab 8 • one year ago • one year ago Best Response You've already chosen the best response. Lab 8 here are the answers R, 1000,C,60p L,80u,R, 1000 C,160p,R,1000 R,1000,L,40u R,3300,R,1000 Best Response You've already chosen the best response. If correct plz give me as best response :D Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5084e082e4b02a1e48d77495","timestamp":"2014-04-19T04:55:47Z","content_type":null,"content_length":"29916","record_id":"<urn:uuid:5d25712d-75b3-4c5e-a56b-a16fab3f9a43>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: ERRATA FOR William Arveson Department of Mathematics University of California Berkeley CA 94720, USA June, 2003 Abstract. This is a list of corrections I know about. Special thanks go to Launey Thomas for catching many corrections that I had missed. I welcome additional feedback from users of this text. page 5. Exercise (5) is correctly stated, but the hint should be changed as follows. Hint: Apply Ascoli's theorem to KB1. page 11. In Exercise (1), the phrases xn X and y X should be replaced with xn E and y E respectively. page 12. Theorem 1.4.2: the inequality (2) should be e -1 x Lx c x . page 13. In Exercise (2), E should be a Banach space. page 14. Line 3: "...satisfies 1 = 1".
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/165/2579125.html","timestamp":"2014-04-20T21:18:39Z","content_type":null,"content_length":"7833","record_id":"<urn:uuid:40ea499b-bb6e-4294-8019-a06c5851c5c3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Methuen Trigonometry Tutor Find a Methuen Trigonometry Tutor ...I have my degree in math and I am currently pursuing a Master's in education. Tutoring can include: Linear equations, Functions, Graphing, Quadratic and Polynomial equations, Statistics and Probability. Equations and inequalities, quadratics, graphing functions, complex numbers, coordinate geometry. 8 Subjects: including trigonometry, calculus, geometry, algebra 1 ...I also act as their mentor and adviser. Many of these students are low-income, first-generation, college bound students from the Dorchester area. I have thoroughly enjoyed all of my tutoring experiences and I learn from my students just as much as they learn from me. 10 Subjects: including trigonometry, calculus, geometry, algebra 1 ...After college I was an algebra tutor for some 10th grade students. I am comfortable explaining variables, how to solve for a variable and how to make a graph from a function such as y=mx+b. I was a straight A student in algebra. 9 Subjects: including trigonometry, geometry, algebra 1, algebra 2 I absolutely love math and love helping students to overcome their fear of math and to excel in it. I have been teaching and tutoring for over 25 years. I have had much success over the years with a lot of repeat and referral business. 19 Subjects: including trigonometry, geometry, GRE, algebra 1 ...I took the GRE recently and received a 166 in the Verbal Section, a 163 on the Quantitative Section, and a 5.5 on the Analytical Writing Section. I have several years of experience in teaching test preparation and teaching students with a variety of learning styles. I pride myself on adapting my approach to the needs as well as strengths/weaknesses of the students that I am teaching. 28 Subjects: including trigonometry, English, reading, writing
{"url":"http://www.purplemath.com/Methuen_Trigonometry_tutors.php","timestamp":"2014-04-21T12:35:31Z","content_type":null,"content_length":"23974","record_id":"<urn:uuid:458b52cd-de5c-4519-9631-1bb9e15aa70e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Planar cracks running along piecewise linear paths Herrero, Miguel A. and Oleaga Apadula, Gerardo Enrique and Velázquez, J.J. L. (2004) Planar cracks running along piecewise linear paths. Proceedings of the Royal Society of London Series A - Mathematical Physical and Engineering Sciences, 460 . pp. 581-601. ISSN 1364-5021 Restricted to Repository staff only until 31 December 2020. Official URL: http://rspa.royalsocietypublishing.org/content/460/2042/581.full.pdf+html Consider a crack propagating in a plane according to a loading that results in anti-plane shear deformation. We show here that if the crack path consists of two straight segments making an angle psi not equal 0 at their junction, examples can be provided for which the value of the stress-intensity factor (SIF) actually depends on the previous history of the motion. This is in sharp contrast with the rectilinear case (corresponding to psi = 0), where the SIF is known to have a local character, its value depending only on the position and velocity of the crack tip at any given time. Item Type: Article Uncontrolled Keywords: Fracture dynamics; wave propagation; linear elasticity; asymptotic behaviour of solutions; stress intensity factors; situations; expansion; form Subjects: Sciences > Physics > Mathematical physics Sciences > Physics > Materials ID Code: 16409 References: Amestoy, M. & Leblond, J. B. 1992 Crack paths in plane situations. II. Detailed form of the expansion of the stress intensity factors. Int. J. Solids Struct. 29, 465–501. Anderson, T. L. 1994 Fracture mechanics. Boca Raton, FL: CRC Press. Eshelby, J. D. 1969 The elastic field of a crack extending nonuniformly under general antiplane loading. J. Mech. Phys. Solids 17, 177–199. Freund, L. B. 1998 Dynamic fracture mechanics. Cambridge University Press. Kostrov, B. V. 1975 On the crack propagation with variable velocity. Int. J. Fract. 11, 47–56. Leblond, J. B. 1989 Crack paths in plane situations. I. General form of the expansion of the stress intensity factors. Int. J. Solids Struct. 25, 1311–1325. Deposited On: 18 Sep 2012 08:45 Last Modified: 07 Feb 2014 09:28 Repository Staff Only: item control page
{"url":"http://eprints.ucm.es/16409/","timestamp":"2014-04-19T07:04:40Z","content_type":null,"content_length":"28514","record_id":"<urn:uuid:f05a54d8-683b-4a7d-bf1e-4b226cc91c4d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Negative results about the power of static typing Negative results about the power of static typing I was thinking about Jon Riecke's remark > One can certainly imagine fancier type systems preventing more of > these checks, but the uncomputability argument means that static > type systems cannot rule out all such errors. and realized something that had never been clear to me: If one wants to prove negative results concerning the power of static typing, the programs being considered must communicate in some way with their environment (e.g., by taking in inputs). Let me explain... Suppose we formalize a programming language as an (effectively presented) triple (P, ->, A) P is the set of programs -> is the finitely branching transition relation A is the set of answers We assume that the elements of A are irreducible, but there may be other elements of P that are irreducible - and we consider these to be Let's assume that our language is interesting, so that it won't be decidable whether every execution sequence of a program terminates with an answer. On the other hand, we can easily augment our programming language with a type system so that every program with a type will have this As types we choose all the execution trees of (P, ->) that are finite and whose leaves are all in A. The set of all types will then be recursive. Say that a program p has type t iff the root of t is p. The set of all pairs (p, t) such that p has type t will be recursive, even though it won't be decidable whether a program has a type. And every execution sequence of a program that has a type obviously terminates with an answer. Of course, I'm not claiming that this approach is in any way practical---the types will be gigantic. I'm just arguing that one must be careful when stating negative results concerning the power of static typing. On the other hand, if our programs take in inputs from some infinite set, then we can use diagonalization to show that there is no effective type system for our language that guarantees termination with an answer.
{"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00327.html","timestamp":"2014-04-17T01:08:55Z","content_type":null,"content_length":"4077","record_id":"<urn:uuid:71093b3a-aca7-4479-b726-007009c429ba>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Advanced Numerical Differential Equation Solving in Introduction to Advanced Numerical Differential Equation Solving in Mathematica The Mathematica function NDSolve is a general numerical differential equation solver. It can handle a wide range of ordinary differential equations (ODEs) as well as some partial differential equations (PDEs). In a system of ordinary differential equations there can be any number of unknown functions , but all of these functions must depend on a single "independent variable" , which is the same for each function. Partial differential equations involve two or more independent variables. NDSolve can also solve some differential-algebraic equations (DAEs), which are typically a mix of differential and algebraic equations. Finding numerical solutions to ordinary differential equations. NDSolve represents solutions for the functions as InterpolatingFunction objects. The InterpolatingFunction objects provide approximations to the over the range of values to for the independent variable . In general, NDSolve finds solutions iteratively. It starts at a particular value of , then takes a sequence of steps, trying eventually to cover the whole range to . In order to get started, NDSolve has to be given appropriate initial or boundary conditions for the and their derivatives. These conditions specify values for , and perhaps derivatives , at particular points . When there is only one at which conditions are given, the equations and initial conditions are collectively referred to as an initial value problem. A boundary value occurs when there are multiple points . NDSolve can solve nearly all initial value problems that can symbolically be put in normal form (i.e. are solvable for the highest derivative order), but only linear boundary value problems. When you use NDSolve, the initial or boundary conditions you give must be sufficient to determine the solutions for the completely. When you use DSolve to find symbolic solutions to differential equations, you may specify fewer conditions. The reason is that DSolve automatically inserts arbitrary symbolic constants C[i] to represent degrees of freedom associated with initial conditions that you have not specified explicitly. Since NDSolve must give a numerical solution, it cannot represent these kinds of additional degrees of freedom. As a result, you must explicitly give all the initial or boundary conditions that are needed to determine the solution. In a typical case, if you have differential equations with up to points. You can use NDSolve to solve systems of coupled differential equations as long as each variable has the appropriate number of conditions. You can give initial conditions as equations of any kind. If these equations have multiple solutions, NDSolve will generate multiple solutions. NDSolve was not able to find the solution for y'[x]=-Sqrt[y[x]^3], because of problems with the branch cut in the square root function. NDSolve can solve a mixed system of differential and algebraic equations, referred to as differential-algebraic equations (DAEs). In fact, the example given is a sort of DAE, where the equations are not expressed explicitly in terms of the derivatives. Typically, however, in DAEs, you are not able to solve for the derivatives at all and the problem must be solved using a different method Note that while both of the equations have derivative terms, the variable appears without any derivatives, so NDSolve issues a warning message. When the usual substitution to convert to first-order equations is made, one of the equations does indeed become effectively algebraic. Also, since only appears algebraically, it is not necessary to give an initial condition to determine its values. Finding initial conditions that are consistent with DAEs can, in fact, be quite difficult. The tutorial "Numerical Solution of Differential-Algebraic Equations" has more information. From the plot, you can see that the derivative of is tending to vary arbitrarily fast. Even though it does not explicitly appear in the equations, this condition means that the solver cannot continue Unknown functions in differential equations do not necessarily have to be represented by single symbols. If you have a large number of unknown functions, for example, you will often find it more convenient to give the functions names like or . This actually computes an approximate solution of the heat equation for a rod with constant temperatures at either end of the rod. (For more accurate solutions, you can increase .) An unknown function can also be specified to have a vector (or matrix) value. The dimensionality of an unknown function is taken from its initial condition. You can mix scalar and vector unknown functions as long as the equations have consistent dimensionality according to the rules of Mathematica arithmetic. The InterpolatingFunction result will give values with the same dimensionality as the unknown function. Using nonscalar variables is very convenient when a system of differential equations is governed by a process that may be difficult or inefficient to express symbolically. NDSolve is able to solve some partial differential equations directly when you specify more independent variables. Finding numerical solutions to partial differential equations. NDSolve currently uses the numerical method of lines to compute solutions to partial differential equations. The method is restricted to problems that can be posed with an initial condition in at least one independent variable. For example, the method cannot solve elliptic PDEs such as Laplace's equation because these require boundary values. For the problems it does solve, the method of lines is quite general, handling systems of PDEs or nonlinearity well, and often quite fast. Details of the method are given in "Numerical Solution of Partial Differential Equations". As mentioned earlier, NDSolve works by taking a sequence of steps in the independent variable t. NDSolve uses an adaptive procedure to determine the size of these steps. In general, if the solution appears to be varying rapidly in a particular region, then NDSolve will reduce the step size to be able to better track the solution. NDSolve allows you to specify the precision or accuracy of result you want. In general, NDSolve makes the steps it takes smaller and smaller until the solution reached satisfies either the AccuracyGoal or the PrecisionGoal you give. The setting for AccuracyGoal effectively determines the absolute error to allow in the solution, while the setting for PrecisionGoal determines the relative error. If you need to track a solution whose value comes close to zero, then you will typically need to increase the setting for AccuracyGoal. By setting AccuracyGoal->Infinity, you tell NDSolve to use PrecisionGoal only. Generally, AccuracyGoal and PrecisionGoal are used to control the error local to a particular time step. For some differential equations, this error can accumulate, so it is possible that the precision or accuracy of the result at the end of the time interval may be much less than what you might expect from the settings of AccuracyGoal and PrecisionGoal. NDSolve uses the setting you give for WorkingPrecision to determine the precision to use in its internal computations. If you specify large values for AccuracyGoal or PrecisionGoal, then you typically need to give a somewhat larger value for WorkingPrecision. With the default setting of Automatic, both AccuracyGoal and PrecisionGoal are equal to half of the setting for WorkingPrecision. NDSolve uses error estimates for determining whether it is meeting the specified tolerances. When working with systems of equations, it uses the setting of the option NormFunction->f to combine errors in different components. The norm is scaled in terms of the tolerances, given so that NDSolve tries to take steps such that Through its adaptive procedure, NDSolve is able to solve "stiff" differential equations in which there are several components varying with at extremely different rates. NDSolve follows the general procedure of reducing step size until it tracks solutions accurately. There is a problem, however, when the true solution has a singularity. In this case, NDSolve might go on reducing the step size forever, and never terminate. To avoid this problem, the option MaxSteps specifies the maximum number of steps that NDSolve will ever take in attempting to find a solution. For ordinary differential equations, the default setting is MaxSteps->10000. The default setting MaxSteps->10000 should be sufficient for most equations with smooth solutions. When solutions have a complicated structure, however, you may sometimes have to choose larger settings for MaxSteps. With the setting MaxSteps->Infinity there is no upper limit on the number of steps used. NDSolve has several different methods built in for computing solutions as well as a mechanism for adding additional methods. With the default setting Method->Automatic, NDSolve will choose a method which should be appropriate for the differential equations. For example, if the equations have stiffness, implicit methods will be used as needed, or if the equations make a DAE, a special DAE method will be used. In general, it is not possible to determine the nature of solutions to differential equations without actually solving them: thus, the default Automatic methods are good for solving as wide variety of problems, but the one chosen may not be the best one available for your particular problem. Also, you may want to choose methods, such as symplectic integrators, which preserve certain properties of the solution. Choosing an appropriate method for a particular system can be quite difficult. To complicate it further, many methods have their own settings, which can greatly affect solution efficiency and accuracy. Much of this documentation consists of descriptions of methods to give you an idea of when they should be used and how to adjust them to solve particular problems. Furthermore, NDSolve has a mechanism that allows you to define your own methods and still have the equations and results processed by NDSolve just as for the built-in methods. When NDSolve computes a solution, there are typically three phases. First, the equations are processed, usually into a function that represents the right-hand side of the equations in normal form. Next, the function is used to iterate the solution from the initial conditions. Finally, data saved during the iteration procedure is processed into one or more InterpolatingFunction objects. Using functions in the context, you can run these steps separately and, more importantly, have more control over the iteration process. The steps are tied by an object, which keeps all of the data necessary for solving the differential equations.
{"url":"http://reference.wolfram.com/mathematica/tutorial/NDSolveIntroductoryTutorial.html","timestamp":"2014-04-19T17:05:43Z","content_type":null,"content_length":"70469","record_id":"<urn:uuid:de590ccd-cf23-4c87-ad15-3cdc210c6983>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: what is {cscx/(1+cscx)}-{cscx/(1-cscx)} • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/510df98fe4b09cf125bccf37","timestamp":"2014-04-19T15:09:46Z","content_type":null,"content_length":"42885","record_id":"<urn:uuid:3753390c-7057-4925-9abe-a5c9408909e7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Emeryville Math Tutor Find an Emeryville Math Tutor ...Math is not easy. It takes time. It takes effort. 8 Subjects: including algebra 1, algebra 2, vocabulary, prealgebra ...My linguistics coursework ties strongly with my personal study of languages (Spanish, French, Portuguese, Thai, German, and Hindi) and helps me teach English. As a speaking/pronunciation coach, reading tutor, and writing tutor, I've worked with native English speakers as well as English learners... 34 Subjects: including algebra 1, English, writing, Spanish ...I hold an M.S. in math and a Ph.D. in aerospace engineering from Stanford University. I can help you in upper level high school and college level math as well as algebra, precalculus and SAT math prep. I have taught math at the high school and college level. 7 Subjects: including algebra 1, algebra 2, calculus, precalculus ...The continued improvement and success of the individual student is my true motivation. I am currently a humanities department faculty member at University of Phoenix- Oakland campus. U.S. 38 Subjects: including trigonometry, Bible studies, differential equations, public speaking ...When I became a vegetarian, I realized I had to learn how to cook if I wanted to eat well. I was veggie for 30 years, but now also eat meat. I can teach you basics, including how to organize and stock a pantry. 53 Subjects: including ACT Math, Spanish, algebra 1, algebra 2
{"url":"http://www.purplemath.com/emeryville_math_tutors.php","timestamp":"2014-04-21T15:17:29Z","content_type":null,"content_length":"23280","record_id":"<urn:uuid:d16e93c3-2bad-4c48-b196-193d0592bf3a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
-Valent Analytic Functions of Complex Order Abstract and Applied Analysis Volume 2013 (2013), Article ID 517296, 4 pages Research Article Sufficiency Criteria for a Class of -Valent Analytic Functions of Complex Order Department of Mathematics, Abdul Wali Khan University, Mardan 23200, Pakistan Received 21 December 2012; Accepted 26 February 2013 Academic Editor: Fuding Xie Copyright © 2013 Muhammad Arif. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In the present paper, we consider a subclass of -valent analytic functions and obtain certain simple sufficiency criteria by using three different methods for the functions belonging to this class. Many known results appear as special consequences of our work. 1. Introduction Let be the class of functions analytic and -valent in the open unit disk and of the form In particular, , , and . By and , and , we mean the subclasses of which are defined, respectively, by For , , , the previous two classes defined in (2) reduce to the well-known classes of starlike and convex, respectively. For functions , of the form (1), we define the convolution (Hadamard product) of and by Now we define the subclass of by Sufficient conditions were studied by various authors for different subclasses of analytic and multivalent functions, for some of the related work see [1–8]. The object of the present paper is to obtain sufficient conditions for the subclass of . We also consider some special cases of our results which lead to various interesting corollaries and relevances of some of these with other known results also being mentioned. We will assume throughout our discussion, unless otherwise stated, that , , . 2. Preliminary Results To obtain our main results, we need the following Lemma's. Lemma 1 (see [9]). If with and satisfies the condition then Lemma 2 (see [10]). If satisfing the condition where is the unique root of the equation then Lemma 3 (see [11]). Let be a set in the complex plane , and suppose that is a mapping from to which satisfies for and for all real such that . If is analytic in and for all , then . 3. Main Results Theorem 4. If satisfies then . Proof. Let us set a function by for . Then clearly (11) shows that . Differentiating (11) logarithmically, we have which gives Thus using (10), we have Hence, using Lemma 1, we have . From (12), we can write Since , it implies that . Therefore, we get and this implies that . Setting and in Theorem 4, we get the following. Corollary 5. If satisfies then , the class of starlike functions of complex order . Putting and in Theorem 4, we have the following. Corollary 6. If satisfies then , the class of convex functions of complex order . Remark 7. If we put in Corollaries 5 and 6, we get the results proved by Uyanık et al. [1]. Furthermore, for , we obtain the results studied by Mocanu [2] and Nunokawa et al. [3], respectively. Also if we set with and in Theorem 4, we obtain the results due to Goyal et al. [4]. Theorem 8. If satisfies where is the unique root of (8), then . Proof. Let be given by (11), which clearly belongs to the class . Now differentiating (11), we have which gives Thus using (19), we have where is the root of (8). Hence, using Lemma 2, we have . From (20), we can write Since , it implies that . Therefore, we get (16), and hence . Making , with and , we have the following. Corollary 9. If satisfies where is the unique root of (8) with , then , the class of -valent starlike functions of order . Also if we take , with and in Theorem 8, we obtain the following result. Corollary 10. If satisfies where is the unique root of (8) with , then , the class of -valent convex functions of order . Remark 11. For putting in Corollary 10 and in Corollary 9, we obtain the results proved by Mocanu [10] and Uyanık et al. [1], respectively. Theorem 12. If satisfies where and then . Proof. Let us set Then is analytic in with . Taking logarithmic differentiation of (28) and then by simple computation, we obtain with Now for all real and satisfying , we have Reputing the values of , , , and then taking real part, we obtain where , , are given in (27). Let . Then and , for all real and satisfying , . Using Lemma 3, we have . This implies that and hence . If we put and in Theorem 12, we obtain the following result proved in [12]. Corollary 13. If satisfies then . Furthermore, for in Corollary 13, we have the following result proved in [13]. Corollary 14. If satisfies then . The author is thankful to the Prof. Dr. Ihsan Ali, Vice chancellor of Abdul Wali Khan University Mardan, for providing research facilities in AWKUM. 1. N. Uyanık, M. Aydogan, and S. Owa, “Extensions of sufficient conditions for starlikeness and convexity of order $\alpha$,” Applied Mathematics Letters, vol. 24, no. 8, pp. 1393–1399, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 2. P. T. Mocanu, “Some starlikeness conditions for analytic functions,” Revue Roumaine de Mathématiques Pures et Appliquées, vol. 33, no. 1-2, pp. 117–124, 1988. View at Zentralblatt MATH · View at 3. M. Nunokawa, S. Owa, Y. Polattoglu, M. Caglar, and E. Yavuz Duman, “Some sufficient conditions for starlikeness and convexity,” Turkish Journal of Mathematics, vol. 34, no. 3, pp. 333–337, 2010. View at Zentralblatt MATH · View at MathSciNet 4. S. P. Goyal, S. K. Bansal, and P. Goswami, “Extensions of sufficient conditions for starlikeness and convexity of order $\alpha$ for multivalent function,” Applied Mathematics Letters, vol. 25, no. 11, pp. 1993–1998, 2012. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 5. H. Al-Amiri and P. T. Mocanu, “Some simple criteria of starlikeness and convexity for meromorphic functions,” Mathematica, vol. 37, no. 60, pp. 11–20, 1995. View at Zentralblatt MATH · View at 6. M. Arif, I. Ahmad, M. Raza, and K. Khan, “Sufficient condition of a subclass of analytic functions defined by Hadamard product,” Life Science Journal, vol. 9, no. 4, pp. 2487–2489, 2012. 7. M. Arif, M. Raza, S. Islam, J. Iqbal, and F. Faizullah, “Some sufficient conditions for spirallike functions with argument properties,” Life Science Journal, vol. 9, no. 4, pp. 3770–3773, 2012. 8. B. A. Frasin, “Some sufficient conditions for certain integral operators,” Journal of Mathematical Inequalities, vol. 2, no. 4, pp. 527–535, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 9. P. T. Mocanu and Gh. Oros, “A sufficient condition for starlikeness of order $\alpha$,” International Journal of Mathematics and Mathematical Sciences, vol. 28, no. 9, pp. 557–560, 2001. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 10. P. T. Mocanu, “Some simple criteria for starlikeness and convexity,” Libertas Mathematica, vol. 13, pp. 27–40, 1993. View at Zentralblatt MATH · View at MathSciNet 11. S. S. Miller and P. T. Mocanu, “Differential subordinations and inequalities in the complex plane,” Journal of Differential Equations, vol. 67, no. 2, pp. 199–211, 1987. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 12. V. Ravichandran, C. Selvaraj, and R. Rajalaksmi, “Sufficient conditions for starlike functions of order $\alpha$,” Journal of Inequalities in Pure and Applied Mathematics, vol. 3, no. 5, pp. 1–6, 2002. View at Zentralblatt MATH · View at MathSciNet 13. J.-L. Li and S. Owa, “Sufficient conditions for starlikeness,” Indian Journal of Pure and Applied Mathematics, vol. 33, no. 3, pp. 313–318, 2002. View at Zentralblatt MATH · View at MathSciNet
{"url":"http://www.hindawi.com/journals/aaa/2013/517296/","timestamp":"2014-04-20T12:24:21Z","content_type":null,"content_length":"373340","record_id":"<urn:uuid:7d5ac67d-522d-4f69-b57f-b3053cc213a9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH Publications of (and about) Paul Erdös Zbl.No: 727.11038 Autor: Erdös, Paul; Nicolas, J.L.; Sárközy, A. Title: On the number of partitions of n without a given subsum. II. (In English) Source: Analytic number theory, Proc. Conf. in Honor of Paul T. Bateman, Urbana/IL (USA) 1989, Prog. Math. 85, 205-234 (1990). Review: [For the entire collection see Zbl 711.00008.] Author's abstract: Let R(n,a) denote the number of unrestricted partitions of n whose subsums are all different of a, and Q(n,a) the number of unequal partitions (i.e. each part is allowed to occur at most once) with the same property. In a preceding paper [cf. Discrete Math. 75, 155-166 (1989; Zbl 673.05007)], we considered R(n,a) and Q(n,a) for a \leq \lambda[1]\sqrt{n}, where\lambda[1] is a small constant. Here we study the case a \geq \lambda[2]\sqrt{n}. The behaviour of these quantities depends on the size of a, but also on the size of s(a), the smallest positive integer which does not divide a. Reviewer: B.Garrison (San Diego) Classif.: * 11P81 Elementary theory of partitions 05A17 Partitions of integres (combinatorics) Keywords: unrestricted partitions; unequal partitions Citations: Zbl 711.00008; Zbl 673.05007 © European Mathematical Society & FIZ Karlsruhe & Springer-Verlag │Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │ │Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │ │Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
{"url":"http://www.emis.de/classics/Erdos/cit/72711038.htm","timestamp":"2014-04-20T16:25:14Z","content_type":null,"content_length":"4177","record_id":"<urn:uuid:3f72c662-7d2c-4538-b991-c50e1c619031>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample quantiles in statistical packages Rob J Hyndman^1 and Yanan Fan^2 1. Department of Econometrics and Business Statistics, Monash University, Clayton VIC 3800, Australia. Abstract: There are a large number of different definitions used for sample quantiles in statistical computer packages. Often within the same package one definition will be used to compute a quantile explicitly while other definitions may be used when producing a boxplot, a probability plot or a QQ-plot. We compare the most commonly implemented sample quantile definitions by writing them in a common notation and investigating their motivation and some of their properties. We argue that there is a need to adopt a standard definition for sample quantiles so that the same answers are produced by different packages and within each package. We conclude by recommending that the median-unbiased estimator is used since it has most of the desirable properties of a quantile estimator and can be defined independently of the underlying distribution. Keywords: sample quantiles, percentiles, quartiles, statistical computer packages. R code: The quantile() function in R from version 2.0.0 onwards implements all the methods in this paper. • Table 1, p361. P2 should have lower bound equal to ⌊np⌋. • p363, left column. P2 is satisfied if and only if α≥0 and β≤1. Thanks to Eric Langford and Alan Dorfman for pointing out the errors. 8 May 2007.
{"url":"http://robjhyndman.com/papers/quantiles/","timestamp":"2014-04-20T00:40:47Z","content_type":null,"content_length":"40144","record_id":"<urn:uuid:575814ee-1463-45e7-924e-e9ef7d9b478a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Derive an Equation....Please Help! :) Actually I didn't (post 28) . But that doesn't mean you are wrong (what did the spreadsheet give you?). Why so many answers to this? : an expression for 'e', without using 'e','r', or 'a*'.... Well there is a good reason why this happened. There are 4 equations and 12 variables. For the first part you were asked to get one equation with 3 variables forbidden. It worked like this: 4 equations and 12 variables ... eliminate e gives 3 equations and 11 variables ... eliminate r gives 2 equations and 10 variables ... eliminate a* gives 1 equation and 9 variables. So we have lost the three variables exactly as required. There's only one answer. (with maybe a bit of re-arranging) For part two you are asked for one equation with two variables forbidden ( 'e' doesn't count because that's the one we are trying to make) 4 equations and 12 variables ... eliminate 'r' 3 equations and 11 variables ... eliminate a* 2 equations and 10 variables. We've eliminated the forbidden variables but still have two equations. You can still eliminate a variable if you want and get a correct equation. But which one to eliminate? There's lots of choices so lots of answers. Your teacher is going to have lots of work checking each different answer to see if it works. Last edited by bob bundy (2013-03-11 07:07:34) You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=257108","timestamp":"2014-04-17T12:36:16Z","content_type":null,"content_length":"17003","record_id":"<urn:uuid:a5126736-262d-4c9b-9598-51480d1f817c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Farmers Branch, TX Prealgebra Tutor Find a Farmers Branch, TX Prealgebra Tutor ...I try to make math fun for my students and work at a pace that the student can maintain. I will meet your student at you home or a nearby location of your choice. If after your first lesson you don't feel I'm the right fit for your student just let me know and I won't charge you for the first lesson.I have taught Algebra at the high school level for over twenty years. 15 Subjects: including prealgebra, chemistry, calculus, geometry I have 25 years of experience in education. I have taught from 4th grade to 9th grade and tutored students in all grades including community college. I am certified in English and in Math for grades 1-8. 8 Subjects: including prealgebra, algebra 1, grammar, vocabulary ...He and his family spent three years in Mexico, initially serving at an orphanage in Chihuahua. Later, his family settled down, and Drew began studies in Computer Engineering. He also lived in Cape Town, South Africa for approximately one year, until returning to the States to continue with his studies. 37 Subjects: including prealgebra, Spanish, reading, chemistry ...I have worked with and am still working with graduates students on presentation development and coaching. Many of my students first language is not english and we have had to script there presentations. I have presented at numerous university including Yale and John Hopkins and at scientific meetings across the country for the last 12 years. 32 Subjects: including prealgebra, chemistry, reading, geometry ...AutoCAD and Civil Engineering go hand in hand. I spent nearly three years as a Structural Engineer/CAD Technician. For that time period, I spent at least three hours daily working in AutoCAD, creating & updating drawings. 17 Subjects: including prealgebra, physics, calculus, statistics Related Farmers Branch, TX Tutors Farmers Branch, TX Accounting Tutors Farmers Branch, TX ACT Tutors Farmers Branch, TX Algebra Tutors Farmers Branch, TX Algebra 2 Tutors Farmers Branch, TX Calculus Tutors Farmers Branch, TX Geometry Tutors Farmers Branch, TX Math Tutors Farmers Branch, TX Prealgebra Tutors Farmers Branch, TX Precalculus Tutors Farmers Branch, TX SAT Tutors Farmers Branch, TX SAT Math Tutors Farmers Branch, TX Science Tutors Farmers Branch, TX Statistics Tutors Farmers Branch, TX Trigonometry Tutors Nearby Cities With prealgebra Tutor Addison, TX prealgebra Tutors Balch Springs, TX prealgebra Tutors Bedford, TX prealgebra Tutors Carrollton, TX prealgebra Tutors Coppell prealgebra Tutors Euless prealgebra Tutors Flower Mound prealgebra Tutors Grapevine, TX prealgebra Tutors Highland Park, TX prealgebra Tutors Hurst, TX prealgebra Tutors Irving, TX prealgebra Tutors Parker, TX prealgebra Tutors Richardson prealgebra Tutors The Colony prealgebra Tutors University Park, TX prealgebra Tutors
{"url":"http://www.purplemath.com/Farmers_Branch_TX_prealgebra_tutors.php","timestamp":"2014-04-16T13:20:47Z","content_type":null,"content_length":"24415","record_id":"<urn:uuid:6e4f9c28-db69-479a-ab46-ae16e09236a9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Rectangular zoom in Chart September 30th, 2013, 09:42 AM Rectangular zoom in Chart Is there any algorithm to implement rectangular zoom? Please let me know, How should I calculate the Horizontal and Vertical Scroll bar position with respect to the rectangular zoom. Thanks in advance, November 4th, 2013, 07:37 AM Re: Rectangular zoom in Chart You are using GDI ? I could give you some hint if it is so ...
{"url":"http://forums.codeguru.com/printthread.php?t=540033&pp=15&page=1","timestamp":"2014-04-16T22:40:42Z","content_type":null,"content_length":"4839","record_id":"<urn:uuid:5adeb3a5-b989-49f2-9341-0574f768cf1a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
Academic Year History of Science Lessons High School Mathematics-Physics SMILE Meeting 1997-2006 Academic Years History of Science 16 March 1999: Bill Colson [Morgan Park HS] He told a joke of a Y2K ["y" to "k"] solution where days changed from "day" to "dak" ie, Mondak instead of Monday. Should the year 2000 would be more appropriate as Y2M, instead of Y2K?? 06 April 1999: Bill Shanks [Joliet Junior College; quasi-retired from Joliet Central HS] Bill handed copies of an article entitled "Quantum Sound", which appeared in Electronic Magazine recently. It not only explained phonons as sound quanta, but introduced the concepts of the "photino" or "microphonon?, as well as "polyphonons", "telephonons", and the important work of the Bolognese residents Dr Leonardo Da Capo and Enrico Fermata, who finance their work by selling T-shirts with the slogan Hooked on Phonons. Bill then demonstrated his remarkable musical skills on a toy saxophone presumably purloined from a small child 12 October 1999: The final discussion took a non-phenomenological turn, addressing important matters such as the following: • The 1999 Nobel Prize in Physics went to Martinus Veltman and Gerard 't Hooft [http://www.aip.org/pnu/1999/split/pnu452-1.htm] on using dimensional regularization to prove that non-Abelian gauge theories are renormalizable. That is, in this very intricate fundamental theory, "bare" or "undressed" particles [such as electrons] are the fundamental entities of the theory as formulated, but one can observe only the "fully clothed" particles [real electrons are always enshrouded in photon clouds]. Therefore, very elaborate procedures must be employed to obtain consistent results for observable quantities. The "dimensional regularization" concept involves formulating the theory and calculating things in "d" space-time dimensions, and carefully extirpating the unphysical parts in the limit d--->4. Why? So you don't unwittedly destroy the gauge symmetry of the theory. Clear?? If not, consider this observation from the Hunting of the Snark, Fit the Fifth: The Beaver's Lesson by Louis Carroll [http://www.literature.org/authors/carroll-lewis/ You boil it in sawdust; you salt it in glue; You condense it with locusts and tape; Still keeping one principal object in view To preserve its symmetrical shape. Incidentally, one of Tini's students [a friend of mine; Hill, Alfred, 29 years, born 29.06.59, Sonthofen, Germany, German, seat number 14A; see [http://www.victimsofpanamflight103.org/victims] was killed in PanAm™ Flight 103 over Lockerby Scotland several years ago. Alfred Nobel [Swedish dynamite inventor and manufacturer] set up the physics prize in his will for a "device or discovery made by during the last year", but has become more of a "lifetime achievement academy award". • Albert Einstein said that there is no such thing as time; there are only clocks. Indeed developing precise clocks has been important for science, technology, and commerce. He also said that space and time should never be considered separately, but only a composite spacetime, in which the speed of light is the same for all "observation teams". Indeed, the speed of light is no longer "measurable", being defined at the value 299794258 meters/second. However, we can speak freely about "anywhere", but might be "locked up without the key" if we discussed "anywhen". How come space and time are "the same thing, but different"? • And, what about time travel and alternative futures? Consider these [Space Child's Mother Goose; see http://www.regehr.org/books/reviews/spacechildsmothergoose.html]. There was a man with a time machine Who borrowed a Tuesday all painted green His pockets with rockets he would like to jam And he said "I have think, so I cannot am". Probable-possible, my black hen She lays eggs in the relative when She cannot lay in the positive now Because she's not able to postulate how. 06 April 1999: John Bozovsky [Bowen HS] According to him, the seriousness of the fabled Y2K problem pales in comparison with the difficulties in facing the Y1K problem, as evidenced by an article from a London newspaper circa 999 AD/CE. [ http://www.towson.edu/~duncan/Y1K.html] In fact, many people predicted the end of the world at the end of the first millennium. They may have been correct! 29 February 2000: Porter Johnson (IIT Physics) told us of Birthdays on Today's Date (Leap Day!): Gioacchino Rossini http://en.wikipedia.org/wiki/Gioachino_Rossini in 1792 and Herman Hollerith http://www-groups.dcs.st-and.ac.uk/~history/ Mathematicians/Hollerith.html in 1860. The first being the composer (who wrote William Tell), and the latter the inventor of punched card computers. Interesting stuff! 02 May 2000: Arlyn van Ek (Illiana Christian HS) showed us an interesting 50 minute video available from NOVA. See the website http://www.pbs.org/whatson/press/winspring/secrets_medsiege.html. Titled Medieval Siege, it sells for about $26 (1-800-949-8670). He uses it when he must be out of the classroom. He showed us some excerpts about the WARWOLF [the atom bomb of the 14th century] and the Trebucket (like a catapult with a sling on its end), and how a group of people experimented building several models, many to full-scale, to test whether they had the range and power to knock down the thick defensive walls of a medieval castle. They had some problems until they put wheels on it, like old engravings showed them to be built. The wheels, counter-intuitively, increased the range of the projectiles and stability of the trebucket! Most interesting physics involved. Thanks, Arlyn! 23 October 2000: Don Kanner (Lane Tech HS, Physics) Inertia Don handed out selected portions of the authorized English translation of landmark book Philosophiae Naturalis Principia Mathematica by Sir Isaac Newton, which included explanations of the following definitions and laws: [For the Latin text see http://www.maths.tcd.ie/pub/HistMath/People/Newton/Principia/Bk1Sect1/]. 1. The quantity of matter is the measure of the same, arising from its density and bulk conjointly. 2. The quantity of motion is the measure of the same, arising from the velocity and quantity of matter conjointly. 3. The vis insita, or innate force of matter, is a power of resisting, by which every body, as much as in it lies, continues in its present state, whether it be of rest, or of moving uniformly forwards in a right line. 4. An impressed force is an action exerted upon a body, in order to change its state, either or rest, or of uniform motion in a right line. 5. A centripetal force is that by which bodies are drawn or impelled, or any way tend, towards a point as to a centre. 1. Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed upon it. 2. The change of motion is proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed. 3. To every action there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts. Curiously, Newton distinguishes an innate force of matter (inertia?) in #3 from an impressed (applied) force in #4. Porter Johnson commented that the question of whether Newton actually discovered his laws by himself has been hotly debated over the years. Consider this excerpt from a Newton Biography: http:// Isaac Newton was born at Wolsthorpe, Lincolnshire on 25 December 1642. Born into a farming family and first educated at Grantham, Isaac Newton was sent to Trinity College, Cambridge, where as an undergraduate, he came under the influence of Cartesian philosophy. When confined to his home at Woolsthorpe by the plague between 1665 and 1666 Newton carried through work in the analysis of the physical world which has profoundly influenced the whole of modern science. On returning to Cambridge, Newton became a Fellow of Trinity College, and was then appointed to the Lucasian Chair of mathematics in succession to Isaac Barrow. In the 1670s lectures, demonstrations and theoretical investigations in optics occupied Newton. In 1672 he constructed the reflecting telescope today named after him, but in the early years of the 1680s correspondence with Robert Hooke re-awakened his interest in dynamics. After Edmond Halley's visit to Cambridge to encourage him in this work, Newton laid the foundations of classical mechanics in the composition of his fundamental work Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), which was presented to the Royal Society in 1686, and its subsequent publication being paid for by his close friend Edmund Halley. Consider also this excerpt from the Biography of Robert Hooke: http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Hooke.html In 1672 Hooke attempted to prove that the Earth moves in an ellipse round the Sun and six years later proposed the inverse square law of gravitation to explain planetary motions. Hooke wrote to Newton in 1679 asking for his opinion:- ... of compounding the celestiall motions of the planetts of a direct motion by the tangent (inertial motion) and an attractive motion towards the centrall body ... my supposition is that the Attraction always is in a duplicate proportion to the Distance from the Center Reciprocall... Hooke seemed unable to give a mathematical proof of his conjectures. However he claimed priority over the inverse square law and this led to a bitter dispute with Newton who, as a consequence, removed all references to Hooke from the Principia. For balance, look at the corresponding Newton Biography on the St Andrews website: http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Newton.html. Consider also the excerpt from this source: http://www.aps.org/units/fhp/FHPnews/reports.cfm Michael Nauenberg, University of California, Santa Cruz, who organized the session, presented a paper entitled, "Newton's Early Computational Method for Dynamics." He began by observing that despite considerable historical research, very little is known about how Newton developed the mathematical theory of orbital dynamics which culminated in the Principia. A letter from Newton to Hooke, written on Dec. 13, 1679, reveals that Newton had made considerable more progress in understanding central force motion than had been previously realized. In particular a careful analysis of the original diagram which appears in this letter reveals that by then Newton understood by the fundamental symmetries of orbital motion for central forces. Moreover, the text of the letter indicates that he had developed a computational method to evaluate orbital motion for arbitrary central forces. Nauenberg went on to show that the early mathematical method Newton used to solve orbital motion for general central forces in his letter to Hooke was based on the calculus of curvature which he developed in the late 1660's. In correspondence with Newton in late 1679, Hooke suggested an alternative physical approach to which Newton gave a mathematical formulation without acknowledging Hooke (later in 1686 he wrote to Halley emphatically denying that Hooke had made any important contributions). This approach led Newton immediately to the discovery of the physical basis of Kepler's area law, which remained hidden in his earlier curvature method. The new approach is described in Proposition I, Theorem I of the Principia, and constitutes the cornerstone for the geometric methods in the book. 11 September 2001: Bill Colson (Morgan Park HS, Mathematics) passed out copies of a page from Popular Science Flash Forward Summer 2001, entitled Famous Last Words. It contained such entries as the following: • "Louis Pasteur's theory of germs is ridiculous fiction." --Pierre Pachet, Professor of Physiology at Toulouse University, 1872. • "My personal desire would be to prohibit entirely the use of alternating currents. There are [as] unnecessary as they are dangerous. --Thomas A Edison, North American Review, 1998 • "Airplanes are interesting toys but of no military value." -- French Marshal Ferdinand Foch, Professor of Strategy École Superieure de Guerre. He also passed around a copy of the book Flatterland: Like Flatland, only More So by Ian Stewart [Perseus Publishing 2001 ISBN 0 - 7382 - 04420]. This book stands as a sequel to the classic book Flatland by Edwin Abbott [Dover 1982 ISBN: 048627263X ]. Like its predecessor, it delves into travel from one dimension to another, including the "fractal forest", or the "Mandel Blot". Porter Johnson pointed out that theories involving gravity in ten dimensional spacetime are currently under serious investigation. 20 January 2002 Bill Shanks (Joliet Central, retired): Obituary of ISPP Participant Leo Seren [originally published in the Los Angles times at http://www.latimes.com/news/obituaries/la-000002603jan11.story?coll=la%2Dnews%2Dobituaries] Leo Seren, 83; Physicist on the Atomic Bomb Who Turned Pacifist. Leo Seren, 83, a University of Chicago physicist who called himself a war criminal for the role he played in the development of the atomic bomb, died of amyloidosis Jan. 3 at a hospital in Evanston, Ill. Seren had just earned his doctorate from the University of Chicago when he went to work with Enrico Fermi on the Manhattan Project in 1942. Seren was one of 51 people present in an abandoned squash court at the university's Stagg Field on Dec. 2, 1942, when the first nuclear reactor achieved critical mass. Seren's job was to measure the density of neutrons in piles of graphite, uranium and cadmium control rods used to build the reactor. He worked on nuclear power until 1960, stopping when he reached the conclusion that there was no way to safely dispose of radioactive waste. He began to focus on renewable energy sources, such as solar, wind and water power. In a 1982 speech before anti-nuclear demonstrators at the University of Chicago, Seren spoke of his regret over his role in the Manhattan Project, which led to the devastation of Hiroshima and Nagasaki, Japan, in 1945 and the loss of tens of thousands of lives. He said that if he were tried for crimes against humanity, "I'd plead guilty. And I'd say for mitigating circumstances that at least I decided that I'd never work on nuclear weapons again." 06 May 2003 Hiroshima and Nagasaki for Physics Teachers: A one-week workshop, 7-11 July , 2003: Guide: Raymond G. Wilson, Ph.D., Emeritus Associate Professor, Physics Department, Illinois Wesleyan University, Bloomington, IL 61702. He began teaching about the effects of nuclear war in 1959, and has spent seven summers of study in Hiroshima. For details see the website http://titan.iwu.edu/~physics/ 29 November 2005: Larry Alofs showed us the inside of a combination lock apparatus, which might once have been a locking door for a post office box. There are three round tumbler plates inside the lock, each having a notch. Each number of the combination rotates one of the plates so that the notch is aligned in a certain direction. The right sequence of numbers serves to align the three notches on the three plates, which allows the lock to be opened. Neat! Thanks, Larry. Porter Johnson then told us the story of how Richard Feynman, who, during his time at Los Alamos working on development of the atomic bomb during WWII, devised a way to open combination locks. He first made these observations: • Most people leave an open combination lock at the position of the last number in the combination. • Most combination locks will open if you get fairly close to the precise number in the combination. For a standard 40 number combination lock with 3 tumblers, it would take 40^3 = 64,000 trials to determine the correct combination. Feynman cleverly reduced the number of trials to 8^2 = 64, making it fairly easy to open an ordinary lock. 21 October 2003: Bill Blunk [Joliet Central HS, physics] Columbus and the Telescope In his unofficial capacity as guardian of public interest in historically accurate portrayal of science in the media, Bill was somewhat surprised to see an advertisement [from a furniture store] portraying Christopher Columbus looking out from his sailing vessel with a telescope. This image was quite remarkable, since Columbus sailed in 1492, and the telescope was invented around 1608. For more information on CC, the son of a wool merchant who found his way to the new world, see The Columbus Navigation Homepage [http://www.columbusnavigation.com]. Who really invented the telescope, and when? For insights, see http://www.mce.k12tn.net/renaissance/inventions.htm or http://www.ee.umd.edu/~taylor/optics3.htm, or perhaps even http://news.bbc.co.uk/1/hi/sci/tech/380186.stm. It makes one wonder about the furniture, as well! Thanks for blowing the whistle on this one, Bill! 18 November 2003: John Scavo [Evergreen Park HS] How Many Blades on a Propeller?? John reminded us of two important anniversaries: • Seventy-fifth Anniversary of Mickey Mouse. A physical unit called Mickey Count corresponds to the movement of a computer mouse by 1/200 inch. Isn't that fascinating? • December 17, 2003: Hundredth anniversary of the flight of the Wright Brothers at Kitty Hawk NC. According to John, the Wright brothers had determined how to make the proper size and shape of propeller. They calculated that 90 pounds of force were required for the launch, and they measured a force of 160 pounds; thus they decided to fly! For more details see the NASA website Wright 1903 Flyer -- Propeller Propulsion: http://wright.nasa.gov/airplane/propeller.html and the Nova: http:// www.pbs.org/wgbh/nova/wright/flye-nf.html. John mentioned that the P-51 Mustang fighter plane, champion of the skies in World War II, could fire its guns at their maximum rate for only 15 seconds before running out of ammunition. How many blades are optimal for a propeller? The answer depends upon many factors, such as the pitch of the blades, their operating speed, their size and shape, the weight, shape, and cruising speed of the plane, etc. The problem is different for a windmill, which converts wind energy into more useful forms. For details see An Illustrated History of Wind Power Development: http://telosnet.com/ wind/index.html . See also the science project Number and Size of Blades on Wind Turbine vs Electrical Output: http://www.selah.k12.wa.us/SOAR/SciProj2000/JohnH.html. Interesting questions, John! 24 February 2004: Carl Martikean [Crete Monee HS, Physics] Galileo's Study of Motion (handout) Carl pointed out that, in Galileo's time, the subject of algebra was very new to Europeans -- even the well-educated elite. The scientists of his and Isaac Newton's time were more comfortable with geometrical arguments, since the study of Euclidean geometry was part of their education. Thus, Galileo's analysis of Accelerated Motion [in effect, blocks sliding down inclined planes] was given in the language of geometry. Carl discussed Galileo's proof of the following proposition: Theorem II, Proposition II "The spaces described by a body falling down from rest with a uniformly accelerated motion are to each other as the squares of the time intervals employed in traversing these distances." [Today we are more comfortable in describing such motions using algebra: d = v[o] t +1/2 a t^2, with v[o] = 0 for starting from rest.] Using spark timer data to describe uniform acceleration, Carl outlined the steps in analyzing the information in the style of Galileo. We were struck with the difficulty we encountered in following Galileo's approach. Carl also mentioned that it took only 3-4 generations for investigators to develop calculus, once they had learned about algebra. Still, the scientists of the late renaissance considered themselves first and foremost as geometers. When Sir Isaac Newton said that there was no royal path to learning geometry, he used the word "geometry" to mean what we call "physics" today. For more details on Galileo's life and discoveries, see the ST Andrews [UK] History of Mathematics website [http://www-groups.dcs.st-and.ac.uk/~history/Indexes/HistoryTopics.html], and especially their biographical information on Galileo Galilei [http:// Thanks for sharing this new insight, Carl. 04 May 2004: Ann Brandon [Joliet West HS, physics] Sophie Germain; phone cord Ann passed around the article Sophie Germain: Genius with a Pseudonym [http://sciencewomen.blogspot.com/2008/11/sophie-germain-mathematical-genius.html], which appeared in the Program for the Goodman Theater production of the play Proof by David Auburn, which deals with a fragile young woman without formal mathematical training who makes an important discovery concerning prime numbers. Sophie Germain (1776-1831) [http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Germain.html] --alias Monsieur Le Blanc -- obtained important results on Fermat's Last Theorem [http:// www-groups.dcs.st-and.ac.uk/~history/HistTopics/Fermat's_last_theorem.html#40] for Germain Primes --- prime numbers n for which 2n+1 is also a prime. See also the column This Month in Physics History in the APS NEWS, 02 May 2004 [http://www.aps.org/apsnews/] Revolutionary Pursuits: Circa May 1816: Germain Forms Theory of Elastic Surfaces. Sophia Germain was instrumental in saving the life of the most famous mathematician in the world, Carl Frederich Gauss [ http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Gauss.html], from the invading armies of Napoleon Bonaparte. Gauss once said that she must have "the noblest courage, quite extraordinary talents, and superior genius". Ann also showed us a novel use for an ordinary phone cord, a coiled wire that connects the handset to the main body of the telephone. These cords are available separately at low cost -- for example, try the Dollar Store. Simply hold the cord at both ends and stretch it -- the cord can be used to display transverse waves much more easily and reliably than with our usual choice, Mr Slinky. Very nice ideas, Ann. Excellent! 14 September 2004: Porter Johnson and Don Kanner called attention to The Archimedes Palimpsist, which was discussed in a recent PBS program. For details see http://www.pbs.org/wgbh/nova/archimedes/ palimpsest.html on the PBS website, http://www.pbs.org/. Note: a palimpsist is a parchment or tablet that has been written upon more than once, the previous text having been imperfectly erased and still visible. This partial manuscript of The Method by Archimedes indicates that he used continuous summation (integral calculus) in calculating the volumes inside curved surfaces. 09 November 2004: Dianna Uchida [Morgan Park HS] Neat Book Dianna recently obtained the book Science Explorer [ISBN 0-7566-0430-3] for $12.97 at COSTCO. This book, published in 2004 by DK Eyewitness Books [http://www.dk.com/], shows a beautifully illustrated and annotated variety of historical apparatus, and is rather wide-ranging. She cited the example of development of bronze tools for axe blades, swords, and (ouch!) razors in primitive society. Thanks for the information, Dianna! 13 September 2005: Karlene Joseph (Lane Tech HS, physics) Predicting Paths: Exercises for Critical Thinking Karlene has been searching for problems for her students that get away from "plug and chug" and more towards critical thinking. One she called "Predicting Paths". The first example was a hoop that was marked with a point on its circumference. Karlene slowly rolled the hoop on the (flat, horizontal) table. She asked her students to visualize/determine the path traveled by the single point as the hoop rolls in one direction. Several of us commented that the curve is a cycloid -- the special case of a Brachistochrone is described below. A similar question involves a hoop with the point rolling on the inside of a fixed circle. The curve here might be called an epicycloid. A third one is like the second but now the outer (large) hoop rolls while the inner hoop is stationary -- like a point on the edge of a Hula Hoop. The next time around Karlene asked her students to design an apparatus to give a physical demonstration of the process and the results. One student represented the cycloid visually by taping a flashlight onto the inside edge of a coffee can, and rolling the can across the table. Nifty! Thanks for the ideas, Porter had us consider an application, the Brachistochrone problem, posed by Johann Bernoulli in 1696 --- see Brachistochrone Problem: http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/ Brachistochrone.html. Here is a simple statement of the problem: Let us connect two fixed points, A and B, by a wire that maintains its shape. What is the optimal shape of the wire, for a bead (of constant weight) to slide down the wire from A to B -- without friction -- in the shortest time? This question was the first one solved using The Calculus of Variations. For additional details see the website The Brachistochrone: http://whistleralley.com/brachistochrone/brachistochrone.htm. 13 December 2005: Bill Colson [Morgan Park HS, mathematics] Stamps Bill called attention to a set of commemorative stamps that honor Richard Feynman and other scientists. For details see the USPS website http://www.usps.com/communications/news/stamps/2004/ sr04_076.htm and the Friends of Tuva website: http://www.fotuva.org/online/frameload.htm?/online/stamp.html. Be sure to obtain this stamp before 08 January 2006, when postage rates increase. 24 January 2006: Bill Colson (Morgan Park HS, math) Handouts Bill passed around the article Secret Science in Art by Josie Glausiusz, which appeared in the December 2005 issue of Discover Magazine: http://www.discover.com/issues/dec-05/features/ physics-art-matisse-seurat/. The article contains the following introductory statement: "Shakespeare, Seurat, and Matisse knew little about physics, but their work is awash in its principles." Bill also showed us the 08 January 2006 Foxtrot cartoon by Bill Amend, Physicists always lose snowball fights: http://yodha.livejournal.com/136964.html. Thanks, Bill. 04 April 2006: Terri Donatello (retired) Newspaper Articles Terri brought in two recent newspaper articles. In one a team of MIT scientists tried to recreate Archimedes death ray: http://web.mit.edu/2.009/www//experiments/deathray/10_Mythbusters.html. While they did not disprove that it actually happened, they showed that it was unlikely to have worked. A second article was about the current neutrino project at Fermilab (in which IIT is significantly involved). It described a continuing experiment in which neutrinos made at Fermilab sre beamed 450 miles to a detector 0.5 mile below the surface in the Soudan Underground Iron Mine in northern Minnesota: http://www.dnr.state.mn.us/state_parks/soudan_underground_mine/index.html. Thanks, Terri. 02 May 2006: Porter Johnson (IIT physics) Shroud of Turin Project Porter learned about a Shroud of Turin Project website from Steve Feld, Editor of ThinkQuest NYC Newsletter: http://shroud2000.com/LatestNews.htm. The site is featured on the Landmarks for Schools homepage http://landmark-project.com/index.php. The website describes scientific investigations on the authenticity of The Shroud of Turin. It gives a link to various English and Greek Language versions of a Biblical eye-witness account in the Gospel of John, Chapter 20, verse 7, in which the shroud of Jesus is specifically mentioned [Online Parallel Bible: http://bible.cc/john/20-7.htm]. Finally, it provides information to suggest that Leonardo DaVinci may actually have created the shroud, discussing his motivations as well as his capabilities. A re-creation of a Shroud Image has recently been done by Stephen Beckman, using a camera obscura -- along with other materials and technology available in Leonardo's era. This Shroud of Turin website is http://shroud2000.com/ LatestNews.htm. The website merely presents the information, allowing visitors to the website to express their own opinions on this subject. The results of an online poll are presented. The project provides an excellent example of Scientific Inquiry! Thanks.
{"url":"http://mypages.iit.edu/~smart/acadyear/historyofscience.htm","timestamp":"2014-04-19T01:46:29Z","content_type":null,"content_length":"36538","record_id":"<urn:uuid:68ab935b-184c-45cf-b49f-63ca55d37795>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Influence diagram 34,117pages on this wiki Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Cognitive Psychology: Attention · Decision making · Learning · Judgement · Memory · Motivation · Perception · Reasoning · Thinking - Cognitive processes Cognition - Outline Index This article needs rewriting to enhance its relevance to psychologists.. Please help to improve this page yourself if you can.. In [decision theory]] an influence diagram (ID) (also called a relevance diagram, decision diagram or a decision network) is a compact graphical and mathematical representation of a decision situation. It is a generalization of a Bayesian network, in which not only probabilistic inference problems but also decision making problems (following maximum expected utility criterion) can be modeled and solved. ID was first developed in mid-1970s within the decision analysis community with an intuitive semantic that is easy to understand. It is now adopted widely and becoming an alternative to decision tree which typically suffers from exponential growth in number of branches with each variable modeled. ID is directly applicable in team decision analysis, since it allows incomplete sharing of information among team members to be modeled and solved explicitly. Extension of ID also find its use in game theory as an alternative representation of game tree. An ID is a directed acyclic graph with three types (plus one subtype) of node and three types of arc (or arrow) between nodes. □ Decision node (corresponding to each decision to be made) is drawn as a rectangle. □ Uncertainty node (corresponding to each uncertainty to be modeled) is drawn as an oval. ☆ Deterministic node (corresponding to special kind of uncertainty that its outcome is deterministically known whenever the outcome of some other uncertainties are also known) is drawn as a double oval. □ Functional arcs (ending in value node) indicate that one of the components of additively separable utility function is a function of all the nodes at their tails. □ Conditional arcs (ending in uncertainty node) indicate that the uncertainty at their heads is probabilistically conditioned on all the nodes at their tails. ☆ Conditional arcs (ending in deterministic node) indicate that the uncertainty at their heads is deterministically conditioned on all the nodes at their tails. □ Informational arcs (ending in decision node) indicate that the decision at their heads is made with the outcome of all the nodes at their tails known beforehand. Given a properly structured ID; □ Decision nodes and incoming information arcs collectively state the alternatives (what can be done when the outcome of certain decisions and/or uncertainties are known beforehand) □ Uncertainty/deterministic nodes and incoming conditional arcs collectively model the information (what are known and their probabilistic/deterministic relationships) □ Value nodes and incoming functional arcs collectively quantify the preference (how things are preferred over one another). Alternative, information, and preference are termed decision basis in decision analysis, they represent three required components of any valid decision situation. Formally, the semantic of influence diagram is based on sequential construction of nodes and arcs, which implies a specification of all conditional independencies in the diagram. The specification is defined by the $d$-separation criterion of Bayesian network. According to this semantic, every node is probabilistically independent on its non-successor nodes given the outcome of its immediate predecessor nodes. Likewise, a missing arc between non-value node $X$ and non-value node $Y$ implies that there exists a set of non-value nodes $Z$, e.g., the parents of $Y$, that renders $Y$ independent of $X$ given the outcome of the nodes in $Z$. Simple influence diagram for making decision about vacation activity Consider the simple influence diagram representing a situation where a decision-maker is planning her vacation. □ There is 1 decision node (Vacation Activity), 2 uncertainty nodes (Weather Condition, Weather Forecast), and 1 value node (Satisfaction). □ There are 2 functional arcs (ending in Satisfaction), 1 conditional arc (ending in Weather Forecast), and 1 informational arc (ending in Vacation Activity). □ Functional arcs ending in Satisfaction indicate that Satisfaction is a utility function of Weather Condition and Vacation Activity. In other words, her satisfaction can be quantified if she knows what the weather is like and what her choice of activity is. (Note that she does not value Weather Forecast directly) □ Conditional arc ending in Weather Forecast indicates her belief that Weather Forecast and Weather Condition can be dependent. □ Informational arc ending in Vacation Activity indicates that she will only know Weather Forecast, not Weather Condition, when making her choice. In other words, actual weather will be known after she makes her choice, and only forecast is what she can count on at this stage. □ It also follows semantically, for example, that Vacation Activity is independent on (irrelevant to) Weather Condition given Weather Forecast is known. Applicability in value of information The above example highlights the power of influence diagram in representing an extremely important concept in decision analysis known as value of information. Consider the following three scenarios; □ Scenario 1: The decision-maker could make her Vacation Activity decision while knowing what Weather Condition will be like. This corresponds to adding extra informational arc from Weather Condition to Vacation Activity in the above influence diagram. □ Scenario 2: The original influence diagram as shown above. □ Scenario 3: The decision-maker makes her decision without even knowing the Weather Forecast. This corresponds to removing informational arc from Weather Forecast to Vacation Activity in the above influence diagram. Scenario 1 is the best possible scenario for this decision situation since there is no longer any uncertainty on what she cares about (Weather Condition) when making her decision. Scenario 3, however, is the worst possible scenario for this decision situation since she needs to make her decision without any hint (Weather Forecast) on what she cares about (Weather Condition) will turn out to be. The decision-maker is usually better off (definitely no worse off) to move from scenario 3 to scenario 2 through the acquisition of new information. The most she should be willing to pay for such move is called value of information on Weather Forecast, which is essentially value of imperfect information on Weather Condition. Likewise, it is the best for the decision-maker to move from scenario 3 to scenario 1. The most she should be willing to pay for such move is called value of perfect information on Weather Condition. The applicability of this simple ID and the value of information concept is tremendous, especially in medical decision making when most decisions have to be made with imperfect information about patients, diseases, etc. Influence diagrams are hierarchical and can be defined either in terms of their structure or in greater detail in terms of the functional and numerical relation between diagram elements. An ID that is consistently defined at all levels—structure, function, and number—is a well-defined mathematical representation and is referred to as a well-formed influence diagram (WFID). WFIDs can be evaluated using reversal and removal operations to yield answers to a large class of probabilistic, inferential, and decision questions. More recent techniques have been developed by artificial intelligence community with their works around Bayesian network inference (Belief propagation). Influence diagram having only uncertainty nodes (i.e., Bayesian network) is also called relevance diagram. This is perhaps a better use of language than influence diagram. An arc connecting node A to B implies not only that "A is relevant to B", but also that "B is relevant to A" (i.e., relevance is a symmetric relationship). The word influence implies more of a one-way relationship, which is reinforced by the arc having a defined direction. Since some arcs are easily reversed, this "one-way" thinking that somehow "A influences B" is incorrect (the causality could be the other way around). However, the term relevance diagram is never adopted in larger community, and the world continues to refer to influence diagram. • Detwarasiti, A. and R.D. Shachter. (2005). Influence diagrams for team decision analysis. Decision Analysis 2(4): 207-228. • Holtzman, Samuel, Intelligent Decision Systems (1989), Addison-Wesley. • Howard, R.A. and J.E. Matheson, "Influence diagrams" (1981), in Readings on the Principles and Applications of Decision Analysis, eds. R.A. Howard and J.E. Matheson, Vol. II (1984), Menlo Park CA: Strategic Decisions Group. • Koller, D. and B. Milch. (2003). Multi-agent influence diagrams for representing and solving games. Games and Economic Behavior, 45: 181-221. • Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, CA: Morgan Kaufmann Publishers, Inc. • Shachter, R.D. (1986). Evaluating influence diagrams. Operations Research, 34: 871-882. • Shachter, R.D. (1988). Probabilistic inference and influence diagrams. Operations Research 36: 589-604. • Lev Virine and Michael Trumper. Project Decisions: The Art and Science, Vienna, VA: Management Concepts, 2008. ISBN 978-1-56726-217-9 See also External links
{"url":"http://psychology.wikia.com/wiki/Influence_diagram?oldid=146831","timestamp":"2014-04-25T06:21:50Z","content_type":null,"content_length":"75000","record_id":"<urn:uuid:f7569a38-30fd-43e2-8ac7-a9e5ce50066e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
User Andrea Mori bio website location Torino, Italia age 53 visits member for 4 years, 2 months seen Apr 8 at 22:30 stats profile views 859 Laurea (B.S.): Roma 1984 Ph.D. Brandeis 1989 At the University of Torino since 1990. 23 awarded Popular Question 18 awarded Yearling Feb On Universal Abelian surfaces over a Shimura curve. 18 comment $\binom{\tau^\prime}1=\binom{a^{-1}(\tau)}1=j(a^{-1},\tau)^{-1}a\binom\tau1$ Feb On Universal Abelian surfaces over a Shimura curve. 18 revised added 319 characters in body 18 answered On Universal Abelian surfaces over a Shimura curve. May Do you set a one or two commas when using \mapsto? 6 comment In the case in question one can just write "The canonical map $f:X\rightarrow Y$ is injective". 3 answered Trichotomies in mathematics Feb Trichotomies in mathematics 3 comment Holy Trinity! That's a striking observation. 3 accepted Explicit Casselman theory: reference needed Jan Explicit Casselman theory: reference needed 29 revised added 41 characters in body Jan Explicit Casselman theory: reference needed 28 comment @Giuseppe: Grazie! I don't understand what I was doing wrong... Jan Explicit Casselman theory: reference needed 28 revised deleted 82 characters in body Jan Explicit Casselman theory: reference needed 28 comment Class 1 principal series are induced from unramified characters. Sorry about the down-vote: I feel that down-voters should always give a justification. 28 accepted Infinite sets of primes of density 0 Jan Explicit Casselman theory: reference needed 28 revised edited body Jan Explicit Casselman theory: reference needed 28 comment @Marc_Palm : I'm certainly primarily interested in complex representations. I left that somewhat implicit, also the remark about modular forms should be revealing! Please repost your Jan Explicit Casselman theory: reference needed 28 revised added 8 characters in body 28 asked Explicit Casselman theory: reference needed Dec a question on game of chess 11 comment Who does actually conjectures that White has a winning strategy? One hundred and forty years of Grandmaster practice seems to suggest that the game is about even. Apr Notation for a canonical quotient of an abelian variety in positive characteristic 29 comment @Kevin Buzzard: I actually like the idea of calling it $A^{(1/p)}$! I'm sort of surprised, though, that there is no standard notation for it, like if this object had not been given much
{"url":"http://mathoverflow.net/users/3602/andrea-mori?tab=activity","timestamp":"2014-04-19T12:05:57Z","content_type":null,"content_length":"45177","record_id":"<urn:uuid:cf72737d-8bd5-4bce-a067-daf5644d3765>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/muhammad9t5/answered/1","timestamp":"2014-04-16T22:31:12Z","content_type":null,"content_length":"114910","record_id":"<urn:uuid:17eaf0d4-a529-4f3c-a572-8391582dc8fc>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
ADM formalism ADM Formalism developed by Richard Arnowitt Stanley Deser Charles W. Misner is a formulation of general relativity . This formulation plays an important role both in quantum gravity numerical relativity A comprehensive review of this formalism was published by the same authors in "Gravitation: An introduction to current research" Louis Witten (editor), Wiley NY (1962); chapter 7, pp 227–265. Recently, this has been reprinted in the journal General Relativity and Gravitation The original papers can be found in Physical Review The formalism supposes that into a family of spacelike surfaces <math>Sigma_t</math>, labeled by their time coordinate <math>t</math>, and with coordinates on each slice given by <math>x^i</math> . The dynamic variables of this theory are taken to be the metric tensor of three dimensional spatial slices <math>gamma_(t,x^k)</math> and their conjugate momenta <math>pi^(t,x^k)</math>. Using these variables it is possible to define a , and thereby write the equations of motion for general relativity in the form of Hamilton's equations In addition to the twelve variables... Read More
{"url":"http://pages.rediff.com/adm-formalism/584697","timestamp":"2014-04-20T09:15:45Z","content_type":null,"content_length":"31967","record_id":"<urn:uuid:1b0d5f07-73b9-49a5-b084-e2b5bc73fcae>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof that there is no largest prime number Suppose there were a largest prime number. Call it N. Now consider N! + 1. Clearly, N! + 1 does not have any number between 1 and N as a divisor. This means that either a) N! + 1 is prime, or b) N! + 1 has a prime divisor greater than N. In either case, we obtain a contradiction. Thus, there is no largest prime number. QED. See also proof, mathematics.
{"url":"http://everything2.com/title/Proof+that+there+is+no+largest+prime+number","timestamp":"2014-04-18T18:44:13Z","content_type":null,"content_length":"35217","record_id":"<urn:uuid:7dc3123b-3455-40c1-86ee-b5d7d7a8471b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00124-ip-10-147-4-33.ec2.internal.warc.gz"}