content (string, lengths 86 to 994k)
meta (string, lengths 288 to 619)
Give the Diode Logic Diagram for the Following Boolean Expression | Chegg.com Give the diode logic diagram for the following boolean expression (use positive logic and assume two-input gates). Image text transcribed for accessibility: Give the diode logic diagram for the following boolean expression (use positive logic and assume two-input gates): C = x + y + z. Electrical Engineering
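With positive logic the expression only needs OR behaviour, so C = x + y + z can be realized by two two-input diode OR gates in cascade (the output of the first gate feeding one input of the second). A tiny truth-table sketch in Python checks the gate decomposition (not the diode circuit itself); the helper name is mine, not from the problem:

```python
# Minimal sketch: verify that two cascaded two-input OR gates realize C = x + y + z.
from itertools import product

def or2(a, b):
    # Idealized two-input diode OR gate, positive logic: output is high
    # whenever either input is high.
    return a | b

for x, y, z in product((0, 1), repeat=3):
    c = or2(or2(x, y), z)      # first gate forms x+y, second forms (x+y)+z
    assert c == (x | y | z)
    print(x, y, z, "->", c)
```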
{"url":"http://www.chegg.com/homework-help/questions-and-answers/give-thediode-logic-diagram-followingbooleanexpression-use-positve-logic-assume-two-input--q316280","timestamp":"2014-04-21T15:40:36Z","content_type":null,"content_length":"20896","record_id":"<urn:uuid:a36f8d1d-ff23-430e-af9a-8ee67128249a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
A Bayesian approach of Quantitative Polymerase Chain Reaction Nadia Lalam and Christine Jacob In: European Conference on Mathematical and Theoretical Biology, 18-22 Jul 2005, Dresden, Germany. Quantitative Polymerase Chain Reaction aims at determining the initial amount $X_0$ of a specific portion of DNA molecules from the observation of the amplification process of the DNA molecules quantity. This amplification process is achieved through successive replication cycles. It depends on the efficiency $\{p_n\}_n$ of the replication of the molecules, $p_n$ being the probability that a molecule will duplicate at replication cycle $n$. Modelling the amplification process by a branching process and assuming $p_n=p$ for all $n$, we estimate the unknown parameter $\theta=(p, X_0)$ using Markov Chain Monte Carlo methods under a Bayesian framework.
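The abstract gives no code; the following is only an illustrative sketch of the idea it describes, with a constant-efficiency Galton-Watson amplification model and a random-walk Metropolis sampler for theta = (p, X0). The priors, proposal scales, and observation model are my own assumptions, not those of Lalam and Jacob:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(p, x0, cycles=20):
    """Galton-Watson qPCR amplification: each molecule duplicates with probability p."""
    x = [int(x0)]
    for _ in range(cycles):
        x.append(x[-1] + rng.binomial(x[-1], p))
    return np.array(x, dtype=float)

def log_post(p, x0, obs, sigma=0.05):
    """Log-posterior with flat priors and a lognormal observation model around the
    expected trajectory x0 * (1 + p)^n (an assumption, not the paper's likelihood)."""
    if not (0.0 < p < 1.0) or x0 <= 0.0:
        return -np.inf
    mean = x0 * (1.0 + p) ** np.arange(len(obs))
    return -0.5 * np.sum(((np.log(obs) - np.log(mean)) / sigma) ** 2)

obs = simulate(0.8, 100)          # synthetic amplification data
p, x0 = 0.5, 50.0                 # initial guess for theta = (p, X0)
chain = []
for _ in range(20000):
    # symmetric random-walk proposals, so the plain Metropolis ratio applies
    p_new, x0_new = p + rng.normal(0, 0.02), x0 + rng.normal(0, 5.0)
    if np.log(rng.uniform()) < log_post(p_new, x0_new, obs) - log_post(p, x0, obs):
        p, x0 = p_new, x0_new
    chain.append((p, x0))
print(np.mean(chain[5000:], axis=0))  # posterior-mean estimates of (p, X0)
```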
{"url":"http://eprints.pascal-network.org/archive/00001467/","timestamp":"2014-04-20T03:13:07Z","content_type":null,"content_length":"6860","record_id":"<urn:uuid:fba769da-6757-48e4-8f13-781db2a3a98c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
How much pressure did it take to pop the top off Mexico's Popocatépetl volcano? | Science Blogs | WIRED

On June 17, the Popocatépetl volcano in the state of Puebla in Mexico belched out a pretty impressive-looking volcanic plume. Fortunately for us, it was caught on webcam, at a town a safe distance away. Here's the video (it's been sped up):

Now, I'm guessing this explosion didn't come as a big surprise. Popocatépetl is a known active volcano. Even the Aztecs knew this; that's why they named it the smoking mountain (in their language, popōca means 'it smokes' and tepētl means 'mountain'). The volcano is under 24-hour surveillance by CENAPRED, who have also restricted access to anywhere within 12 kilometers of the crater.

Watching that video, two things leapt out at me. First, you can actually see the clouds react to the explosion just a little while after the plume emerges. That's when the shock wave of the explosion (the BOOM) hits the clouds. The other thing that struck me was the incredible amount of stuff that's rolling down the volcano's slope *really* fast. Here's how Wired blogger and volcanologist Erik Klemetti describes what's happening:

Now, these explosions come with a lot of force, and you can see after the initial explosion is how the clouds of water vapor around Popocatepetl shudder as the explosion front moves past. Then quickly, the upper flanks of the volcano turn grey from the rapid raining out of ash and volcanic debris (tephra).

So, let's get our SCIENCE on, and try to dig beneath the surface of this volcanic eruption (figuratively speaking, of course). Here's my first question: just how fast is that debris sliding down the mountain?

Here's the plan. Let's find the distance that the debris travels. Then we'll time how long it took to cover that distance. Divide the distance by the time, and we've got the speed.

If you noticed, the video has a timestamp in the top-right corner. So it's easy to time the debris as it rolls down the mountain. How about the distance? Well, first I wanted to find out where the video was recorded from, so I went to the URL on the YouTube video. From there, it was easy to find the webcam feed, which actually tells you where the webcam is located. It's in San Nicolás de Los Ranchos, a town that's about 15 kilometers (9 miles) away from the crater.

Next, I zoomed in to the volcano on Google Earth, and put down a marker at the center of the volcano (where the debris came from), and three more markers lower down on the volcano slope, right at the edge of the tree line. The reason for putting three markers on the right is so I can take the average of three distance measurements.

Now comes the fun part. Using Google Earth, I can actually fly over to San Nicolás de Los Ranchos, look up at the volcano, and see if those markers are at the right place. That looks pretty reasonable to me. The markers are pretty much where the debris stops sliding in the video.

While we're at it, just for fun, here's a video of what the explosion would have looked like to residents of San Nicolás de Los Ranchos. I made this by landing on the town in Google Earth, looking up at the volcano, and then lining up the video volcano with the Google Earth volcano (the video is still sped up, though). It just blows my mind that we have access to a life-sized map of the world (without stepping outdoors, that is).

Alright, enough fooling around. Now for some more gratuitous science. I used the ruler tool in Google Earth to measure the on-the-ground distance between the markers.
This takes into account the sloping terrain – it's the distance you'd cover if you were to start walking from the lower marker, and walk straight up the volcano slope and into the crater (don't try this at home, unless you're the lava-walking dude in this crazy video). The average of the three measurements was 3.31 kilometers. That's the average length of one of the red paths.

Now, to measure the time. Looking at the timestamp on the video, I see that the volcano let out the plume at 13:23:38. Depending on what part of the debris you want to consider, it reaches the treeline somewhere between 13:24:22 and 13:24:46. So it took somewhere between 44 and 68 seconds to reach that point.

Divide the distance by the time, and we get our speed. The slow estimate puts it at 49 meters/second (109 mph), and the fast estimate puts it at 75 meters/second (168 mph). Taking the average, we get 62 meters/second or 139 mph.

Since we got a pretty big variation, I decided to check my calculation by putting another marker halfway down the mountain, and timing how long it took the plume to travel this new distance. Here's what the mountain looks like with the new marker on it (it's a little hard to see, but there's a fourth pin halfway along the red line): The distance to the halfway marker was 1.64 km, and the time was 26 seconds. I think these numbers are a bit more reliable than before. Divide distance by time, and you get a speed of 63 meters/second or 140 mph. Hmm, that's basically the same number as before, which is a bit odd. Given the huge uncertainty in our speed calculation, and the various factors like gravity and friction that can change the speed, this is just a coincidence. Nonetheless, I'd conclude that 140 mph is a pretty good estimate of the speed of debris flow down the volcano.

So now we've got the speed. What can we do with that? Surprisingly, we can actually use the speed of the mud to estimate the pressure inside the volcano. What follows is what physicists call an order-of-magnitude estimate – it's a back-of-the-envelope calculation that will give us a rough answer. This is a dramatic simplification of the hairy physics that's actually going on inside a volcano. Nonetheless, readers of this blog will know that I like these toy models, because they give you some insight in exchange for not a lot of work. With that in mind, let's go on.

Picture the moment that the volcano explodes. Inside the volcano, hot gases have built up a huge pressure, whereas outside, everything is normal. At the top of the volcano sits a chunk of rock, like a cork on a champagne bottle, that's being pushed up from the inside by high pressure gas. We can use an equation called Bernoulli's equation to relate the pressure at the top and at the bottom of this chunk of rock:

$P_{inside} = P_{outside} + \frac{1}{2}\rho v^2$

$P_{inside}$ is the pressure inside the volcano at the moment before it explodes. This is what we want to find. $P_{outside}$ is the pressure outside the volcano at this moment, which is good 'ol atmospheric pressure. $\rho$ is the density of mud sitting on top of the volcano. And $v$ is the speed of the rock at the breaking point. I'm assuming that this is somewhere near the speed we calculated above, of 63 meters/second.

But, wait. What's the density of mud at the top of a volcano? That's a tough one. Well, according to this paper, "the density of the rock overlying the gas reservoir" of a volcano is about 2,300 to 3,000 kilograms per cubic meter (I'm glad I'm not the grad student who had to measure that).
Let's use the average of 2,650 kg/cubic meter. Plug in the numbers, and we get that the pressure that built up inside Popocatépetl before it exploded is about 54 atmospheres, or 609 pounds per square inch! BOOM.

600 psi. That's about the pressure inside a paintball gun, except imagine thousands of paintball guns aimed at one heck of a paintball. It's also the water pressure at a depth of about 400 meters, which is about as deep as most submarines can venture.

Is this number even in the right ballpark? Fortunately, I found a PhD thesis that builds a detailed scientific model relating the pressure inside a volcano to the speed of the eruption. And what's more, this model was built specifically to understand the Popocatépetl volcano in Mexico (science – it's the gift that keeps on giving). Here's a figure from the paper. If I understand it correctly, the author basically heated and squeezed volcanic rocks, and then measured how much pressure it took for them to give way. The data includes rocks from an eruption of Popocatépetl in 2003. The vertical axis represents the pressure that it takes to obliterate the rock, measured in MPa, or millions of pascals (to go from MPa to atmospheres, just multiply by about 10). The horizontal axis has to do with how porous the rock is. The figure includes data using rock from many volcanic eruptions (including some real famous ones), but take a look at the blue stars – that's data from Popo (aka the big P). Depending on how porous the rock is, the pressure needed to cause an eruption at Popocatépetl varies from about 50 atmospheres to 200 atmospheres. That cluster of blue stars on the right – they're all at around 50 atmospheres. So our calculation of 54 atmospheres for the pressure inside Popocatépetl before an eruption, while it may be simplistic, is probably within a factor of 2 or so of the actual result.

1. By the way, there's another neat piece of physics that you can see in the video. If you look at the plume of smoke that erupts upwards from the volcano, you'll notice that it keeps rising until it levels off at a maximum height – which is where it's neutrally buoyant. It's actually possible to use that height to measure the strength of the volcano (i.e. its eruption rate).

2. Here's a paper that builds a better model than the one I used to relate the pressure inside a volcano to the velocity of the stuff it spews out. It's also where I got the number for the density of rock in the top of a volcano. Relationships between pressure, volatile content and ejecta velocity in three types of volcanic explosions. Lionel Wilson. Journal of Volcanology and Geothermal Research, 1980.

3. The very detailed PhD thesis filled with numerical models, experimental results and figures showing how ejecta velocity of a volcano is related to the pressure inside. This one specifically models the Popocatépetl volcano. A model of volcanic explosions at Popocatépetl volcano (Mexico): Integrating fragmentation experiments and ballistic analysis. MA Alatorre Ibargüengoitia.
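A short script reproducing the back-of-the-envelope arithmetic above; the only inputs are the figures quoted in the post (marker distance, video timestamps, rock density), and the result lands close to the "about 54 atmospheres" the post reports:

```python
# Reproduce the order-of-magnitude estimate from the post.
distance_m = 3310.0           # average marker-to-crater path length (3.31 km)
t_fast, t_slow = 44.0, 68.0   # seconds for the debris to reach the treeline

v_fast = distance_m / t_fast            # ~75 m/s (168 mph)
v_slow = distance_m / t_slow            # ~49 m/s (109 mph)
v_avg = 0.5 * (v_fast + v_slow)         # ~62 m/s (139 mph)
print(f"speed ~ {v_avg:.0f} m/s ({v_avg * 2.237:.0f} mph)")

# Bernoulli estimate: P_inside = P_outside + 1/2 * rho * v^2
rho = 2650.0                            # kg/m^3, midpoint of 2,300-3,000
P_atm = 101325.0                        # Pa
v_half = 1640.0 / 26.0                  # halfway-marker estimate, ~63 m/s
P_inside = P_atm + 0.5 * rho * v_half ** 2
print(f"P_inside ~ {P_inside / P_atm:.0f} atm")   # ~53 atm; the post quotes ~54
```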
{"url":"http://www.wired.com/2013/06/how-much-pressure-did-it-take-to-pop-the-top-off-mexicos-popocatepetl-volcano/","timestamp":"2014-04-16T13:03:06Z","content_type":null,"content_length":"116180","record_id":"<urn:uuid:4aa57f03-a227-43ac-a6b7-4d1a2254fb17>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
How to calculate dihedral (not torsion) angles? Antonio osrisfol at ssmain.uniss.it Wed Mar 17 06:29:17 EST 1999

We know how to calculate a torsion angle about an axis. For example, if we have 4 points in 3-D (for example, a butane molecule) A, B, C and D, in which the points are connected in a chain by the vectors r1 (A to B), r2 (B to C) and r3 (C to D), we can obtain the torsion angle phi about r2 by applying:

p1 = r1 x r2
p2 = r2 x r3
p1 . p2 = |p1| |p2| cos(phi)
r2 . (p2 x p1) = |p1| |p2| |r2| sin(phi)

and finally phi = atan[sin(phi)/cos(phi)] through a function like ATAN2 in Fortran. Phi is obviously also the angle between the A-B-C and B-C-D planes.

The problem is, how to calculate a dihedral angle when the four centers defining the two planes are not directly connected? For example, if we have a six-membered ring 1-2-3-4-5-6 and we need the angle between the 1-3-5 and 1-2-3 planes, now there isn't a common bond, as r2 in the previous case (indeed now we can't speak of a 'torsion' angle, but more generally of a 'dihedral', even if both are angles between planes). Therefore, the previous expressions aren't directly applicable.

I tried to define a 'dummy' common bond, e.g. 1--3, in order to apply the above expressions to the four-center system 5-1-3-2. Actually, with this trick we should obtain the angle between planes 5-1-3 and 1-3-2, which should be the same as that between the 1-3-5 and 1-2-3 planes. However, this doesn't seem to work well. Maybe the last assumption (dihedral 513/132 = dihedral 135/123) is not correct, or some adjustments are needed in the above expressions, to make them valid in this case? Or do you know of a different method to evaluate such dihedral (not torsion) angles?

Thanks in advance
osrisfol at ssmain.uniss.it
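One general approach (not from the original thread) is to work directly with the plane normals: the dihedral between the 1-3-5 plane and the 1-2-3 plane is the angle between their normal vectors, and the same formula reproduces the usual torsion angle when applied to the A-B-C and B-C-D planes of a bonded chain. A sketch with NumPy, using placeholder coordinates:

```python
import numpy as np

def plane_dihedral(a, b, c, d, e, f):
    """Angle between the plane through points a, b, c and the plane through
    points d, e, f, in degrees (range 0-180).  Note the result can come out
    as the supplement of the 'chemical' dihedral depending on how the points
    are ordered, since each normal has two possible directions."""
    n1 = np.cross(b - a, c - a)          # normal of the first plane
    n2 = np.cross(e - d, f - d)          # normal of the second plane
    cosang = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Placeholder coordinates for ring atoms 1, 2, 3, 5 (illustrative only).
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([1.4, 0.0, 0.2])
p3 = np.array([2.1, 1.2, 0.0])
p5 = np.array([0.7, 2.4, 0.3])

# Dihedral between the 1-3-5 plane and the 1-2-3 plane:
print(plane_dihedral(p1, p3, p5, p1, p2, p3))
```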
{"url":"http://www.bio.net/bionet/mm/molmodel/1999-March/001361.html","timestamp":"2014-04-17T14:25:58Z","content_type":null,"content_length":"4767","record_id":"<urn:uuid:20854600-814a-425b-a931-0226090a5f4d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: THE UNIVERSITY OF CHICAGO, JUNE 2000. Automorph class theory formalism is developed for the case of integral nonsingular quadratic forms in an odd number of variables. As an application, automorph class theory is used to construct a lifting of similitudes of quadratic Z-modules of arbitrary nondegenerate ternary quadratic forms to morphisms between certain subrings of associated Clifford algebras. The construction explains and generalizes Shimura's correspondence in the case of theta-series of positive definite ternary quadratic forms. The relation between associated zeta-functions is considered.
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/049/2131045.html","timestamp":"2014-04-19T10:08:06Z","content_type":null,"content_length":"8026","record_id":"<urn:uuid:68e88d24-3d05-4df1-9ee2-d6baf3dad563>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Post a New Question | Current Questions 9th grade soil dwelling insects are examples of invertabrates. explain why this statement is true and use examples of two soil dwelling insects Tuesday, March 23, 2010 at 7:26am 9th grade 1j + 2c =11 1L + 1c = 5 1j + 2L = 3 1j + 1c + 1L = t What is t? Monday, March 22, 2010 at 10:58pm 9th grade 5x-3y=23...............equation (1) 3x+5y=7.................equation (2) Eq. (1) x 5: 25x-15y=115......Equation (3) Eq. (2) x 3: 9x + 15y=21........Equation (4) Eq (3)+Eq. (4): 34x = 136 (eliminating y) Therefore, x = 4 Tuesday, March 16, 2010 at 6:56pm 9th grade Algebra A health food store mixes granola that costs them $2 per pound and raisins that cost them $ 4 per pound together to make 25 pounds of raisin granola. How many ponds of raisins should they include if they want the mixture to cost them a total of $80? Monday, March 1, 2010 at 1:32am Math, not "9th grade" Assistance needed. Please type your subject in the School Subject box. Any other words, including obscure abbreviations, are likely to delay responses from a teacher who knows that subject well. Sunday, February 28, 2010 at 10:58am 9th grade SCIENCE organic compounds: rubbing alcohol gasoline vegetable oil butter sucrose starch vitamins tartaric acid vinegar maltose corn syrup (dextrose) High Fructose corn syrup (fructose) honey (50:50 mix Friday, February 26, 2010 at 5:12pm 9th grade SCIENCE I have to list 5 different organic compounds; list different types for each; name; structure; properties; and uses. I am a bit confused as to what an organic compound actually is. I know it must contain a carbon, but would vegetables or household foods be considered organic ... Friday, February 26, 2010 at 3:56pm 9th grade Does it have to be informative, persuasive, etc? Your best bet would be to do something that is interesting you because, you don't need to really memorize anything because you can just state your opinions on it and then it'll be easier to talk about it for as long as ... Monday, February 22, 2010 at 4:50pm Chemistry, not "9th grade" Assistance needed. Please type your subject in the School Subject box. Any other words, including obscure abbreviations, are likely to delay responses from a teacher who knows that subject well. Sunday, February 21, 2010 at 5:23pm 9th grade A student put 12.4 grams of potassium sulfate in 29 ml and stirred for 3 hrs. he saw that not all of the solid dissolved, so he removed the extra solid and found it had a mass of 3.4 grams. what was the solubility of the chemical? Sunday, February 21, 2010 at 4:44pm 9th Grade Math Ok, well you need a system of equations. 5x+3.50y=2265 x+y=543 x=543-y 5(543-y)+3.50y=2265 solve for y and substitute in other equation to solve for x can you take it from here? Friday, February 12, 2010 at 9:50pm 9th Grade Math The money collected at a school basketball game was 2,265 from 543 paid admissions. If adult tickets sold at $5.00 each and student tickets sold for $3.50 each, how many tickets of each kind were Friday, February 12, 2010 at 9:38pm 9th grade geometry That depends upon the angles of the triangle. If it is equilateral, each side is 15 m. If it is a right angle isosceles triangle, the two equal sides are each 13.18 m long and the hypot4nuse is 18.64 m. The longest side's can be as long as 22.5 m. 
Friday, February 12, 2010 at 10:12am 9th grade In a cube all sides are the same length, so the surface area of one face is (5x-3)^2 So the surface area of all 6 faces is 6(5x-3)^2 To find what it is for x = 3, sub x=3 into the above expression. Wednesday, January 27, 2010 at 8:42am Algebra (9th grade) 21.5 * 0.3 = 6.45 $21.50 - 6.45 = $15.05 15.05 * 1.07 = $16.10 Thursday, January 21, 2010 at 7:11pm 9th grade math 7/200 for the Ecuadorian, 10/199 for the Kenyan (one non-Kenyan person is already gone). The probability of both (or all) events is found by multiplying the probabilities of the individual events. I hope this helps. Wednesday, January 13, 2010 at 11:33pm 9th grade math The slope is m = (2-1)/(5-9) = -1/4 so the general equation is y = (-x/4) + b where b is a constant. Evaluate b by making the line go through either point. For example 2 = (-1/4)*5 + b b = 13/4 Therefore the final equation is y = -x/4 + 13/4 or y = (13-x)/4 Monday, January 11, 2010 at 7:11am 9th grade science? Sorry, nothing comes to mind within that category for me either. But then, this is not my area of expertise. If it was just blue and round, I would think of water drops, but I don't know where the striped would fit, unless it is reflecting something striped. Thursday, January 7, 2010 at 11:14am 9th grade science 1)write the fomulas for the following compound: Magnesium hydroxide. 2)Write the names of the compound: Ba(ClO3)2 *the numbers are the subscripts* I really don't understand how to do those two, I did the rest of them. I could really use the help. THANKS!! Monday, January 4, 2010 at 5:19pm 9th grade The fly a combined 2100km 2100km=500km/hr*time1 + 700km/hr*time2 but time2=time1-1 2100=500*time1+700(time1-1) now solve for time 1 Monday, November 16, 2009 at 10:56am 9th grade At noon a private plane left Austin for Los Angeles, 2100 km away, flying at 500km/h. One hour later a jet left Los Angeles for Austin at 700 km/h. At what time did they pass each other? Monday, November 16, 2009 at 7:20am 9th grade (chemistry) The subject is chemistry. Chemical reactions do not result in a change of mass. Mass is conserved, whether the reaction goes to completion of not, and regardless of whether any reactant exceeds stochiometric proportions. Thursday, November 12, 2009 at 7:34am 9th grade 10 gm of silver nitrate solution is added to 10 gm of sodium chloride solution what change in mass do you expect after the reaction and why? Thursday, November 12, 2009 at 7:32am 9th grade 10 gm of silver nitrate solution is added to 10 gm of sodium chloride solution what change in mass do you expect after the reaction and why? Thursday, November 12, 2009 at 7:25am 9th grade Velocity is speed with a direction. In this case, there is no direction, so just put the speed. It would be 200m per 2 minutes = 100m per 1 minuite = 6000. per hour = 6 km h-1 (km/h) will Monday, November 9, 2009 at 12:39am 9th grade Math No , first you multiply each side by 8 to get 7n - 1= 48 now add 1 to each side 7n = 49 divide both sides by 7 n = 7 check: in original ((7(7) -1 1)/6 = (49-1)/6 = 48/8 = 6 Check!! Monday, November 2, 2009 at 7:50pm 9th grade Math (7n-1)/(8)=6 Okay, I think I add 1 to both sides, so now I have.. (7n)/(8)=7 And then, to get n by itself I divide 7 into both sides, so now I have.. n/8=1 n=1 is what I get, but I'm not sure, could someone please check this for me? Monday, November 2, 2009 at 7:46pm 9th grade There won't be a pattern. Take the factors of each number, include 1 and the number itself. Count the number of factors. 
This is the number of "state" flips. If it is even, the locker is closed. If it is odd, the locker is open. Sunday, November 1, 2009 at 4:35pm 9th grade the variable cost is 10/km and a 2.5-km trip costs $40, determine the equation relating cost, C, in dollars, and distance, d, in kilometers Friday, October 23, 2009 at 7:22pm 9th grade (math) the surface of our earth is 20'c for every km, k below the surface the tempature increses by 10'c. let t represent the tempature in 'c 1a write an equation that represent the relationship between disatnce k and t tempature 1b. what is the slope and and y-intercept Saturday, October 17, 2009 at 12:05pm 9th grade (math) im learnin about y=mx+b 1.describe the line you think each equation represent Y=x, y=5, x=-3 2. The equation of a line is y=mx+2. determione the value of m when the line passes through each point D (12.5) s(1.-3) e(-2.6) a(-5.1) Saturday, October 17, 2009 at 12:03pm 9th grade : Algebra the output of a finction is more 5 more than 2 times the input. find the input when the output is 17. Thursday, October 8, 2009 at 8:26pm 9th grade i need help finding what these words are from these clues: possible opening found in the cell membrane substance which makes up cell walls granular material in the nucleus that thickens to become chromosomes the endoplasmic reticulum is a network of.... structures responsible ... Tuesday, September 15, 2009 at 8:52pm 3rd grade grade 1 215 grade 2 186 grade 3 259 how many more were sold bt grade 3 than by grade 2? 73 answer why aren't any of the tens regrouped to subtract the ones? Monday, September 14, 2009 at 4:11pm 9th grade algebra 5^2 = 25 -2 -25 = -27 -(-27) = 27 Subtract 6 from that. Sunday, September 13, 2009 at 11:06am 9th grade (geography) See http://www.thefreedictionary.com/landform Wednesday, September 9, 2009 at 12:24am 9th grade in which of the following sentences does the subject come after the verb a. on the next street, you'll find the shoe shop. b. whenever he's in town, jeremy likes to go fishing. c. brock traveled over miles of dirt roads. d. where is my elvis CD Thank you! Sunday, September 6, 2009 at 9:08pm 3rd grade even number? adding to nine? well, even means the last digit is 0, 2, 4, 6, 8, between 9th and 23, so the first digit is 1 or 2. How can you get 1 or 2 added to one of the even digits to make 9? Wednesday, August 26, 2009 at 7:58pm 9th grade summer reading i have to do an essay about a 25 word passage for summer reading. i have to state why and what it means to me. i have no clue how long it should be Wednesday, August 26, 2009 at 11:29am Im in 9th grade algebra. and we are learning adding and subtracting real numbers. im stumped on a couple of problems. I get confused when i see the absolute value lines. Does that change the munber? Do i just go on with the problem as if they werent there? And i have NO idea ... Tuesday, August 25, 2009 at 5:18pm 9th grade classes hey MC why you want to take easy classes? Take classes that challenge you to do well, and you'll do better in the rest of your high school years, make collage a little easier, and make a better life for your self in the working world. Thursday, August 20, 2009 at 9:14pm Algebra 1, 9th grade prime expressions cannot have factors. The prime number 13 has no factors except itself. 5z has two factors, 5, and z. 
Tuesday, August 4, 2009 at 8:02pm Algebra 1 9th grade "steepness" is measured by the absolute value of the slope, m, used in the standard linear equation y=mx+c The greater the value of m, the "steeper" is the curve. If you transform/interpret equations to the above form and compare the values of m in each ... Friday, July 24, 2009 at 12:41am Algebra 1, 9th grade The slope intercept equation for a line is y = mx + b where: m = slope b = y intercept Thursday, July 23, 2009 at 12:51am 9th Grade Courses MC, neither of those courses will do much for college preparation. I personally think you are too young to be making that decision now, narrowing your options for the future. Is Biology out of the ... Wednesday, July 15, 2009 at 3:08pm I remember i was in English Honors in the 9th grade and we would have stupid words like that on our vocab list. Just goes to show how retarded the school system is and how little they expect of us. Sunday, June 14, 2009 at 9:35pm 9th Grade Biology Like animal life, plant life has evolved. Rose bushes have not "always been around on earth." I hope this helps. If not, repost your questions in more detail. Thanks for asking. Monday, June 1, 2009 at 11:59am 9th grade global one effect of rugged, mountainous geography on the civilization of ancient Greece was the development of -absolute monarchies -separate, independent city-states -extensive trade with the Persians -belief in one God Sunday, May 31, 2009 at 2:25pm 9th grade Global China under the Han dynasty and the Roman Empire were similar in that both grew wealthy because they..... -developed extensive trade networks -encouraged democratic ideals -established free-market economies -created classless societies Sunday, May 31, 2009 at 2:21pm 9th grade classes OK, here's a list of the subjects I think I want to take: English 1 Refresher Math Life Science, Biology, or Environmental Science...not sure which one would be the best?? I want to take Sociology for Social Sciences Health And Fine Arts is mandatory Sound OK? Thanks -MC Tuesday, May 26, 2009 at 1:58pm 9th grade y/6>y/12+1 multiply each term by 12, the common denominator. 2y > y + 12 subtract y from each side y > 12 Sunday, May 3, 2009 at 1:40pm 9th grade science Explain which statement is correct: energy is lost when water is boiled or the energy used to boil water is present, but it is no longer in a usable form unless you use work or heat to make it Tuesday, April 28, 2009 at 6:38pm 9th grade Math -8= (y/4) so multiply both sides by 4. On the right side the 4's will cancel and on the left it will be -32. -8 = (y/4) -8*4 = (y/4)*4 -32 = y Hint: Substitute your answer back into the original equation to check your work. So in this case check, does (-32/4) equal -8? Yes! Thursday, April 23, 2009 at 9:33pm 9th grade * I have a project to do for William Shakespeare, Were i have to make some sort of amusement park and i need help thinking of what rides i should put in my amusement park but those rides have to relate to Shakespeare in someway. Plz Help! Wednesday, April 22, 2009 at 10:45pm 9th grade algebra Use parentheses to make things easier. [(x^2)-(4x/5)]*[(2/x)-2] = [(x^2)*5/5-(4x/5)]*[(2/x)-2*x/x] = [(5x^2-4x)/5]*[(2-2x)/x]. Multiply through to get: (-10x^3+18x^2-8x)/(5x) Thursday, April 9, 2009 at 3:33pm 9th grade algebra drwls here is another one of those questions like the last one we submitted if this will help with what they are after.
200 dollars/1 ton x 100 cents/1 dollar x 1 ton/2000 pounds x 1 pound/16 ounces could you please show me how you come up with the answer? I am not sure how ... Thursday, April 9, 2009 at 1:07am 9th grade algebra Are you sure that does not start out as 30 miles/hr? If so, you would end up after doing the multiplication with the equivalent speed in ft/sec. Otherwise, it is 44 ft hr/ton*sec, which makes no Thursday, April 9, 2009 at 12:57am 9th grade algebra I've never seen a problem like this, please help. I think we have the number part right but what would the unit be once the math is done? Please show me how you came up with the answer so I will know for next time. 30 miles/1 ton x 5280 feet/1 mile x 1 hour/60 minutes x 1 ... Thursday, April 9, 2009 at 12:48am 9th grade - Algebra??? standard form is ax+by=c. so just plug them all into that form. for example, the first one would be -14x-18y=24. you have to change the sign of 18 because you move it to the other side of the Saturday, April 4, 2009 at 4:48pm 9th grade math x^2 - 10 = 0 Add 10 to both sides, then get the square root of both sides. I hope this helps. Thanks for asking. Friday, April 3, 2009 at 12:55pm 9th grade Math/Logic If it only take one germ a minute to split into two germs, then the two germs have a one minute head start 59 minutes. I hope this helps a little more. Thanks for asking. Wednesday, April 1, 2009 at 1:01pm 9th grade 9/[(√27)^5(√a)^5] = 3^2/[3^(15/2)(a^2√a)] = 3^(-11/2)/(a^2√a) or = 1/[(3^5)(3^1/2)(a^2√a)] = 1/[(243√3)(a^2√a)] messy looking thing! actually looks better in the original form of 9/(√27a)^5 Sunday, March 29, 2009 at 11:58am 9th grade weighte average The final score is the sum of the weights times scores of the parts. For instance, if tests are scored at 50 percent, homework is 10 percent, and class participation is 40 percent, then weighted score= .5 testavg + .1 hmwavg + .4 classscore Thursday, March 26, 2009 at 7:31pm 9th grade Pre Algebra The way you have typed the formula is confusing. y^2 = y squared You have both - and + signs before 3y. If you mean both plus and minus (±), can be done by pushing + and Shift and alt/option (at least on a Mac). Please repost with revised formula and be more specific ... Wednesday, March 25, 2009 at 10:15am 9th grade science 1.WHAT DOES MANIPULATION OF DNA DO? 2.WHAT IS MANIPULATION OF DNA USED FOR Saturday, March 21, 2009 at 5:56am 9th grade Algebra so for example 4(7)-2y=32 or -3(7)-5y=-11 and then it equals 28-2y=32 or -21-5y=-11 then what from their this is driving me insane! Wednesday, March 18, 2009 at 7:33pm 9th grade Algebra I would multiply the top equation by 5, and the bottom equation by 2 20x-10y=160 -6x-10y=-22 then subtract the bottom equation (remember to change the signs) 26x=182 solve for x. Wednesday, March 18, 2009 at 7:16pm 9th grade Arts and Culture Plus, you may have heard the expression "art is in the eye of the beholder." For example, a friend of mine adores modern art, some of which just seems to be "scribbling" to me! Sra Saturday, March 7, 2009 at 11:52am 9th grade math y=5x+2? --------m = 5 A:5y-x=1--------m = 1/5 not parallel B:5x-y=1--------m = 5 yes parallel C:y-5x=4 -------m = 5 yes parallel D:10x-2y=1 -----m = 5 yes parallel Sunday, March 1, 2009 at 4:34pm 9th grade 25 inches -----> 1450 feet 1 inch -------> 58 feet (I divided by 25) Thursday, February 26, 2009 at 10:08pm 9th grade My question is on scales etc.. heres the question | The Sears Tower is 1450 feet tall. If a model is 25 inches tall, find the scale. 
Thursday, February 26, 2009 at 9:35pm 9th grade lit poem Do you believe this poem is negative or positive tone. I think it's positive because the speaker has positive thoughts. Tuesday, February 17, 2009 at 5:25pm 9th grade lit poem In the Alabama poem, the hands of many colors represent different races -- but all in the red clay soil, the essence of Alabama. Tuesday, February 17, 2009 at 5:21pm 9th grade lit poem The setting in the first part of the poem is in a car driving on a road north of Tampico. The setting shifts, though, in the third stanza. There is no particular place named, but it's obviously referring to a time later in the poet's life. Tuesday, February 17, 2009 at 5:16pm 9th grade lit poem To me the backseat symbolizes being taken somewhere by her parents. She was scared and seeking reassurance. The borders are literally borders between countries, yet they also represent the many changes we all go through in our lives. Tuesday, February 17, 2009 at 4:57pm 9th grade lit poem I need assistance I was on here before with this poem but not with the same name (Lemon Pie). The poem is Making a Fist by Naomi Shihab Nye. I don't understand it. Could someone try to make me understand it I would appreciate it thanks for your time LP Tuesday, February 17, 2009 at 4:40pm 9th grade(English) Which sentence contains an adverbial clause? 1.We placed our blanket under a tall shade tree at the park 2.Before we could eat our picnic lunch, some ants needed to be uninvited. 3. We decided to distract them with an open bottle of suntan lotion placed strategically in the ... Monday, February 9, 2009 at 2:08pm how do i reword this sentence to a 9th grade level. students know that charged particles are sources of electric fields and are subject to the forces of the electric fields from other charges. Monday, January 26, 2009 at 9:25pm 9th grade HEALTH MY QUESTION IS ABOUT HEALTH... I HAVE A MIDTERM DUE MONDAY 01/26/2009.. AND I REALLY DONT KNOW HOW TO START IT I HAVE TO USE I,ME,MY STATEMENT ONLY... HOW CAN I START MY MIDTERM??..PLEASE HELP!!?? Saturday, January 24, 2009 at 2:46pm sra JMcGuin spanish Hi This is sam's mom - you have helped my child enormously - Whenever work is done with you the grade is amazing. Finals are coming up and I think you are close to us. We live on the west side and go to school in the valley. SO anywhere would be convenient - We have a tutor... Thursday, January 1, 2009 at 6:12pm URGENT!!!!9th grade tech that simply did not answer my question the slightest bit, that just gave me a bunch of unnecessary links about searching read the question again otherwise if you dont know the answer simply dont answer Thursday, December 18, 2008 at 7:36pm 9th grade math if you sketch it and make some quick slope calculations, you will find that opposite sides have the same slope, so you have a parallelogram. Thursday, December 18, 2008 at 1:48pm 9th grade Thursday, December 11, 2008 at 2:14pm 9th grade Are you trying to solve for y? 12y-5y-2=0 12y-5y=0+2 7y=2 y= 2/7 Is this what you were asking for? Monday, December 8, 2008 at 11:52pm solve for the variable: √(y+6) - √y = √2 would i square both sides first? im stuck Monday, December 8, 2008 at 7:14pm 9th grade geography Different religions and ethnic groups have greatly influenced the Balkans. When the Ottoman Empire conquered this region, some people converted from Christianity to Islam. Other foreign influence further alienated the people from each other. Think about the creation of ...
Monday, December 8, 2008 at 6:21pm 9th grade chemistry That is perchlorate ion. Per means more and ClO4^- has one more oxygen than the "normal" acid, ClO3^-. So ClO3^- is chlorate and ClO4^- is perchlorate. Thursday, December 4, 2008 at 6:31pm 9th grade chemistry Yes, the name you have given it is correct. NaHCO3 is sodium hydrogen carbonate. NaH2PO4 is sodium dihydrogen phosphate Na2HPO4 is disodium hydrogen phosphate Thursday, December 4, 2008 at 6:28pm 9th grade : Algebra Can you answer this question which equation represents the direct variation relationship of the equation x/y=1/2 the answer choices are A-y=2x B-x=2y C-y=3x D-y=x+1/2 and can you show the work Wednesday, December 3, 2008 at 5:16pm i am in 9th grade biology and i am REALLY confused about the difference between controll variable, responding variable, manipulated variable and controll group. any help would be great. i have a test Wednesday, December 3, 2008 at 4:27pm 9th grade for my physical science class were supposed to name ionic compounds and change the name into a formuala but i dont understand how to do it my teacher didnt explain it fully. Ex:Na2CO3= Sodium Carbonate Sodium phosphide=Na3+P-3 Tuesday, December 2, 2008 at 9:19pm 9th Grade Math You didn't say in what setup your three angles are found. Are the the interior angles of a triangle ?? if so, then you are not correct. then x + x+10 + 130 = 180 2x + 140 = 180 2x = 40 x = 20 that would make angle1 = 30º Tuesday, December 2, 2008 at 7:50pm 9th Grade Math Hey, I finished my math homework, and im not so sure i did it correct, so can someone do this question and i will look at the answer to see if my work was correct this is State and Justification Given : m<4 = 130degrees, m<1 = x+10 , and m<2 = x Prove : m<1 = ... Tuesday, December 2, 2008 at 6:50pm 9th grade : Algebra y = mx+b , thats the equation so we do this 5y = 10-2x 5y = -2x + 10 now we want to get rip of the 5 in the Y so we do -2x / 5 10 / 5 so its -0.4 and 2 so y= -0.4x + 2 get it? Tuesday, December 2, 2008 at 6:45pm 9th grade : Algebra Okay, I see. First take 2x away from both sides. That makes: 5y= 10-2x Now, I'm not positive about this part, but I think you would divide the whole side by 5, therefore making: y = 2-.4x or y = -.4x + 2 Hope I helped! Tuesday, December 2, 2008 at 4:04pm 9th grade 3t+8(2t-6)=2+14t 3t + 16t - 48 = 2 + 14t 19t - 14t = 50 5t = 50 t = ? Wednesday, November 19, 2008 at 9:11pm 9th grade two men worked 8 hours each and they built 32 frames for sheds. what is the productivity or output per hour add ______ hours hours+_____ hours here ______ hours Monday, November 17, 2008 at 8:46pm 9th grade 1.How do you think Andrew Jackson might have countered his critics' accusation that he was acting like a king? 2. In whar ways do you think the tariff crises of 1828 and 1832 might be considered important milestones in American history beofre Civil War? Saturday, November 15, 2008 at 1:32pm 9th grade Did you notice that the words you, your, yourself appear 13 times in this assignment? We can't possibly answer these questions for you. Their purpose is to get YOU to think. Since you're paying for this class, it would behoove you to take advantage of the educational ... Monday, November 10, 2008 at 11:02am 9th grade biology honors Because jellyfish have more salt then the lake, it is gonna try to reach an equilibium, so it is gonna take more water from the lake and the cell will swell!! 
Thursday, November 6, 2008 at 8:26pm
{"url":"http://www.jiskha.com/9th_grade/?page=15","timestamp":"2014-04-20T08:34:37Z","content_type":null,"content_length":"38202","record_id":"<urn:uuid:399dc98b-7ab5-4ac8-852a-e8afae597f79>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
help with calculating z-score.
January 27th 2013, 08:54 PM #1 Nov 2012
help with calculating z-score.
Hello guys, I have the answer here and need an explanation on coming to terms with it. The question: 5000 apartments sold in a certain area, the mean price is $459,000 and the standard deviation is $76,000. A particular apartment has a z-score of 3.78. What did it sell for? – The formula to calculate a z-score is (x - mean) / standard deviation. The answer that I have from the textbook is $459,000 + 3.78*76,000 = $746,280… but why? I've been punching the numbers into the calculator and do not get $746,280. Any help is appreciated.
January 27th 2013, 08:58 PM #2 Nov 2012
Re: help with calculating z-score.
BASIC MATH IDIOT MISTAKE... I should have did the multiplication before the addition.
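For completeness, the textbook answer is just the z-score formula solved for x, i.e. x = mean + z * sd, and the multiplication must be done before the addition; a one-liner confirms the number:

```python
mean, sd, z = 459_000, 76_000, 3.78
x = mean + z * sd      # invert z = (x - mean) / sd
print(x)               # 746280.0
```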
{"url":"http://mathhelpforum.com/statistics/212141-help-calculating-z-score.html","timestamp":"2014-04-17T02:06:42Z","content_type":null,"content_length":"32469","record_id":"<urn:uuid:3fef718d-c99d-415d-bee4-a8f3b45dec2c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Global conformal invariance in quantum field theory Communications in Mathematical Physics Global conformal invariance in quantum field theory M. Lüscher and G. Mack Article information Comm. Math. Phys. Volume 41, Number 3 (1975), 203-234. First available: 24 December 2004 Permanent link to this document Mathematical Reviews number (MathSciNet) Primary: 81.22 Secondary: 81.53 Lüscher, M.; Mack, G. Global conformal invariance in quantum field theory. Communications in Mathematical Physics 41 (1975), no. 3, 203--234. http://projecteuclid.org/euclid.cmp/1103898909.
{"url":"http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.cmp/1103898909&page=record","timestamp":"2014-04-23T20:04:26Z","content_type":null,"content_length":"27297","record_id":"<urn:uuid:813ef4ac-a977-45a1-ac03-45c99e8bf55f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Pinecrest, FL Geometry Tutor Find a Pinecrest, FL Geometry Tutor ...Thank for your time and consideration, and I look forward to hearing from you soon. Sincerely, NadeemDiscrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the... 23 Subjects: including geometry, chemistry, physics, calculus ...My favorite part of tutoring is breaking a problem down to its basic pieces. Whether its Public Speaking or Pre-algebra, all problems can be broken down and solved, one piece at a time. Note: My hourly rate is $35 for travel of 10 or more miles, with a minimum of a two hour session.I have insta... 23 Subjects: including geometry, reading, English, biology ...Over the summer, I will be teaching a graduate level statistics course for online MPA students before I start as a professor at Miami in the fall. I am happy you took the time to check out my profile. Just to tell you a bit more about my approach - I specialize in helping students of all ages understand difficult concepts using a friendly and personally tailored tutoring approach. 16 Subjects: including geometry, writing, statistics, GRE ...I'm a native Spanish speaker, and throughout my academic career I have both helped people improve their conversational skills, and performed official translations. Working in an appellate law firm, where I perform the drafting of contracts both in English and Spanish, I make use of highly techni... 38 Subjects: including geometry, English, Spanish, reading I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and Programming. After college I moved to Spain where I gave private test prep lessons to high school students ... 11 Subjects: including geometry, calculus, physics, algebra 1 Related Pinecrest, FL Tutors Pinecrest, FL Accounting Tutors Pinecrest, FL ACT Tutors Pinecrest, FL Algebra Tutors Pinecrest, FL Algebra 2 Tutors Pinecrest, FL Calculus Tutors Pinecrest, FL Geometry Tutors Pinecrest, FL Math Tutors Pinecrest, FL Prealgebra Tutors Pinecrest, FL Precalculus Tutors Pinecrest, FL SAT Tutors Pinecrest, FL SAT Math Tutors Pinecrest, FL Science Tutors Pinecrest, FL Statistics Tutors Pinecrest, FL Trigonometry Tutors Nearby Cities With geometry Tutor Coral Gables, FL geometry Tutors Crossings, FL geometry Tutors Cutler Bay, FL geometry Tutors El Portal, FL geometry Tutors Kendall, FL geometry Tutors Key Biscayne geometry Tutors Maimi, OK geometry Tutors Mia Shores, FL geometry Tutors Miami geometry Tutors Palmetto Bay, FL geometry Tutors Snapper Creek, FL geometry Tutors South Miami, FL geometry Tutors Sweetwater, FL geometry Tutors West Miami, FL geometry Tutors West Park, FL geometry Tutors
{"url":"http://www.purplemath.com/pinecrest_fl_geometry_tutors.php","timestamp":"2014-04-20T23:44:52Z","content_type":null,"content_length":"24205","record_id":"<urn:uuid:fb1c3146-6b18-41dc-b75c-9c6ce5a6b9ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Watts Cheaper: 110 or 220 Volts?
How much will I save on my electric bill if I run my lights on 220 volts? A quick answer: probably nothing. This is a common misunderstanding about how electricity works and how the power companies charge you for it. The point often noted for the money-saving argument is that the amperage is half as much when running grow lights on 220 volts instead of 110 volts. This is true, but the utility company doesn't charge you for amperage; they charge you for wattage. They bill you in kilowatt-hour units. A kilowatt-hour is 1000 watts of usage for one hour, or approximately a 1000 watt light running for one hour. There's a nice formula for this: Wattage / Voltage = Amperage. If we plug in the numbers for a 1000 watt sodium grow light, you can see that although the voltage and amperage can change, the wattage always stays the same.
1000 Watt Sodium Grow Light - On 110 Volts: 1100W / 110V = 10A - On 220 Volts: 1100W / 220V = 5A
Note that a 1000 watt sodium ballast draws 1100 watts.
Right about now is when I get the question "well, why do they make stuff to run on 220 volts then?" Usually large machines and appliances that draw lots of power run on 220 volts (or more) mainly because the size of wire you would need to run them on 110 volts would be very large. The gauge and length of the wire determine the maximum amperage it will handle before it melts! On a 220 volt circuit, the load is split between two 110 volt wires. This allows you to run smaller wire.
This brings us to the "probably" part of the answer. There is another factor: the voltage drop, or the voltage lost when the power travels down the wire. The lower the resistance of the wire, the less the voltage drop. If you are running one or two lights in a typical home with the breaker box a short distance away, the efficiency lost due to voltage drop may not be significant enough to justify rewiring your grow room for 220 volts.
Related Information: Calculate your electricity cost to run a grow light. How to Build a Four Light Grow Light Controller for Less Than $80.
Voltage Drop Calculator by electrician.com: This calculator uses K = 12.9 circular mil ohms per foot for copper or K = 21.2 circular mil ohms per foot for aluminum. These values assume a conductor operating temperature of 75 degrees C. For other values of K based on conductor temperature use the advanced voltage drop calculator.
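A small script illustrating both points in the article: the wattage (what you are billed for) is identical at either voltage, while the amperage and the resistive voltage drop on a given run of wire are halved at 220 V. It uses the K = 12.9 copper figure quoted above with the standard two-wire drop formula; the wire size and run length are illustrative assumptions, not values from the article:

```python
# Same ballast, two supply voltages: wattage (billing) is unchanged,
# amperage and voltage drop are not.
K_COPPER = 12.9        # ohm-circular-mil per foot (figure quoted in the article)
CMA_12AWG = 6530       # circular mils for 12 AWG copper (assumed example wire)

def amps(watts, volts):
    return watts / volts

def voltage_drop(current, one_way_feet, cma, k=K_COPPER):
    # Two-wire circuit: current travels out and back, hence the factor of 2.
    return 2 * k * current * one_way_feet / cma

ballast_watts = 1100   # a "1000 W" sodium ballast actually draws about 1100 W
run_feet = 50          # assumed one-way distance to the breaker box

for volts in (110, 220):
    i = amps(ballast_watts, volts)
    vd = voltage_drop(i, run_feet, CMA_12AWG)
    print(f"{volts} V: {i:.1f} A, drop {vd:.2f} V ({100 * vd / volts:.2f}%), "
          f"billed {ballast_watts} W either way")
```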
{"url":"http://www.hydroponics.net/learn/is-220-volts-more-efficient.asp","timestamp":"2014-04-18T11:51:57Z","content_type":null,"content_length":"10680","record_id":"<urn:uuid:3edbbd9b-94df-41a2-a3ce-a926cebf79fa>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Order of normalizer Sylow 5-subgroup in Suzuki group

It is known that the Suzuki group $Sz(q)$, where $q=2^{2n+1}$, is of order $q^2(q^2+1)(q-1)$. Since $2^2 \equiv -1 \pmod 5$, we have $2^{2n} \equiv (-1)^n \pmod 5$, so $q=2^{2n+1} \equiv 2(-1)^n \pmod 5$ and hence $q^2+1 \equiv 0 \pmod 5$. Therefore $5$ always divides the order of $Sz(q)$.

My question is: if $P$ is a Sylow $5$-subgroup of the group $Sz(q)$, what is the order of the normalizer of $P$ in $Sz(q)$? Thanks.

gr.group-theory finite-groups

I'm reluctant to upvote the question, since it should start with a search of the available literature on Suzuki groups (which you seem to know something about). As Nick points out, the original two-part paper by Michio Suzuki (Ann. of Math. 1962, 1964) already contains lots of relevant detail. Theorem 9 occurs in Part I. These papers are available online via JSTOR if you have access. – Jim Humphreys May 23 '12 at 19:22

1 Answer

Go to Suzuki's original paper On a class of doubly transitive groups. Theorem 9 of that paper lists all of the subgroups of $G=Sz(q)$. In particular, setting $q=2^{2a+1}$ and $r=2^{a+1}$, there is a maximal subgroup $M$ of order $4(q-r+1)$. Now $M$ has a cyclic normal subgroup $C$ of order $q-r+1$ which contains $P$, a Sylow $5$-subgroup of $G$. Since a subgroup of a cyclic group is characteristic, this implies that $M$ normalizes $P$. Since $M$ is maximal we conclude that $M=N_G(P)$.

@Nick, thank you very much. – R K May 23 '12 at 13:58
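A quick arithmetic check (plain Python, no group-theory software) of the facts appearing above for the first few parameters: that $5$ divides $q^2+1$ for every $q=2^{2a+1}$, that $q^2+1$ factors as $(q-r+1)(q+r+1)$ with $r=2^{a+1}$, and the order $4(q-r+1)$ of the maximal subgroup cited in the answer:

```python
# Plain arithmetic check for the first few Suzuki parameters q = 2^(2a+1), r = 2^(a+1).
for a in range(1, 6):
    q, r = 2 ** (2 * a + 1), 2 ** (a + 1)
    order = q ** 2 * (q ** 2 + 1) * (q - 1)          # |Sz(q)|
    assert (q ** 2 + 1) % 5 == 0                     # 5 always divides q^2 + 1
    assert (q - r + 1) * (q + r + 1) == q ** 2 + 1   # the two cyclic Hall factors
    print(f"q={q}: |Sz(q)|={order}, q-r+1={q - r + 1}, "
          f"q+r+1={q + r + 1}, 4(q-r+1)={4 * (q - r + 1)}")
```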
{"url":"https://mathoverflow.net/questions/97757/order-of-normalizer-sylow-5-subgroup-in-suzuki-group/97760","timestamp":"2014-04-18T23:18:48Z","content_type":null,"content_length":"52155","record_id":"<urn:uuid:2745556c-24eb-4ccc-9d91-37ea2d88894d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Plotting functions for the psych package of class
plot.psych {psych} R Documentation
Plotting functions for the psych package of class "psych"
Combines several plotting functions into one for objects of class "psych". This can be used to plot the results of fa, irt.fa, VSS, ICLUST, omega, factor.pa, or principal.
## S3 method for class 'psych'
## S3 method for class 'irt'
## S3 method for class 'poly'
## S3 method for class 'residuals'
plot(x, main="QQ plot of residuals", qq=TRUE, ...)
x: The object to plot
labels: Variable labels
xlab: Label for the x axis – defaults to Latent Trait
ylab: Label for the y axis
ylim: Specify the limits for the y axis
main: Main title for graph
type: "ICC" plots item characteristic curves, "IIC" plots item information, "test" plots test information; defaults to "IIC"
D: The discrimination parameter
cut: Only plot item responses with discrimination greater than cut
keys: Used in plotting irt results from irt.fa.
qq: if TRUE, plot a QQ plot of residuals, otherwise plot a cor.plot of residuals
...: other calls to plot
Passes the appropriate values to plot. For plotting the results of irt.fa, there are three options: type = "ICC" will plot the item characteristic response function, type = "IIC" (the default) will plot the item information function, and type = "test" will plot the test information function. These are calls to the generic plot function that are intercepted for objects of type "psych". More precise plotting control is available in the separate plot functions. plot may be used for psych objects returned from fa, irt.fa, ICLUST, omega, as well as principal.
A "jiggle" parameter is available in the fa.plot function (called from plot.psych when the type is a factor or cluster). If jiggle=TRUE, then the points are jittered slightly (controlled by amount) before plotting. This option is useful when plotting items with identical factor loadings (e.g., when comparing hypothetical models).
Objects from irt.fa are plotted according to "type" (item information, item characteristics, or test information). In addition, plots for selected items may be done if using the keys matrix. Plots of irt information return three invisible objects: a summary of information for each item at levels of the trait, the average area under the curve (the average information) for each item, as well as where the item is most informative.
It is also possible to create irt-like plots based upon just a scoring key and item difficulties, or from a factor analysis and item difficulties. These are not true IRT type analyses, in that the parameters are not estimated from the data, but are rather indications of item location and discrimination for arbitrary sets of items. To do this, find irt.stats.like and then plot the results.
Graphic output for factor analysis, cluster analysis and item response analysis. More precise plotting control is available in the separate plot functions.
William Revelle
See Also
VSS.plot and fa.plot, cluster.plot, fa, irt.fa, VSS, ICLUST, omega, factor.pa, or principal
test.data <- Harman74.cor$cov
f4 <- fa(test.data,4)
plot(resid(f4),main="Residuals from a 4 factor solution",qq=FALSE)
#not run
#e.irt <- irt.fa(bfi[11:15]) #just the extraversion items
#plot(e.irt) #the information curves
ic <- iclust(test.data,3) #shows hierarchical structure
plot(ic) #plots loadings
version 1.3.2
{"url":"https://personality-project.org/r/psych/help/plot.psych.html","timestamp":"2014-04-18T23:23:47Z","content_type":null,"content_length":"6538","record_id":"<urn:uuid:cb74db7f-f170-4973-9e90-2b3e7f520032>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Reading Hydraulic Schematics PPT Presentation Summary : Reading Hydraulic Schematics Lecture 9 Purpose of Hydraulic Schematics Schematics are drawings that you can use to see how a system is put together Schematics can be ... Source : http://www.lassenhigh.com/lita/diesel/hydr/hylect9.ppt Schematic diagrams - Vanderbilt University PPT Presentation Summary : Schematic diagrams 11 Mar 2011 Why are we talking about diagrams? In lots of scientific communication Talks and posters Grant proposals Papers What else? Source : https://medicine.mc.vanderbilt.edu/sites/default/files/Schematic%20diagrams.ppt Presentation Summary : Schematic Symbols The Key to Understanding Wiring Diagrams Battery Resistor SPST Switch Light Bulb Fuse Capacitor Diode/Rectifier Transistor Motor Electric Fuel ... Source : http://www.cte-auto.net/auto/powerpoints/Schematic%20Symbols.pps ELECTRICAL DIAGRAMS - salemmbrothers PPT Presentation Summary : Symbols. Symbols are used to standardize the reading of electrical diagrams. Electrical diagrams use a variety of symbols to represent component in electrical circuit Source : http://salemmbrothers.com.managewebsiteportal.com/files/others/Electrical-Diagrams.pptx Welding, Pipes and Symbols - University of Texas–Pan American PPT Presentation Summary : Symbols, Welding and Piping Symbols Links are drawn in schematic form on technical drawings and for engineering design analysis. This figure shows the schematic ... Source : http://crown.panam.edu/EG/vrml/castle/student/projects/spring05/campg/Welding,%20Pipes.ppt Lecture 5 - University of Alabama PPT Presentation Summary : Vacuum System Schematic Symbols Hand Operated Valve. Gate Valve. Pneumatic Gate Valve. Leak Valve. Butterfly Valve. Pneumatic Butterfly. Bellows. Sorption Trap. Source : http://bama.ua.edu/~phx34/SP04/05VacuumFundamentals.ppt BEX100 – Basic Electricity PPT Presentation Summary : BEX100 – Basic Electricity Semiconductors Transistors & SCR’s Lesson Objectives To understand the basic construction elements and schematic symbols of a ... Source : http://apps.elizabethtown.kctcs.edu/members/jnail/BEXpowerpoint/BEX-Transistors.ppt Electronic Diagrams - kovalchuck11 - home PPT Presentation Summary : Electronic Diagrams Chapter 18 Objectives Identify common component symbols on an electronic schematic diagram Draw a schematic diagram using standardized symbols ... Source : http://kovalchuck11.wikis.birmingham.k12.mi.us/file/view/chapter+18.ppt Circuit Symbols (.ppt) - Space Sciences Laboratory | The ... PPT Presentation Summary : These circuit symbols are for drawing schematics in PowerPoint. For an example, see the next chart. Component circuit symbols here are “groups” to retain their ... Source : http://www.ssl.berkeley.edu/~mlampton/Circuit_Symbols.ppt Piping Drawings - Birmingham Public Schools PPT Presentation Summary : ... Draw multiview or pictorial piping drawings using standard schematic symbols Draw the schematic symbols for valves and identify flow direction Standard ... Source : http://kovalchuck11.wikis.birmingham.k12.mi.us/file/view/chapter+21.ppt Ohm's Law and Kirchoff's Laws - UH Cullen College of Engineering PPT Presentation Summary : Voltage Sources – Schematic Symbols for Dependent Voltage Sources The schematic symbols that we use for dependent voltage sources are shown here, ... 
Source : http://www0.egr.uh.edu/courses/ece/ece3455/2300_Lecture_Notes/2300NotesSet02v32.ppt Presentation Summary : SCHEMATIC SYMBOLS Electric Motors An electric motor symbol shows a circle with the letter M in the center and two electrical connections, ... Source : http://wps.prenhall.com/wps/media/objects/6955/7122251/ppts/Chapter11.ppt Diodes . ppt - Calvin College PPT Presentation Summary : A K Schematic Symbol for a Zener Diode Types of Diodes and Their Uses Kristin Ackerson, ... A K A K Schematic Symbols for Photodiodes Sources Dailey, Denton. Source : http://www.calvin.edu/~pribeiro/courses/engr311/Handouts/Diodes.ppt Presentation Summary : Schematic Symbols + Resistor SPST Switch Light-Emitting Diode Battery Capacitor Inductor Transformer SPDT Switch Diode NPN Bipolar Transistor (BJT) ... Source : http://www.scouting.org/filestore/jota/ppt/RadioMB_module2.ppt Presentation Summary : Schematics Wiring Diagrams or Schematics use a symbolic language Understand the symbols and ... The entire Blower Motor Switch is shown in this schematic If I ... Source : http://www.linnbenton.edu/auto/electric/electrical_diagrams.ppt pneumatic schematics..+ - Gears Educational Systems - Robust ... PPT Presentation Summary : Pneumatic Schematics Using Graphic Symbols to Illustrate Basic Circuit Designs 3/2 Solenoid Valve Regulator Pneumatic Components Single Acting Pull Type Cylinder and ... Source : http://www.gearseds.com/curriculum/images/figures/pneumatic%20schematics.ppt Presentation Summary : Electrical Schematic Symbols. Title: No Slide Title Author: APSD Last modified by: Owner Created Date: 2/21/2002 3:40:15 PM Document presentation format: On-screen ... Source : http://gse-manuals.com/app/download/6560026904/Electical+Symbols.pps Presentation Summary : A schematic need not depict the actual physical arrangement of the components SkeeterSat Schematic Component Symbols Wires and wire connections Component Symbols ... Source : http://laspace.lsu.edu/aces/Lectures/Electronics/Electronics%20Lecture%203.ppt Schematics 201 - Ivy Tech -- Faculty Web PPT Presentation Summary : Schematics 201 Lecture Topic: Electrical Symbols 02-23-2011 Agenda for the week Test # 1. Discuss Electrical Symbols Start Assignment # 6. FM Tuner Schematic. Source : http://faculty.ivytech.edu/~bl-desn/dsn201/Lectures/Elect%20Sch.ppt Electrical Schematics and Solderless Breadboards PPT Presentation Summary : Electrical Schematics and Solderless Breadboards Objectives Learn how to read a simple electrical schematic. Understand how the pins on a solderless breadboard are ... Source : http://www.cornerstonerobotics.org/curriculum/lessons_year1/Electrical%20Schematics%20High%20School%20Presentation.ppt Component Identification - Conestoga Valley High School PPT Presentation Summary : Component Identification & Schematic Symbols Electronics 1 CVHS Basic Elements of a Circuit Three Essential Elements Load Complete Path Power Source Additional ... Source : http://blog.cvsd.k12.pa.us/energypowertrans/files/2010/02/Component-Identification.ppt Electrical Safety - HCC Learning Web PPT Presentation Summary : The schematic wiring diagram includes the symbols and the line representations so the user can easily identify loads and switches along with the circuits. Source : http://learning.hccs.edu/faculty/hoang.do/hart1303/unit-2-electrical-symbols/Chapter%2005Symbols.ppt Hydraulics - Arkansas State University PPT Presentation Summary : Design features continued….. 
Schematic Symbols for a Reservoir Schematic Symbols for a Reservoir- 2 Check the text for info on coolers The end ... Source : http://clt.astate.edu/dagnew/Hydraulics/hy_reservoirs_06.ppt
{"url":"http://www.xpowerpoint.com/ppt/schematic-symbols.html","timestamp":"2014-04-23T19:47:03Z","content_type":null,"content_length":"23188","record_id":"<urn:uuid:0003d61a-cfb7-4ac3-bbdd-b4de312fa547>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2010 [00349] [Date Index] [Thread Index] [Author Index]

Re: Replacement Rule with Sqrt in denominator
• To: mathgroup at smc.vnet.net
• Subject: [mg114677] Re: Replacement Rule with Sqrt in denominator
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Sun, 12 Dec 2010 05:47:09 -0500 (EST)

On 11 Dec 2010, at 07:52, Jack L Goldberg 1 wrote:

> a) Input as typed: 2<=x<=4. Look at its FullForm. On my Mac
> running ver. 7 of Mathematica, I get returned,
> LessEqual[2,x,4].
> b) Now type in Reduce[2<=x<=4]. You will get
> Inequality[2,LessEqual,x,LessEqual,4].
> These are different expressions! How can one program replacement
> rules when one cannot be sure of the FullForm? These structures are
> entirely different. Which FullForm can one assume is the one Mathematica sees
> in some complicated module wherein one step is a replacement rule?
> Jack Goldberg
> Mathematics
> University of Michigan

O.K., but I don't see anything here that in any way contradicts anything that has been said about the need for looking at FullForm before trying pattern matching. Actually, it is also an argument against using Copy and Paste. To see that, evaluate Reduce[2<=x<=4]. Now, copy the output and paste it into another cell and wrap FullForm around it, then evaluate. You will get LessEqual[2,x,4]. I don't see this as a problem, do you?

You can certainly match both forms with a single pattern:

{2 <= x <= 4, Reduce[2 <= x <= 4]} /. (a_) <= x <= (b_) | Inequality[a_, LessEqual, x, LessEqual, b_] :> {a, b}

{{2, 4}, {2, 4}}

Andrzej Kozlowski
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00349.html","timestamp":"2014-04-18T00:51:52Z","content_type":null,"content_length":"26410","record_id":"<urn:uuid:b45adb06-5890-4751-bc0a-c91ab081e846>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
SPCS 7 Posted by: matheuscmss | February 24, 2012 SPCS 7 Today we will make some preparations towards the application of Avila-Viana simplicity criterion to the case of the Kontsevich-Zorich cocycle over the ${SL(2,\mathbb{R})}$-orbits of square-tiled surfaces (along the lines of a forthcoming article by C.M., M. Möller, and J.-C. Yoccoz). Of course, given that the simplicity criterion was stated in terms of locally constant cocycles over complete shifts on finitely or countably many symbols, we need to discuss how to “reduce”, or more precisely, to code, the Kontsevich-Zorich cocycle over the ${SL(2,\mathbb{R})}$-orbits of square-tiled surfaces to the setting of complete shifts. In fact, the main purpose of this post (corresponding to J.-C. Yoccoz 7th lecture) is the presentation of an adequate coding of the Teichmüller flow and Kontsevich-Zorich cocycle over the ${SL(2,\mathbb{R})}$-orbits of square-tiled surfaces. Let ${\pi:M\rightarrow\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2}$ be a reduced origami (i.e., an origami whose periods generate the lattice ${\mathbb{Z}\oplus i\mathbb{Z}}$). Let’s consider the Kontsevich-Zorich cocycle ${G_{KZ}^t}$ over the Teichmüller flow restricted to the unit tangent bundle of the Teichmüller surface (“curve”) associated to ${M}$. More concretely, we consider (the unit tangent bundle) ${SL(2,\mathbb{R})/SL(M)}$, where ${SL(M)}$ is the Veech group of ${M}$ (that is, the stabilizer of ${M}$ under the action of ${SL(2,\mathbb {R})}$ on the moduli space of Abelian differentials). Recall that ${SL(M)}$ is a finite-index subgroup of ${SL(2,\mathbb{Z})}$ (as ${M}$ is a reduced origami). In this language, the Teichmüller geodesic flow is the action of $\displaystyle g_t=\left(\begin{array}{cc}e^t & 0 \\ 0 & e^{-t} \end{array}\right)$ We begin our discussion with the case of ${SL(2,\mathbb{R})/SL(2,\mathbb{Z})}$. In the sequel, we will think of ${SL(2,\mathbb{R})/SL(2,\mathbb{Z})}$ as the space of normalized (i.e., unit covolume) lattices of ${\mathbb{R}^2}$, and we will select an appropriate fundamental domain. Here, it is worth to point out that we’re not going to consider the lift to ${SL(2,\mathbb{R})}$ of the “classical” fundamental domain ${\mathcal{F}=\{z\in\mathbb{H}: |z|\geq 1, |\textrm{Re}z|\leq 1/2\}}$ of the action of ${SL(2,\mathbb{Z})}$ on the hyperbolic plane ${\mathbb{H}}$. Indeed, as we will see below, our choice of fundamental domain is not ${SO(2,\mathbb{R})}$-invariant, while any fundamental domain obtained by lifting to ${SL(2,\mathbb{R})}$ a fundamental domain of ${\mathbb{H}/SL(2,\mathbb{Z})} $ must be ${SO(2,\mathbb{R})}$-invariant (as ${\mathbb{H}/SL(2,\mathbb{Z})=SO(2,\mathbb{R})\backslash SL(2,\mathbb{R})/SL(2,\mathbb{Z})}$). Definition 1 A lattice ${L\subset\mathbb{R}^2}$ is irrational if ${L}$ intersect the coordinate axis ${x}$ and ${y}$ precisely at the origin ${0\in\mathbb{R}^2}$. Equivalently, ${L}$ is irrational if and only if the orbit ${g_t(L)}$ doesn’t diverge (neither in the past nor in the future) to the cusp of ${SL(2,\mathbb{R})/SL(2,\mathbb{Z})}$. Our choice of fundamental domain will be guided by the following fact: Proposition 2 Let ${L}$ be a normalized irrational lattice. Then, there exists an unique basis ${\{v_1=(\lambda_1,\tau_1), v_2=(\lambda_2,\tau_2)\}}$ of ${L}$ such that exactly one of the two possibilities below occur: □ “Top” case: ${\lambda_2\geq 1>\lambda_1>0}$ and ${0<\tau_2<-\tau_1}$; □ “Bottom” case: ${\lambda_1\geq 1>\lambda_2>0}$ and ${0<-\tau_1<\tau_2}$. 
Proof: Consider the following open unit area squares of the plane: ${Q^+:=(0,1)\times (0,1)}$ and ${Q^-=(0,1)\times (-1,0)}$. Observe that ${Q^{\pm}}$ can't contain two linearly independent vectors of ${L}$: indeed, if ${v_1, v_2\in Q^{\pm}\cap L}$, and ${v_1, v_2}$ are linearly independent, then ${0<|\det(v_1,v_2)|<1}$, a contradiction with the fact that ${\det(v_1,v_2)\in\mathbb{Z}}$ (as ${L}$ is normalized). On the other hand, we claim that ${Q^+\cup Q^-}$ contains (at least) one vector of ${L}$. In fact, since ${L}$ is normalized and irrational, one would have that ${L}$ is disjoint from some convex symmetric open set (strictly) containing the closure of ${Q^+\cup Q^-\cup (-Q^+)\cup(-Q^-)}$, a contradiction with Minkowski theorem (that any convex symmetric set ${C\subset \mathbb{R}^d}$ of volume ${\textrm{vol}(C)>2^d}$ intersects any normalized lattice of ${\mathbb{R}^d}$).

In particular, we have three possibilities:

• (a) ${Q^+\cap L\neq\emptyset}$, ${Q^-\cap L=\emptyset}$;
• (b) ${Q^+\cap L=\emptyset}$, ${Q^-\cap L\neq\emptyset}$;
• (c) ${Q^+\cap L\neq\emptyset}$, ${Q^-\cap L\neq\emptyset}$.

Because the first two cases are similar, we'll treat only items (b) and (c).

We start by item (b), that is, ${Q^-\cap L\neq\emptyset}$ but ${Q^+\cap L=\emptyset}$. In this situation, we select a primitive ${v_1=(\lambda_1,\tau_1)\in Q^-\cap L}$, so that $\displaystyle 0<\lambda_1<1 \quad \textrm{ and } 0<-\tau_1<1$ Next, we select ${v_2=(\lambda_2,\tau_2)}$ such that

• ${\{v_1, v_2\}}$ is a direct basis, i.e., ${\det(v_1,v_2)=\lambda_1\tau_2-\lambda_2\tau_1=1}$;
• ${\tau_2>0}$ is minimal.

Then, ${\tau_1+\tau_2<0}$: otherwise we could replace ${v_2}$ by ${v_2+v_1}$ to contradict the minimality. Thus, ${\lambda_2>0}$ (as ${\{v_1, v_2\}}$ is a direct basis, and ${0<\tau_2<-\tau_1<1}$ forces ${0<\lambda_1\tau_2<1}$). Since ${Q^+\cap L=\emptyset}$, we have that ${\lambda_2\geq 1}$, and hence ${\{v_1,v_2\}}$ is a basis of ${L}$ fitting the requirements of the top case.

Now, we verify the uniqueness of such ${\{v_1, v_2\}}$. Firstly, if ${\{v_1'=(\lambda_1',\tau_1'), v_2'=(\lambda_2',\tau_2')\}}$ fits the requirements ${\lambda_2'\geq 1>\lambda_1'>0}$ and ${0<-\tau_1'<\tau_2'}$ of the bottom case, then the relation $\displaystyle 1=\lambda_1'\tau_2'-\lambda_2'\tau_1'$ implies that ${\tau_2'<1}$, and, a fortiori, ${v_2'\in Q^+\cap L}$, a contradiction with our assumptions in item (b). Secondly, if ${\{v_1', v_2'\}}$ fits the requirements ${\lambda_2'\geq 1>\lambda_1'>0}$ and ${0<\tau_2'<-\tau_1'}$ of the top case, then ${v_1'\in Q^-\cap L}$, and, therefore, ${v_1'=v_1}$ (as ${Q^-}$ can't contain two linearly independent vectors of ${L}$). Now, we write ${v_2'=v_2+n v_1}$, and we notice that, since ${0<\tau_2'<-\tau_1'=-\tau_1}$, ${0<\tau_2<-\tau_1}$, and ${\tau_2'=\tau_2+n\tau_1}$, one has ${n=0}$, i.e., ${v_2'=v_2}$, and the analysis of item (b) is complete.

It remains only to analyze item (c). Take ${V_1=(\Lambda_1, T_1)\in Q^-\cap L}$ and ${V_2=(\Lambda_2, T_2)\in Q^+\cap L}$ primitive vectors. We have that $\displaystyle 2>\det(V_1,V_2)=\Lambda_1 T_2 - \Lambda_2 T_1 >0$ and, since ${\det(V_1,V_2)\in\mathbb{Z}}$, it follows that ${\Lambda_1 T_2 - \Lambda_2 T_1 = 1}$. Furthermore, ${T_1+T_2\neq 0}$ because ${L}$ is irrational. Assume ${T_1+T_2<0}$ (the other case ${T_1+T_2>0}$ is analogous). Then, we set ${v_2:= V_2}$ and ${v_1:=V_1+nV_2}$ where ${n\geq 1}$ is the largest integer such that ${T_1+nT_2<0}$.
We have that ${v_1=(\lambda_1,\tau_1)}$, ${v_2=(\ lambda_2,\tau_2)}$ verifies ${0<\lambda_2,\tau_2<1}$ (as ${v_2:=V_2\in Q^+}$), and ${0<-\tau_1<\tau_2}$ (as ${n}$ was taken to be the largest possible). Furthermore, ${\lambda_1\geq 1}$, as, otherwise, ${v_1=V_1+nV_2}$ (recall that ${n\geq 1}$) and ${V_1}$ would be linearly independent vectors of ${L}$ inside ${Q^-}$, a contradiction. In resume, ${\{v_1,v_2\}}$ is a basis of ${L}$ meeting the requirements of the bottom case. Now, we check the uniqueness of such ${\{v_1', v_2'\}}$. Here, since the argument is the same one used for item (b), we will illustrate only the bottom case ${\lambda_1'\geq 1>\lambda_2'>0}$, ${0<-\tau_1'<\tau_2'}$. In this situation, ${v_2'\in Q^-}$ (as ${L}$ is normalized), so that ${v_2'=v_2}$. Then, we write ${v_1'=v_1+nv_2}$, and we conclude that ${n=0}$ because ${0<-\tau_1'<\tau_2'=\tau_2}$, ${0<-\tau_1<\tau_2}$, and ${\tau_1'=\tau_1+n\tau_2}$. $\Box$ Using this proposition, we can describe the Teichmüller geodesic flow ${g_t=\left(\begin{array}{cc}e^t & 0 \\ 0 & e^{-t} \end{array}\right)}$ on the space ${SL(2,\mathbb{R})/SL(2,\mathbb{Z})}$ of normalized lattices as follows. Let ${L_0}$ be a normalized irrational lattice, and let ${(v_1,v_2)}$ be the basis of ${L_0}$ given by the proposition above, i.e., the top, resp. bottom, condition. Then, we see that the basis ${(g_t v_1, g_t v_2)}$ of ${L_t:=g_t L_0}$ satisfies the top, resp. bottom condition for all ${t <t^*}$, where ${\lambda_1 e^{t^*}=1}$ in the top case, resp. ${\lambda_2 e ^{t^*}=1}$ in the bottom case. However, at time ${t^*}$, the basis ${\{v_1^*=g_{t^*} v_1, v_2^*=g_{t^*} v_2\}}$ of ${L_0}$ceases to fit the requirements of the proposition above, but we can remedy this problem by changing the basis: for instance, if the basis ${\{v_1,v_2\}}$ of the initial lattice ${L_0}$ has top type, then it is not hard to check that $\displaystyle v_1'=v_1^* \quad \textrm{ and } \quad v_2'=v_2^*-a v_1^*$ where ${a=\lfloor\lambda_2/\lambda_1\rfloor}$ is a basis of ${L_{t^*}}$ of bottom type. Here, we observe that the quantity ${\alpha:=\lambda_1/\lambda_2\in (0,1)}$ giving the ratios of the first coordinates of the vectors ${g_t v_1, g_t v_2}$ forming a top type basis of ${L_t}$ for any ${0\leq t<t^*}$ is related to the integer $a$ by the formula $\displaystyle a=\lfloor 1/\alpha\rfloor$ Also, the new quantity ${\alpha'}$ giving the ratio of the first coordinates of the vectors ${v_1', v_2'}$ forming a bottom type basis of ${L_{t^*}}$ is related to ${\alpha}$ by the formula $\displaystyle \alpha'=\lambda_2'/\lambda_1' = \{1/\alpha\}:=G(\alpha)$ where ${G}$ is the so-called Gauss map. In this way, we find the classical relationship between the geodesic flow on the modular surface ${SL(2,\mathbb{R})/SL(2,\mathbb{Z})}$ and the continued fraction algorithm. At this stage, we’re ready to code the Teichmüller flow over the unit tangent bundle of the Teichmüller surface ${SL(2,\mathbb{R})/SL(M)}$ associated to a reduced origami. -Coding the geodesic flow on ${SL(2,\mathbb{R})/SL(M)}$- Let ${\Gamma(M)}$ be the following graph: the set of its vertices is $\displaystyle \textrm{Vert}(\Gamma(M)) = \{SL(2,\mathbb{Z})-\textrm{orbit of } M \}\times \{t,b\}$ $\displaystyle = \{M=M_1,\dots, M_r\}\times \{t,b\}$ and its arrows are $\displaystyle (M_i,c)\stackrel{\gamma_{a,i,c}}{\rightarrow} (M_j,\overline{c})$ where ${a\in\mathbb{N}}$, ${a\geq 1}$, ${c\in\{t,b\}}$, ${\overline{c}= b}$ (resp. ${t}$) if ${c=t}$ (resp. 
${b}$), and $\displaystyle M_j=\left\{\begin{array}{cl}\left(\begin{array}{cc} 1 & a \\ 0 & 1\end{array}\right) M_i, & \textrm{if } c=t \\ \left(\begin{array}{cc} 1 & 0 \\ a & 1\end{array}\right) M_i, & \textrm{if } c=b \end{array}\right.$

Notice that this graph has finitely many vertices but countably many arrows. Using this graph, we can code irrational orbits of the flow ${g_t}$ on ${SL(2,\mathbb{R})/SL(M)}$ as follows. Given ${m_0\in SL(2,\mathbb{R})}$, let ${L_{st}=\mathbb{Z}^2}$ be the standard lattice and put ${m_0 L_{st} = L_0}$. Also, let us denote ${m_t=g_t m_0}$. By Proposition 2, there exists an unique ${h_0\in SL(2,\mathbb{Z})}$ such that ${v_1=m_0 h_0^{-1}(e_1)}$, ${v_2 = m_0 h_0^{-1}(e_2)}$ satisfying the conditions of the proposition (here, ${\{e_1, e_2\}}$ is the canonical basis of ${\mathbb{R}^2}$). Denote by ${c}$ the type (top or bottom) of the basis ${\{v_1,v_2\}}$ of ${L_0}$. We assign to ${m_0}$ the vertex ${(M_i:=h_0 M, c)\in\textrm{Vert}(\Gamma(M))}$.

For sake of concreteness, let's assume that ${c=t}$ (top case). Recalling the notations introduced after the proof of Proposition 2, we notice that the lattice ${L_{t^*}}$ associated to ${m_{t^*}}$ has a basis of bottom type $\displaystyle v_1' = g_{t^*} m_0 h_0^{-1}(e_1) = g_{t^*} m_0 h_1^{-1}(e_1)$ $\displaystyle v_2' = g_{t^*} m_0 h_0^{-1}(e_2-a e_1) = g_{t^*} m_0 h_1^{-1}(e_2)$ where ${h_1=h_* h_0}$ and $\displaystyle h_* = \left(\begin{array}{cc}1 & a \\ 0 & 1\end{array}\right)$ In other words, starting from the vertex ${(M_i,t)}$ associated to the initial point ${m_0}$, after running the geodesic flow for a time ${t^*}$, we end up with the vertex ${(M_j,b)}$ where ${M_j= h_* M_i}$. Equivalently, the piece of trajectory from ${m_0}$ to ${g_{t^*} m_0}$ is coded by the arrow $\displaystyle (M_i,t)\stackrel{\gamma_{i,a,t}}{\rightarrow}(M_j,b)$

Evidently, we can iterate this procedure (by replacing ${L_0}$ by ${L_{t^*}}$) in order to code the entire orbit ${g_t m_0}$ by a succession of arrows. However, this coding has the “inconvenient” (with respect to the setting of Avila-Viana simplicity criterion) that it is not associated to a complete shift but only a subshift (as we do not have the right to concatenate two arrows ${\gamma}$ and ${\gamma'}$ unless the endpoint of ${\gamma}$ coincides with the start of ${\gamma'}$). Fortunately, this little difficulty is easy to overcome: in order to get a coding by a complete shift, it suffices to fix a vertex ${p^*\in\textrm{Vert}(\Gamma(M))}$ and consider exclusively concatenations of loops based at ${p^*}$. Of course, we pay a price here: since there may be some orbits of ${g_t}$ whose coding is not a concatenation of loops based on ${p^*}$, we're throwing away some orbits in this new way of coding. But, it is not hard to see that the (unique, Haar) ${SL(2,\mathbb{R})}$-invariant probability ${\mu}$ on ${SL(2,\mathbb{R})/SL(M)}$ gives zero weight to the orbits that we're throwing away, so that this new coding still captures most orbits of ${g_t}$ (from the point of view of ${\mu}$). In any case, this allows to code ${g_t}$ by a complete shift whose (countable) alphabet is constituted of (minimal) loops based at ${p^*}$.

Once we know how to code our flow ${g_t}$ by a complete shift, the next natural step (in view of Avila-Viana criterion) is the verification of the bounded distortion condition of the invariant measure induced by ${\mu}$ on the complete shift. This is the content of the next section.
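As a concrete (and entirely optional) illustration of the coding just described, here is a small Python sketch that is not taken from the post: it only tracks the combinatorial data of the itinerary, namely the arrow labels a_n = floor(1/alpha_n) produced by iterating the Gauss map, with the top/bottom type alternating at each renormalization time. Which square-tiled surface M_i in the SL(2,Z)-orbit one lands on is ignored, since that depends on the particular origami; the function name and output format are mine.

import math

def code_orbit(alpha, steps=10, start_type="t"):
    """Arrow labels (type, a) obtained by iterating the Gauss map on alpha."""
    labels = []
    c = start_type
    for _ in range(steps):
        a = math.floor(1 / alpha)      # arrow label a = floor(1/alpha)
        labels.append((c, a))
        alpha = 1 / alpha - a          # Gauss map: alpha' = {1/alpha}
        c = "b" if c == "t" else "t"   # top and bottom types alternate
    return labels

print(code_orbit((math.sqrt(5) - 1) / 2, steps=6))  # golden mean: every label is a=1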
-Verification of the bounded distortion condition-

As we saw above, the coding of the geodesic flow (and modulo the stable manifolds, that is, the “${\tau}$-coordinates” [vertical coordinates]) is the dynamical system $\displaystyle \textrm{Vert}(\Gamma(M))\times ((0,1)\cap(\mathbb{R}-\mathbb{Q}))\rightarrow\textrm{Vert}(\Gamma(M))\times ((0,1)\cap(\mathbb{R}-\mathbb{Q}))$ given by ${(p,\alpha)\mapsto (p',G(\alpha))}$ where ${G(\alpha)=\{1/\alpha\}=\alpha'}$ is the Gauss map and ${p\stackrel{\gamma_{a,p}}{\rightarrow}p'}$ with ${a=\lfloor1/\alpha\rfloor}$. In this language, ${\mu}$ becomes (up to normalization) the Gauss measure ${dt/(1+t)}$ on each copy ${\{p\}\times (0,1)}$, ${p\in \textrm{Vert}(\Gamma(M))}$, of the unit interval ${(0,1)}$.

Now, for sake of concreteness, let us fix ${p^*}$ a vertex of top type. Given ${\gamma}$ a loop based on ${p^*}$, i.e., a word on the letters of the alphabet of the coding leading to a complete shift, we denote by ${I(\gamma)\subset (0,1)}$ the interval corresponding to ${\gamma}$, that is, the interval ${I(\gamma)}$ consisting of ${\alpha\in (0,1)}$ such that the concatenation of loops (based at ${p^*}$) coding the orbit of ${(p^*,\alpha)}$ starts by the word ${\gamma}$. In this setting, the measure induced by ${\mu}$ on the complete shift is easy to express: by definition, the measure of the cylinder ${\Sigma(\gamma)}$ corresponding to concatenations of loops (based at ${p^*}$) starting by ${\gamma}$ is the Gauss measure of the interval ${I(\gamma)}$ up to normalization. Because the Gauss measure is equivalent to the Lebesgue measure (as its density ${1/(1+t)}$ satisfies ${1/2\leq1/(1+t)\leq 1}$ in ${(0,1)}$), we conclude that the measure of ${\Sigma(\gamma)}$ is equal to $\displaystyle |I(\gamma)|:=\textrm{Lebesgue measure of } I(\gamma)$ up to a multiplicative constant.

In particular, it follows that the bounded distortion condition for the measure induced by ${\mu}$ on the complete shift is equivalent to the existence of a constant ${C>0}$ such that $\displaystyle C^{-1}|I(\gamma_0)|\cdot|I(\gamma_1)|\leq |I(\gamma)|\leq C|I(\gamma_0)|\cdot |I(\gamma_1)| \ \ \ \ \ (1)$ for every ${\gamma=\gamma_0\gamma_1}$. In resume, this reduces the bounded distortion condition to the problem of understanding the interval ${I(\gamma)}$. Here, by the usual properties of the continued fraction, it is not hard to show that ${I(\gamma)}$ is a Farey interval $\displaystyle I(\gamma)=\left(\frac{p}{q}, \frac{p+p'}{q+q'}\right)$ $\displaystyle \left(\begin{array}{cc}p' & p \\ q' & q\end{array}\right)\in SL(2,\mathbb{Z})$ being t-reduced, i.e., ${0<p'\leq p,q'\leq q}$. Consequently, from this description, we recover the classical fact that $\displaystyle \frac{1}{2q^2}\leq |I(\gamma)|=\frac{1}{q(q+q')}\leq \frac{1}{q^2} \ \ \ \ \ (2)$

Given ${\gamma=\gamma_0\gamma_1}$, and denoting by ${\left(\begin{array}{cc}p_0' & p_0 \\ q_0' & q_0\end{array}\right)}$, resp. ${\left(\begin{array}{cc}p_1' & p_1 \\ q_1' & q_1\end{array}\right)}$, resp. ${\left(\begin{array}{cc}p' & p \\ q' & q\end{array}\right)}$ the matrices associated to ${\gamma_0}$, resp. ${\gamma_1}$, resp. ${\gamma}$, it is not hard to check that $\displaystyle \left(\begin{array}{cc}p' & p \\ q' & q\end{array}\right)=\left(\begin{array}{cc}p_0' & p_0 \\ q_0' & q_0\end{array}\right)\left(\begin{array}{cc}p_1' & p_1 \\ q_1' & q_1\end{array}\right)$ so that ${q=q_0'p_1+q_0q_1}$.
Because these matrices are t-reduced, we have that $\displaystyle q_0q_1\leq q\leq 2q_0q_1$ Therefore, in view of (1) and (2), the bounded distortion condition follows. Once we know that the basis dynamics (Teichmüller geodesic flow on ${SL(2,\mathbb{R})/SL(M)}$) is coded by a complete shift equipped with a probability measure with bounded distortion, we can pass to the study of the Kontsevich-Zorich cocycle in terms of the coding. -Cocycle over the complete shift induced by ${G_{KZ}^t}$- Let ${(M_i,[\textrm{t, resp. b}])\stackrel{\gamma_{a,i,t}}{\rightarrow}(M_j,[\textrm{b, resp. t}])}$ be an arrow of ${\Gamma(M)}$ and denote by ${A:M_i\rightarrow M_j}$ an affine map of derivative $ {\left(\begin{array}{cc}1 & a \\ 0 & 1\end{array}\right)}$, resp. ${\left(\begin{array}{cc}1 & 0 \\ a & 1\end{array}\right)}$. Of course, ${A}$ is only well-defined up to automorphisms of ${M_i}$ and /or ${M_j}$. In terms of translation structures, given ${g\in SL(2,\mathbb{R})}$ and a translation structure ${\zeta}$ on ${M}$, the identity map ${\textrm{id}:(M,\zeta)\rightarrow (M,g\zeta)}$ is an affine map of derivative ${g}$. Given ${\gamma}$ a path in ${\Gamma(M)}$ obtained by concatenation ${\gamma=\gamma_1\dots\gamma_{\ell}}$, and starting at ${(M_i,c)}$ and ending at ${(M_j,c')}$, one has, by functoriality, ${A_{\ gamma}:M_i\rightarrow M_j}$ an affine map given by ${A_{\gamma}=A_{\gamma_{\ell}}\dots A_{\gamma_1}}$. Suppose now that ${\gamma}$ is a loop based at ${(M,c)}$. Then, by definition, the derivative ${A_{\gamma}\in SL(M)}$. For our subsequent discussions, an important question is: what matrices of ${SL (M)}$ can be obtained in this way? In this direction, we recall the following definition (already encountered in the previous section): Definition 3 We say that ${A=\left(\begin{array}{cc}a & b \\ c & d\end{array}\right)\in SL(2,\mathbb{Z})}$ is □ t-reduced if ${0< a\leq b,c < d}$; □ b-reduced if ${0<d\leq b,c <a}$. Observe that the product of two t-reduced (resp. b-reduced) matrices is also t-reduced (resp. b-reduced), i.e., these conditions are stable by products. The following statement is the answer to the question above: Corollary 4 The matrices associated to the loops ${\gamma}$ based at the vertex ${(M,c)}$ are precisely the c-reduced matrices of ${SL(M)}$. Indeed, this is a corollary to the next proposition: Proposition 5 A matrix ${A}$ is t-reduced if and only if there exists ${k\geq 1}$ and ${a_1,\dots, a_{2k}\geq 1}$ such that $\displaystyle A=\left(\begin{array}{cc}1 & 0 \\ a_{2k} & 1\end{array}\right)\left(\begin{array}{cc}1 & a_{2k-1} \\ 0 & 1\end{array}\right)\dots \left(\begin{array}{cc}1 & 0 \\ a_2 & 1\end{array} \right)\left(\begin{array}{cc}1& a_1 \\ 0 & 1\end{array}\right)$ Furthermore, the decomposition above is unique. Of course, one has similar statements for b-reduced matrices (by conjugation by the matrix ${\left(\begin{array}{cc}0& 1 \\ 1 & 0\end{array}\right)\in GL(2,\mathbb{Z})}$). Actually, this proposition follows from the following slightly more general fact: Proposition 6 Let ${A=\left(\begin{array}{cc}a & b \\ c & d\end{array}\right)\in SL(2,\mathbb{Z})}$ with ${a,b,c,d\geq 0}$. 
Then, there exists an unique decomposition $\displaystyle A=\left(\begin{array}{cc}1 & 0 \\ a_{2k} & 1\end{array}\right)\left(\begin{array}{cc}1 & a_{2k-1} \\ 0 & 1\end{array}\right)\dots \left(\begin{array}{cc}1 & 0 \\ a_2 & 1\end{array}\right)\left(\begin{array}{cc}1& a_1 \\ 0 & 1\end{array}\right)$ with ${a_i\geq 0}$ for all ${i}$, and ${a_i>0}$ if ${1<i<2k}$.

Assuming the validity of Proposition 6, we can derive Proposition 5 as follows. As one can easily check, it suffices to rule out the possibilities ${a_{2k}=0}$ or ${a_1=0}$ in the decomposition above. We treat only the case ${a_{2k}=0}$ as ${a_1=0}$ is analogous. If ${a_{2k}=0}$, we would have $\displaystyle A=\left(\begin{array}{cc}1 & a_{2k-1} \\ 0 & 1\end{array}\right)\left(\begin{array}{cc}a' & b' \\ c' & d'\end{array}\right) = \left(\begin{array}{cc}a'+a_{2k-1}c' & b'+a_{2k-1}d' \\ c' & d'\end{array}\right)$ with ${d'<b'+a_{2k-1}d'}$ (as ${a_{2k-1}\geq 1}$), that is, ${A}$ is not t-reduced.

Concerning the proof of Proposition 6, while it is not difficult (essentially an “Euclidean division algorithm”-like argument), we prefer to omit it in order to present the following related Proposition 7 ${A\in SL(2,\mathbb{Z})}$ is conjugated (in ${SL(2,\mathbb{Z})}$) to a t-reduced matrix if and only if its trace ${\textrm{tr}(A)>2}$. We close this post with the proof of this proposition.

Proof: Since $\displaystyle \left(\begin{array}{cc}1 & 0 \\ a_2 & 1\end{array}\right)\left(\begin{array}{cc}1& a_1 \\ 0 & 1\end{array}\right)=\left(\begin{array}{cc}1 & a_1 \\ a_2 & 1+a_1a_2\end{array}\right)$ has trace ${2+a_1a_2>2}$ whenever ${a_1,a_2\geq 1}$, we have that if ${A}$ is conjugated to a t-reduced matrix, then, by Proposition 5, ${\textrm{tr}(A)>2}$.

Conversely, given ${A\in SL(2,\mathbb{Z})}$ with ${\textrm{tr}(A)>2}$, its eigenvalues satisfy ${\lambda>1>\lambda^{-1}}$. Let ${\{e_u,e_s\}\subset\mathbb{R}^2}$ be a direct normalized basis with ${A(e_u)=\lambda e_u}$, ${A(e_s)=\lambda^{-1}e_s}$. There exists ${g\in SL(2,\mathbb{Z})}$ such that ${g e_1=\alpha e_u+\beta e_s}$, ${g e_2=\gamma e_u + \delta e_s}$ with ${\alpha,\gamma,\delta>0}$, ${\beta<0}$. Geometrically, these conditions correspond to the following picture: In this situation, the matrix ${A':=g^{-1} A g}$ has nonnegative coefficients. By Proposition 6, we have the following possibilities:

• (a) ${a_1,a_{2k}>0}$
• (b) ${a_1=a_{2k}=0}$
• (c) ${a_1>0}$, ${a_{2k}=0}$
• (d) ${a_1=0}$, ${a_{2k}>0}$

Evidently, the proof is complete in the cases (a) and (b). Also, the cases (c) and (d) are similar, so that the argument is finished once we treat (c): in this situation, we observe that $\displaystyle \left(\begin{array}{cc}1 & a_{2k-1} \\ 0 & 1\end{array}\right)\dots \left(\begin{array}{cc}1 & 0 \\ a_2 & 1\end{array}\right)\left(\begin{array}{cc}1& a_1 \\ 0 & 1\end{array}\right)$ is conjugated to $\displaystyle \left(\begin{array}{cc}1 & a_{2k-1} \\ 0 & 1\end{array}\right)^{-1}\cdot\left(\begin{array}{cc}1 & a_{2k-1} \\ 0 & 1\end{array}\right)\dots \left(\begin{array}{cc}1 & 0 \\ a_2 & 1\end{array}\right)\left(\begin{array}{cc}1& a_1 \\ 0 & 1\end{array}\right)\cdot\left(\begin{array}{cc}1 & a_{2k-1} \\ 0 & 1\end{array}\right)$ that is, $\displaystyle \left(\begin{array}{cc}1 & 0 \\ a_{2k-2} & 1\end{array}\right)\dots \left(\begin{array}{cc}1 & 0 \\ a_2 & 1\end{array}\right)\left(\begin{array}{cc}1& a_1+a_{2k-1} \\ 0 & 1\end{array}\right)$ a t-reduced matrix (by Proposition 5).
$\Box$ Posted in expository, math.DS, Mathematics | Tags: College de France, continued fractions, Jean-Christophe Yoccoz, Kontsevich-Zorich cocycle, Surfaces a petits carreaux (suite), Teichmüller flow
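(Appended illustration, not part of the original post.) The “Euclidean division algorithm”-like argument behind Propositions 5 and 6 can be made concrete in a few lines of Python: given a matrix in SL(2,Z) with nonnegative entries, one greedily peels off elementary factors U(n) = (1 n; 0 1) and L(n) = (1 0; n 1) from the left. For a t-reduced matrix the resulting word starts with an L-factor and ends with a U-factor, exactly as in Proposition 5. The function name, the word format, and the greedy strategy below are my own choices; only the statement they illustrate comes from the post.

def decompose(a, b, c, d):
    """Peel [[a, b], [c, d]] in SL(2,Z), entries >= 0, into elementary factors,
    returned left to right as ('U', n) = [[1, n], [0, 1]] and ('L', n) = [[1, 0], [n, 1]]."""
    assert a * d - b * c == 1 and min(a, b, c, d) >= 0
    word = []
    while (a, b, c, d) != (1, 0, 0, 1):
        if c == 0:                       # remaining matrix is U(b)
            word.append(("U", b)); b = 0
        elif b == 0:                     # remaining matrix is L(c)
            word.append(("L", c)); c = 0
        elif a >= c and b >= d:          # top row dominates: peel U(q) on the left
            q = min(a // c, b // d)
            word.append(("U", q)); a, b = a - q * c, b - q * d
        else:                            # bottom row dominates: peel L(q) on the left
            q = min(c // a, d // b)
            assert q >= 1                # holds for nonnegative SL(2,Z) matrices
            word.append(("L", q)); c, d = c - q * a, d - q * b
    return word

print(decompose(7, 9, 3, 4))   # [('U', 2), ('L', 3), ('U', 1)]
print(decompose(1, 1, 1, 2))   # [('L', 1), ('U', 1)]  -- a t-reduced example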
{"url":"http://matheuscmss.wordpress.com/2012/02/24/spcs-7/","timestamp":"2014-04-21T09:35:51Z","content_type":null,"content_length":"179735","record_id":"<urn:uuid:526c4a96-e1a4-4be6-89bd-0b7c7063af7e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell] natural numbers Pablo E. Martinez Lopez fidel at sol.info.unlp.edu.ar Thu Mar 4 17:41:13 EST 2004 > "it would be very natural to add a type Natural providing an unbounded > size unsigned integer, just as Integer provides unbounded size signed > integers. We do not do that yet since there is no demand for it." > if that is not too much work could we have that in the library? i think > it would be very useful. > (i am trying to generate demand :-) In my courses here in Argentina we have used a type Natural for pedagogical purposes. It has some ugly features regarding negative constants (their meaning is bottom), but it gives young students the feeling that factorial, fibonacci and other functions that are more "natural" over natural numbers have the right type. I attach my implementation, in case it may be useful to someone. Pablo E. Martínez López (Fidel) -------------- next part -------------- -- Author: Pablo E. Martínez López -- University of La Plata and University of Buenos Aires, Argentina -- fidel at sol.info.unlp.edu.ar -- Date: Sep 2000 module Nat (Nat) where data Nat = N Integer -- N is hidden -- These numbers are used as any other (including numeric constants) -- thanks to the class system. -- The check about the naturality of the number is done in runtime, in -- the construction of the representation. -- For that reason, eg. (2-3) :: Nat has the right type, but gives an -- error if it is evaluated. -- At Universities of Buenos Aires and La Plata we use them for pedagogical -- purposes, so that functions like factorial or fibonacci can be given a -- type involving Nats and not Integers. -- Internal constructor (runtime check) nat :: Integer -> Nat nat n = if n>=0 then N n else error (show n ++ " is not a natural number!") -- The instances of all the classes instance Eq Nat where (N n) == (N m) = n==m instance Ord Nat where (N n) <= (N m) = n <= m instance Enum Nat where fromEnum (N n) = fromEnum n toEnum n = nat (toEnum n) instance Num Nat where (N n) + (N m) = nat (n+m) (N n) - (N m) = nat (n-m) (N n) * (N m) = nat (n*m) negate (N n) = error "A natural number cannot be inverted!" abs n = n signum (N n) = nat (signum n) fromInteger n = nat n instance Show Nat where showsPrec p (N n) = showsPrec p n instance Read Nat where readsPrec p s = [ (nat n, s') | (n,s') <- readsPrec p s ] instance Real Nat where toRational (N n) = toRational n instance Integral Nat where quot (N n) (N m) = nat (quot n m) rem (N n) (N m) = nat (rem n m) div (N n) (N m) = nat (div n m) mod (N n) (N m) = nat (mod n m) quotRem (N n) (N m) = let (q,r) = quotRem n m in (nat q, nat r) divMod (N n) (N m) = let (q,r) = divMod n m in (nat q, nat r) toInteger (N n) = n More information about the Haskell mailing list
{"url":"http://www.haskell.org/pipermail/haskell/2004-March/013762.html","timestamp":"2014-04-20T16:23:27Z","content_type":null,"content_length":"5266","record_id":"<urn:uuid:113e7eb1-be93-4995-9ca0-fef07d3d7a1d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning Goal: To Learn To Calculate Energy And ... | Chegg.com
Relativistic Energy and Momentum
Learning Goal: To learn to calculate energy and momentum for relativistic particles and, from the relativistic equations, to find relations between a particle's energy and its momentum through its mass.
The relativistic momentum
Part A: Find the momentum
Part B: Find the total energy
Part C
Part D: What is the rest mass
Advanced Physics
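The numerical data for Parts A-D are not preserved in this extract, but the standard relations the exercise relies on, for a particle of rest mass m moving at speed v, are:

\[ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad p = \gamma m v, \qquad E = \gamma m c^2, \qquad E^2 = (pc)^2 + (mc^2)^2 . \]

In particular, Part D (finding the rest mass from energy and momentum) amounts to solving the last relation for m, giving m = \sqrt{E^2 - (pc)^2}/c^2.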
{"url":"http://www.chegg.com/homework-help/questions-and-answers/learning-goal-learn-calculate-energy-momentum-relativistic-particles-relativistic-equation-q3239404","timestamp":"2014-04-21T03:56:28Z","content_type":null,"content_length":"105040","record_id":"<urn:uuid:6e7358e3-c4d1-4e87-acf4-fffb9a81786b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Arithmetical soundness of ZFC
joeshipman@aol.com joeshipman at aol.com
Mon May 25 11:54:59 EDT 2009

Sure, just say "As N --> infinity, the fraction of grammatically well-formed sentences of length N that are decidable in PA approaches zero."

However, this would probably depend on the precise formulation of the grammar. Almost all sufficiently large sentences will have Con(PA) as a conjunct or disjunct, but that is not enough to render the sentence undecidable. In fact, for the most obvious and user-friendly formulations of PA, the above is false because a nonzero fraction of sentences of PA begin "((0=0) V (" and so are decidably true and a nonzero fraction begin "((0=S(0)) & (" and so are decidably false. The actual probability that a well-formed sentence will be decidable (in any reasonable notion of probability) is likely to be equivalent to Chaitin's number Omega (which is also coding-dependent) in a strong sense.

As for the arithmetical unsoundness of ZFC: if there is an Inaccessible Cardinal k, then V(k) models ZFC and Th(V(k)) cannot include any false arithmetical sentences, so ZFC must be arithmetically sound. Therefore any evidence for ZFC's arithmetical unsoundness is also evidence that there are no inaccessibles. In fact the same argument works for any Standard Model of ZFC because of the absoluteness of arithmetical sentences. Therefore you won't be able to argue for the arithmetical unsoundness of ZFC unless you start by assuming that there is no Standard Model.

I think that this is actually a reasonable assumption to make, although I happen to believe that ZFC *is* arithmetically sound.

-- JS

-----Original Message-----
From: Timothy Y. Chow <tchow at alum.mit.edu>

Harvey Friedman <friedman at math.ohio-state.edu> wrote:
One example would be a proof of "ZFC is inconsistent" in ZFC.

Actually, what I think is more interesting is Nik Weaver's implicit suggestion that a random sentence in the first-order language of arithmetic is undecidable in PA. Is there any way to make this precise?

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2009-May/013733.html","timestamp":"2014-04-19T12:01:02Z","content_type":null,"content_length":"4604","record_id":"<urn:uuid:07d2fc29-b8e7-4e26-a012-eda19506f714>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Multivariable Optimization Problem Got the idea. You set z=120-x-y and plug it into xy+xz+yz, which is then f(x,y). So you take the derivative of that, set the partials equal to 0, solve for x, solve for y, plug those back into z= 120-x-y... and you get x, y and z equal 40.
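A quick check of that reasoning with SymPy (the symbol names below are mine, not from the thread):

import sympy as sp

x, y = sp.symbols("x y", real=True)
z = 120 - x - y                      # eliminate z with the constraint x + y + z = 120
f = x*y + x*z + y*z                  # objective rewritten as f(x, y)

print(sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True))
# [{x: 40, y: 40}]  and then z = 120 - 40 - 40 = 40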
{"url":"http://www.physicsforums.com/showthread.php?t=111678","timestamp":"2014-04-17T09:55:12Z","content_type":null,"content_length":"27134","record_id":"<urn:uuid:b9c50401-a499-4ebf-b668-57a0bb281645>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Surface area of a revolution March 26th 2011, 06:53 PM #1 Surface area of a revolution I'm having some trouble with a question, I've got a lot of the difficult stuff down but I can't get the limits right and my answer is twice what it should be. Curve C: $x=cos^3 \theta \: , \: y= sin^3 \theta \: , \: 0 \leq \theta \leq \frac{\pi}{2}$ The curve C is rotated through 360 degrees about the x-axis. Show that the curved surface area of the solid of revolution generated by: $<br /> 6 \pi \int^{ \frac{\pi}{2} }{0} }_0 sin^4 \theta cos \theta d\theta$ Hence find this curved surface area. So I know that $A = \int 2 \pi y ds$ and that $ds = \sqrt{ (\frac{dx}{d \theta})^2 + (\frac{dy}{d \theta})^2 } = \frac{3}{2}sin 2 \theta$. $A = 2\pi \int 2 sin^3 \theta \frac{3}{2} \sin 2 \theta d \theta$ $sin 2 \theta \equiv 2 \sin \theta \cos \theta$ $A = 6\pi \int sin^4 \theta cos \theta d \theta$ So I have the right integral but I can't show that the limits are 0 to $\frac{\pi}{2}$. I don't think it wants me to use symmetry as that would make it $12 \pi$, I tried $\theta = arcos( x^{\frac{1}{3} } )$ but I can't get the right answers. Surely it's because you're told in the first line of the problem what the values of $\displaystyle \theta$ are... Well it says show, and I can't show that they are the limits. I tried the lower limit as -1: $\theta = arcos( (-1)^{\frac{1}{3} }) = \pi$ And upper as 1: $\theta = arcos( 1^{\frac{1}{3} }) = 0$ So I'm quite confused. No, it says to show that the integral gives you the surface area. You don't need to go any further than to use the formula you have been given (which you have done) and substituting the endpoints of your $\displaystyle \theta$ domain. To be picky, it actually says show that the integral between those two points gives the surface area. Could you perhaps, live up to your name of "prove it" and show me why the limits have to be what it says, as to me, mathematically they can't be. I have just noticed that you have made a mistake in your integral. It appears you took out $\displaystyle 2\pi$ as a factor, but also left $\displaystyle 2$ inside the integral as well. That will explain why you're getting double the answer you're supposed to. It always helps to have a picture of what you are trying to do. In this case, to draw a graph of the function will mean you need to write $\displaystyle y$ in terms of $\displaystyle x$, and the required range of $\displaystyle \theta$ will be the distance between $\displaystyle x$ intercepts. If $\displaystyle x = \cos^3{\theta}$ and $\displaystyle y = \sin^3{\theta}$, then $\displaystyle y = \left(\sin^2{\theta}\right)^{\frac{3}{2}}$ $\displaystyle y = \left(1 - \cos^2{\theta}\right)^{\frac{3}{2}}$ $\displaystyle y = \left[1 - \left(\cos^3{\theta}\right)^{\frac{2}{3}}\right]^{\frac{3}{2}}$ $\displaystyle y = \left(1 - x^{\frac{2}{3}}\right)^{\frac{3}{2}}$. The $\displaystyle x$ intercepts are where $\displaystyle y = 0$, so $\displaystyle 0 = \left(1 - x^{\frac{2}{3}}\right)^{\frac{3}{2}}$ $\displaystyle 0 = 1 - x^{\frac{2}{3}}$ $\displaystyle x^{\frac{2}{3}} = 1$ $\displaystyle x^2 = 1$ $\displaystyle x = \pm 1$. And since $\displaystyle x = \cos^3{\theta}$, solving $\displaystyle \cos^3{\theta} = \pm 1$ will give $\displaystyle \theta = 0$ and $\displaystyle \theta = \pi$, but you can now use symmetry, since we have found why your answer is double what it should be. Why do I do these things?! 
Thanks for that I see what I need to do now, and I wouldn't have thought to equate y to zero in order to get the upper and lower limits so thanks for that too. Sorry I've gone over it again and I made a typo in my original post but the answer is still what I said it was: $A= 2\pi \int \sin^3 \theta \frac{3}{2}\sin 2 \theta d\theta$ $A =3\pi \int \sin^3 \theta 2 \sin \theta \cos \theta d \theta$ $A =6\pi \int \sin^4 \theta \cos \theta d \theta$ So lower limit should be $b = arcos(-1) = \pi$ And upper limit $a = arcos(1) = 0$ So shouldn't it be: $A =6\pi \int^{\pi}_0 \sin^4 \theta \cos \theta d \theta$ So still, using symmetry would make it $12 \pi$ which is wrong. I keep checking it and can't see where I'm wrong :S March 26th 2011, 06:55 PM #2 March 26th 2011, 06:59 PM #3 March 26th 2011, 07:02 PM #4 March 26th 2011, 07:19 PM #5 March 26th 2011, 07:37 PM #6 March 26th 2011, 07:43 PM #7 March 27th 2011, 02:47 PM #8
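For anyone wanting to verify the final number, the integral as stated in the problem evaluates in one line with SymPy (an optional check, not part of the thread): since the antiderivative of sin^4(theta)cos(theta) is sin^5(theta)/5, the value over (0, pi/2) is 1/5, giving a curved surface area of 6*pi/5.

import sympy as sp

theta = sp.symbols("theta")
A = 6*sp.pi*sp.integrate(sp.sin(theta)**4*sp.cos(theta), (theta, 0, sp.pi/2))
print(A)   # 6*pi/5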
{"url":"http://mathhelpforum.com/calculus/175948-surface-area-revolution.html","timestamp":"2014-04-18T15:47:46Z","content_type":null,"content_length":"63412","record_id":"<urn:uuid:b81b0431-958c-4c9d-addf-1f622ee73e2d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community.
Here's the question you clicked on:
I am still trying to work the problem m^3-6m/m^2-64=?
• \[\frac{m^3-6m}{m^2-64}=?\] is this it?
• What I would do is factor the bottom first and get \[m^2-64=(m-8)(m+8)\] and try to factor the top out to get one of those, but can't seem to figure it out. Anyone care to jump in?
• And you are just trying to simplify this right?
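A quick SymPy check of where the thread leaves off (the variable name is mine): the numerator factors as m(m^2 - 6) and the denominator as (m - 8)(m + 8), so they share no common factor and the fraction is already in lowest terms.

import sympy as sp

m = sp.symbols("m")
print(sp.factor(m**3 - 6*m))                  # m*(m**2 - 6)
print(sp.factor(m**2 - 64))                   # (m - 8)*(m + 8)
print(sp.cancel((m**3 - 6*m)/(m**2 - 64)))    # unchanged: nothing cancels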
{"url":"http://openstudy.com/updates/500cea71e4b0549a89329b9c","timestamp":"2014-04-18T00:28:29Z","content_type":null,"content_length":"32440","record_id":"<urn:uuid:61644712-9431-45d8-aabf-47cdc62be5db>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
lotka-volterra equations?? Euler December 18th 2012, 04:23 PM #1 Dec 2012 lotka-volterra equations?? Euler Hey guys I am working on oscillation and have been looking at the lotka-volterra equations. I may have misunderstood the question, but this is the equations: y'= Ax-Bxy Can read quick about it here:http://mathworld.wolfram.com/images/...s/Inline11.gif I may have misunderstood, but i am supposed to find a numerical solution using Eulers method. Well, give an example of it. Can anyone help me pls. Really desperate. Re: lotka-volterra equations?? Euler Hey MathNoobMisc. Eulers method in general is just a taylor expansion of a function involving the initial condition and the derivative. We know that the first two taylor series terms are f(x) = f(a) + (x-a)*f'(a). If we have a system of equations, we replace x's and a's with vectors where f'(a) is a linear object (i.e a nxn matrix for system of n equations) and our Euler up-date becomes: f(x) = f(a) + C*(x-a) where a = [a1,a2,...,an] and x = [x1,x2,....,xn] and C is nxn derivative evaluated at vector a. What computational tools do you have? Do you have MATLAB or the open source version Octave (and GUIOctave)? December 18th 2012, 08:49 PM #2 MHF Contributor Sep 2012
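A minimal Euler-method sketch along the lines the reply describes (Python; the parameter values, the variable names, and the second equation are my own assumptions, since the thread only displays one of the two Lotka-Volterra equations):

def euler_lotka_volterra(x0, y0, A, B, C, D, h=0.01, steps=2000):
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        dx = A*x - B*x*y               # prey:      x' = Ax - Bxy
        dy = -C*y + D*x*y              # predator:  y' = -Cy + Dxy (assumed form)
        x, y = x + h*dx, y + h*dy      # Euler update: f(t+h) ~ f(t) + h*f'(t)
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = euler_lotka_volterra(x0=10, y0=5, A=1.0, B=0.1, C=1.5, D=0.075)
print(xs[-1], ys[-1])

Plotting xs against ys (for example with matplotlib) shows the familiar closed predator-prey loops, slowly drifting because plain Euler does not conserve the system's invariant; a smaller step size h reduces the drift.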
{"url":"http://mathhelpforum.com/differential-equations/210107-lotka-volterra-equations-euler.html","timestamp":"2014-04-24T18:21:28Z","content_type":null,"content_length":"32752","record_id":"<urn:uuid:93e84023-2bc3-41f2-b447-cabc5699f196>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Healthcare Economist Mahalanobis Distance Written By: Jason Shafrin - May• 21•13 What is Mahalanobis distance? Most people know what Euclidean distance is…it is the shortest distance between any two points. In other words, its what we typically think of when we think of distance – the distance we would measure with a ruler, and the one given by the Pythagorean formula. Unlike Euclidean distance, Mahalanobis distance gauges the similarity of an unknown sample set to a known one. It differs from Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant. In other words, it is a multivariate effect size. Wikipedia defines Mahalanobis distance using the following intuition. Consider the problem of estimating the probability that a test point in N-dimensional Euclidean space belongs to a set, where we are given sample points that definitely belong to that set. Our first step would be to find the average or center of mass of the sample points. Intuitively, the closer the point in question is to this center of mass, the more likely it is to belong to the However, we also need to know if the set is spread out over a large range or a small range, so that we can decide whether a given distance from the center is noteworthy or not. The simplistic approach is to estimate the standard deviation of the distances of the sample points from the center of mass. If the distance between the test point and the center of mass is less than one standard deviation, then we might conclude that it is highly probable that the test point belongs to the set. The further away it is, the more likely that the test point should not be classified as belonging to the set. This intuitive approach can be made quantitative by defining the normalized distance between the test point and the set to be The drawback of the above approach was that we assumed that the sample points are distributed about the center of mass in a spherical manner. Were the distribution to be decidedly non-spherical, for instance ellipsoidal, then we would expect the probability of the test point belonging to the set to depend not only on the distance from the center of mass, but also on the direction. In those directions where the ellipsoid has a short axis the test point must be closer, while in those where the axis is long the test point can be further away from the center. Putting this on a mathematical basis, the ellipsoid that best represents the set’s probability distribution can be estimated by building the covariance matrix of the samples. The Mahalanobis distance is simply the distance of the test point from the center of mass divided by the width of the ellipsoid in the direction of the test point. How does one calculate the Mahalanobis distance in practice? The Healthcare Economist has a simple example for you to examine here. That’s a very good explanation of the Mahalanobis Distance. Thanks. [...] Mahalanobis Metric Matching. This method randomly orders subjects and then calculates the distance between first treated subjects and all controls, where the distance d(i,j) = (u–v)TC−1(u–v) where u and v are the values of matching variables (including propensity score) and C is the sample covariance matrix of matching variables from the full set of control subjects. I describe the logic of calculating Mahalanobis distance in a previous post. [...]
{"url":"http://healthcare-economist.com/2013/05/21/mahalanobis-distance/","timestamp":"2014-04-17T05:10:17Z","content_type":null,"content_length":"28983","record_id":"<urn:uuid:bdaa6ccc-28a4-4bf5-a78d-e313284afb52>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Why is the Universe Accelerating? - S.M. Carroll 2.2. Quantum zero-point energy The introduction of quantum mechanics changes this story somewhat. For one thing, Planck's constant allows us to define a gravitational length scale, the reduced Planck length as well as the reduced Planck mass where "reduced" means that we have included the 8c = 1, we have L = T = M^-1 = E^-1, where L represents a length scale, T a time interval, M a mass scale, and E an energy.) Hence, there is a natural expectation for the scale of the cosmological constant, namely or, phrased as an energy density, We can partially justify this guess by thinking about quantum fluctuations in the vacuum. At all energies probed by experiment to date, the world is accurately described as a set of quantum fields (at higher energies it may become strings or something else). If we take the Fourier transform of a free quantum field, each mode of fixed wavelength behaves like a simple harmonic oscillator. ("Free" means "noninteracting"; for our purposes this is a very good approximation.) As we know from elementary quantum mechanics, the ground-state or zero-point energy of an harmonic oscillator with potential V(x) = 1/2 ^2 x^2 is E[0] = 1/2 The strategy of decomposing a free field into individual modes and assigning a zero-point energy to each one really only makes sense in a flat spacetime background. In curved spacetime we can still "renormalize" the vacuum energy, relating the classical parameter to the quantum value by an infinite constant. After renormalization, the vacuum energy is completely arbitrary, just as it was in the original classical theory. But when we use general relativity we are really using an effective field theory to describe a certain limit of quantum gravity. In the context of effective field theory, if a parameter has dimensions [mass]^n, we expect the corresponding mass parameter to be driven up to the scale at which the effective description breaks down. Hence, if we believe classical general relativity up to the Planck scale, we would expect the vacuum energy to be given by our original guess (1.9). However, we believe we have now measured the vacuum energy through a combination of Type Ia supernovae (Riess et al. 1998, Perlmutter et al. 1999, Tonry et al. 2003, Knop et al. 2003), microwave background anisotropies (Spergel et al. 2003), and dynamical matter measurements (Verde et al. 2002), to reveal For reviews see Sahni and Starobinski 2000, Carroll 2001, or Peebles and Ratra 2003. Clearly, our guess was not very good. This is the famous 120-orders-of-magnitude discrepancy that makes the cosmological constant problem such a glaring embarrassment. Of course, it is somewhat unfair to emphasize the factor of 10^120, which depends on the fact that energy density has units of [energy]^4. We can express the vacuum energy in terms of a mass scale, so our observational result is The discrepancy is thus We should think of the cosmological constant problem as a discrepancy of 30 orders of magnitude in energy scale.
{"url":"http://ned.ipac.caltech.edu/level5/March04/Carroll/Carroll2_2.html","timestamp":"2014-04-16T21:58:18Z","content_type":null,"content_length":"7613","record_id":"<urn:uuid:1f6858ca-be20-4f5f-bd30-e420670c5aa3>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Confirmatory factor analysis

Anonymous posted on Monday, August 26, 2002 - 6:38 pm:
My topic for a thesis is about confirmatory factor analysis. Can you suggest the best type of data to apply it to, and in what field of interest?

bmuthen posted on Tuesday, August 27, 2002 - 10:15 am:
There are so many application areas. I think you should study the literature and explore the area that you are most interested in and that is also agreeable to your mentor and your department.

Anonymous posted on Tuesday, August 27, 2002 - 10:12 pm:
Thank you very much!

Anonymous posted on Monday, September 02, 2002 - 11:11 am:
I have conducted a multigroup factor analysis in Mplus (using categorical indicator variables). I want to output the Mplus factor scores (FSs) to a file, and then match them to my original data set. I'm having a great deal of difficulty because Mplus does not save a CASE ID to its output files. Furthermore, Mplus appears to resort the input data by GROUP ID and by other criteria before producing the FS output file. I know this because even after I resort my input data by GROUP ID and CASE ID, the weights for the input file are ordered differently than in the Mplus output file. Is there any way to sort the Mplus FS output file so that I can reliably patch the FSs back into my original data set?

Linda K. Muthen posted on Monday, September 02, 2002 - 11:38 am:
Mplus Version 2.0 and up does allow the inclusion of an ID variable. The IDVARIABLE option is part of the VARIABLE command.

Anonymous posted on Tuesday, September 03, 2002 - 10:50 am:
Perfect. I'd consulted the wrong part of the manual. Works great.

Anonymous posted on Tuesday, September 10, 2002 - 12:03 am:
Is it possible to include an indirect effect when examining measurement invariance of a single-factor measure in a multiple group model? Thanks!

bmuthen posted on Tuesday, September 10, 2002 - 8:02 am:
Yes. I assume you mean that you have an x variable that influences the factor and therefore the indicators indirectly.

Hervé CACI posted on Monday, February 24, 2003 - 2:13 am:
In some recent exchanges on SEMNET, Stan Mulaik argued that his parsimony ratio should be taken into consideration for fit testing. I don't see how it can work with WLSMV, since the number of degrees of freedom reflects both the number of parameters to be estimated and the data. Neither Stan nor anybody on the list answered my question. Is it a worthless thought?

bmuthen posted on Tuesday, February 25, 2003 - 9:42 am:
I think you might want to use WLS for this.

Anonymous posted on Tuesday, June 07, 2005 - 6:44 am:
What is the maximum number of dichotomous items Mplus can handle when doing the CFA? When I run 147 dichotomous items, it kept running. Thanks!

Linda K. Muthen posted on Tuesday, June 07, 2005 - 7:04 am:
The maximum number of variables allowed in Mplus is 500. With categorical outcomes, the analysis can take some time with 147 items depending on the speed of your computer.

Eric Buhi posted on Tuesday, January 31, 2006 - 10:38 am:
According to the APA manual, I need to report means/SDs for all the variables I include in my modeling. I get variable means with SAMPSTAT, but how do I produce the standard deviations? Thanks!

Linda K. Muthen posted on Tuesday, January 31, 2006 - 10:45 am:
Take the square root of the variances that are also reported in the sample statistics.

Eric Buhi posted on Tuesday, January 31, 2006 - 11:16 am:
Thank you for your reply. Do you mean the covariances on the diagonal (following the means results)?

Linda K. Muthen posted on Tuesday, January 31, 2006 - 1:05 pm:
The variances are on the diagonal of a variance/covariance matrix. The off-diagonal elements are covariances.

anonymous posted on Wednesday, January 10, 2007 - 11:42 am:
I have performed a CFA for 7 factors. In the output, is it possible to get eigenvalues for each of these factors? Something similar to an SPSS or STATA output for factor analyses?

Bengt O. Muthen posted on Thursday, January 11, 2007 - 8:36 am:
Short answer is no. A longer answer is as follows. Mplus gives eigenvalues for exploratory factor analysis and these eigenvalues are for the sample correlation matrix, used to guide in choosing number of factors. Many researchers in the past have used the amount of variance explained in the observed variables by a factor as a descriptive of the quality of the factor solution. This amount of variance is the sum of the squared loadings in a column (for a factor) when the factors are uncorrelated. This amount is related to the eigenvalue - it would be the eigenvalue if the estimation method was principal component analysis (which is not a great estimator for factor models). Also, one could compute the eigenvalues for the model-estimated correlation matrix. However, I would question the value of eigenvalue information for factor analysis beyond the EFA purpose of guiding the choice of number of factors. To decide on a well-fitting model in CFA we have better fit measure alternatives (and eigenvalues are not fit measures anyhow). And since factor analysis is not designed to maximize variance explained (but to capture correlation structure), the descriptive value of an eigenvalue is also not clear.

anonymous posted on Tuesday, January 16, 2007 - 8:21 pm:
Does this mean that the value of variance for each factor in the output is the variance explained by the factor? I am a little confused as to what it represents.

Linda K. Muthen posted on Wednesday, January 17, 2007 - 9:07 am:
No, the factor variance is how much variability there is in the factor. Variance explained refers to how much variance of the factor indicators is explained by the factor. You can find this by looking at the R-square values of the factor indicators.

Reetu Kumra posted on Monday, February 26, 2007 - 10:53 am:
I have a few questions: 1. In a confirmatory factor analysis output, the column that is labeled StdYX (last column) - how is this interpreted? Is this the correlation between the latent construct and the actual variable? Please help. 2. When doing a CFA on two groups within a sample, what is the difference in doing a multi-group analysis and doing a CFA on these two groups separately?

Linda K. Muthen posted on Monday, February 26, 2007 - 3:54 pm:
1. This is a raw coefficient standardized using both latent variable and observed variable variances. 2. If you analyze both groups together with all parameters free across groups, you will obtain the same estimates as if you analyzed the two groups separately. Usually, the two groups are analyzed together so that equality constraints can be used to test for measurement invariance.

Reetu Kumra posted on Tuesday, February 27, 2007 - 11:09 am:
Thanks Linda! One last question: Once the CFA is complete, is there a way to make the latent constructs created into a measurable variable? (i.e. can we somehow get something equivalent to data for the latent constructs?)

Linda K. Muthen posted on Tuesday, February 27, 2007 - 11:26 am:
Are you asking if you can obtain factor scores? If so, you can do this using the FSCORES option of the SAVEDATA command.

Reetu Kumra posted on Tuesday, February 27, 2007 - 12:08 pm:
Hi Linda. I have two more questions: 1. How exactly is the StdYX derived? When you say standardized, please clarify how the raw coefficients are standardized. 2. How exactly are the factor scores created? Is this an overall measure of the raw data that go into the factor?

Linda K. Muthen posted on Tuesday, February 27, 2007 - 1:28 pm:
1. See Technical Appendix 3 which is on the website. 2. See Technical Appendix 11 which is on the website.

Derek Kosty posted on Wednesday, August 20, 2008 - 10:25 am:
I have noticed that the numbers of free parameters between Mplus version 4 and 5.1 disagree. When running the model:
MODEL: intern by LMDD4 LDYS4 LDPD4 LGOA4 LPTS4 LSPE4 LSOC4 LPAN4 LOBC4;
version 4 counts 9 free parameters and version 5.1 counts 18. What is the reason behind this?

Linda K. Muthen posted on Wednesday, August 20, 2008 - 11:11 am:
With Version 5, TYPE=MEANSTRUCTURE became the default. This is the cause. You can add MODEL=NOMEANSTRUCTURE; to the ANALYSIS command to override this default.

Joykrishna Sarkar posted on Friday, July 31, 2009 - 11:37 am:
I was trying to save the standardized output in configural, metric, scalar and complete invariance tests. But Mplus does not save the standardized output of factor loadings in metric invariance, intercepts and factor loadings in scalar invariance, and intercepts, factor loadings & residual variances in complete invariance tests. Instead of saving the standardized output of the parameters mentioned above, Mplus saves 999 (missing). Any help about how to save these standardized outputs would be appreciated.

Linda K. Muthen posted on Friday, July 31, 2009 - 4:20 pm:
Mplus does not save standardized parameter estimates that are constrained to be equal.

ehsan malek posted on Wednesday, July 14, 2010 - 11:17 am:
i have a CFA model with two latent variables. i calculated average variance extracted for each of the two variables and it is around .3 for each. composite reliability is around .7 for each of the two latent variables. i have around 500 cases. model fit indices are ok (almost ok, chi square is not and i think it is because of the big sample size). what can i do for the AVE (as its recommended value is >.5)? does it have something to do with the sample size? as other things are ok with the model can i accept it?

Linda K. Muthen posted on Thursday, July 15, 2010 - 7:57 am:
I would look at factor determinacy. It is probably correlated with AVE. Can you give a reference for AVE? I would also not discount chi-square with a sample size of 500. This is not large.

Christopher Bratt posted on Monday, July 19, 2010 - 3:29 pm:
Linda, AVE is average variance extracted in factor analysis. (It would be great if Mplus could compute AVE...) Chris B.

Morayo Ayodele posted on Tuesday, July 03, 2012 - 10:29 am:
Hello Dr. Muthen, Is there a reason why a model would run without errors in one sample and not in another, irrespective of sample size? I am trying to run a four-factor model in four independent samples of N = 234, 296, 334, and 568. It returned errors for samples 296 and 334.
F1 by sgl3 sgl17 sgl25 sgl68;
F2 by sgl41 sgl42 sgl67 sgl76 sgl100;
F3 by sgl5 sgls8 sgl78 sgl84 sgl94 sgl96 sgl98;
F4 by sgl30 sgl40 sgl55 sgl83 sgl92 sgl97 sgl102;
Output: Sampstat standardized mod tech4;
WARNING: The latent variable covariance matrix (psi) is not positive definite. This could indicate a negative variance/residual variance for a latent variable, a correlation greater or equal to one between two latent variables, or a linear dependency among more than two latent variables. Check the tech4 output for more information. Problem involving variable F2.
I did observe a correlation greater than 1 for two latent variables (F2 & F4). Is there any way of fixing this problem? Thank you.

Linda K. Muthen posted on Tuesday, July 03, 2012 - 10:48 am:
The same model might not be correct for different data sets. It sounds like that is the case. A correlation greater than one means the model is inadmissible. You need to change the model.
{"url":"http://www.statmodel.com/discussion/messages/9/202.html?1341337736","timestamp":"2014-04-16T08:28:20Z","content_type":null,"content_length":"61738","record_id":"<urn:uuid:9daabad3-9e88-49cd-ba99-43e73d8f61f4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
Binomial expansion help

June 2nd 2009, 07:15 AM - #1
hi. i've just come across this question and got stuck at part (iii). i could really use some help as i have to submit this assignment. here's the question:
(i) Expand ((1-x)/(1+x))^n in ascending powers of x up to and including the term x^3.
(ii) State the values of x for which the series expansion is valid.
(iii) Hence find an approximation to the fourth root of 19/21, in the form p/q, where p and q are positive integers.
Ans: (i) $1-2nx+2n^2x^2$ (ii) $|x|<1$ (iii) 3121/3200

June 3rd 2009, 03:52 AM - #2 (CaptainBlack)
Quote: [the question above]
For part (iii) observe that:
$$\frac{19}{21}=\frac{1-\tfrac{1}{20}}{1+\tfrac{1}{20}}$$
So now use your approximation with $x=\frac{1}{20}$ and $n=\frac{1}{4}$ to get the required rational approximation.

June 3rd 2009, 06:35 AM - #3
thanks alot captain black =D
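For readers checking the arithmetic in part (iii): keeping only the terms up to $x^2$ (which is all the quoted answer uses), the numbers work out as

$$\left(\frac{19}{21}\right)^{1/4}\approx 1-2\cdot\tfrac{1}{4}\cdot\tfrac{1}{20}+2\left(\tfrac{1}{4}\right)^{2}\left(\tfrac{1}{20}\right)^{2}=1-\frac{1}{40}+\frac{1}{3200}=\frac{3200-80+1}{3200}=\frac{3121}{3200}.$$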
{"url":"http://mathhelpforum.com/algebra/91538-binomial-expansion-help.html","timestamp":"2014-04-16T19:30:21Z","content_type":null,"content_length":"35732","record_id":"<urn:uuid:e7455e5f-134e-49c4-91d8-d3a2eaa003ad>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximate groupoids again This post is for future (google) reference for my project of relating approximate groups with emergent algebras. I would appreciate any constructive comment which could validate (or invalidate) this path of research. Here is the path I would like to pursue further. The notion of approximate groupoid (see here for the definition) is not complete, because it is flattened, i.e. the set of arrows $K$ should be seen as a set of variables. What I think is that the correct notion of approximate groupoid is a polynomial functor over groupoids (precisely a specific family of such functors). The category Grpd is cartesian closed, so it has an associated model of (typed) lambda calculus. By using this observation I could apply emergent algebra techniques (under the form of my graphic lambda calculus, which was developed with — and partially funded by – this application in mind) to approximate groupoids and hope to obtain streamlined proofs of Breuillard-Green-Tao type results. One thought on “Approximate groupoids again”
{"url":"http://chorasimilarity.wordpress.com/2013/01/04/approximate-groupoids-again/","timestamp":"2014-04-19T06:55:39Z","content_type":null,"content_length":"87627","record_id":"<urn:uuid:a8af4ed0-9fde-4bb9-9e57-5e3c7bce4995>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
[racket] Generative recursion From: Jos Koot (jos.koot at telefonica.net) Date: Fri Nov 5 07:05:50 EDT 2010 Actually there is one more way. We only have to check numbers 1 up to the integer-sqrt of n. For each check whose remainder is 0, we immediately have two divisors, the checking number and the quotient (except when these two are equal, giving one divisor only -this happens only when n is a square-)) From: users-bounces at racket-lang.org [mailto:users-bounces at racket-lang.org] On Behalf Of Todd O'Bryan Sent: 05 November 2010 10:31 To: lukejordan at gmail.com Cc: users at racket-lang.org Subject: Re: [racket] Generative recursion There are really two ways of doing this problem. The way I'd probably use is to make a list of all the *possible* divisors and then use the filter function to pull out the actual divisors. The way you're probably thinking of requires a helper function for the recursion, because you need to keep track of where you are in building up the list. Back in the section on natural numbers, there are exercises to create lists of numbers of varying types, for example: ; list-from-a-to-b: number number -> list-of-number (check-expect (list-from-a-to-b 3 7) (list 3 4 5 6 7)) If you can figure out how to write this function, then you just need to include a conditional to decide whether to add the next number into the list you're creating or not. On Wed, Nov 3, 2010 at 7:18 PM, Luke Jordan <luke.jordan at gmail.com> wrote: I found implementing this trickier than grasping the solution as well. Stick with it. I don't see that you need any functions related prime numbers. Perhaps if input is prime that is a trivial case, but try to focus on what the output is: A list of numbers that can evenly divide the input. Those numbers are the numbers from 1 to input. To think about how to get that list, try solving it by hand. If input is 3, how do you go about it? Does 3 divide 3 with no remainder? Yes, we know that numbers divide themselves with no remainder. How about 2? 1? Try it over with larger numbers, like 6 and 10. What process are you using to determine whether the numbers <= input and > 1 divide input with no remainder, and what happens to them if do? What happens if they do not? When does evaluation cease? When it comes to a termination statement, don't forget that list and append are Trying not to say too much, but hope I'm still saying something useful. - Luke On Wed, Nov 3, 2010 at 17:43, Ken Hegeland <hegek87 at yahoo.com> wrote: I am trying to do the problem 26.1.1 in the book HTDP and I just feel like the more I think about it, the more confused I get. I found a post from 2007 with some tips from Jens Axel Søgaard, the link is I understand whats to be done, but Im just unsure how to accomplish the task. I believe that trivially solvable in this case is n=1 and n=prime. For n=1 the solution would be (list 1) for n=prime Im thinking it should be (list 1 n), my function is similar to the one on the link with a bit of different organization. When I read jens' tips near the end I am getting confused. The closest I can get to an answer is, (=(tabulate-div 20)(list 20 10 5 2 >From the advice supplied, I was able to say that the smaller problems that you split the program into are, one to get the largest divisor, and one which gets the smallest divisor using the largest. As far as template I am using what is supplied in the book. Im simply stuck, and would love something to help me out, I will continue to run this problem through my head and see what I can come up with. 
Thanks in advance for any help.
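As an editorial aside (not part of the original thread, and deliberately not written as HtDP-template Racket so as not to hand over the exercise): here is a plain sketch, in Python, of the two strategies discussed above — filtering the candidates 1..n, and Jos Koot's square-root shortcut, where every successful check below sqrt(n) yields both a divisor and its cofactor.

```python
def divisors_filter(n):
    """Divisors of n by filtering all candidates 1..n (the 'filter' approach)."""
    return [k for k in range(1, n + 1) if n % k == 0]

def divisors_sqrt(n):
    """Divisors of n checking only 1..floor(sqrt(n)); each hit gives two divisors."""
    small, large = [], []
    k = 1
    while k * k <= n:
        if n % k == 0:
            small.append(k)
            if k != n // k:          # when n is a square, count sqrt(n) only once
                large.append(n // k)
        k += 1
    return small + large[::-1]

assert divisors_filter(20) == [1, 2, 4, 5, 10, 20]
assert divisors_sqrt(20) == divisors_filter(20)
```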
{"url":"http://lists.racket-lang.org/users/archive/2010-November/042639.html","timestamp":"2014-04-19T10:18:39Z","content_type":null,"content_length":"9819","record_id":"<urn:uuid:3bc31939-8e09-4561-addc-e7f435afa691>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Pythagoras in his school When Pythagoras came to Croton, he founded a philosophical school called 'semicircle'. The school had many followers, but the inner circle of the school had smaller number of philosophers/ mathematicians who were called mathematikoi . They lived as if they were in a monastery, having no private property and observing a number of strict rules. Among other things, they believed that • Nature and the whole reality has an underlying mathematical structure • That philosophy should be used for spiritual purification • That the soul which is pure can rise to experience the union with the divine • That certain symbols (including mathematical) have a mystical significance. Mathematikoi were both men and women, while the outer circle of followers, known as akousmatics lived in their own families and homes, and attended the society's meetings and lectures during the day. They were not required to follow the strict rules as mathematikoi. It would be wrong to say that Pythagoreans were organised as a research group, or research institution or university. They were rather a philosophical and in a way, religious society, who, because they believed that at the root of all reality lies mathematical truth, studied mathematics and properties of mathematical objects. This was however, very important for the development of mathematical thought generally, as their study made abstraction of mathematical ideas somewhat common-place and well known in philosophical and therefore in mathematical circles. Pythagoreans studied varied array of subjects such as: • Numbers (and had mathematically formulated odd, triangular and perfect numbers) • Music (and noticed that the ratios of the lengths of the strings are whole numbers, and that these ratios can be extended to other instruments) • Geometry (and made it into a science which is deserving of study for its own sake and not only for practical purposes). It is believed that Cylon, a powerful citizen of Croton, wanted to become one of mathematikoi which Pythagoras refused him. This led to a whole-scale attack on Pythagoras and his followers, and their ultimate persecution and demise after the death of Pythagoras himself.
{"url":"http://www.mathsisgoodforyou.com/topicsPages/pythagoreans/brotherhood.htm","timestamp":"2014-04-18T00:33:04Z","content_type":null,"content_length":"11038","record_id":"<urn:uuid:b663c9bf-27c2-472b-8e34-ae3e1510819b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: Iterative Inversion of Data from Simultaneous Geophysical Sources

Abstract: Method for reducing the time needed to perform geophysical inversion by using simultaneous encoded sources in the simulation steps of the inversion process. The geophysical survey data are prepared by encoding (3) a group of source gathers (1), using for each gather a different encoding signature selected from a set (2) of non-equivalent encoding signatures. Then, the encoded gathers are summed (4) by summing all traces corresponding to the same receiver from each gather, resulting in a simultaneous encoded gather. (Alternatively, the geophysical data are acquired from simultaneously encoded sources.) The simulation steps needed for inversion are then calculated using a particular assumed velocity (or other physical property) model (5) and simultaneously activated encoded sources using the same encoding scheme used on the measured data. The result is an updated physical properties model (6) that may be further updated (7) by additional iterations.

Claims:

1. A computer-implemented method for inversion of measured geophysical data to determine a physical properties model for a subsurface region, comprising: (a) obtaining measured geophysical data from a geophysical survey of the subsurface region; (b) inverting the measured data by iterative inversion to determine a physical properties model for the subsurface region, wherein: (i) at least one iteration of the inversion comprises: simultaneous encoded-source simulation of survey data representing a plurality of survey sources, or receivers if source-receiver reciprocity is used, wherein source or receiver signatures in the simulation are encoded, resulting in a simulated simultaneous encoded-source or encoded-receiver gather of geophysical data, the inversion process involving updating an assumed physical properties model to reduce misfit between the simulated simultaneous encoded-source or encoded-receiver gather and a corresponding simultaneous encoded-source or encoded-receiver gather formed by summing gathers of measured survey data encoded with the same encoding functions used in the simulation; and (ii) at least one iteration of the inversion is performed by sequential source or receiver inversion; and (c) downloading the updated physical properties model or saving it to computer storage.

2. The method of claim 1, wherein the encoding in (b)(i) is designed to cause the simultaneous encoded-source or encoded-receiver gather of measured data to sum individual gathers incoherently.

3. The method of claim 1, wherein for one or more of the iterations in (b)(i), different encoding functions are used compared to the preceding iteration.

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. application Ser. No. 13/345,314, filed 6 Jan. 2012, which is a continuation of U.S. application Ser. No. 12/441,685, now issued as U.S. Pat. No. 8,121,823, filed 17 Mar. 2009, which is the national stage of PCT/US2007/019724 that published as WO 2008/042081 and was filed on 11 Sep. 2007, which claims the benefit of U.S. Provisional Application No. 60/847,696, filed on Sep. 28, 2006, each of which is incorporated herein by reference, in its entirety, for all purposes.

FIELD OF THE INVENTION
[0002] The invention relates generally to the field of geophysical prospecting, and more particularly to geophysical data processing.
Specifically, the invention is a method for inversion of data acquired from multiple geophysical sources such as seismic sources, involving geophysical simulation that computes the data from many simultaneously-active geophysical sources in one execution of the BACKGROUND OF THE INVENTION [0003] Geophysical inversion [1,2] attempts to find a model of subsurface properties that optimally explains observed data and satisfies geological and geophysical constraints. There are a large number of well known methods of geophysical inversion. These well known methods fall into one of two categories, iterative inversion and non-iterative inversion. The following are definitions of what is commonly meant by each of the two categories: Non-iterative inversion--inversion that is accomplished by assuming some simple background model and updating the model based on the input data. This method does not use the updated model as input to another step of inversion. For the case of seismic data these methods are commonly referred to as imaging, migration, diffraction tomography or Born inversion. Iterative inversion--inversion involving repetitious improvement of the subsurface properties model such that a model is found that satisfactorily explains the observed data. If the inversion converges, then the final model will better explain the observed data and will more closely approximate the actual subsurface properties. Iterative inversion usually produces a more accurate model than non-iterative inversion, but is much more expensive to compute. Two iterative inversion methods commonly employed in geophysics are cost function optimization and series methods. Cost function optimization involves iterative minimization or maximization of the value, with respect to the model M, of a cost function S(M) which is a measure of the misfit between the calculated and observed data (this is also sometimes referred to as the objective function), where the calculated data is simulated with a computer using the current geophysical properties model and the physics governing propagation of the source signal in a medium represented by a given geophysical properties model. The simulation computations may be done by any of several numerical methods including but not limited to finite difference, finite element or ray tracing. Series methods involve inversion by iterative series solution of the scattering equation (Weglein [3]). The solution is written in series form, where each term in the series corresponds to higher orders of scattering. Iterations in this case correspond to adding a higher order term in the series to the solution. Cost function optimization methods are either local or global [4]. Global methods simply involve computing the cost function S(M) for a population of models {M , M , M , . . . } and selecting a set of one or more models from that population that approximately minimize S(M). If further improvement is desired this new selected set of models can then be used as a basis to generate a new population of models that can be again tested relative to the cost function S(M). For global methods each model in the test population can be considered to be an iteration, or at a higher level each set of populations tested can be considered an iteration. Well known global inversion methods include Monte Carlo, simulated annealing, genetic and evolution algorithms. Local cost function optimization involves: 1. selecting a starting model, 2. 
computing the gradient of the cost function S(M) with respect to the parameters that describe the model, 3. searching for an updated model that is a perturbation of the starting model in the gradient direction that better explains the observed data. This procedure is iterated by using the new updated model as the starting model for another gradient search. The process continues until an updated model is found which satisfactorily explains the observed data. Commonly used local cost function inversion methods include gradient search, conjugate gradients and Newton's method. As discussed above, iterative inversion is preferred over non-iterative inversion, because it yields more accurate subsurface parameter models. Unfortunately, iterative inversion is so computationally expensive that it is impractical to apply it to many problems of interest. This high computational expense is the result of the fact that all inversion techniques require many compute intensive forward and/or reverse simulations. Forward simulation means computation of the data forward in time, and reverse simulation means computation of the data backward in time. The compute time of any individual simulation is proportional to the number of sources to be inverted, and typically there are large numbers of sources in geophysical data. The problem is exacerbated for iterative inversion, because the number of simulations that must be computed is proportional to the number of iterations in the inversion, and the number of iterations required is typically on the order of hundreds to thousands. The compute cost of all categories of inversion can be reduced by inverting data from combinations of the sources, rather than inverting the sources individually. This may be called simultaneous source inversion. Several types of source combination are known including: coherently sum closely spaced sources to produce an effective source that produces a wavefront of some desired shape (e.g. a plane wave), sum widely spaces sources, or fully or partially stacking the data before inversion. The compute cost reduction gained by inverting combined sources is at least partly offset by the fact that inversion of the combined data usually produces a less accurate inverted model. This loss in accuracy is due to the fact that information is lost when the individual sources are summed, and therefore the summed data does not constrain the inverted model as strongly as the unsummed data. This loss of information during summation can be minimized by encoding each shot record before summing Encoding before combination preserves significantly more information in the simultaneous source data, and therefore better constrains the inversion. Encoding also allows combination of closely spaced sources, thus allowing more sources to be combined for a given computational region. Various encoding schemes can be used with this technique including time shift encoding and random phase encoding. The remainder of this Background section briefly reviews various published geophysical simultaneous source techniques, both encoded and non-encoded. Van Manen [5] suggests using the seismic interferometry method to speedup forward simulation. Seismic interferometry works by placing sources everywhere on the boundary of the region of interest. These sources are modeled individually and the wavefield at all locations for which a Green's function is desired is recorded. 
The Green's function between any two recorded locations can then be computed by cross-correlating the traces acquired at the two recorded locations and summing over all the boundary sources. If the data to be inverted has a large number of sources and receivers that are within the region of interest (as opposed to having one or the other on the boundary) then this is a very efficient method for computing the desired Green's functions. However, for the seismic data case it is rare that both the source and receiver for the data to be inverted are within the region of interest. Therefore, this improvement has very limited applicability to the seismic inversion problem. Berkhout [6] and Zhang [7] suggest that inversion in general can be improved by inverting non-encoded simultaneous sources that are summed coherently to produce some desired wave front within some region of the subsurface. For example point source data could be summed with time shifts that are a linear function of the source location to produce a down-going plane wave at some particular angle with respect to the surface. This technique could be applied to all categories of inversion. A problem with this method is that coherent summation of the source gathers necessarily reduces the amount of information in the data. So for example, summation to produce a plane wave removes all the information in the seismic data related to travel time versus source-receiver offset. This information is critical for updating the slowly varying background velocity model, and therefore Berkhout's method is not well constrained. To overcome this problem many different coherent sums of the data (e.g. many plane waves with different propagation directions) could be inverted, but then efficiency is lost since the cost of inversion is proportional to the number of different sums inverted. Such coherently summed sources are called generalized sources. Therefore, a generalized source can either be a point source or a sum of point sources that produces a wave front of some desired shape. Van Riel [8] suggests inversion by non-encoded stacking or partial stacking (with respect to source-receiver offset) of the input seismic data, then defining a cost function with respect to this stacked data which will be optimized. Thus, this publication suggests improving cost function based inversion using non-encoded simultaneous sources. As was true of the Berkhout's [6] simultaneous source inversion method, the stacking suggested by this method reduces the amount of information in the data to be inverted and therefore the inversion is less well constrained than it would have been with the original data. Mora [9] proposes inverting data that is the sum of widely spaced sources. Thus, this publication suggests improving the efficiency of inversion using non-encoded simultaneous source simulation. Summing widely spaced sources has the advantage of preserving much more information than the coherent sum proposed by Berkhout. However, summation of widely spaced sources implies that the aperture (model region inverted) that must be used in the inversion must be increased to accommodate all the widely spaced sources. Since the compute time is proportional to the area of this aperture, Mora's method does not produce as much efficiency gain as could be achieved if the summed sources were near each other. Ober [10] suggests speeding up seismic migration, a special case of non-iterative inversion, by using simultaneous encoded sources. 
After testing various coding methods, Ober found that the resulting migrated images had significantly reduced signal-to-noise ratio due to the fact that broad band encoding functions are necessarily only approximately orthogonal. Thus, when summing more than 16 shots, the quality of the inversion was not satisfactory. Since non-iterative inversion is not very costly to begin with, and since high signal-to-noise ratio inversion is desired, this technique is not widely practiced in the geophysical industry. Ikelle [11] suggests a method for fast forward simulation by simultaneously simulating point sources that are activated (in the simulation) at varying time intervals. A method is also discussed for decoding these time-shifted simultaneous-source simulated data back into the separate simulations that would have been obtained from the individual point sources. These decoded data could then be used as part of any conventional inversion procedure. A problem with Ikelle's method is that the proposed decoding method will produce separated data having noise levels proportional to the difference between data from adjacent sources. This noise will become significant for subsurface models that are not laterally constant, for example from models containing dipping reflectors. Furthermore, this noise will grow in proportion to the number of simultaneous sources. Due to these difficulties Ikelle's simultaneous source approach may result in unacceptable levels of noise if used in inverting a subsurface that is not laterally constant. What is needed is a more efficient method of iteratively inverting data, without significant reduction in the accuracy of the resulting inversion. SUMMARY OF THE INVENTION [0022] A physical properties model gives one or more subsurface properties as a function of location in a region. Seismic wave velocity is one such physical property, but so are (for example) p-wave velocity, shear wave velocity, several anisotropy parameters, attenuation (q) parameters, porosity, permeability, and resistivity. Referring to the flow chart of FIG. 
10, in one embodiment the invention is a computer-implemented method for inversion of measured geophysical data to determine a physical properties model for a subsurface region, comprising: (a) obtaining a group of two or more encoded gathers of the measured geophysical data, wherein each gather is associated with a single generalized source or, using source-receiver reciprocity, with a single receiver, and wherein each gather is encoded with a different encoding signature selected from a set of non-equivalent encoding signatures; (b) summing (4) the encoded gathers in the group by summing all data records in each gather that correspond to a single receiver (or source if reciprocity is used), and repeating for each different receiver, resulting in a simultaneous encoded gather; (c) assuming a physical properties model 5 of the subsurface region, said model providing values of at least one physical property at locations throughout the subsurface region; (d) calculating an update 6 to the assumed physical properties model that is more consistent with the simultaneous encoded gather from step (b), said calculation involving one or more encoded simultaneous source forward (or reverse) simulation operations that use the assumed physical properties model and encoded source signatures using the same encoding functions used to encode corresponding gathers of measured data, wherein an entire simultaneous encoded gather is simulated in a single simulation operation; (e) repeating step (d) for at least one more iteration, using the updated physical properties model from the previous iteration of step (d) as the assumed model to produce a further updated physical properties model 7 of the subsurface region that is more consistent with a corresponding simultaneous encoded gather of measured data, using the same encoding signatures for source signatures in the simulation as were used in forming the corresponding simultaneous encoded gather of measured data; and (f) downloading the further updated physical properties model or saving it to computer storage.

It may be desirable, in order to maintain inversion quality or for other reasons, to perform the simultaneous encoded-source simulations in step (b) in more than one group. In such case, steps (a)-(b) are repeated for each additional group, and inverted physical properties models from each group are accumulated before performing the model update in step (d). If the encoded gathers are not obtained already encoded from the geophysical survey as described below, then gathers of geophysical data 1 are encoded by applying encoding signatures 3 selected from a set of non-equivalent encoding signatures 2.
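As a rough sketch of how steps (c)-(e) of this embodiment fit together — this is not the patent's implementation; the simulator, the model-update rule, and every name below are placeholder assumptions passed in by the caller — the outer loop looks something like this:

```python
def invert(measured, signatures, model, simulate, update, n_iterations=10):
    """Outer loop of encoded simultaneous-source inversion (steps (c)-(e)).

    measured   : list of simultaneous encoded gathers of field data, one per group
    signatures : the encoding signatures used to form each measured gather
    model      : assumed physical properties model (step (c))
    simulate   : callable(model, sigs) -> simulated simultaneous encoded gather
    update     : callable(model, simulated, measured) -> updated model (step (d))
    """
    for _ in range(n_iterations):
        # One forward simulation per group models all encoded sources at once,
        # re-using the same encoding signatures as the measured data.
        simulated = [simulate(model, sigs) for sigs in signatures]
        # Refine the model to reduce the misfit between simulated and measured
        # simultaneous encoded gathers (gradient step, conjugate gradients, etc.).
        model = update(model, simulated, measured)
    return model
```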
In another embodiment, the present invention is a computer-implemented method for inversion of measured geophysical data to determine a physical properties model for a subsurface region, comprising: (a) obtaining a group of two or more encoded gathers of the measured geophysical data, wherein each gather is associated with a single generalized source or, using source-receiver reciprocity, with a single receiver, and wherein each gather is encoded with a different encoding signature selected from a set of non-equivalent encoding signatures; (b) summing the encoded gathers in the group by summing all data records in each gather that correspond to a single receiver (or source if reciprocity is used), and repeating for each different receiver, resulting in a simultaneous encoded gather; (c) assuming a physical properties model of the subsurface region, said model providing values of at least one physical property at locations throughout the subsurface region; (d) simulating a synthetic simultaneous encoded gather corresponding to the simultaneous encoded gather of measured data, using the assumed physical properties model, wherein the simulation uses encoded source signatures using the same encoding functions used to encode the simultaneous encoded gather of measured data, wherein an entire simultaneous encoded gather is simulated in a single simulation operation; (e) computing a cost function measuring degree of misfit between the simultaneous encoded gather of measured data and the simulated simultaneous encoded gather; (f) repeating steps (a), (b), (d) and (e) for at least one more cycle, accumulating costs from step (e); (g) updating the physical properties model by optimizing the accumulated costs; (h) iterating steps (a)-(g) at least one more time using the updated physical properties model from the previous iteration as the assumed physical properties model in step (c), wherein a different set of non-equivalent encoding signatures may be used for each iteration, resulting in a further updated physical properties model; and (i) downloading the further updated physical properties model or saving it to computer storage.
In another embodiment, the invention is a computer-implemented method for inversion of measured geophysical data to determine a physical properties model for a subsurface region, comprising: (a) obtaining a group of two or more encoded gathers of the measured geophysical data, wherein each gather is associated with a single generalized source or, using source-receiver reciprocity, with a single receiver, and wherein each gather is encoded with a different encoding signature selected from a set of non-equivalent encoding signatures; (b) summing the encoded gathers in the group by summing all data records in each gather that correspond to a single receiver (or source if reciprocity is used), and repeating for each different receiver, resulting in a simultaneous encoded gather; (c) assuming a physical properties model of the subsurface region, said model providing values of at least one physical property at locations throughout the subsurface region; (d) selecting an iterative series solution to a scattering equation describing wave scattering in said subsurface region; (e) beginning with the first n terms of said series, where n ≥ 1, said first n terms corresponding to the assumed physical properties model of the subsurface region; (f) computing the next term in the series, said calculation involving one or more encoded simultaneous source forward (or reverse) simulation operations that use the assumed physical properties model and encoded source signatures using the same encoding functions used to encode corresponding gathers of measured data, wherein an entire simultaneous encoded gather is simulated in a single simulation operation and the simulated encoded gather and measured encoded gather are combined in a manner consistent with the iterative series selected in step (d); (g) updating the model by adding the next term in the series calculated in step (f) to the assumed model; (h) repeating steps (f) and (g) for at least one time to add at least one more term to the series to further update the physical properties model; and (i) downloading the further updated physical properties model or saving it to computer storage.
2 is a flow chart showing steps in one embodiment of the present inventive method for simultaneous source computation of the data inversion cost function; FIG. 3 is a base velocity model for an example demonstrating the computation of the full wavefield cost function; FIG. 4 is a data display showing the first 3 of 256 sequential source data records simulated in the Example from the base model of FIG. 3; FIG. 5 shows a single simultaneous encoded-source gather produced from the 256 sequential source data records of which the first three are shown in FIG. 4; FIG. 6 illustrates one of the perturbations of the base model in FIG. 3 that is used in the Example to demonstrate computation of the full wave inversion cost function using simultaneous sources; FIG. 7 shows the cost function computed for the present invention's simultaneous source data shown in FIG. 5; FIG. 8 shows the cost function computed for the sequential source data shown in FIG. 4, i.e., by traditional inversion; FIG. 9 shows the cost function for a prior-art "super-shot" gather, formed by simply summing the sequential source data shown in FIG. 4; and FIG. 10 is a flow chart showing basic steps in one embodiment of the present inventive method. The invention will be described in connection with its preferred embodiments. However, to the extent that the following detailed description is specific to a particular embodiment or a particular use of the invention, this is intended to be illustrative only, and is not to be construed as limiting the scope of the invention. On the contrary, it is intended to cover all alternatives, modifications and equivalents that may be included within the scope of the invention, as defined by the appended claims. DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS [0066] The present invention is a method for reducing the computational time needed to iteratively invert geophysical data by use of simultaneous encoded-source simulation. Geophysical inversion attempts to find a model of subsurface elastic properties that optimally explains observed geophysical data. The example of seismic data is used throughout to illustrate the inventive method, but the method may be advantageously applied to any method of geophysical prospecting involving at least one source, activated at multiple locations, and at least one receiver. The data inversion is most accurately performed using iterative methods. Unfortunately iterative inversion is often prohibitively expensive computationally. The majority of compute time in iterative inversion is spent computing forward and/or reverse simulations of the geophysical data (here forward means forward in time and reverse means backward in time). The high cost of these simulations is partly due to the fact that each geophysical source in the input data must be computed in a separate computer run of the simulation software. Thus, the cost of simulation is proportional to the number of sources in the geophysical data (typically on the order of 1,000 to 10,000 sources for a geophysical survey). In this invention, the source signatures for a group of sources are encoded and these encoded sources are simulated in a single run of the software, resulting in a computational speedup proportional to the number of sources computed simultaneously. As discussed above in the Background section, simultaneous source methods have been proposed in several publications for reducing the cost of various processes for inversion of geophysical data [3,6,7,8,9]. 
In a more limited number of cases, simultaneous encoded-source techniques are disclosed for certain purposes [10,11]. These methods have all been shown to provide increased efficiency, but always at significant cost in reduced quality, usually in the form of lower signal-to-noise ratio when large numbers of simultaneous sources are employed. The present invention mitigates this inversion quality reduction by showing that simultaneous encoded-source simulation can be advantageously used in connection with iterative inversion. Iteration has the surprising effect of reducing the undesirable noise resulting from the use of simultaneous encoded sources. This is considered unexpected in light of the common belief that inversion requires input data of the highest possible quality. In essence, the simultaneous encoded-source technique produces simulated data that appear to be significantly degraded relative to single source simulation (due to the data encoding and summation which has the appearance of randomizing the data), and uses this apparently degraded data to produce an inversion that has, as will be shown below, virtually the same quality as the result that would have been obtained by the prohibitively expensive process of inverting the data from the individual sources. (Each source position in a survey is considered a different "source" for purposes of inversion.) The reason that these apparently degraded data can be used to perform a high quality iterative inversion is that by encoding the data before summation of sources the information content of the data is only slightly degraded. Since there is only insignificant information loss, these visually degraded data constrain an iterative inversion just as well as conventional sequential source data. Since simultaneous sources are used in the simulation steps of the inversion, the compute time is significantly reduced, relative to conventional sequential source inversion.

Two iterative inversion methods commonly employed in geophysics are cost function optimization and series methods. The present invention can be applied to both of these methods. Simultaneous encoded-source cost function optimization is discussed first.

Iterative Cost Function Optimization

[0071] Cost function optimization is performed by minimizing the value, with respect to a subsurface model M, of a cost function S(M) (sometimes referred to as an objective function), which is a measure of misfit between the observed (measured) geophysical data and corresponding data calculated by simulation of the assumed model. A simple cost function S often used in geophysical inversion is:

$$S(M)=\sum_{g=1}^{N_g}\sum_{r=1}^{N_r}\sum_{t=1}^{N_t}\left\|\psi_{calc}(M,g,r,t,w_g)-\psi_{obs}(g,r,t,w_g)\right\|^{N}\qquad(1)$$

where
N = norm for the cost function (typically the least-squares or L2 norm is used, in which case N = 2),
M = subsurface model,
g = gather index (for point source data this would correspond to the individual sources),
N_g = number of gathers,
r = receiver index within a gather,
N_r = number of receivers in a gather,
t = time sample index within a data record,
N_t = number of time samples,
ψ_calc = calculated geophysical data from the model M,
ψ_obs = measured geophysical data, and
w_g = source signature for gather g, i.e. the source signal without earth filtering effects.

The gathers in Equation 1 can be any type of gather that can be simulated in one run of a forward modeling program. For seismic data, the gathers correspond to a seismic shot, although the shots can be more general than point sources [6].
For point sources, the gather index g corresponds to the location of individual point sources. For plane wave sources, g would correspond to different plane wave propagation directions. This generalized source data, ψ_obs, can either be acquired in the field or can be synthesized from data acquired using point sources. The calculated data ψ_calc on the other hand can usually be computed directly by using a generalized source function when forward modeling (e.g. for seismic data, forward modeling typically means solution of the anisotropic visco-elastic wave propagation equation or some approximation thereof). For many types of forward modeling, including finite difference modeling, the computation time needed for a generalized source is roughly equal to the computation time needed for a point source.

The model M is a model of one or more physical properties of the subsurface region. Seismic wave velocity is one such physical property, but so are (for example) p-wave velocity, shear wave velocity, several anisotropy parameters, attenuation (q) parameters, porosity, and permeability. The model M might represent a single physical property or it might contain many different parameters depending upon the level of sophistication of the inversion. Typically, a subsurface region is subdivided into discrete cells, each cell being characterized by a single value of each parameter.

Equation 1 can be simplified to:

$$S(M)=\sum_{g=1}^{N_g}\left\|\delta(M,g,w_g)\right\|^{N}\qquad(2)$$

where the sum over receivers and time samples is now implied and

$$\delta(M,g,w_g)=\psi_{calc}(M,g,w_g)-\psi_{obs}(g,w_g)\qquad(3)$$

One major problem with iterative inversion is that computing ψ_calc takes a large amount of computer time, and therefore computation of the cost function, S, is very time consuming. Furthermore, in a typical inversion project this cost function must be computed for many different models M. The computation time for ψ_calc is proportional to the number of gathers (for point source data this equals the number of sources), N_g, which is on the order of 10,000 to 100,000 for a typical seismic survey. The present invention greatly reduces the time needed for geophysical inversion by showing that S(M) can be well approximated by computing ψ_calc for many encoded generalized sources which are activated simultaneously. This reduces the time needed to compute ψ_calc by a factor equal to the number of simultaneous sources.

A more detailed version of the preceding description of the technical problem being addressed follows. The cost function in Equation 2 is replaced with the following:

$$S_{sim}(M)=\sum_{G=1}^{N_G}\left\|\sum_{g\in G}\delta(M,g,c_g\otimes w_g)\right\|^{N}\qquad(5)$$

where a summation over receivers and time samples is implied as in Equation 2, and where

$$\sum_{g=1}^{N_g}=\sum_{G=1}^{N_G}\sum_{g\in G}$$

defines a sum over gathers by sub-groups of gathers, and
S_sim = cost function for simultaneous source data,
G = the groups of simultaneous generalized sources,
N_G = the number of groups,
c_g = functions of time that are convolved (⊗) with each gather's source signature to encode the gathers; these encoding functions are chosen to be different, i.e. non-equivalent, for each gather index g (e.g. different realizations of random phase functions).

The outer summation in Equation 5 is over groups of simultaneous generalized sources corresponding to the gather type (e.g. point sources for common shot gathers). The inner summation, over g, is over the gathers that are grouped for simultaneous computation.
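To make the difference between Equations 2 and 5 concrete, here is a small numerical sketch. It is not from the patent; the array sizes, the random-phase code construction, and all names are illustrative assumptions. Because the wave equation is linear in the source signature, encoding a source with c_g is equivalent to convolving that gather's residual with c_g, which is what the sketch does; it then compares the conventional cost S(M) with the encoded simultaneous-source cost S_sim(M) for a single group containing all gathers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gathers, n_receivers, n_t = 8, 16, 256

# Stand-in residual traces delta(M, g, w_g): one (receiver x time) panel per gather.
delta = rng.normal(size=(n_gathers, n_receivers, n_t))

def sequential_cost(delta):
    """Equation 2 with N = 2: sum of squared residuals over all gathers."""
    return float(np.sum(delta ** 2))

def random_phase_spectrum(n_t, rng):
    """Spectrum of an encoding signature c_g: random phase, constant (unit) amplitude."""
    phase = rng.uniform(0.0, 2.0 * np.pi, n_t // 2 + 1)
    spec = np.exp(1j * phase)
    spec[0] = spec[-1] = 1.0          # keep DC and Nyquist real so c_g is real-valued
    return spec

def simultaneous_cost(delta, code_spectra):
    """Equation 5 with N = 2 and one group G holding all gathers.

    Encoding (convolution with c_g) is applied as multiplication in the
    frequency domain; the encoded gathers are then summed receiver-by-receiver.
    """
    encoded_sum = np.zeros((n_receivers, n_t))
    for d, spec in zip(delta, code_spectra):
        encoded_sum += np.fft.irfft(np.fft.rfft(d, axis=-1) * spec, n=n_t, axis=-1)
    return float(np.sum(encoded_sum ** 2))

codes = [random_phase_spectrum(n_t, rng) for _ in range(n_gathers)]
print("S(M)     =", sequential_cost(delta))
print("S_sim(M) =", simultaneous_cost(delta, codes))  # approx. S(M); cross terms are small
```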
For some simulation methods, such as finite difference modeling, the computation of the model for summed sources (the inner sum over g ∈ G) can be performed in the same amount of time as the computation for a single source. Thus, Equation 5 can be computed in a time that is N_g/N_G times faster than Equation 2. In the limiting case, all gathers are computed simultaneously (i.e. G contains all N_g sources and N_G = 1) and one achieves a speedup by a factor of N_g.

This speedup comes at the cost that S_sim(M) in Equation 5 is not in general as suitable a cost function for inversion as is S(M) defined in Equation 2. Two requirements for a high quality cost function are: 1. It has a global minimum when the model M is close to the true subsurface model, 2. It has few local minima and they are located far from the true subsurface model. It is easy to see that in the noise-free case the global minimum of both S(M) and S_sim(M) will occur when M is equal to the true subsurface model and that their value at the global minimum is zero. Experience has shown that the global minimum of S_sim(M) is also close to the actual subsurface model in the case where the data are noisy. Thus, S_sim(M) satisfies requirement number 1 above. Next it will be shown how S_sim(M) can be made to satisfy the second enumerated requirement.

One cannot in general develop a cost function for data inversion that has no local minima. So it would be unreasonable to expect S_sim(M) to have no local minima as desired by requirement 2 above. However, it is at least desirable that S_sim(M) has a local minima structure not much worse than S(M). According to the present invention, this can be accomplished by a proper choice of encoding signatures. When the cost function uses an L2 norm, choosing the encoding signatures to be random phase functions gives a simultaneous source cost function that has a local minima structure similar to the sequential source cost function. This can be seen by developing a relationship between S_sim(M) and S(M) as follows. First, Equation 5 is specialized to the L2-norm case:

$$S_{sim}(M)=\sum_{G=1}^{N_G}\left|\sum_{g\in G}\delta(M,g,c_g\otimes w_g)\right|^{2}\qquad(6)$$

The square within the sum over groups can be expanded as follows:

$$S_{sim}(M)=\sum_{G=1}^{N_G}\left(\sum_{g\in G}\left|\delta(M,g,c_g\otimes w_g)\right|^{2}+\sum_{\substack{g,g'\in G\\ g\neq g'}}\delta(M,g,c_g\otimes w_g)\,\delta(M,g',c_{g'}\otimes w_{g'})\right)\qquad(7)$$

By choosing the c_g so that they have constant amplitude spectra, the first term in Equation 7 is simply S(M), yielding:

$$S_{sim}(M)=S(M)+\sum_{G=1}^{N_G}\sum_{g\in G}\sum_{\substack{g'\in G\\ g'\neq g}}\delta(M,g,c_g\otimes w_g)\,\delta(M,g',c_{g'}\otimes w_{g'})\qquad(8)$$

Equation 8 reveals that S_sim(M) is equal to S(M) plus some cross terms. Note that due to the implied sum over time samples, the cross terms δ(M,g,c_g⊗w_g)δ(M,g',c_{g'}⊗w_{g'}) are really cross correlations of the residuals from two different gathers. This cross-correlation noise can be spread out over the model by choosing the encoding functions c_g such that c_g and c_{g'} are different realizations of random phase functions. Other types of encoding signatures may also work. Thus, with this choice of the c_g, S_sim(M) is approximately equal to S(M). Therefore, the local minima structure of S_sim(M) is approximately equal to that of S(M).

In practice, the present invention can be implemented according to the flow charts shown in FIGS. 1 and 2. The flow chart in FIG. 1 may be followed to encode and sum the geophysical survey data to be inverted to form simultaneous gather data.
In step 20, the input data 10 are separated into groups of gathers that will be encoded and summed to form simultaneous encoded gathers. In step 40, each gather in one of the gather groups from step 20 are encoded. This encoding is performed by selecting a gather from the gather group and selecting an encoding signature from the set of non-equivalent encoding signatures 30. All the traces from the gather are then temporally convolved with that selected encoding signature. Each gather in the gather group is encoded in the same manner, choosing a different encoding signature from 30 for each gather. After all gathers have been encoded in 40, all the gathers are summed in 50. The gathers are summed by summing all traces corresponding to the same receiver from each gather. This forms a simultaneous encoded-source gather which is then saved in step 60 to the output set of simulated simultaneous encoded gathers 70. At step 80, steps 40-60 are typically repeated until all gather groups from step 20 have been encoded. When all gather groups have been encoded, this process is ended and the file containing the simultaneous encoded gathers 70 will contain one simultaneous encoded gather for each gather group formed in step 20. How many gathers to put in a single group is a matter of judgment. The considerations involved include quality of the cost function vs. speedup in inversion time. One can run tests like those in the examples section below, and ensure that the grouping yields a high quality cost function. In some instances, it may be preferable to sum all the gathers into one simultaneous gather, i.e., use a single group. FIG. 1 describes how simultaneous encoded gathers are obtained in some embodiments of the invention. In other embodiments, the geophysical data are acquired from simultaneous encoded sources, eliminating the need for the process in FIG. 1. It may be noted that acquiring simultaneous encoded-source data in the field could significantly reduce the cost of acquiring the geophysical data and also could increase the signal-to-noise ratio relative to ambient noise. Thus the present invention may be advantageously applied to (using a seismic vibrator survey as the example) a single vibrator truck moved sequentially to multiple locations, or to a survey in which two or more vibrator trucks are operating simultaneously with different encoded sweeps in close enough proximity that the survey receivers record combined responses of all vibrators. In the latter case only, the data could be encoded in the field. FIG. 2 is a flowchart showing basic steps in the present inventive method for computing the data inversion cost function for the simultaneous encoded-source data. The simultaneous encoded gathers 120 are preferably either the data formed at 70 of FIG. 1 or are simultaneous encoded gathers that were acquired in the field. In step 130, a simultaneous encoded gather from 120 is forward modeled using the appropriate signatures from the set of encoding signatures 110 that were used to form the simultaneous encoded gathers 120. In step 140, the cost function for this simultaneous encoded gather is computed. If the cost function is the L2 norm cost function, then step 140 would constitute summing, over all receivers and all time samples, the square of the difference between the simultaneous encoded gather from 120 and the forward modeled simultaneous encoded gather from 130. The cost value computed in 140 is then accumulated into the total cost in step 150. 
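A compact sketch of the two flow charts may help tie the steps together. Again, this is not the patent's code: the forward-modeling call is a placeholder, and the container names, shapes, and step numbers in the comments are just annotations referring back to FIGS. 1 and 2.

```python
import numpy as np

def encode_and_sum(gathers, signatures):
    """FIG. 1, steps 40-50: convolve every trace of each gather with that gather's
    encoding signature, then sum trace-by-trace (same receiver) over the group."""
    n_t = gathers[0].shape[-1]
    total = np.zeros_like(gathers[0])
    for gather, c_g in zip(gathers, signatures):
        for r in range(gather.shape[0]):
            total[r] += np.convolve(gather[r], c_g)[:n_t]
    return total                                     # one simultaneous encoded gather (60)

def simulate_simultaneous_gather(model, signatures):
    """Placeholder for FIG. 2, step 130: one simulator run with all encoded sources
    active at once (e.g. a finite-difference solve); not implemented here."""
    raise NotImplementedError

def total_cost(model, measured_groups, signature_groups):
    """FIG. 2, steps 130-150: accumulate the L2 misfit over all encoded gather groups."""
    cost = 0.0
    for gathers, signatures in zip(measured_groups, signature_groups):
        observed = encode_and_sum(gathers, signatures)         # from FIG. 1 (or the field)
        simulated = simulate_simultaneous_gather(model, signatures)
        cost += float(np.sum((simulated - observed) ** 2))     # step 140
    return cost                                                # accumulated total (150)
```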
Steps 130-150 are typically repeated for another simultaneous encoded gather from 120, and that cycle is repeated until all desired simultaneous encoded gathers from 120 have been processed (160). There are many techniques for inverting data. Most of these techniques require computation of a cost function, and the simultaneous encoded-source cost functions of this invention provide a much more efficient way of performing that computation. Many types of encoding functions c_g can be used to ensure that S_sim(M) is approximately equal to S(M), including but not limited to: linear, random, chirp and modified-chirp frequency-dependent phase encoding as presented in Romero et al. [12]; the frequency-independent phase encoding as presented in Jing et al. [13]; and random time-shift encoding. Some of these encoding techniques will work better than others depending upon the application, and some can be combined. In particular, good results have been obtained using frequency-dependent random phase encoding and also by combining frequency-independent encoding of nearby sources with frequency-dependent random phase encoding for more widely separated sources. An indication of the relative merits of different encodings can be obtained by running test inversions with each set of encoding functions to determine which converges faster. It should be noted that the simultaneous encoded-source technique can be used for many types of inversion cost function. In particular, it could be used for cost functions based on norms other than the L2 norm discussed above. It could also be used on more sophisticated cost functions than the one presented in Equation 2, including regularized cost functions. Finally, the simultaneous encoded-source method could be used with any type of global or local cost function inversion method including Monte Carlo, simulated annealing, genetic algorithm, evolution algorithm, gradient line search, conjugate gradients and Newton's method.

Iterative Series Inversion

[0105] Besides cost function optimization, geophysical inversion can also be implemented using iterative series methods. A common method for doing this is to iterate the Lippmann-Schwinger equation [3]. The Lippmann-Schwinger equation describes scattering of waves in a medium represented by a physical properties model of interest as a perturbation of a simpler model. The equation is the basis for a series expansion that is used to determine scattering of waves from the model of interest, with the advantage that the series only requires calculations to be performed in the simpler model. This series can also be inverted to form an iterative series that allows the determination of the model of interest from the measured data, again only requiring calculations to be performed in the simpler model. The Lippmann-Schwinger equation is a general formalism that can be applied to all types of geophysical data and models, including seismic waves. This method begins with the two equations:

$$L\,G=-I\qquad(9)$$
$$L_0\,G_0=-I\qquad(10)$$

where L and L_0 are the actual and reference differential operators, G and G_0 are the actual and reference Green's operators respectively, and I is the unit operator. Note that G is the measured point-source data, and G_0 is the simulated point-source data from the initial model. The Lippmann-Schwinger equation for scattering theory is:

$$G=G_0+G_0\,V\,G\qquad(11)$$

where V = L − L_0, from which the difference between the true and initial models can be extracted.
Equation 11 is solved iteratively for V by first expanding it in a series (assuming G = G_0 for the first approximation of G, and so forth) to get:

$$G=G_0+G_0\,V\,G_0+G_0\,V\,G_0\,V\,G_0+\ldots\qquad(12)$$

Then V is expanded as a series:

$$V=V^{(1)}+V^{(2)}+V^{(3)}+\ldots\qquad(13)$$

where V^(n) is the portion of V that is nth order in the residual of the data (here the residual of the data is G − G_0 measured at the surface). Substituting Equation 13 into Equation 12 and collecting terms of the same order yields the following set of equations for the first three orders:

$$G-G_0=G_0\,V^{(1)}G_0\qquad(14)$$
$$0=G_0\,V^{(2)}G_0+G_0\,V^{(1)}G_0\,V^{(1)}G_0\qquad(15)$$
$$0=G_0\,V^{(3)}G_0+G_0\,V^{(1)}G_0\,V^{(2)}G_0+G_0\,V^{(2)}G_0\,V^{(1)}G_0+G_0\,V^{(1)}G_0\,V^{(1)}G_0\,V^{(1)}G_0\qquad(16)$$

and similarly for higher orders in V. These equations may be solved iteratively by first solving Equation 14 for V^(1) by inverting G_0 on both sides of V^(1) to yield:

$$V^{(1)}=G_0^{-1}\,(G-G_0)\,G_0^{-1}\qquad(17)$$

V^(1) from Equation 17 is then substituted into Equation 15, and that equation is solved for V^(2) to yield:

$$V^{(2)}=-\,G_0^{-1}\,G_0V^{(1)}G_0\,V^{(1)}G_0\,G_0^{-1}=-\,V^{(1)}G_0\,V^{(1)}$$

and so forth for higher orders of V. Equation 17 involves a sum over sources and frequency, which can be written out explicitly as:

$$V^{(1)}=\sum_{\omega}\sum_{s}G_0^{-1}\,\big(G\,s-G_0\,s\big)\,\big(G_0\,s\big)^{-1}\qquad(17)$$

where G s is the measured data for source s, G_0 s is the simulated data through the reference model for source s, and G_0 s can be interpreted as the downward extrapolated source signature from source s. Equation 17, when implemented in the frequency domain, can be interpreted as follows: (1) downward extrapolate through the reference model the source signature for each source (the G_0 s term), (2) for each source, downward extrapolate the receivers of the residual data through the reference model (the G_0^{-1}(G s − G_0 s) term), (3) multiply these two fields, then sum over all sources and frequencies. The downward extrapolations in this recipe can be carried out using geophysical simulation software, for example using finite differences. The simultaneous encoded-source technique can be applied to Equation 17 as follows:

$$\tilde V^{(1)}=\sum_{\omega}G_0^{-1}\Big[\sum_{s}e^{i\phi_s(\omega)}\,G\,s-\sum_{s}e^{i\phi_s(\omega)}\,G_0\,s\Big]\Big(\sum_{s'}e^{i\phi_{s'}(\omega)}\,G_0\,s'\Big)^{-1}\qquad(18)$$

where a choice has been made to encode by applying the phase function φ_s(ω), which depends on the source s and may depend on the frequency ω. As was the case for Equation 17, Equation 18 can be implemented by: (1) encoding and summing the measured data (the first summation in brackets), (2) forward simulating the data that would be acquired from simultaneous encoded sources using the same encoding as in step 1 (the second term in the brackets), (3) subtracting the result of step 2 from the result of step 1, (4) downward extrapolating the data computed in step 3 (the first G_0^{-1} term applied to the bracketed term), (5) downward extrapolating the simultaneous encoded sources encoded with the same encoding as in step 1, and (6) multiplying these two fields and summing over all frequencies. Note that in this recipe the simulations are all performed only once for the entire set of simultaneous encoded sources, as opposed to once for each source as was the case for Equation 17. Thus, Equation 18 requires much less compute time than Equation 17. Separating the summations over s and s' into portions where s = s' and s ≠ s' in Equation 18 gives:

$$\tilde V^{(1)}=\sum_{\omega}\sum_{s}G_0^{-1}\,\big(G\,s-G_0\,s\big)\,\big(G_0\,s\big)^{-1}+\sum_{\omega}G_0^{-1}\sum_{s}\sum_{s'\neq s}e^{\,i\left(\phi_s(\omega)-\phi_{s'}(\omega)\right)}\,\big(G\,s-G_0\,s\big)\,\big(G_0\,s'\big)^{-1}\qquad(19)$$

The first term in Equation 19 may be recognized as Equation 17, and therefore:

$$\tilde V^{(1)}=V^{(1)}+\text{cross terms}\qquad(20)$$

The cross terms in Equation 19 will be small if φ_s(ω) ≠ φ_{s'}(ω) when s ≠ s'.
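To get a rough feel for why random phase encoding keeps such cross terms small, the short numerical sketch below (an illustration added to this discussion, not taken from the patent; the array sizes and names are arbitrary) compares the diagonal (auto-correlation) terms with the cross-correlation terms for residual traces encoded with independent random-phase signatures.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_samples = 64, 2048

def random_phase_signature(n, rng):
    """Real time signal with a flat (unit) amplitude spectrum and uniformly random phase."""
    phase = rng.uniform(-np.pi, np.pi, n // 2 + 1)
    spectrum = np.exp(1j * phase)
    spectrum[0] = 1.0                       # keep the DC component real
    return np.fft.irfft(spectrum, n)

# Stand-ins for the per-source residual traces delta(M, g, w_g).
residuals = rng.standard_normal((n_sources, n_samples))

# Encode each residual with its own random-phase signature (temporal convolution,
# implemented here as multiplication in the frequency domain).
encoded = np.array([
    np.fft.irfft(np.fft.rfft(trace) * np.fft.rfft(random_phase_signature(n_samples, rng)),
                 n_samples)
    for trace in residuals
])

diagonal = np.sum(encoded ** 2)           # sum of auto terms: plays the role of S(M)
total = np.sum(encoded.sum(axis=0) ** 2)  # squared summed residual: S_sim(M) for one group
cross = total - diagonal                  # the cross terms of Equation 8 / Equation 19

print(f"cross terms relative to the diagonal terms: {abs(cross) / diagonal:.3f}")
```

With independent signatures the printed ratio is typically on the order of a few percent, and it shrinks as the number of sources and the trace length grow; this is the sense in which the cross terms of Equations 8 and 19 are small compared to the desired term.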
Thus, as was the case for cost function optimization, the simultaneous encoded-source method speeds up computation of the first term of the series and gives a result that is similar to the much more expensive sequential source method. The same simultaneous encoded-source technique can be applied to higher order terms in the series such as the second and third-order terms in Equations 15 and 16. Further Considerations [0112] The present inventive method can also be used in conjunction with various types of generalized source techniques, such as those suggested by Berkhout [6]. In this case, rather than encoding different point source gather signatures, one would encode the signatures for different synthesized plane waves. A primary advantage of the present invention is that it allows a larger number of gathers to be computed simultaneously. Furthermore, this efficiency is gained without sacrificing the quality of the cost function. The invention is less subject to noise artifacts than other simultaneous source techniques because the inversion's being iterative implies that the noise artifacts will be greatly suppressed as the global minimum of the cost function is approached. Some variations on the embodiments described above include: The c encoding functions can be changed for each computation of the cost function. In at least some instances, using different random phase encodings for each computation of the cost function further reduces the effect of the cross terms in Equation 8. In some cases (e.g., when the source sampling is denser than the receiver sampling) it may be advantageous to use reciprocity to treat the actual receivers as computational sources, and encode the receivers instead of the sources. This invention is not limited to single-component point receivers. For example, the receivers could be receiver arrays or they could be multi-component receivers. The method may be improved by optimizing the encoding to yield the highest quality cost function. For example the encoding functions could be optimized to reduce the number of local minima in the cost function. The encoding functions could be optimized either by manual inspection of tests performed using different encoding functions or using an automated optimization procedure. Acquisition of simultaneous encoded-source data could result in significant geophysical data acquisition cost savings. For marine seismic data surveys, it would be very efficient to acquire encoded source data from simultaneously operating marine vibrators that operate continuously while in motion. Other definitions for the cost function may be used, including the use of a different norm (e.g. L1 norm (absolute value) instead of L2 norm), and additional terms to regularize and stabilize the inversion (e.g. terms that would penalize models that aren't smooth or models that are not sparse). While the invention includes many embodiments, a typical embodiment might include the following features: 1. The input gathers are common point source gathers. 2. The encoding signatures 30 and 110 are changed between iterations. 3. The encoding signatures 30 and 110 are chosen to be random phase signatures from Romero et. al. [12]. Such a signature can be made simply by making a sequence that consists of time samples which are a uniform pseudo-random sequence. 4. In step 40, the gathers are encoded by convolving each trace in the gather with that gather's encoding signature. 5. 
In step 130, the forward modeling is performed with a finite difference modeling code in the space-time domain. 6. In step 140, the cost function is computed using an L2 norm.

EXAMPLES

[0129] FIGS. 3-8 represent a synthetic example of computing the cost function using the present invention, compared with the conventional sequential source method. The geophysical properties model in this simple example is just a model of the acoustic wave velocity. FIG. 3 is the base velocity model (the model that will be inverted for) for this example. The shading indicates the velocity at each depth. The background of this model is a linear gradient starting at 1500 m/s at the top of the model and having a vertical gradient of 0.5 sec⁻¹ (i.e., the velocity increases by 0.5 m/s per meter of depth). Thirty-two 64 m thick horizontal layers (210), having velocity perturbations of plus or minus 100 m/s, are added to the background gradient. The darker horizontal bands in FIG. 3 represent layers where 100 m/s is added to the linear gradient background, and the alternating lighter horizontal bands represent layers where 100 m/s is subtracted from the linear gradient background. Finally, a rectangular anomaly (220) that is 256 m tall and 256 m wide and has a velocity perturbation of 500 m/s is added to the horizontally layered model. Conventional sequential point-source data (corresponding to item 10 in FIG. 1) were simulated from the model in FIG. 3. 256 common point source gathers were computed, and FIG. 4 shows the first three of these gathers. These gathers have a six second trace length and are sampled at 0.8 msec. The source signature (corresponding to w in Equation 2) is a 20 Hz Ricker wavelet. The distance between sources is 16 m and the distance between receivers is 4 m. The sources and receivers cover the entire surface of the model, and the receivers are stationary. The flow outlined in FIG. 1 is used to generate simultaneous encoded-source data from the sequential source data shown in FIG. 4. In step 20 of FIG. 1, all 256 simulated sequential gathers were formed into one group. These gathers were then encoded by convolving the traces from each point source gather with a 2048 sample (1.6384 sec long) uniform pseudo-random sequence. A different random sequence was used for each point source gather. These gathers were then summed to produce the single simultaneous encoded-source gather shown in FIG. 5. It should be noted that this process has converted 256 sequential source gathers into a single simultaneous encoded-source gather. To compute a cost function, the base model is perturbed and seismic data are simulated from this perturbed model. For this example the model was perturbed by changing the depth of the rectangular anomaly. The depth of the anomaly was perturbed over a range of -400 to +400 m relative to its depth in the base model. One perturbation of that model is shown in FIG. 6, with the anomaly indicated at 310. For each perturbation of the base model a single gather of simultaneous encoded-source data was simulated, yielding a gather of traces similar to the base data shown in FIG. 5. The encoding signatures used to simulate these perturbed gathers were exactly the same as those used to encode the base data in FIG. 5. The cost function from Equation 6 was computed for each perturbed model by subtracting the perturbed data from the base data and computing the L2 norm of the result. FIG. 7 is a graph of this simultaneous encoded-source cost function.
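The example just described can be summarized in code form. The sketch below is only illustrative and not taken from the patent: simulate_sequential_gathers and simulate_encoded_gather are hypothetical stand-ins for the finite-difference simulator, the array shapes are assumed, and the numbers mirror the example (256 gathers, 2048-sample uniform pseudo-random encoding sequences, anomaly-depth perturbations from -400 m to +400 m).

```python
import numpy as np

def encode_and_sum(gathers, signatures):
    """FIG. 1 flow: convolve every trace of gather g with its signature c_g, then sum
    corresponding traces over all gathers in the group.
    gathers: (n_gathers, n_receivers, n_t); signatures: (n_gathers, n_c)."""
    n_gathers, n_receivers, n_t = gathers.shape
    simultaneous = np.zeros((n_receivers, n_t))
    for g in range(n_gathers):
        for r in range(n_receivers):
            simultaneous[r] += np.convolve(gathers[g, r], signatures[g])[:n_t]
    return simultaneous

def l2_cost(base, trial):
    """Equation 6 for a single group: L2 norm of the difference of simultaneous gathers."""
    return np.sum((base - trial) ** 2)

rng = np.random.default_rng(1)
n_gathers, n_c = 256, 2048
# One uniform pseudo-random encoding sequence per point-source gather.
signatures = rng.uniform(-1.0, 1.0, (n_gathers, n_c))

# The remaining steps require the seismic simulator, so they are shown as comments only:
# base_gathers = simulate_sequential_gathers(base_model)        # sequential data (FIG. 4)
# base_sim = encode_and_sum(base_gathers, signatures)           # simultaneous gather (FIG. 5)
# costs = []
# for dz in np.arange(-400.0, 401.0, 25.0):                     # anomaly depth perturbation
#     trial_model = perturb_anomaly_depth(base_model, dz)       # FIG. 6 for one value of dz
#     trial_sim = simulate_encoded_gather(trial_model, signatures)
#     costs.append(l2_cost(base_sim, trial_sim))                 # one point on FIG. 7
```

The essential point, exactly as in the example, is that the same signatures array is used both to form the base simultaneous gather and in every perturbed simulation.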
This cost function may be compared to the conventional sequential source cost function for the same model perturbations shown in FIG. 8 (computed using the data in FIG. 4 as the base data and then simulating sequential source data from the perturbed models). FIG. 8 corresponds to the cost function in Equation 2 with N=2. The horizontal axis in FIGS. 7 and 8 is the perturbation of the depth of the rectangular anomaly relative to its depth in the base model. Thus, a perturbation of zero corresponds to the base model. It is important to note that for this example the simultaneous encoded-source cost function was computed 256 times faster than the sequential source cost function. Two things are immediately noticeable upon inspection of FIGS. 7 and 8. One is that both of these cost functions have their global minimum (410 for the simultaneous source data and 510 for the sequential source data) at zero perturbation as should be the case for an accurate inversion. The second thing to note is that both cost functions have the same number of local minima (420 for the simultaneous source data and 520 for the sequential source data), and that these local minima are located at about the same perturbation values. While local minima are not desirable in a cost function, the local minima structure of the simultaneous encoded-source cost function is similar to the sequential source cost function. Thus, the simultaneous encoded-source cost function (FIG. 7) is just as good as the sequential source cost function (FIG. 8) for seismic inversion. The factor of 256 computational time reduction of the simultaneous encoded-source cost function, along with similar quality of the two cost functions for seismic inversion, leads to the conclusion that for this example the simultaneous encoded-source cost function is strongly preferred. The perturbed models represent the various model guesses that might be used in a real exercise in order to determine which gives the closest fit, as measured by the cost function, to the measured data. Finally, to demonstrate the importance of encoding the gathers before summing, FIG. 9 shows the cost function that would result from using Mora's [9] suggestion of inverting super shot gathers. This cost function was computed in a manner similar to that FIG. 7 except that the source gathers were not encoded before summing. This sum violates Mora's suggestion that the sources should be widely spaced (these sources are at a 16 m spacing). However, this is a fair comparison with the simultaneous encoded-source method suggested in this patent, because the computational speedup for the cost function of FIG. 9 is equal to that for FIG. 7, while Mora's widely spaced source method would result in much less speedup. Note that the global minimum for the super shot gather data is at zero perturbation (610), which is good. On the other hand, the cost function shown in FIG. 9 has many more local minima (620) than either the cost functions in FIG. 7 or FIG. 8. Thus, while this cost function achieves the same computational speedup as the simultaneous encoded-source method of this patent, it is of much lower quality for inversion. The foregoing application is directed to particular embodiments of the present invention for the purpose of illustrating it. It will be apparent, however, to one skilled in the art, that many modifications and variations to the embodiments described herein are possible. 
All such modifications and variations are intended to be within the scope of the present invention, as defined in the appended claims. Persons skilled in the art will readily recognize that in preferred embodiments of the invention, at least some of the steps in the present inventive method are performed on a computer, i.e. the invention is computer implemented. In such cases, the resulting updated physical properties model may either be downloaded or saved to computer storage.

REFERENCES

[0138]
1. Tarantola, A., "Inversion of seismic reflection data in the acoustic approximation," Geophysics 49, 1259-1266 (1984).
2. Sirgue, L., and Pratt, G., "Efficient waveform inversion and imaging: A strategy for selecting temporal frequencies," Geophysics 69, 231-248 (2004).
3. Weglein, A. B., Araujo, F. V., Carvalho, P. M., Stolt, R. H., Matson, K. H., Coates, R. T., Corrigan, D., Foster, D. J., Shaw, S. A., and Zhang, H., "Inverse scattering series and seismic exploration," Inverse Problems 19, R27-R83 (2003).
4. Fallat, M. R., and Dosso, S. E., "Geoacoustic inversion via local, global, and hybrid algorithms," Journal of the Acoustical Society of America 105, 3219-3230 (1999).
5. Van Manen, D. J., Robertsson, J. O. A., and Curtis, A., "Making wave by time reversal," SEG International Exposition and 75th Annual Meeting Expanded Abstracts, 1763-1766 (2005).
6. Berkhout, A. J., "Areal shot record technology," Journal of Seismic Exploration 1, 251-264 (1992).
7. Zhang, Y., Sun, J., Notfors, C., Gray, S. H., Cherris, L., and Young, J., "Delayed-shot 3D depth migration," Geophysics 70, E21-E28 (2005).
8. Van Riel, P., and Hendrik, W. J. D., "Method of estimating elastic and compositional parameters from seismic and echo-acoustic data," U.S. Pat. No. 6,876,928 (2005).
9. Mora, P., "Nonlinear two-dimensional elastic inversion of multi-offset seismic data," Geophysics 52, 1211-1228 (1987).
10. Ober, C. C., Romero, L. A., and Ghiglia, D. C., "Method of Migrating Seismic Records," U.S. Pat. No. 6,021,094 (2000).
11. Ikelle, L. T., "Multi-shooting approach to seismic modeling and acquisition," U.S. Pat. No. 6,327,537 (2001).
12. Romero, L. A., Ghiglia, D. C., Ober, C. C., and Morton, S. A., "Phase encoding of shot records in prestack migration," Geophysics 65, 426-436 (2000).
13. Jing, X., Finn, C. J., Dickens, T. A., and Willen, D. E., "Encoding multiple shot gathers in prestack migration," SEG International Exposition and 70th Annual Meeting Expanded Abstracts, 786-789 (2000).
{"url":"http://www.faqs.org/patents/app/20130191090","timestamp":"2014-04-18T13:49:34Z","content_type":null,"content_length":"101030","record_id":"<urn:uuid:8a8d3ea9-60ab-48c2-9fb0-12f747d6be11>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Model Checking via Reachability Testing for Timed Automata Results 1 - 10 of 34 , 1999 "... We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approac ..." Cited by 2407 (62 self) Add to MetaCart We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approach is that local properties are often not preserved at the global level. We present a general framework for using additional interface processes to model the environment for a component. These interface processes are typically much simpler than the full environment of the component. By composing a component with its interface processes and then checking properties of this composition, we can guarantee that these properties will be preserved at the global level. We give two example compositional systems based on the logic CTL*. , 2004 "... Abstract. This is a tutorial paper on the tool Uppaal. Its goal is to be a short introduction on the flavor of timed automata implemented in the tool, to present its interface, and to explain how to use the tool. The contribution of the paper is to provide reference examples and modeling patterns. 1 ..." Cited by 173 (9 self) Add to MetaCart Abstract. This is a tutorial paper on the tool Uppaal. Its goal is to be a short introduction on the flavor of timed automata implemented in the tool, to present its interface, and to explain how to use the tool. The contribution of the paper is to provide reference examples and modeling patterns. 1 - Lectures on Concurrency and Petri Nets: Advances in Petri Nets, number 3098 in LNCS , 2004 "... Abstract. This chapter is to provide a tutorial and pointers to results and related work on timed automata with a focus on semantical and algorithmic aspects of verification tools. We present the concrete and abstract semantics of timed automata (based on transition rules, regions and zones), decisi ..." Cited by 92 (2 self) Add to MetaCart Abstract. This chapter is to provide a tutorial and pointers to results and related work on timed automata with a focus on semantical and algorithmic aspects of verification tools. We present the concrete and abstract semantics of timed automata (based on transition rules, regions and zones), decision problems, and algorithms for verification. A detailed description on DBM (Difference Bound Matrices) is included, which is the central data structure behind several verification tools for timed systems. As an example, we give a brief introduction to the tool UPPAAL. 1 - In Proc. of the 18th IEEE Real-Time Systems Symposium , 1997 "... During the past few years, a number of verification tools have been developed for real-time systems in the framework of timed automata (e.g. Kronos and Uppaal). One of the major problems in applying these tools to industrial-size systems is the huge memory-usage for the exploration of the state-spac ..." Cited by 56 (8 self) Add to MetaCart During the past few years, a number of verification tools have been developed for real-time systems in the framework of timed automata (e.g. Kronos and Uppaal). 
One of the major problems in applying these tools to industrial-size systems is the huge memory-usage for the exploration of the state-space of a network (or product) of timed automata, as the modelcheckers must keep information on not only the control structure of the automata but also the clock values specified by clock constraints. In this paper, we present a compact data structure for representing clock constraints. The data structure is based on an O(n 3 ) algorithm which, given a constraint system over realvalued variables consisting of bounds on differences, constructs an equivalent system with a minimal number of constraints. In addition, we have developed an on-the-fly reduction technique to minimize the space-usage. Based on static analysis of the control structure of a network of timed automata, we are able to comp... - THEORETICAL COMPUTER SCIENCE , 2001 "... The computational engine of the verification tool UPPAAL consists of a collection of efficient algorithms for the analysis of reachability properties of systems. Model-checking of properties other than plain reachability ones may currently be carried out in such a tool as follows. Given a property t ..." Cited by 30 (11 self) Add to MetaCart The computational engine of the verification tool UPPAAL consists of a collection of efficient algorithms for the analysis of reachability properties of systems. Model-checking of properties other than plain reachability ones may currently be carried out in such a tool as follows. Given a property to model-check, the user must provide a test automaton T for it. This test automaton must be such that the original system S has the property expressed by precisely when none of the distinguished reject states of T can be reached in the parallel composition of S with T . This raises the question of which properties may be analyzed by UPPAAL in such a way. This paper gives an answer to this question by providing a complete characterization of the class of properties for which model-checking can be reduced to reachability testing in the sense outlined above. This result is obtained as a corollary of a stronger statement pertaining to the compositionality of the property language considered in this study. In particular, it is shown that our language is the least expressive compositional language that can express a simple safety property stating that no reject state can ever be reached. Finally, the property language characterizing the power of reachability testing is used to provide a definition of characteristic properties with respect to a timed version of the ready simulation preorder, for nodes of -free, deterministic timed automata. , 2002 "... In this paper we present the continuous and on-going development of datastructures and algorithms underlying the veri cation engine of the tool Uppaal. In particular, we review the datastructures of Dierence Bounded Matrices, Minimal Constraint Representation and Clock Dierence Diagrams used in ..." Cited by 28 (10 self) Add to MetaCart In this paper we present the continuous and on-going development of datastructures and algorithms underlying the veri cation engine of the tool Uppaal. In particular, we review the datastructures of Dierence Bounded Matrices, Minimal Constraint Representation and Clock Dierence Diagrams used in symbolic state-space representation and-analysis for real-time systems. - Proc. FTRTFT 2000. 84 ALTISEN ET AL , 2000 "... 
To combat the state-explosion problem in automatic verification, we present a method for scaling up the real-time verification tool Uppaal by complementing it with methods for abstraction and compositionality. We identify a notion of timed ready simulation which we show is a sound condition for pres ..." Cited by 25 (4 self) Add to MetaCart To combat the state-explosion problem in automatic verification, we present a method for scaling up the real-time verification tool Uppaal by complementing it with methods for abstraction and compositionality. We identify a notion of timed ready simulation which we show is a sound condition for preservation of safety properties between realtime systems, and in addition is a precongruence with respect to parallel composition. Thus, it supports both abstraction and compositionality. We furthermore present a method for automatically testing for the existence of a timed ready simulation between real-time systems using the Uppaal tool. , 2001 "... This paper studies the structural complexity of model checking for several timed modal logics presented in the literature. More precisely, we consider (variations on) the specification formalisms used in the tools CMC and Uppaal, and fragments of a timed -calculus. For each of the logics, we charact ..." Cited by 14 (6 self) Add to MetaCart This paper studies the structural complexity of model checking for several timed modal logics presented in the literature. More precisely, we consider (variations on) the specification formalisms used in the tools CMC and Uppaal, and fragments of a timed -calculus. For each of the logics, we characterize the computational complexity of model checking, as well as its specification and program complexity, using (parallel compositions of) timed automata as our system model. In particular, we show that the complexity of model checking for a timed -calculus interpreted over (networks of) timed automata is EXPTIME-complete, no matter whether the complexity is measured with respect to the size of the specification, of the model or of both. All the flavours of model checking for timed versions of Hennessy-Milner logic, and the restricted fragments of the timed µ-calculus studied in the literature on CMC and Uppaal, are shown to be PSPACE-complete or EXPTIME-complete. Amongst the complexity results o ered in the paper is a theorem to the effect that the model checking problem for the sublanguage L s of the timed -calculus, proposed by Larsen, Pettersson and Yi, is PSPACE-complete. This result is accompanied by an array of statements showing that any extension of L s has an EXPTIME-complete model checking problem. We also argue that the model checking problem for the timed propositional µ-calculus T is EXPTIME-complete, thus improving upon results by Henzinger, Nicollin, Sifakis and Yovine. - In 5th International AMAST Workshop on Real-Time and Probabilistic Systems, volume Lecture Notes in Computer Science , 1999 "... Abstract. A real-time system for power-down control in audio/video components is modeled and verified using the real-time model checker UPPAAL. The system is supposed to reside in an audio/video component and control (read from and write to) links to neighbor audio/video components such as TV, VCR a ..." Cited by 11 (2 self) Add to MetaCart Abstract. A real-time system for power-down control in audio/video components is modeled and verified using the real-time model checker UPPAAL. 
The system is supposed to reside in an audio/video component and control (read from and write to) links to neighbor audio/video components such as TV, VCR and remote–control. In particular, the system is responsible for the powering up and down of the component in between the arrival of data, and in order to do so in a safe way without loss of data, it is essential that no link interrupts are lost. Hence, a component system is a multitasking system with hard real-time requirements, and we present techniques for modeling time consumption in such a multitasked, prioritized system. The work has been carried out in a collaboration between Aalborg University and the audio/video company B&O. By modeling the system, 3 design errors were identified and corrected, and the following verification confirmed the validity of the design but also revealed the necessity for an upper limit of the interrupt frequency. The resulting design has been implemented and it is going to be incorporated as part of a new product line. 1 - In International Conference on Computer Aided Verification , 2003 "... Temporal logic is popular for specifying correctness properties of reactive systems. Real-time temporal logics add the ability to express quantitative timing aspects. Tableau constructions are algorithms that translate a temporal logic formula into a finite-state automaton that accepts precisely ..." Cited by 8 (1 self) Add to MetaCart Temporal logic is popular for specifying correctness properties of reactive systems. Real-time temporal logics add the ability to express quantitative timing aspects. Tableau constructions are algorithms that translate a temporal logic formula into a finite-state automaton that accepts precisely all the models of the formula. On-the-fly versions of tableau-constructions enable their practical application for modelchecking.
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.16.3583","timestamp":"2014-04-16T11:01:47Z","content_type":null,"content_length":"39330","record_id":"<urn:uuid:a654ec47-e577-4953-b852-4e5d5338664c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
From New World Encyclopedia In physics and chemistry, an atomic orbital is a region in which an electron may be found within a single atom.^[1] Likewise, a molecular orbital is a region in which an electron may be found in a molecule.^[2] In general, atomic orbitals combine to form molecular orbitals. In the classical model, electrons were thought to orbit the atomic nucleus much like planets orbiting the Sun (or moths orbiting speedily around a lamp). As the concept of an electron shifted from a solid particle to an entity with both wave-like and particle-like properties, it became clear that the electron does not have a well-defined position or orbit within the atom. Consequently, the electron was thought of as a "cloud" distributed around a nucleus, like a large atmosphere around a tiny planet. For this reason, the term orbit was replaced by the term orbital. Atomic orbitals Explaining the distribution and behavior of electrons within an atom was one of the driving forces behind the development of quantum mechanics. In quantum mechanics, atomic orbitals are the quantum states (or discrete energy states) that electrons surrounding an atomic nucleus may exist in. Each atomic orbital has a characteristic energy level and a particular distribution of electron density. An orbital can be described as a "wave function" of an electron in an atom, and the shape of an orbital indicates the probability of locating the electron within a particular region of the atom. Orbitals in hydrogen-like atoms The simplest atomic orbitals are those that occur in an atom with a single electron, such as the hydrogen atom.^[3] An atom of any other element ionized down to a single electron is very similar to hydrogen, and the orbitals take similar forms. For atoms with two or more electrons, the governing equations can be solved only by using the methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, the numerical approximations must be used. A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: • the principal quantum number n • the angular momentum (or azimuthal) quantum number l • the magnetic quantum number m[l]. The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. The stationary states (quantum states) of hydrogen-like atoms are its atomic orbital. In general, however, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent combinations (linear combinations) of multiple orbitals. The quantum number n first appeared in the Bohr model. It determines, among other things, the distance of the electron from the nucleus; all electrons with the same value of n lay at the same distance. Modern quantum mechanics confirms that these orbitals are closely related. For this reason, orbitals with the same value of n are said to comprise a "shell." Orbitals with the same value of n and the same value of l are even more closely related and are said to comprise a "subshell." Values of the quantum numbers An atomic orbital is uniquely identified by the values of the three quantum numbers n, l, and m[l]. Each set of these three quantum numbers corresponds to exactly one orbital, but the quantum numbers occur in only certain combinations of values. 
The rules governing the possible values of the quantum numbers are as follows:
• The principal quantum number n is always a positive integer: 1, 2, 3, … In principle, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Generally, an atom has several orbitals associated with each value of n. Orbitals associated with the same value of n are said to lie in the same shell.
• The azimuthal quantum number $\ell$ is a non-negative integer: 0, 1, 2, … Within a shell where n is some integer $n_0$, $\ell$ ranges across all (integer) values satisfying the relation $0 \le \ell \le n_0-1$. For instance, the shell represented by n = 1 has only orbitals with $\ell=0$, and the n = 2 shell has only orbitals with $\ell=0$ and $\ell=1$. The set of orbitals associated with a particular value of $\ell$ are said to form the same subshell.
• The magnetic quantum number $m_\ell$ is also always an integer. Within a subshell where $\ell$ is some integer $\ell_0$, $m_\ell$ ranges as: $-\ell_0 \le m_\ell \le \ell_0$.
The above rules are summarized in the following table. Each entry represents a subshell and lists the values of $m_\ell$ available in that subshell; where no entry appears, the subshell does not exist.
n = 1: $\ell$ = 0 ($m_\ell$ = 0)
n = 2: $\ell$ = 0 ($m_\ell$ = 0); $\ell$ = 1 ($m_\ell$ = -1, 0, 1)
n = 3: $\ell$ = 0 ($m_\ell$ = 0); $\ell$ = 1 ($m_\ell$ = -1, 0, 1); $\ell$ = 2 ($m_\ell$ = -2, -1, 0, 1, 2)
n = 4: $\ell$ = 0 ($m_\ell$ = 0); $\ell$ = 1 ($m_\ell$ = -1, 0, 1); $\ell$ = 2 ($m_\ell$ = -2, -1, 0, 1, 2); $\ell$ = 3 ($m_\ell$ = -3, -2, -1, 0, 1, 2, 3)
n = 5: $\ell$ = 0 ($m_\ell$ = 0); $\ell$ = 1 ($m_\ell$ = -1, 0, 1); $\ell$ = 2 ($m_\ell$ = -2, -1, 0, 1, 2); $\ell$ = 3 ($m_\ell$ = -3, -2, -1, 0, 1, 2, 3); $\ell$ = 4 ($m_\ell$ = -4, -3, -2, -1, 0, 1, 2, 3, 4)
… and so on for higher shells.
Subshells are usually identified by their n- and $\ell$-values. n is represented by its numerical value, but $\ell$ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and $\ell=0$ as a "2s subshell."
Orbital names
The first four orbital names (s, p, d, f) are derived from the quality of their spectroscopic lines: sharp, principal, diffuse, and fundamental. Subsequent orbitals are named in alphabetical order (g, h, …). Orbital names take the following format: $X \, \mathrm{type}^y$ where X is the energy level corresponding to the "shell" number, i.e., the principal quantum number $n$; type is a lower-case letter denoting the shape or "subshell" of the orbital, corresponding to the angular momentum quantum number (or azimuthal quantum number) $\ell$; and y is the number of electrons in that orbital. For example, the orbital 1s^2 (pronounced "one s two") has two electrons in the lowest energy level (n = 1), with an angular momentum quantum number of l = 0. In some cases, the principal quantum number is designated by a letter. For n = 1, 2, 3, 4, 5, … , the associated letters are K, L, M, N, O, … , respectively.
Shapes of atomic orbitals
Any discussion of the shapes of electron orbitals is necessarily imprecise, because a given electron, regardless of which orbital it occupies, can at any moment be found at any distance from the nucleus and in any direction due to the uncertainty principle. However, the electron is much more likely to be found in certain regions of the atom than in others. Given this, a boundary surface can be drawn so that the electron has a high probability to be found anywhere within the surface, and all regions outside the surface have low values. The precise placement of the surface is arbitrary, but any reasonably compact determination must follow a pattern specified by the behavior of ψ^2, the square of the wavefunction.
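As a concrete illustration of how ψ^2 fixes such a surface, the short sketch below (an illustrative calculation added here, not part of the original article) integrates the radial probability density of the hydrogen 1s orbital and reports the most probable radius and the radius of the sphere enclosing 90 percent of the electron density.

```python
import numpy as np

a0 = 1.0                                        # Bohr radius, in atomic units
r = np.linspace(0.0, 20.0 * a0, 20001)

# Hydrogen 1s wavefunction: psi = (1/sqrt(pi)) * a0**(-3/2) * exp(-r/a0)
psi_squared = (1.0 / np.pi) * np.exp(-2.0 * r / a0)
radial_density = 4.0 * np.pi * r**2 * psi_squared   # P(r) = 4*pi*r^2 * psi^2

# Probability enclosed within radius r (simple cumulative numerical integral)
enclosed = np.cumsum(radial_density) * (r[1] - r[0])
enclosed /= enclosed[-1]

r_most_probable = r[np.argmax(radial_density)]      # comes out at ~1.0 * a0
r_90_percent = r[np.searchsorted(enclosed, 0.90)]   # comes out at ~2.7 * a0

print(f"most probable radius: {r_most_probable:.2f} a0")
print(f"radius of the 90% boundary surface: {r_90_percent:.2f} a0")
```

The choice of 90 percent is arbitrary (asking for 95 percent simply gives a larger sphere), but the shape of the resulting surface is dictated entirely by ψ^2, which is the point made above.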
This boundary surface is what is meant when the "shape" of an orbital is mentioned. Generally speaking, the number n determines the size and energy of the orbital: as n increases, the size of the orbital increases. Also in general terms, $\ell$ determines an orbital's shape, and $m_\ell$ its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on $m_\ell$ as well. The single s-orbitals ($\ell=0$) are shaped like spheres. For n=1 the sphere is "solid" (it is most dense at the center and fades exponentially outwardly), but for n=2 or more, each single s-orbital is composed of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). The s-orbitals for all n numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). The three p-orbitals have the form of two ellipsoids with a point of tangency at the nucleus (a shape sometimes referred to as a dumbbell). The three p-orbitals in each shell are oriented at right angles to each other, as determined by their respective values of $m_\ell$. Four of the five d-orbitals look similar, each with four pear-shaped balls, each ball tangent to two others, and the centers of all four lying in one plane, between a pair of axes. Three of these planes are the xy-, xz-, and yz-planes, and the fourth has the centers on the x and y axes. The fifth and final d-orbital consists of three regions of high probability density: a torus with two pear-shaped regions placed symmetrically on its z axis. There are seven f-orbitals, each with shapes more complex than those of the d-orbitals. The shapes of atomic orbitals in a one-electron atom are related to 3-dimensional spherical harmonics.
Table of atomic orbitals
The following table shows all orbital configurations up to 7s. It covers the simple electronic configurations for all elements of the periodic table up to Ununbium (element 112), with the exception of Lawrencium (element 103), which would require a 7p orbital.
[The original table consists of images of the orbitals for each subshell, s (l=0), p (l=1), d (l=2), and f (l=3); the images are not reproduced here.]
Orbital energy
In atoms with a single electron (essentially the hydrogen atom), the energy of an orbital (and, consequently, of any electrons in the orbital) is determined exclusively by n. The n = 1 orbital has the lowest possible energy in the atom. Each successively higher value of n has a higher level of energy, but the difference decreases as n increases. For high values of n, the energy level becomes so high that the electron can easily escape from the atom. In atoms with multiple electrons, the energy of an electron depends not only on the intrinsic properties of its orbital, but also on its interactions with the other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on n but also on $\ell$. Higher values of $\ell$ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When $\ell$ = 2, the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s-orbital in the next higher shell; when $\ell$ = 3 the energy is pushed into the shell two steps higher. The energy order of the first 24 subshells is given in the following table.
Each cell represents a subshell, with n and $\ell$ given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. Empty cells represent subshells that do not exist.
          s     p     d     f     g
n = 1:    1
n = 2:    2     3
n = 3:    4     5     7
n = 4:    6     8     10    13
n = 5:    9     11    14    17    21
n = 6:    12    15    18    22
n = 7:    16    19    23
n = 8:    20    24
Electron placement and the periodic table
Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of all quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as s, the spin quantum number. Thus, two electrons may occupy a single orbital, so long as they have different values of s. However, only two electrons, because of their spin, can be associated with each orbital. Additionally, an electron always tries to occupy the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above. This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same $\ell$-state (but the n associated with that $\ell$-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. The number of electrons in a neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties.
Formal quantum mechanical definition of atomic orbitals
In quantum mechanics, the state of an atom, that is, the eigenstates of the atomic Hamiltonian, is expanded into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When their spin component is included, they are called atomic spin orbitals.) In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., with occupation schemes of atomic orbitals (e.g., $1s^2 2s^2 2p^6$ for the ground state of neon, term symbol ${}^1\!S_0$). This notation means that the corresponding Slater determinants have a clearly higher weight in the configuration interaction expansion. The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition.
For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless one has to keep in mind that electrons are fermions ruled by Pauli exclusion principle and cannot be distinguished from the other electrons in the atom. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about simple one-determinantal wave function at all. This is the case when electron correlation is large. Fundamentally, an atomic orbital is a one-electron wavefunction. Molecular orbitals Most methods in computational chemistry today start by calculating the molecular orbitals (MOs) of the system. A molecular orbital describes the behavior of one electron in the electric field generated by the nuclei and some average distribution of the other electrons. If two electrons occupy the same orbital, the Pauli principle demands that they have opposite spin. Qualitative discussion For an imprecise but qualitatively useful discussion of molecular structure, molecular orbitals can be obtained by the "linear combination of atomic orbitals molecular orbital method." In this approach, the molecular orbitals are expressed as linear combinations of atomic orbitals, as if each atom were on its own. Molecular orbitals were first introduced by Friedrich Hund and Robert S. Mulliken in 1927 and 1928.^[4]^[5] The linear combination of atomic orbitals approximation for molecular orbitals was introduced in 1929 by Sir John Lennard-Jones.^[6]. His ground-breaking paper showed how to derive the electronic structures of the fluorine and oxygen molecules from quantum principles. This qualitative approach to molecular orbital theory formed part of the beginning of modern quantum chemistry. Some properties: • The number of molecular orbitals is equal to the number the atomic orbitals included in the linear expansion. • If the molecule has some symmetry, the degenerate atomic orbitals (with the same atomic energy) are grouped in linear combinations (called symmetry adapted atomic orbitals (SO)) which belong to the representation of the symmetry group, so the wave functions that describe the group is known as symmetry-adapted linear combinations (SALC). • The number of molecular orbitals belonging to one group representation is equal to the number of symmetry-adapted atomic orbitals belonging to this representation. • Within a particular representation, the symmetry-adapted atomic orbitals mix more if their atomic energy levels are closer. As a simple example, consider the hydrogen molecule, H[2], with the two atoms labeled H' and H." The lowest-energy atomic orbitals, 1s' and 1s," do not transform according to the symmetries of the molecule. However, the following symmetry-adapted atomic orbitals do: 1s' - 1s" Antisymmetric combination: negated by reflection, unchanged by other operations 1s' + 1s" Symmetric combination: unchanged by all symmetry operations The symmetric combination (called a bonding orbital) is lower in energy than the basis orbitals, and the antisymmetric combination (called an antibonding orbital) is higher. Because the H[2] molecule has two electrons, they can both go in the bonding orbital, making the system lower in energy (and hence more stable) than two free hydrogen atoms. This is called a covalent bond. The bond order is equal to the number of bonding electrons minus the number of antibonding electrons, all divided by 2. 
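A minimal sketch of this bookkeeping (the helper function and the occupation counts below are illustrative, not taken from the article):

```python
def bond_order(n_bonding: int, n_antibonding: int) -> float:
    """Bond order = (bonding electrons - antibonding electrons) / 2."""
    return (n_bonding - n_antibonding) / 2.0

print(bond_order(2, 0))   # H2: two bonding electrons, none antibonding -> 1.0 (single bond)
print(bond_order(2, 2))   # He2: two bonding, two antibonding -> 0.0 (no bond forms)
print(bond_order(8, 2))   # N2 valence shell: 8 bonding vs 2 antibonding -> 3.0 (triple bond)
```

The H2 and He2 values are worked through in prose below; the N2 value anticipates the nitrogen diagram discussed later.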
In this example, there are two electrons in the bonding orbital and none in the antibonding orbital; the bond order is 1, and there is a single bond between the two hydrogen atoms. On the other hand, consider the hypothetical molecule of He[2], with the atoms labeled He' and He. Again, the lowest-energy atomic orbitals, 1s' and 1s," do not transform according to the symmetries of the molecule, while the following symmetry-adapted atomic orbitals do: 1s' - 1s" Antisymmetric combination: negated by reflection, unchanged by other operations 1s' + 1s" Symmetric combination: unchanged by all symmetry operations Similar to the molecule H[2], the symmetric combination (called a bonding orbital) is lower in energy than the basis orbitals, and the antisymmetric combination (called an antibonding orbital) is higher. However, in its neutral ground state, each helium atom contains two electrons in its 1s orbital, combining for a total of four electrons. Two electrons fill the lower energy bonding orbital, while the remaining two fill the higher energy antibonding orbital. Thus, the resulting electron density around the molecule does not support the formation of a bond (sigma bond) between the two atoms, and the molecule therefore is not formed. Another way of looking at it is that there are two bonding electrons and two antibonding electrons; therefore, the bond order is 0 and no bond exists. Ionic bonds When the energy difference between the atomic orbitals of two atoms is quite large, one atom's orbitals contribute almost entirely to the bonding orbitals, and the other's almost entirely to the antibonding orbitals. Thus, the situation is effectively that some electrons have been transferred from one atom to the other. This is called a (predominantly) ionic bond. Molecular orbital diagrams For more complicated molecules, the wave mechanics approach loses utility in a qualitative understanding of bonding (although is still necessary for a quantitative approach). The qualitative approach of MO uses a molecular orbital diagram. In this type of diagram, the molecular orbitals are represented by horizontal lines; the higher a line, the higher the energy of the orbital, and degenerate orbitals are placed on the same level with a space between them. Then, the electrons to be placed in the molecular orbitals are slotted in one by one, keeping in mind the Pauli exclusion principle and Hund's rule of maximum multiplicity (only two electrons per orbital (opposite spins); have as many unpaired electrons on one energy level as possible before starting to pair them). The hardest part is to construct the MO diagram. For a simple molecule such as H[2], we draw the diagram like this: __ σ^* __ σ The σ indicates a sigma bonding orbital, while σ^* indicates a sigma antibonding orbital. We know that the diagram looks like this because we know that two s orbitals will interact to form a σ bonding orbital and a σ antibonding orbital. Now if one considers N[2], one realizes that the two nitrogen atoms each have a filled 1s orbital, a filled 2s orbital, and three half-filled 2p orbitals. The 1s orbitals, being inner shell, do not interact (or, equivalently, they are not valence electrons, as explained by valence bond theory). The two 2s orbitals do, however, interact to create a σ[s] orbital and a σ[s]^* orbital: __ σ[s]^* __ σ[s] If we assume that the interatomic axis joining the two N atoms is the z axis, we find that the two 2p[z] orbitals are able to overlap lobe-to-lobe to create a sigma bond. 
The two 2p[x] and two 2p[y] orbitals, lying perpendicular to the z axis, interact to create four pi orbitals (two bonding, two antibonding). Finally, we must decide on the order of the orbitals. The 2s orbitals, since they were initially of lowest energy, interact to create the lowest-energy orbitals. The 2p sigma bonds must be stronger than the pi bonds, so we expect the σ[p] orbital to be lower than the π[p] orbital. However, this is not the case, primarily because of hybridization mixing the 2s and 2p orbitals. However, we do have the expected order for the σ[p]^* and π[p]^* orbitals:
___ σ[p]^*
___ ___ π[p]^*
___ σ[p]
___ ___ π[p]
___ σ[s]^*
___ σ[s]
As promised, there are 8 orbitals, the sum of the number of atomic orbitals (4+4) which combined to create the molecular orbitals. The total number of electrons is then 10 (five valence electrons from each atom). Two go into the σ[s] orbital; two go into the σ[s]^* orbital; four into the two π[p] orbitals, and two go into the σ[p] orbital. The sigma bond order is the total number of electrons in sigma bonding orbitals (4), minus the total number of electrons in sigma antibonding orbitals (2), all over 2, giving (4-2)/2 = 1. The pi bond order is computed similarly, giving (4 - 0)/2 = 2. Adding these together gives the total bond order. In this case the lowest two orbitals "cancel out"; there is one sigma bond and two pi bonds. Dinitrogen therefore has a triple bond. Finally, we know that diatomic nitrogen is diamagnetic since there are no unpaired electrons in the diagram. This diagram, however, is not applicable to molecules of oxygen, fluorine, and neon. Because of the higher electronegativity of these elements, the formation of hybrid orbitals is less important, and thus we get the "expected" order of energy levels:
___ σ[p]^*
___ ___ π[p]^*
___ ___ π[p]
___ σ[p]
___ σ[s]^*
___ σ[s]
The observation that the formation of hybrid orbitals is much less energetically favorable for smaller, more electronegative atoms (which are found in the first row) is due to the energy difference in the atom between the 2s and 2p orbitals. This energy difference increases from left to right along a row and from top to bottom down a column of the periodic table, so it is highest for fluorine, which has the lowest mixing of s and p in the MOs. Mixing is most important when the energy difference is small. If we were working with diatomic oxygen, we would use this MO diagram. In this case, there would be 12 electrons to place into molecular orbitals; the first ten go into the five orbitals of lowest energy; the last two, however, occupy separate π[p]^* orbitals. The bond order is reduced to 2 since this is an antibonding orbital; also, the two unpaired electrons make liquid oxygen paramagnetic, which is not explained by the localized electron model. A further observation is that molecular orbital theory explains why the dicarbon molecule, C[2], does not contain a quadruple bond in its ground state although it would complete the octet: there are four bonding orbitals, but the top three cannot be occupied before one antibonding orbital is occupied.
More quantitative approach
To obtain quantitative values for the molecular energy levels, one needs to have molecular orbitals which are such that the configuration interaction (CI) expansion converges fast towards the full CI limit. The most common method to obtain such functions is the Hartree-Fock method, which expresses the molecular orbitals as eigenfunctions of the Fock operator.
One usually solves this problem by expanding the molecular orbitals as linear combinations of gaussian functions centered on the atomic nuclei (see linear combination of atomic orbitals and basis set (chemistry)). The equation for the coefficients of these linear combinations is a generalized eigenvalue equation known as the Roothaan equations which are in fact a particular representation of the Hartree-Fock equation. Simple accounts often suggest that experimental molecular orbital energies can be obtained by the methods of ultraviolet photoelectron spectroscopy for valence orbitals and X-ray photoelectron spectroscopy for core orbitals. This, however, is incorrect as these experiments measure the ionization energy, the difference in energy between the molecule and one of the ions resulting from the removal of one electron. Ionization energies are linked approximately to orbital energies by Koopmans' theorem. While the agreement between these two values can be close for some molecules, it can be poor in other cases. HOMO and LUMO are acronyms for highest occupied molecular orbital and lowest unoccupied molecular orbital, respectively. The difference between the energies of the HOMO and LUMO, termed the band gap can sometimes serve as a measure of the excitability of the molecule: the smaller the energy, the more easily it will be excited. The HOMO level is to organic semiconductors what the valence band is to inorganic semiconductors. The same analogy exists between the LUMO level and the conduction band. The energy difference between the HOMO and LUMO level is regarded as band gap energy. When the molecule forms a dimer or an aggregate, the proximity of the orbitals of the different molecules induce a splitting of the HOMO and LUMO energy levels. This splitting produces vibrational sublevels, each of which has its own energy, slightly different from that of another. There are as many vibrational sublevels as there are molecules that interact together. When there are enough molecules influencing each other (such as in an aggregate), there are so many sublevels that we no longer perceive their discrete nature: they form a continuum. We no longer consider energy levels, but energy bands. See also • Chang, Raymond. 2006. Chemistry, 9th ed. New York: McGraw-Hill. ISBN 0073221031. • Daintith, J. 2004. Oxford Dictionary of Chemistry. New York: Oxford University Press. ISBN 0198609183. • Kutzelnigg, Werner, "Friedrich Hund and Chemistry" (on the occasion of Hund's 100th birthday), Angewandte Chemie 35 (1996): 573-586. • Pope, Martin, and Charles E. Swenberg. 1999. Electronic Processes in Organic Crystals and Polymers, 2nd ed. New York: Oxford University Press. ISBN 0195129636. • Tipler, Paul, and Ralph Llewellyn. 2003. Modern Physics, 4th ed. New York: W.H. Freeman. ISBN 0716743450. External links New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. 
{"url":"http://www.newworldencyclopedia.org/entry/Orbital","timestamp":"2014-04-18T23:24:08Z","content_type":null,"content_length":"70568","record_id":"<urn:uuid:ac0c661d-6cef-424b-9e38-8e6dfb9d8322>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
What is game theory and what are some of its applications? Saul I. Gass, professor emeritus at the University of Maryland's Robert H. Smith School of Business, explains. Game: A competitive activity involving skill, chance, or endurance on the part of two or more persons who play according to a set of rules, usually for their own amusement or for that of spectators ( The Random House Dictionary of the English Language,1967). Consider the following real-world competitive situations: missile defense, sales price wars for new cars, energy regulation, auditing tax payers, the TV show "Survivor," terrorism, NASCAR racing, labor- management negotiations, military conflicts, bidding at auction, arbitration, advertising, elections and voting, agricultural crop selection, conflict resolution, stock market, insurance, and telecommunications. What do they have in common? A basic example helps to illustrate the point. After learning how to play the game tick-tack-toe, you probably discovered a strategy of play that enables you to achieve at least a draw and even win if your opponent makes a mistake and you notice it. Sticking to that strategy ensures that you will not lose. This simple game illustrates the essential aspects of what is now called game theory. In it, a game is the set of rules that describe it. An instance of the game from beginning to end is known as a play of the game. And a pure strategy--such as the one you found for tick-tack-toe--is an overall plan specifying moves to be taken in all eventualities that can arise in a play of the game. A game is said to have perfect information if, throughout its play, all the rules, possible choices, and past history of play by any player are known to all participants. Games like tick-tack-toe, backgammon and chess are games with perfect information and such games are solved by pure strategies. But whereas you may be able to describe all such pure strategies for tick-tack-toe, it is not possible to do so for chess, hence the latter's age-old intrigue. Games without perfect information, such as matching pennies, stone-paper-scissors or poker offer the players a challenge because there is no pure strategy that ensures a win. For matching pennies you have two pure strategies: play heads or tails. For stone-paper-scissors you have three pure strategies: play stone or paper or scissors. In both instances you cannot just continually play a pure strategy like heads or stone because your opponent will soon catch on and play the associated winning strategy. What to do? We soon learn to try to confound our opponent by randomizing our choice of strategy for each play (for heads-tails, just toss the coin in the air and see what happens for a 50-50 split). There are also other ways to control how we randomize. For example, for stone-paper-scissors we can toss a six-sided die and decide to select stone half the time (the numbers 1, 2 or 3 are tossed), select paper one third of the time (the numbers 4 or 5 are tossed) or select scissors one sixth of the time (the number 6 is tossed). Doing so would tend to hide your choice from your opponent. But, by mixing strategies in this manner, should you expect to win or lose in the long run? What is the optimal mix of strategies you should play? How much would you expect to win? This is where the modern mathematical theory of games comes into play. Games such as heads-tails and stone-paper-scissors are called two-person zero-sum games. 
Zero-sum means that any money Player 1 wins (or loses) is exactly the same amount of money that Player 2 loses (or wins). That is, no money is created or lost by playing the game. Most parlor games are many-person zero-sum games (but if you are playing poker in a gambling hall, with the hall taking a certain percentage of the pot to cover its overhead, the game is not zero-sum). For two-person zero-sum games, the 20th century's most famous mathematician, John von Neumann, proved that all such games have optimal strategies for both players, with an associated expected value of the game. Here the optimal strategy, given that the game is being played many times, is a specialized random mix of the individual pure strategies. The value of the game, denoted by v, is the value that a player, say Player 1, is guaranteed to at least win if he sticks to the designated optimal mix of strategies no matter what mix of strategies Player 2 uses. Similarly, Player 2 is guaranteed not to lose more than v if he sticks to the designated optimal mix of strategies no matter what mix of strategies Player 1 uses. If v is a positive amount, then Player 1 can expect to win that amount, averaged out over many plays, and Player 2 can expect to lose that amount. The opposite is the case if v is a negative amount. Such a game is said to be fair if v = 0. That is, both players can expect to win 0 over a long run of plays. The mathematical description of a zero-sum two-person game is not difficult to construct, and determining the optimal strategies and the value of the game is computationally straightforward. We can show that heads-tails is a fair game and that both players have the same optimal mix of strategies that randomizes the selection of heads or tails 50 percent of the time for each. Stone-paper-scissors is also a fair game and both players have optimal strategies that employ each choice one third of the time. Not all zero-sum games are fair, although most two-person zero-sum parlor games are fair games. So why do we then play them? They are fun, we like the competition, and, since we usually play for a short period of time, the average winnings could be different than 0. Try your hand at the following game that has a v = 1/5. The Skin Game: Two players are each provided with an ace of diamonds and an ace of clubs. Player 1 is also given the two of diamonds and Player 2 the two of clubs. In a play of the game, Player 1 shows one card, and Player 2, ignorant of Player 1's choice, shows one card. Player 1 wins if the suits match, and Player 2 wins if they do not. The amount (payoff) that is won is the numerical value of the card of the winner. But, if the two deuces are shown, the payoff is zero. [Here, if the payoffs are in dollars, Player 1 can expect to win $0.20. This game is a carnival hustler's (Player 1) favorite; his optimal mixed strategy is to never play the ace of diamonds, play the ace of clubs 60 percent of the time, and the two of diamonds 40 percent of the time.] The power of game theory goes way beyond the analysis of such relatively simple games, but complications do arise. We can have many-person competitive situations in which the players can form coalitions and cooperate against the other players; many-person games that are nonzero-sum; games with an infinite number of strategies; and two-person nonzero sum games, to name a few. Mathematical analysis of such games has led to a generalization of von Neumann's optimal solution result for two-person zero-sum games called an equilibrium solution.
An equilibrium solution is a set of mixed strategies, one for each player, such that each player has no reason to deviate from that strategy, assuming all the other players stick to their equilibrium strategy. We then have the important generalization of a solution for game theory: Any many-person non-cooperative finite strategy game has at least one equilibrium solution. This result was proven by John Nash and was pictured in the movie, A Beautiful Mind. The book (A Beautiful Mind, by Sylvia Nasar; Simon & Schuster, 1998) provides a more realistic and better-told story. By now you have concluded that the answer to the opening question on competitive situations is "game theory." Aspects of all the cited areas have been subjected to analysis using the techniques of game theory. The web site www.gametheory.net lists about 200 fairly recent references organized into 20 categories. It is important to note, however, that for many competitive situations game theory does not really solve the problem at hand. Instead, it helps to illuminate the problem and offers us a different way of interpreting the competitive interactions and possible results. Game theory is a standard tool of analysis for professionals working in the fields of operations research, economics, finance, regulation, military, insurance, retail marketing, politics, conflict analysis, and energy, to name a few. For further information about game theory see the aforementioned web site and http://william-king.www.drexel.edu/top/eco/game/game.html.
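The Skin Game described above is small enough to solve mechanically, which makes a nice illustration of "computationally straightforward." The following Python sketch is an aside, not part of the article: the payoff matrix is my own transcription of the stated rules, and SciPy's linear-programming routine is assumed to be available. It computes the row player's (Player 1's) maximin mixed strategy and the value of the game.

    import numpy as np
    from scipy.optimize import linprog

    # Rows: Player 1 shows ace of diamonds, ace of clubs, two of diamonds.
    # Columns: Player 2 shows ace of diamonds, ace of clubs, two of clubs.
    # Entries are payoffs to Player 1 under the rules quoted above.
    A = np.array([[ 1, -1, -2],
                  [-1,  1,  1],
                  [ 2, -1,  0]], dtype=float)
    m, n = A.shape

    # Variables: x_1..x_m (Player 1's mix) and v (game value).
    # Maximize v  <=>  minimize -v, subject to (A^T x)_j >= v for every column j, sum(x) = 1.
    c = np.concatenate([np.zeros(m), [-1.0]])
    A_ub = np.hstack([-A.T, np.ones((n, 1))])       # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]       # probabilities >= 0, v unrestricted

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("Player 1 mix:", np.round(res.x[:m], 3))  # approximately [0.0, 0.6, 0.4]
    print("Game value:", round(res.x[m], 3))        # 0.2, i.e. v = 1/5

The solution matches the hustler's strategy quoted above - never the ace of diamonds, the ace of clubs 60 percent of the time, the two of diamonds 40 percent of the time - with expected winnings of $0.20 per play.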
{"url":"http://www.scientificamerican.com/article/what-is-game-theory-and-w/","timestamp":"2014-04-16T08:10:34Z","content_type":null,"content_length":"64637","record_id":"<urn:uuid:35d07247-5b69-49ac-a633-78b6f99321bc>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Solution of a system of nonlinear ODEs
December 26th 2010, 06:20 PM #1
Aug 2010
Solution of a system of nonlinear ODEs
I have a system of nonlinear ODEs that are of this form, where x are my states, F, G and H are matrices (and also functions of the state and rate of change of state), and R is the forces on the system. u is a user input to the system. So, I need to solve these equations. At the moment I'm using a time-marching method, which is good enough, but I'd like to explore other solutions that may save some time. In particular I'm looking for a solution that may be very quick, and not necessarily accurate, but provide acceptable qualitative values that could be used to develop a control or optimisation algorithm for the system. I've looked at potentially using asymptotic methods to approximate the solution, but I'm not sure how to apply them. The advantage of the particular application of these equations is that the frequency response will be very low, so the first approximation for the asymptotic method could be the solution where all rates of change of my system state are zero, and build from there. Anyone with knowledge of asymptotic methods, or who has any ideas on the equation, please message me. I'd be interested to hear your thoughts.
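The equation itself did not survive in this copy of the post, so what follows is only a generic sketch of the time-marching approach the poster describes, under the assumption of a second-order form F(x, x')·x'' + G(x, x')·x' + H(x, x')·x = R(u). Every name here (F, G, H, R, u, and the toy matrices) is a placeholder of mine, not the poster's model, and SciPy is assumed to be available.

    import numpy as np
    from scipy.integrate import solve_ivp

    def make_rhs(F, G, H, R, u):
        """Convert the assumed second-order system to first order: y = [x, xdot]."""
        def rhs(t, y):
            n = y.size // 2
            x, xdot = y[:n], y[n:]
            # Solve F(x, xdot) * xddot = R(u(t)) - G(x, xdot) * xdot - H(x, xdot) * x
            xddot = np.linalg.solve(F(x, xdot), R(u(t)) - G(x, xdot) @ xdot - H(x, xdot) @ x)
            return np.concatenate([xdot, xddot])
        return rhs

    # Toy two-state example with constant matrices, just to show the mechanics.
    F = lambda x, xd: np.eye(2)
    G = lambda x, xd: 0.1 * np.eye(2)
    H = lambda x, xd: np.array([[2.0, -1.0], [-1.0, 2.0]])
    R = lambda u: np.array([u, 0.0])
    u = lambda t: 1.0

    sol = solve_ivp(make_rhs(F, G, H, R, u), (0.0, 10.0), np.zeros(4), max_step=0.05)
    print(sol.y[:2, -1])   # the two states at t = 10

The "very quick, not necessarily accurate" approximation mentioned in the post would correspond to dropping the rate terms and solving H(x, 0)·x = R(u) algebraically for each input of interest, rather than integrating in time.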
{"url":"http://mathhelpforum.com/differential-equations/166926-solution-system-nonliear-odes.html","timestamp":"2014-04-19T20:36:12Z","content_type":null,"content_length":"30212","record_id":"<urn:uuid:a815e588-3af4-414e-a71d-cd27bcb9143b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
I wrote code to calculate the 1000th prime number: http://dpaste.com/hold/640512/ . It works, but upon reviewing, I realized I do not know why. Specifically, the function that determines prime-ness at line 20 is not in the "while loop" (lines 23-29). What is causing or commanding "x + 2" in lines 25 or 29 to be fed back into the function at line 20? It also works if I put the function in after line 29 as a recursion http://dpaste.com/hold/640514/ . Would appreciate if someone could explain the mechanics of what's going on. Thanks a lot. @MIT 6.00 Intro Co…
Your prime test function is in the while loop on line 24 and 26. As for what causes your function to iterate, it is because you are using a while loop which will do whatever is in it while a condition is or is not met... does that make sense?
Yeah, that does make sense, thanks a lot. I was thinking about lines 24 and 26 as checking the result of 20, but now I see what's going on.
Further in view of this explanation, I see that the function at line 20 is superfluous, since all the action is taking place in the while loop. I deleted line 20 and it returns the correct result.
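The dpaste links above are no longer available, so the following Python sketch is only a reconstruction of the general structure the thread is talking about - a primality helper called from inside a while loop that advances the candidate by 2 - and not the poster's actual code or line numbering.

    def is_prime(x):
        if x < 2:
            return False
        if x % 2 == 0:
            return x == 2
        d = 3
        while d * d <= x:
            if x % d == 0:
                return False
            d += 2
        return True

    count, x = 1, 1          # count 2 as the first prime, then test odd candidates only
    while count < 1000:
        x += 2               # the "x + 2" step: the loop body produces the next candidate
        if is_prime(x):      # the helper is re-run on every pass of the while loop
            count += 1
    print(x)                 # 7919, the 1000th prime

The point made in the replies holds here too: nothing "feeds" the candidate back into the helper except the while loop itself, which re-executes its body (including the call to the helper) until its condition fails.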
{"url":"http://openstudy.com/updates/4ea617d8e4b05519bdb374bd","timestamp":"2014-04-18T16:12:19Z","content_type":null,"content_length":"34091","record_id":"<urn:uuid:ada9048a-48f8-4c64-8cb9-bd840676d849>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help pinpointing a problem 05-12-2007 #1 Registered User Join Date May 2007 Need help pinpointing a problem I just cant figure out where i did wrong in my maxheap implementation... i didnt even discover it had a problem until i changed it to a minheap to use in a disjktra i'm pretty sure it's related to the fact that arrays in C start at 0, but i cant seem to find the error... so i tought an outsider eye couldnt pinpoint it since i probably keep making the same error over and over again here's the code i apreciated all your help *Edited code to hv a main and it now shows my problem* #include <stdio.h> #include <stdlib.h> #define LEFT(x) (2*x) /* left child of a node */ #define RIGHT(x) ((2*x)+1) /* right child of a node */ #define PARENT(x) (x/2) /* parent of a node */ #define SWAP(tmp,x,y) tmp = x ; x = y ; y = tmp /* swap to variables */ void max_heapify(int A[], int i, int size) { int l, r, largest, temp; l = LEFT(i); r = RIGHT(i); if(l <= size && A[l-1] > A[i-1]) largest = l; else largest = i; if(r <= size && A[r-1] > A[largest-1]) largest = r; if(largest != i) { SWAP(temp, A[i-1], A[largest-1]); max_heapify(A, largest, size); void build_max_heap(int A[], int size) { int i; for(i = (size/2); i>0; i--) max_heapify(A, i, size); int heap_maximum(int A[]) { return A[0]; int heap_extract_max(int A[], int *size) { int max; if(*size < 1) { printf("Heap underflow.\n"); return -1; max = A[0]; A[0] = A[(*size)-1]; *size = (*size)-1; max_heapify(A, 1, *size); return max; void heap_increase_key(int A[], int i, int key) { int temp; if(key<A[i]) { printf("New key is smaller than current key.\n"); A[i] = key; while(i > 0 && A[PARENT(i)] < A[i]) { SWAP(temp, A[i], A[PARENT(i)]); i = PARENT(i); void max_heap_insert(int A[], int *size, int max_size, int key) { if(*size >= max_size) { printf("Heap capacity exceeded, new element not added.\n"); *size = (*size)+1; A[*size]=-1; /*should be - infinity*/ heap_increase_key(A, *size, key); int main(int argc, char *argv[]) { int maxsize, size, i; int *array; array = malloc(maxsize*sizeof(int)); while(i<maxsize) { build_max_heap(array, size); for(i=0; i< size; i++) printf("&#37;i ", array[i]); heap_extract_max(array, &size); for(i=0; i< size; i++) printf("%i ", array[i]); max_heap_insert(array, &size, maxsize, 2); for(i=0; i< size; i++) printf("%i ", array[i]); heap_extract_max(array, &size); for(i=0; i< size; i++) printf("%i ", array[i]); max_heap_insert(array, &size, maxsize, 10); for(i=0; i< size; i++) printf("%i ", array[i]); return 0; Last edited by alwaystired; 05-12-2007 at 10:15 AM. > i'm pretty sure it's related to the fact that arrays in C start at 0, but i cant seem to find the error Some signs of where you could be going wrong - using <= rather than < for checking whether something is a valid subscript. - excessive use of -1 to convert a 1-based index into a 0-based index. I have to say that your use of 'i' and 'l' as variable names, and lots of '1' adjustments makes for very hard reading of the code. It would also be useful to post a main() which calls these functions. Maybe they work just fine, and the problem is in the way you're calling them. If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut. If at first you don't succeed, try writing your phone number on the exam paper. I support http://www.ukip.org/ as the first necessary step to a free Europe. i edited it to have the necessary information to run the test that i'm failing... 
sorry for the crude main, but it does the job end result after running it: 7 3 -1 0 -1 3 0 -1 -1 3 2 0 -1 -1 2 -1 0 -1 10 2 -1 -1 -1 as u can see, in the last insert i get 10 2 -1 -1 -1... but the result from the extract before it, as you can see is 2 -1 0 -1, so where did the 0 go? i cant seem to find any reason to my algorithm replacing the 0 with -1 also, i had to use the -1 since the way u discover the left and right is by multiplication, so 0 wouldnt work there.. so i call it with 1 and adjust with -1 later anyone? i'm still stuck at this *edit* well, i now know that the problem is in heap_increase_key... Last edited by alwaystired; 05-13-2007 at 10:18 AM. #include <stdio.h> #include <stdlib.h> #include <assert.h> #define LEFT(x) (2*x) /* left child of a node */ #define RIGHT(x) ((2*x)+1) /* right child of a node */ #define PARENT(x) (x/2) /* parent of a node */ #define SWAP(tmp,x,y) tmp = x ; x = y ; y = tmp /* swap to variables */ #define ARANGE(x,lim) assert( (x)>=0 && (x)<lim ) void max_heapify(int A[], int i, int size) { int l, r, largest, temp; l = LEFT(i); r = RIGHT(i); if(l <= size && A[l-1] > A[i-1]) largest = l; else largest = i; if(r <= size && A[r-1] > A[largest-1]) largest = r; if(largest != i) { SWAP(temp, A[i-1], A[largest-1]); max_heapify(A, largest, size); I compiled with these ARANGE checks and got $ ./a.exe assertion "(l-1)>=0 && (l-1)<size" failed: file "foo.c", line 20 If you run this in the debugger, the debugger should catch the assert, and allow you to look around the code to figure out where it all went wrong (why is the subscript apparently out of range). When you understand why, then you can fix the code. If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut. If at first you don't succeed, try writing your phone number on the exam paper. I support http://www.ukip.org/ as the first necessary step to a free Europe. fixed the problem, thnx for all the help one last question, i changed the heap to receive a array of structs instead of ints, and i'm having a problem that sometimes 1 insert changes 2 values... so i'm pretty sure it something related to the pointers if the swap macro receives pointers instead of ints does it work? if not, could u explain me how to do it? i'm doing something in the lines of A[*size-1]->value=10; and it's chaging both the *size-1 position of the array and another one at the same time... 05-12-2007 #2 05-12-2007 #3 Registered User Join Date May 2007 05-13-2007 #4 Registered User Join Date May 2007 05-13-2007 #5 05-13-2007 #6 Registered User Join Date May 2007
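As an aside on the 0-based versus 1-based indexing being hunted above: the usual alternative to keeping 1-based heap arithmetic and subtracting 1 at every array access is to compute children directly in 0-based form (left = 2i+1, right = 2i+2, parent = (i-1)/2). The sketch below shows that convention; it is written in Python purely for brevity and is not the poster's C code.

    # Illustrative only: 0-based max-heapify, so no -1 adjustments are needed anywhere.
    def max_heapify(A, i, size):
        while True:
            l, r = 2 * i + 1, 2 * i + 2
            largest = i
            if l < size and A[l] > A[largest]:
                largest = l
            if r < size and A[r] > A[largest]:
                largest = r
            if largest == i:
                return
            A[i], A[largest] = A[largest], A[i]   # swap, then keep sifting down
            i = largest

    def build_max_heap(A):
        for i in range(len(A) // 2 - 1, -1, -1):
            max_heapify(A, i, len(A))

    A = [3, 9, 2, 7, 1, 8]
    build_max_heap(A)
    print(A[0])   # 9, the maximum

On the closing question about structs: swapping two pointers with a SWAP-style macro is fine in itself; one common cause of "one insert changes two values" is that two slots of the array end up holding the same pointer, so a write through either slot is visible in both. That is only a guess at the symptom described, not a diagnosis of the actual code.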
{"url":"http://cboard.cprogramming.com/c-programming/89777-need-help-pinpointing-problem.html","timestamp":"2014-04-17T23:07:45Z","content_type":null,"content_length":"64562","record_id":"<urn:uuid:b4474f55-29a8-4ce1-950f-b9963008b2e0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
basic logarithm problem
March 16th 2008, 06:58 PM #1
Sep 2007
basic logarithm problem
Hi, I just don't get this sort of logarithm problem, where the same variable appears in two places in the equation. The problem is: Solve for x: log x + log(x+3) = 1. thanks for the help =]
March 16th 2008, 07:02 PM #2
Rewrite using log properties, then rewrite in exponential form: $\log x + \log(x+3) = \log\big(x(x+3)\big) = 1$, so $x(x+3)=10 \iff x^2+3x-10=0$. So x=2 or x=-5. But if you plug x=-5 into the original equation you end up with the log(-5) which is undefined. So the only solution is x=2.
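A quick numerical check of the two candidate roots (an illustrative aside, not part of the thread):

    import math
    for x in (2, -5):
        try:
            print(x, math.log10(x) + math.log10(x + 3))   # x = 2 gives 1.0 (up to rounding)
        except ValueError:
            print(x, "rejected: logarithm of a negative number is undefined")

Only x = 2 survives, in agreement with the solution above.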
{"url":"http://mathhelpforum.com/algebra/31184-basic-logarithm-problem.html","timestamp":"2014-04-16T10:28:11Z","content_type":null,"content_length":"33428","record_id":"<urn:uuid:c05aa666-1753-400f-9e29-4da29b8e9f1f>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: Graphing table of odds ratios generated by e.g., gologit2 [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] RE: st: Graphing table of odds ratios generated by e.g., gologit2 From "Nick Cox" <n.j.cox@durham.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject RE: st: Graphing table of odds ratios generated by e.g., gologit2 Date Fri, 17 Jul 2009 18:04:40 +0100 Thanks for this. Note that the code else { replace `coefficient'=. in `num' replaces missings with missings, to no useful purpose. On your question, I am guessing that you want the items of `eqlist' to end up as value labels. tokenize "`eqlist'" forval i = 1/`eqnum' { local call `call' `i' "``i''" ... xla(`call') would be one way to do that. Roy Wada Thanks, Nick. Dropping variables will not bite since -preserve- is applied, but it is a poor programming style. A better way is to give labels to temp variables, which would have prevented them from showing up on the axis. The passthru options did occur to me when I started thinking about the color. I twisted it so it is now available for the scatter graph, for the qfit graph, and for both graphs (the twoway). The program now comes with options for saving the graph points should someone wants to make a graph on their own. An interesting extension is to make a similar graph across estimates, sort of like a graphic version of a regression table. It should be useful when CI is added. Feel free to chime in if someone has suggestions. If anyone knows how to pass the label values to the graph values (replace 1, 2, etc, with value labels on the axis), I would be interested. sysuse auto, clear reg3 (price mpg) (price mpg rep78) (price mpg rep78 headroom) (price mpg rep78 headroom length) paragr mpg, qfit(xtitle(X-Axis Title)) scatter(mcolor(blue)) title(Big Title) eq(myeq) coef(mycoef) *! paragr 1.0.1 16Jul2009 by roywada@hotmail.com *! parallel graphing of a coefficient across different equations prog define paragr version 8.0 syntax varlist(min=1 max=1) [, qfit QFIT2(str asis) SCATter(str asis) EQsave(string) COEFsave(string) *] local var `varlist' qui { tempname coef coefficient equation mat `coef'=e(b) local eqlist : coleq `coef' local eqlist: list clean local(eqlist) local eqlist: list uniq local(eqlist) local eqnum: word count `eqlist' gen `equation'=_n gen `coefficient'=. forval num=1/`eqnum' { local name: word `num' of `eqlist' cap mat temp`num'=`coef'[1,"`name':`var'"] if _rc==0 { local temp`num'=temp`num'[1,1] replace `coefficient'=`temp`num'' in `num' else { replace `coefficient'=. in `num' * quantile stuff if "`e(cmd)'"=="qreg" | "`e(cmd)'"=="iqreg" | "`e(cmd)'"=="sqreg" | "`e(cmd)'"=="bsqreg" { local tempname=subinstr("`name'","q",".",.) replace `equation'=`tempname' in `num' * labels label var `equation' "Equations" if "`e(cmd)'"=="qreg" | "`e(cmd)'"=="iqreg" | "`e(cmd)'"=="sqreg" | "`e(cmd)'"=="bsqreg" { label var `equation' "Quantiles" local content: var label `var' label var `coefficient' "`content'" label define vallab 0 "no" 1 "yes" label val `equation' vallab if "`qfit'"=="" & "`qfit2'"=="" { twoway (scatter `coefficient' `equation' in 1/`eqnum', `scatter'), else { twoway (scatter `coefficient' `equation' in 1/`eqnum', `scatter') /* */ (qfit `coefficient' `equation' in 1/`eqnum', `qfit2' ), `options' * save variables if "`eqsave'"~="" { local N=_N replace `equation'=. in `=`eqnum'+1'/`N' gen `eqsave'=`equation' if "`COEFsave'"~="" { gen `COEFsave'=`coefficient' } /* quiet */ > 1. 
The program works with variables -coefficient- and -equation-, > -drop-ping any existing instance of either variable. This is not > necessary and often considered to be poor Stata programming style. > 2. The options -xlabel()- and -ylabel()- are used in non-standard > In a Stata graphics context these always mean axis labels, not axis > titles. (I do realise that "axis label" often means elsewhere what > takes to be axis title, but Stata conventions are what count in > 3. Any serious user of a graphics program will want to reach through > tune any detail of the graph. This is at present only possible through > the Graph Editor. > 4. The qualifier -in 1/`eqnum'- will be faster than -if _n <= > 5. -replace-ing missings by missings is unnecessary. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-07/msg00755.html","timestamp":"2014-04-19T04:55:36Z","content_type":null,"content_length":"11360","record_id":"<urn:uuid:e1302e52-e82e-4ee8-b0e4-a0eae8e68d92>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Collaborative Mathematics Environments Next: Introduction Collaborative Mathematics Environments Paul Chew, Robert L. Constable, Keshav Pingali, Steve Vavasis, Richard Zippel Computational science will be the dominant paradigm for science in the next century. This proposal addresses one of the major challenges facing this new kind of science---the demand for better software support for computational mathematics. The task of providing this support is sufficiently central to the national interest and sufficiently comprehensive that it could serve as a Grand Challenge problem for computer science. A strategy for meeting this challenge has evolved from inter-project cooperation at Cornell on the elements of scientific computing. This proposal represents a collaboration among five computer scientists with diverse backgrounds: scientific computing, computational geometry, computer algebra, applied logic, and programming languages. In various combinations these people have worked together, and software from their separate projects has been linked. Their experience with the difficulty of this linking process has led to the identification and to the prospective solutions of three major problems: the connectivity problem, the code generation problem, and the explanation problem. The problems and their solutions are briefly explained below. This proposal outlines a plan to design and implement an open system architecture that will integrate a variety of computational science tools into an environment that supports collaborative activity. Many interesting and powerful tools exist to support computational mathematics (for example, Matlab, Lapack, Mathematica, Axiom, Ellpack, PLTMG, Autocad, and LEDA), but most of these are focused on one specific area or on one specific style of computation. These systems are largely self-contained and closed, connecting to other software only at a very low level of abstraction, using, for instance, string-to-string communication. They do not have a common semantic base that would allow one system to ``collaborate'' with another. This is the connectivity problem. To address the connectivity problem, a common mathematical bus (the MathBus) will serve as the backbone of the system. Its communication protocols will be based on a typed formal language which provides the semantics for collaboration. A major design objective is to raise the level of communication among software tools, allowing the communication of mathematical objects instead of being restricted to simple strings. Although existing software has contributed substantially to scientific programming productivity, the time taken to generate code remains a major impediment to progress in computational science. This is the code creation problem. In part, this problem is due to the difficulty of expressing certain mathematical techniques as subroutines. The problem of code creation is addressed with a method of transformation and refinement, allowing the transformation of high-level mathematical expressions into more-traditional code. One of the reasons that sharing code with a colleague is difficult is because there is no common language for explaining what a program does and for precisely giving the conditions necessary to apply it. This is the explanation problem. The solution to the connectivity problem also provides an approach to explanation, namely to provide formal and semi-formal semantic standards for communications and linkage on the proposed MathBus. 
The problem solutions outlined here lead to an additional opportunity. Once tools can inter-operate and mathematical models can be shared, it becomes possible to create collections of mathematical theorems, explanations, and examples and counterexamples. Such a mathematical database could capture an important part of mathematical knowledge that is at best poorly represented by books and journals.
{"url":"http://www.cs.cornell.edu/Info/Projects/NuPrl/documents/colmath/it.html","timestamp":"2014-04-17T06:43:46Z","content_type":null,"content_length":"10753","record_id":"<urn:uuid:115ae0e2-ae25-448f-aaf0-43760db39901>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
(Geometry! Pythagorean Theorem I think!) What is the length of the missing side of the right triangle shown below?
you're correct in assuming you'll use Pythagorean theorem.
do you know the equation?
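Since the triangle figure itself is not reproduced in this copy, here is the generic relation the replies are pointing at: for legs $a$ and $b$ and hypotenuse $c$ (the side opposite the right angle),

$$a^2 + b^2 = c^2,$$

so a missing hypotenuse is $c = \sqrt{a^2 + b^2}$ and a missing leg is $a = \sqrt{c^2 - b^2}$. The actual numbers depend on the figure, which is not shown here.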
{"url":"http://openstudy.com/updates/507cadb3e4b040c161a28649","timestamp":"2014-04-17T22:11:52Z","content_type":null,"content_length":"33430","record_id":"<urn:uuid:e6f8c4bc-5edf-4b81-836f-0aa1afe060e8>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there an "elegant" non-recursive formula for these coefficients? Also, how can one get proofs of these patterns? up vote 28 down vote favorite Not sure if this is a "good" question for this forum or if it'll get panned, but here goes anyway... Consider this problem. I've been trying to find a formula to expand the "regular iteration" of "exp". Regular iteration is a special kind of complex function that is a solution of the equation $$f(z+1) = \exp(f(z))$$ (or more generally for functions other than $\exp$. It is called "regular" because as a solution it is characterized by the fact the the functional iterates $F^t(z) = f(t + f^{-1}(z))$, with $F$ being the function that is $\exp$ in this case, are "regular", or analytic, at a chosen fixed point of $F$, for all non-integer $t$. There are regular iterations for every fixed point.) This regular iteration in particular is an entire function. To get it, we take a fixed point $L$ of $\exp$ and expand a solution in powers of $L^z$. The result is to obtain a Fourier series $$f(z) = \sum_{n=0}^{\infty} a_n L^{nz}$$ $$a_0 = L$$ $$a_1 = 1$$ $$a_n = \frac{B_n(1! a_1, 2! a_2, ..., (n-1)! a_{n-1}, 0)}{n!(L^{n-1} - 1)}$$ with $B_n$ being the nth "complete" Bell polynomial. This recursive formula yields the following expansions: $$a_2 = \frac{1}{2L - 2}$$ $$a_3 = \frac{L + 2}{6L^3 - 6L^2 - 6L + 6}$$ $$a_4 = \frac{L^3 + 5L^2 + 6L + 6}{24L^6 - 24L^5 - 24L^4 + 24L^2 + 24L - 24}$$ $$a_5 = \frac{L^6 + 9L^5 + 24L^4 + 40L^3 + 46L^2 + 36L + 24}{120L^{10} - 120L^9 - 120L^8 + 240L^5 - 120L^2 - 120L + 120}$$ ... It appears that, by pattern recognition and factoring the denominators, $$a_n = \frac{\sum_{j=0}^{\frac{(n-1)(n-2)}{2}} mag_{n,j} L^j}{\prod_{j=2}^{n} j(L^{j-1} - 1)}$$ where $\mathrm{mag}_{n,j}$ is a sequence of "magic" numbers (integers) that looks like this (with the leftmost column being $j = 0$): n = 1: 1 n = 2: 1 n = 3: 2, 1 n = 4: 6, 6, 5, 1 n = 5: 24, 36, 46, 40, 24, 9, 1 n = 6: 120, 240, 390, 480, 514, 416, 301, 160, 64, 14, 1 n = 7: 720, 1800, 3480, 5250, 7028, 8056, 8252, 7426, 5979, 4208, 2542, 1295, 504, 139, 20, 1 n = 8: 5040, 15120, 33600, 58800, 91014, 124250, 155994, 177220, 186810, 181076, 163149, 134665, 102745, 71070, 44605, 24550, 11712, 4543, 1344, 265, 27, 1 n = 9: 40320, 141120, 352800, 695520, 1204056, 1855728, 2640832, 3473156, 4277156, 4942428, 5395818, 5561296, 5433412, 5021790, 4391304, 3625896, 2820686, 2056845, 1398299, 879339, 504762, 260613, 117748, 45178, 13845, 3156, 461, 35, 1 n = 10: 362880, 1451520, 4021920, 8769600, 16664760, 28264320, 44216040, 64324680, 88189476, 114342744, 141184014, 166279080, 187614312, 202901634, 210825718, 210403826, 201934358, 186191430, 164980407, 140216446, 114231817, 88934355, 66047166, 46576620, 31071602, 19460271, 11365652, 6112650, 2987358, 1298181, 488878, 153094, 37692, 6705, 749, 44, 1 But what is the simplest (or at least "reasonably" simple) non-recursive formula for these numbers, or perhaps the numerators in general? Like a sum formula, or something like that. Is there some kind of "combinatorical"-like formula here (sums/products, perhaps nested, of factorials and powers and stuff like that, binomial coefficients, special numbers, etc.)? I notice that the first column is factorials... (how can one prove that?) And regardless of the formula for the "mag", can one prove from the recurrence formula that the $a_n$ have the form given, and if so, how? Especially, how can one prove the numerator has degree $\ frac{(n-1)(n-2)}{2}$? 
Perhaps that might provide insight into how to find the formula for the "mag". The ultimate goal here is to try and obtain a series expansion for the "tetration" function $^z e$, more specifically, Kneser's tetrational function, described in Kneser's papers on solutions of $f(f (x)) = \exp(x)$ and related equations (paper is in German, I only saw the translations.). Though this may not be the best way to go, since after constructing this regular iteration function, we then need a special mapping derived from a Riemann mapping to "distort" it so it becomes real-valued at the real axis, and I don't know if there's any good way to construct Riemann mappings even as "non-closed" infinite expansions. But I'm still curious to see if at least a formula for this function is possible. EDIT: Oh, and for all its worth, apparently $$\sum_{j=0}^{\frac{(n-1)(n-2)}{2}} \mathrm{mag}_{n,j} = \frac{n!(n-1)!}{2^{n-1}}$$ if that helps any (don't see how it would, and this is not proven, I just got it by looking up the sums on the integer sequences dictionary site.). Perhaps maybe some hints as to why it has that value could help in finding the formula, though... Justification for thinking a formula exists Why do I think this even exists, when there's no guarantee that this kind of really non-trivial recurrence relation should even have a non-recursive solution in the first place? Well, for one, the fact that so much of it could be put in simple form as given, and also I did manage to come up with an explicit formula from a very roundabout way but this formula is excessively complicated and based on very general techniques. It is difficult to describe that formula here, but the outline of the process to construct it is this, for all its worth: 1. A general recurrence of the form $$A_1 = r_{1, 1}$$ $$A_n = \sum_{m=1}^{n-1} r_{n,m} A_m$$ has a non-recursive solution formula. This I found myself, but it is hideous and involves binary bit operations. This kind of recurrence is very general, and it also includes the recurrence for the Bernoulli numbers and other kinds of recurrences. 1. The "regular Schroder function" of $F(z) = e^{uz} - 1$, i.e. the function satisfying $\mathrm{RSF}(F(z)) = K \mathrm{RSF}(z)$ (sometimes called the Schroder functional equation, hence the name) which is "regular" in that it can be turned into the regular iteration of $F$ (as we do next), can be given as a Taylor series $$\mathrm{RSF}(z) = \sum_{n=1}^{\infty} A_n z^n$$ where $A_n$ is given by the recurrence-solving formula with $r_{1,1} = 1$ and $r_{n, m} = \frac{u^{n-1}}{1 - u^{n-1}} \frac{m!}{n!} S(n, m)$ (here, $S(n, m)$ is a Stirling number of the 2nd kind). This is hideous, involving lots of "binary bit manipulation" stuff such as counting 1 bits and positions of 1 bits, which have not-so-nice formulas (the latter involves a set indicator function, at least in the formulation I found myself...). Not sure at all how this could be simplified. The formulas just don't seem to lend themselves to simplification, at least not any that I know of. 1. Invert the regular Schroder function using the Lagrange inversion theorem. This can be expanded in an explicit "non-recursive" form, but it needs so-called "potential polynomials" and other complexity. Plug the huge $A_n$ formula into this. Horrific! 2. Now $U(z) = \mathrm{RSF}^{-1}(u^z)$ is a "regular iteration" of $e^{uz} - 1$, giveable as a Fourier series, or Taylor series in $u^z$. 3. 
Apply the topological conjugation to conjugate it to iteration of $e^{vz}$ by taking $v = ue^{-u}$ thus $u = -W(-v)$ (Lambert's W-function). Take $H(z) = e^{-u} z - 1$ then find $H^{-1} o U o H$. This gives a regular iteration of $e^{vz}$, thus set $v = 1$ ($u = -W(-1) = \mathrm{fixed\ point\ of\ exponential}$). Though, there may be a constant displacement of some kind offsetting this regular from the one given by the $a_n$-formula. EDIT: Oops!!!! That should be $H^{-1}(U(U^{-1}(H(U(0))) + z))$, but wait, that's just a constant-shift of $H^{-1} o U$, so just take $H^{-1} o U$ as the regular iteration of $e^{vz}$, probably displaced (in $z$) from the one we're trying to solve for by a constant, but should be structurally identical (and you can try and compute $U^{-1}(H (U(0)))$. Perhaps that is the shift required, but I don't know.). (EDIT: Apparently the step-numbering above isn't working right for some reason.) So by this, I think an explicit formula exists (though that constant-shift at the end may be a little problem, but not much, since it is immaterial to the structure of the function). I'm just interested in something simpler than this, preferably something to "fill out" the "mag" formula I gave... EDIT: Now I'm pratically sure explicit non-recursive solution is possible. Using some numerical tests, I figured the constant shift should be (for $v = 1$, i.e. $u = L$) simply -1, that is, take $H^ {-1}(U(z - 1))$ and the coefficients of the Fourier expansion will be equal to $a_n$ in explicit non-recursive form (but atrocious, hence my question, to find something more elegant. This at least evidences that an explicit non-recursive solution is possible, addressing any skeptics' concerns that it isn't and so an elegant one wouldn't exist either. And it is a good bet that if an atrocious formula exists derived from very general principles (note Step 1 above), there may be a more elegant one derived from more specific principles.). So, almost a proof. It could probably be turned into one with a little more work, though that would be much too long to post here. 1 +1 for lots of details and interesting write up. – Theo Johnson-Freyd Mar 7 '11 at 4:06 I second Theo Johnson-Freyd, a very well written exposition. Well, Mike (mike3) knows already the following link so it does not add something for the solution of an explicite formula, but for the reader unfamiliar with that question some more detailed examples may be interesting. See go.helms-net.de/math/tetdocs/APT.htm where the mag-coefficients are collected in the coefficients-matrices for the bivariate powerseries of iterates of $b^x - 1$ – Gottfried Helms Mar 7 '11 at 8:45 The sum of one row of mags can be expressed as sort of factorial of binomials: the sequence is 1,1,3,18,180,2700,... and because in the context of that powerseries I often came on something like that. We can write $1,1=1,3=1*3,18=1*3*6,180=1*3*6*10,2700=1*3*5*10*15,...$ where the factors are just the binomials. Clearly this is the same as your factorial expression, but I suppose this notion here leads to more context. – Gottfried Helms Mar 8 '11 at 14:57 add comment 3 Answers active oldest votes Let $\beta_n$ denote the flag $h$-vector (as defined in EC1, Section 3.13) of the partition lattice $\Pi_n$ (EC1, Example 3.10.4). Then $$ \mathrm{mag}_{n,{n-1\choose 2}-j} = \sum_S \ beta_n(S), $$ where $S$ ranges over all subsets of $\lbrace 1,2,\dots,n-2\rbrace$ whose elements sum to $j$. 
An explicit formula for $\beta_n(S)$ is given by $$ \beta_n(S) = \sum_{T\ subseteq S} (-1)^{|S-T|} \alpha_n(T), $$ where if the elements of $T$ are $t_1<\cdots < t_k$, then $$ \alpha_n(T) = S(n,n-t_1)S(n-t_1,n-t_2) S(n-t_2,n-t_3)\cdots S(n-t_{k-1},n-t_k). $$ Here $S(m,j)$ denotes a Stirling number of the second kind. Addendum. A combinatorial description of the mag numbers is somewhat complicated. Consider all ways to start with the $n$ sets $\lbrace 1 \rbrace,\dots, \lbrace n \rbrace$. At each step we up vote 11 take two of our sets and replace them by their union. After $n-1$ steps we will have the single set $\lbrace 1,2,\dots,n \rbrace$. An example for $n=6$ is (writing a set like $\lbrace down vote 2,3,5\rbrace$ as 235) 1-2-3-4-5-6, 1-2-36-4-5, 14-36-2-5, 14-356-2, 14356-2, 123456. At the $i$th step suppose we take the union of two sets $S$ and $T$. Let $a_i$ be the least integer $j$ accepted such that $j$ belongs to one of the sets $S$ or $T$, and some number smaller than $j$ belongs to the other set. For the example above we get $(a_1,\dots,a_5)=(6,4,5,3,2)$. If $\nu$ denotes this merging process, then let $f(\nu) = \sum i$, summed over all $i$ for which $a_i>a_{i+1}$. For the above example, $f(\nu) = 1+3+4=8$. (The number $f(\nu)$ is called the major index of the sequence $(a_1,\dots,a_{n-1})$.) Then $\mathrm{mag}_{n,{n-1\choose 2}-j}$ is the number of merging processes $\nu$ for which $f(\nu)=j$. This might look completely contrived to the uninitiated, but it is very natural within the theory of flag $h$-vectors. Thanks for this response. But what is EC1? – mike3 Jan 26 '12 at 19:39 Never mind, I saw your website. But another question: is this as simple a formula as it can get, or could there be one that sums over less than exponentially many terms? – mike3 Jan 26 '12 at 20:10 Lulz, I just posted my post seconds after you gave an answer! Oops... :) – mike3 Jan 26 '12 at 20:10 @Mike, I don't see how to get less than exponentially many terms. On the other hand, Examples 3.14.4 and 3.14.5 in EC1 apply to $\Pi_n$, allowing combinatorial interpretations of $\ beta_n(S)$ showing that these numbers are positive integers. – Richard Stanley Jan 26 '12 at 20:17 Hmm. What kind of combinatorial interpretations do these numbers have? – mike3 Jan 26 '12 at 20:45 show 5 more comments Mike, possibly you know this all, but for my own pleasure (and for the reader who's not yet much familiar with this all) I can give an explicite description for the mag-numbers. However, this is simply an explication of the terms of some series/arrays which are in fact recursive, but the recursion is so flat that we can resolve it without too much hazzle to direct references on factorials/binomials and the log of the fixpoint only. I employ the notation of (operator-)matrices which are known as Bell-/Carleman-matrices. The text became too long for the answer-field here, so I link to a pdf-file on my homepage. (If you don't like pdf there is also a html-version, but automatically generated by word and not perfect formatted) up vote 3 down Since I describe this using the known procedure of diagonalization of a triangular matrix some of your questions concerning the structure of coefficients may be answered or possibly a vote rigorous answer lays on the hand. (P.s.: if it is more convenient for the MO-readers I could upload or possibly reformat the text for mathjax, but the latter would be much unwanted work...) 
[update]: updated the pdf-file for readability However, there doesn't seem to be any "explicit non-recursive formula" for diagnonalizing a matrix... So I'm not sure of how much help this is... And even if there was, that may not necessarily offer the most elegant formula as it'd be applying a very general formula that may not give an easily-simplifiable answer. Note how I mentioned that it seems an explicit formula can already be built, but from very general techniques that result in an inelegant, seemingly too complicated solution. – mike3 Mar 10 '11 at 9:15 Hmm, the values in the eigenvector-matrix (the w-values) are finitely composed by stirling-numbers 2'nd kind and can even be expressed by sums of factorials in fractions, so this is "non-recursive" and "explicite". - However the number of involved summands increases nonlinearly with the row-index and include dependencies of the (additive) partition-scheme of the row-index of a w-value in the eigenvector-matrix W - so I'd agree, that this it not really elegant. But the example was meant to show that the explicite representation is not too strange. – Gottfried Helms Mar 10 '11 at 13:54 And I take it, there's no explicit non-recursive formula for w? – mike3 Mar 11 '11 at 2:01 Hmm, I seem to have some communication problem here. On page 3, bottom, I have under the header "removing recursion" an explicte expression of four summands for $w_{4,1}$ Each summand is the product of some stirling-numbers and the base-depending u-constant. The number of factors in the products and the number of summands depend on the rowindex r in $w_{r,c}$ I think this is -in principle- no more recursive than, say, the definition of the factorial or that of additive partioning of a natural number. But, well, I'm not going to insist, perhaps there is something in it which I don't catch. – Gottfried Helms Mar 11 '11 at 5:14 I see, but it does not seem to yield an elegant general formula. And the powers of u in one thing there look suspiciously like binary bit patterns, suggesting the binary counting (which makes the "hideous" formula I mention in the main post so inelegant and ugly) doesn't want to go away or simplify/minimize itself... :( – mike3 Mar 11 '11 at 7:29 show 1 more comment This is a comment not an answer. Here are a few apparent patterns: up vote 1 down $$mag_{n,0} &= (n-1)! \\\ mag_{n,1} &= (n-1)! \frac{n-2}{2} \\\ mag_{n,{n-1 \choose 2}} &= 1 \\\ mag_{n,{n-1 \choose 2}-1} &= {n\choose2}-1 \\\ mag_{n,{n-1 \choose 2}-2} &= \frac vote {(n-2)(n-1)n(3n-5)}{24}-1$$ Thanks, but many of those patterns I already knew. It doesn't seem to help any, and $mag_{n, {n-1 \choose 2}-3}$ does not seem to have a polynomial solution. – mike3 Mar 8 '11 at add comment Not the answer you're looking for? Browse other questions tagged sequences-and-series taylor-series fourier-analysis fa.functional-analysis co.combinatorics or ask your own question.
{"url":"http://mathoverflow.net/questions/57627/is-there-an-elegant-non-recursive-formula-for-these-coefficients-also-how-ca/57872","timestamp":"2014-04-19T12:09:58Z","content_type":null,"content_length":"88086","record_id":"<urn:uuid:7e355d56-860b-405c-8815-04cbd686bf96>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about contrasts on Serious Stats All posts tagged contrasts Posted by Thom Baguley on March 23, 2012 In section 10.4.4 of Serious stats (Baguley, 2012) I discuss the rank transformation and suggest that it often makes sense to rank transform data prior to application of conventional ‘parametric’ least squares procedures such as t tests or one-way ANOVA. There are several advantages to this approach over the usual approach (which involves learning and applying a new test such as Mann-Whitney U, Wilcoxon T or Kruskal-Wallis for almost every situation). One is pedagogic. It is much easier to teach or learn the rank transformation approach (especially if you also cover other transformations in your course). Another reason is that there are situations where widely used rank-randomization tests perform very badly, yet the rank transformation approach does rather well. In contrast, Conover and Iman (1981) show that rank transformation versions of parametric tests mimic the properties of the best known rank randomization tests (e.g., Spearman’s rho, Mann-Whitney U or Wilcoxon T) rather closely with moderate to large sample sizes. The better rank randomization tests tend to have the edge on rank transformation approaches only when sample sizes are small (and that advantage may not hold if there are many ties). The potential pitfalls of rank randomization tests is nicely illustrated with the case of the Friedman test (and related tests such as Page’s L). I’ll try and explain the problem here. Why the Friedman test is an impostor … I’ve always thought there was something odd about the way the Friedman test worked. Like most psychology students I first learned the Wilcoxon signed ranks (T) test. This is a rank randomization analog of the paired t test. It involves computing the absolute difference between paired observations, ranking them and then adding the original sign back in. Imagine that the raw data consist of the following paired measurements (A and B) from four people (P1 to P4): │ │ A │ B │ │ P1 │ 13 │ 4 │ │ P2 │ 6 │ 9 │ │ P3 │ 11 │ 9 │ │ P4 │ 12 │ 6 │ This results in the following ranks being assigned: │ │ A – B │ Rank │ │ P1 │ +9 │ +4 │ │ P2 │ -3 │ -2 │ │ P3 │ +2 │ +1 │ │ P4 │ +6 │ +3 │ The signed ranks are then used as input to a randomization (i.e., permutation) test that, if there are no ties, gives the exact probability of the observed sum of the ranks (or a sum more extreme) being obtained if the paired observations had fallen into the categories A or B at random (in which case the expected sum is zero). The basic principle here is similar to the paired t test (which is a one sample t test on the raw differences). The Friedman test is (incorrectly) generally considered to be a rank randomization equivalent of one-way repeated measures (within-subjects) ANOVA in the same way that the Wilcoxon test is a a rank randomization equivalent of paired t. It isn’t. To see why, consider three repeated measures (A, B and C) for two participants. Here are the raw scores: │ │ A │ B │ C │ │ P1 │ 6 │ 7 │ 12 │ │ P2 │ 8 │ 5 │ 11 │ Here are the corresponding ranks: The ranks for the Friedman test depend only on the order of scores within each participant – they completely ignore the differences between participants. This differs dramatically from the Wilcoxon test where information about the relative size of differences between participants is preserved. 
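For the two-participant, three-condition example above, the within-participant ranking that the Friedman procedure uses can be generated mechanically. This is an illustrative aside in Python (the post's own code, further down, is in R); it simply ranks each participant's scores within their own row.

    scores = {"P1": [6, 7, 12], "P2": [8, 5, 11]}   # conditions A, B, C
    for person, row in scores.items():
        ranks = [sorted(row).index(v) + 1 for v in row]
        print(person, ranks)
    # P1 [1, 2, 3]
    # P2 [2, 1, 3]

Note how the between-participant information (whether P2's scores run higher or lower than P1's overall) plays no part in these ranks, which is exactly the point being made here.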
Zimmerman and Zumbo (1993) discuss this difference in procedures and explain that the Friedman test (devised by the noted economist and champion of the ‘free market’ Milton Friedman) is not really a form of ANOVA but an extension of the sign test. It is an impostor. This is bad news because the sign test tends to have low power relative to the paired t test or Wilcoxon sign rank test. Indeed, the asymptotic relative efficiency relative to ANOVA of the Friedman test is .955 J/(J+1) where J is the number of repeated measures (see Zimmerman & Zumbo, 1993). Thus it is about .72 for J = 3 and .76 for J = 4, implying quite a big hit in power relative to ANOVA when the assumptions are met. This is a large sample limit, but small samples should also have considerably less power because the sign test and the Friedman test, in effect, throw information away. The additional robustness of the sign test may sometimes justify its application (as it may outperform Wilcoxon for heavy-tailed distributions), but this does not appear to be the case for the Friedman test. Thus, where one-way repeated measures ANOVA is not appropriate, rank transformation followed by ANOVA will provide a more robust test with greater statistical power than the Friedman Running one-way repeated measures ANOVA with a rank transformation in R The rank transformation version of the ANOVA is relatively easy to set up. The main obstacle is that the ranks need to be derived by treating all nJ scores as a single sample (where n is the number of observations per J repeated measures conditions – usually the number of participants). If your software arranges repeated measures data in broad format (e.g., as in SPSS) this can involve some messing about cutting and pasting columns and then putting them back (for which I would use Excel). For this sort of analysis I would in case prefer R – in which case the data would tend to be in a single column of a data frame or in a single vector anyway. The following R code using demo data from the excellent UCLA R resources runs first a friedman test, then a one-way repeated measures ANOVA and then the rank transformation version ANOVA. For these data pulse is the DV, time is the repeated measures factor and id is the subjects identifier. demo3 <- read.csv("http://www.ats.ucla.edu/stat/data/demo3.csv") friedman.test(pulse ~ time|id, demo3) lme.raw <- lme(fixed = pulse ~ time, random =~1|id, data=demo3) rpulse <- rank(demo3$pulse) lme.rank <- lme(fixed = rpulse ~ time, random =~1|id, data=demo3) It may be helpful to point out a couple of features of the R code. The Friedman test is built into R and can take formula or matrix input. Here I used formula input and specified a data frame that contains the demo data. The vertical bar notation indicates that the time factor varies within participants. The repeated measures ANOVA can be run in many different ways (see Chapter 16 of Serious stats ). Here I chose ran it as a multilevel model using the nlme package (which should still work even if the design is unbalanced). As you can see, the only difference between the code for the conventional ANOVA and the rank transformation version is that the DV is rank transformed prior to analysis. Although this example uses R, you could almost as easily use any other software for repeated measures ANOVA (though as noted it is simplest with software that take data structured in long form – with the DV in a single column or vector). 
Other advantages of the approach

The rank transformation is, as a rule, more versatile than using rank randomization tests. For instance, ANOVA software often has options for testing contrasts or correcting for multiple comparisons. Although designed for analyses of raw data, some procedures are very general and can be straightforwardly applied to the rank transformation approach – notably powerful modified Bonferroni procedures such as the Hochberg or Westfall procedures. A linear contrast can also be used to run the equivalent of a rank randomization trend test such as the Jonckheere test (independent measures) or Page's L (repeated measures). A rank transformation version of the Welch-Satterthwaite t test is also superior to the more commonly applied Mann-Whitney U test (being robust to heterogeneity of variance when sample sizes are unequal, which the Mann-Whitney U test is not).

Baguley, T. (2012, in press). Serious stats: A guide to advanced statistics for the behavioral sciences. Basingstoke: Palgrave.

Conover, W.J., & Iman, R.L. (1981). Rank transformations as a bridge between parametric and nonparametric statistics. American Statistician, 35, 124-129.

Zimmerman, D.W., & Zumbo, B.D. (1993). Relative power of the Wilcoxon test, the Friedman test, and repeated-measures ANOVA on ranks. Journal of Experimental Education, 62, 75-86.

N.B. R code formatted via Pretty R at inside-R.org
{"url":"http://seriousstats.wordpress.com/tag/contrasts/","timestamp":"2014-04-21T07:06:28Z","content_type":null,"content_length":"66745","record_id":"<urn:uuid:0a891e7e-67f5-47fe-b4fe-cc279dcd99d7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Why does the integral of cos(x) from negative infinity to positive infinity diverge? June 11th 2013, 06:40 AM Why does the integral of cos(x) from negative infinity to positive infinity diverge? Hello everyone. I understand that sin(x) is an odd function and therefore its integral over a symmetrical domain is equal to zero. In class today, I was told that this is true even for boundaries from negative infinity to positive infinity. This is because for every positive area there is a corresponding negative area traced out by the function, so the integral cancels. As an even function, cos(x) is not symmetrical about the origin, so this does not apply. However, when I look at the graph of cos(x) piecewise from negative infinity to negative pi/2, and pi/2 to infinity, it appears to be repeatedly symmetrical about various points on the x-axis. Another way of looking at it is expressing cos(x) = sin(x + pi/2). It appears that I can *convert* an even function into an odd function? Why can't I use this reasoning to find a value for the integral of cos(x) from negative to positive infinity? Namely, I'd like to assume that areas in negative x cancel with each other, and areas in positive x cancel with each other, so that I can rearrange the integral: $\int_{-\infty}^{\infty}cos{x}\,dx =\int_{\frac{-\pi}{2}}^{\frac{\pi}{2}}cos{x}\,dx$ Thank you for reading my question and any advice you might be able to give! :) June 11th 2013, 07:09 AM Re: Why does the integral of cos(x) from negative infinity to positive infinity diver Hello everyone. I understand that sin(x) is an odd function and therefore its integral over a symmetrical domain is equal to zero. In class today, I was told that this is true even for boundaries from negative infinity to positive infinity. This is because for every positive area there is a corresponding negative area traced out by the function, so the integral cancels. As an even function, cos(x) is not symmetrical about the origin, so this does not apply. However, when I look at the graph of cos(x) piecewise from negative infinity to negative pi/2, and pi/2 to infinity, it appears to be repeatedly symmetrical about various points on the x-axis. Another way of looking at it is expressing cos(x) = sin(x + pi/2). It appears that I can *convert* an even function into an odd function? Why can't I use this reasoning to find a value for the integral of cos(x) from negative to positive infinity? Namely, I'd like to assume that areas in negative x cancel with each other, and areas in positive x cancel with each other, so that I can rearrange the integral: $\int_{-\infty}^{\infty}cos{x}\,dx =\int_{\frac{-\pi}{2}}^{\frac{\pi}{2}}cos{x}\,dx$ Thank you for reading my question and any advice you might be able to give! :) Your comments apply only to finite integration limits. Please be aware that $\int_{-\infty}^{\infty} cos(x)~dx = \lim_{a \to \infty} \int_{-a}^a cos(x)~dx$ does not exist because $\lim_{x \to \infty}cos(x)$ does not exist. A similar argument is true for sin(x). June 11th 2013, 10:08 AM Re: Why does the integral of cos(x) from negative infinity to positive infinity diver Thanks for your reply, Dan. Does this mean that: also doesn't converge? June 11th 2013, 10:15 AM Re: Why does the integral of cos(x) from negative infinity to positive infinity diver Yes, it does mean that. Because $\int_{0}^{\infty}sin{x}\,dx$ does not converge neither does $\int_{-\infty}^{\infty}sin{x}\,dx$. Now $\forall B\,~\int_{-B}^{B}sin{x}\,dx=0$. 
June 11th 2013, 11:31 AM
Re: Why does the integral of cos(x) from negative infinity to positive infinity diverge?
A slight clarification. That is the "Cauchy principal value" which may exist even if the integral itself does not (and is equal to the integral if the integral converges). The correct integral is $\int_{-\infty}^\infty \cos(x)\, dx= \lim_{a\to -\infty}\lim_{b\to\infty} \int_a^b \cos(x)\, dx$. The difference is not really important here because both the integral and the "Cauchy principal value" diverge but, because sin(x) is an "odd" function, $\int_{-a}^a \sin(x)\, dx= \left[- \cos(x)\right]_{-a}^a= 0$ for all a, so that the "Cauchy principal value" is 0 while the integral $\int_{-\infty}^\infty \sin(x)\, dx= \lim_{a\to -\infty}\lim_{b\to\infty} \int_a^b \sin(x)\, dx$ does not exist, as topsquark said.

June 11th 2013, 12:10 PM
Re: Why does the integral of cos(x) from negative infinity to positive infinity diverge?
Thank you all for your helpful replies. Now in Plato's post is written $\int_{-b}^{b}\sin x\,dx = 0$ (HallsofIvy used a instead of b). These boundaries do not include infinity, right? Plato seems to have said as much by saying the value for infinity diverges, but I just wanted to clarify. Sorry to make you keep coming back here. I am a little confused since my TA said that $\int_{-\infty}^{\infty}\sin x\,dx = 0$, but Plato in his post said that this integral diverges.

June 11th 2013, 01:11 PM
Re: Why does the integral of cos(x) from negative infinity to positive infinity diverge?
Quote: (the previous post, quoted in full)
First, this post is in the calculus sub-forum, therefore it never occurred to me to consider the "Cauchy principal value". In basic calculus most of us use this definition: If $\int_{-\infty}^{c}f(x)\,dx$ and $\int_{c}^{\infty}f(x)\,dx$ both exist then we define $\int_{-\infty}^{\infty}f(x)\,dx = \int_{-\infty}^{c}f(x)\,dx+\int_{c}^{\infty}f(x)\,dx$. Because neither $\int_{c}^{\infty}\sin(x)\,dx$ nor $\int_{c}^{\infty}\cos(x)\,dx$ exists, then using that definition neither $\int_{-\infty}^{\infty}\sin(x)\,dx$ nor $\int_{-\infty}^{\infty}\cos(x)\,dx$ exists. Again, definitions do differ. So your TA needs to clarify the exact definition used here. Finally, it is true that $\int_{-c}^{c}\sin(x)\,dx=0$ and $\int_{-c}^{c}\cos(x)\,dx=2\int_{0}^{c}\cos(x)\,dx$.

June 19th 2013, 11:40 AM
Re: Why does the integral of cos(x) from negative infinity to positive infinity diverge?
Quote: (the original question, quoted in full)
A short note to the first post. $\int_{-\infty}^{\infty}\sin(u)\,\mathrm{d}u$ turns out to be $\int_{-\infty}^{\infty}\cos(v)\,\mathrm{d}v$ with the substitution $v=u-\frac{\pi}{2}$. Hence both either diverge (for sure) or converge.
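Summing up the thread (an editorial note, not one of the original posts): for every finite $a$, $\int_{-a}^{a}\sin x\,dx = \left[-\cos x\right]_{-a}^{a} = 0$ while $\int_{-a}^{a}\cos x\,dx = \left[\sin x\right]_{-a}^{a} = 2\sin a$. As $a \to \infty$ the first expression is identically $0$ (which is why the Cauchy principal value of $\int_{-\infty}^{\infty}\sin x\,dx$ is $0$), whereas $2\sin a$ oscillates between $-2$ and $2$ and has no limit. Under the standard definition, neither improper integral converges, because $\int_{0}^{\infty}\sin x\,dx$ and $\int_{0}^{\infty}\cos x\,dx$ do not exist.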
{"url":"http://mathhelpforum.com/calculus/219748-why-does-integral-cos-x-negative-infinity-positive-infinity-diverge-print.html","timestamp":"2014-04-23T16:40:17Z","content_type":null,"content_length":"23397","record_id":"<urn:uuid:a59f47c3-00bd-47d6-9c3f-815111e8a13d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Lansing, IL Algebra 2 Tutor Find a Lansing, IL Algebra 2 Tutor ...Math is a subject that never ever changes. Math is concrete. I truly believe that if one understands the basic math concepts and apply them, by using a step by step method, and then solving all math problems would become easier. 7 Subjects: including algebra 2, statistics, algebra 1, precalculus ...I have taught Microsoft Outlook to parents, high school students and attorneys. I have taught email, calendaring, contacts, rules, and managing preferences in many versions of Outlook through Outlook 2010. I was a Help Desk Manager in several Chicago law firms. 14 Subjects: including algebra 2, geometry, algebra 1, GED ...On a daily basis I solve some complex equations and write computer algorithms to solve problems often encounter in daily life. Outside I enjoy music and food! I attended Luther South and graduated from the University of Minnesota, Morris with a bachelors degree in Computer Science and Minor in Mathematics. 16 Subjects: including algebra 2, calculus, piano, algebra 1 ...Our program consisted of a "Math Lab" where beginning college students taking these classes could come in and work on their homework with free resources, including on-duty math tutors. As one of these tutors it was my responsibility to assist students with questions and to periodically check in ... 7 Subjects: including algebra 2, calculus, physics, geometry ...This commitment is not just for enjoyment, but a dedication to make a change. This dedication has warranted me many honors, awards and memories. The recent shift in my career change has allowed me to make the decision that this is my purpose in life; this is who I am. 4 Subjects: including algebra 2, algebra 1, prealgebra, probability Related Lansing, IL Tutors Lansing, IL Accounting Tutors Lansing, IL ACT Tutors Lansing, IL Algebra Tutors Lansing, IL Algebra 2 Tutors Lansing, IL Calculus Tutors Lansing, IL Geometry Tutors Lansing, IL Math Tutors Lansing, IL Prealgebra Tutors Lansing, IL Precalculus Tutors Lansing, IL SAT Tutors Lansing, IL SAT Math Tutors Lansing, IL Science Tutors Lansing, IL Statistics Tutors Lansing, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Lansing_IL_algebra_2_tutors.php","timestamp":"2014-04-19T10:14:37Z","content_type":null,"content_length":"23969","record_id":"<urn:uuid:58367377-7e0f-4ce5-89cd-20e09a2344a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Cabin John Calculus Tutor Find a Cabin John Calculus Tutor ...These courses involved solving differential equations related to applications in physics and electrical engineering. As an undergraduate student in Electrical Engineering and Physics and as a graduate student, I took courses in mathematical methods for physics and engineering. These courses inc... 16 Subjects: including calculus, physics, statistics, geometry ...Algebra 2 is one of the most challenging courses students will take, honors or non-honors. It is important that students keep up with the work level and keep up with practicing. I have found that when students don't keep up with the work load they tend to fall behind quickly, and it becomes increasingly difficult to keep up with the course. 24 Subjects: including calculus, reading, geometry, ASVAB ...In high school, calculus and physics were the two subjects that ignited my enthusiasm to become an engineer. Before graduation I was able to earn 5's on both the calc AB/BC, and physics mechanics and E&M AP exams. The difference maker was excellent teaching (I'm a Mt. 5 Subjects: including calculus, physics, SAT math, precalculus ...However, for most students I focus on their weaknesses and problem solving skills while instilling confidence and positive thinking in parallel. Usually, I help them do as many problems as possible until they grasp the underlying concept very well. When needed, I use real-life examples or creat... 14 Subjects: including calculus, chemistry, physics, geometry ...I am well qualified to teach math and science courses, but also an avid reader and very interested in English. I have lots of experience writing papers, and published 14 manuscripts in well respected journals. From this I am well suited to editing papers, and learning literature searches. 26 Subjects: including calculus, reading, GRE, writing
{"url":"http://www.purplemath.com/cabin_john_md_calculus_tutors.php","timestamp":"2014-04-19T20:11:51Z","content_type":null,"content_length":"24117","record_id":"<urn:uuid:e0236551-8768-4cfa-bbff-2f622a8c691c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Mass Without a Scale What is Mass? When I teach any introductory-level course, I always ask my students to list the important fundamental properties an object can have — properties that are unchanging under ordinary conditions. They always remember to list mass, though they don’t always remember what mass is. That’s perfectly understandable: mass is one of those fundamental concepts that’s a little slippery, not least because the ways we have to measure mass are often indirect. Here’s a 2-fold definition that works pretty well for the purposes of this post: 1. Mass is the property of an object that measures how hard it is to change its velocity (i.e., start it moving if it’s at rest, change its direction of motion, etc.); the more it resists change, the greater its mass. Photons — particles of light — are massless because they move at the same speed all the time. Gravity still affects them, but that’s another story. 2. Mass also dictates the strength of gravitational attraction: a larger mass will have a larger gravitational influence on other objects. (That this is the same mass as in definition 1 is called the equivalence principle, and is the starting point for Einstein’s general theory of relativity. The equivalence principle definitely deserves its own post at some point.) We’re spoiled in our daily lives, in an odd way: we can stand on an inexpensive scale and determine our mass in kilograms. Sure, it’s not really mass that’s being measured — the scale is measuring how much we compress a spring (or strain gauge in the case of an electronic scale), which has a simple correspondence to mass. If you have any question about why the scale isn’t actually measuring mass, just locate an elevator, stand on the scale, and — taking care to ignore the weird looks you get — watch what happens to the reading as the elevator moves. Your mass isn’t changing, but the force you exert on the scale is. At the extreme end of this kind of behavior, you get free-fall, where your weight is zero even though gravity is still acting on you. Even so, most of us aren’t going to be reading our weight on an elevator, so the scale you read in your bathroom is a good proxy for mass. (Kilograms are the standard for mass in the international system of units, but pounds are a unit of force. Mass in so-called “English” units is measured in slugs.) However, things get a lot more complicated when you can’t put an object on a scale — when the object in question is either too big or too small. In both of these cases, though, we have ways. Oh yes…we have our ways. Big Stuff: Using Gravity to Measure Mass I had been intending to write a post on this subject for a while, but the specific impetus to write it today is the arrival of the Dawn probe at the asteroid Vesta. At present, Dawn is in orbit around the asteroid, but at a large altitude for safety — Vesta’s mass is not currently known very precisely and without that, the gravitational strength is unknown. By the very act of orbiting, Dawn will be able to measure the mass of Vesta, which in turn will tell us a lot about its composition. The method for measuring mass through orbiting is also how we know the mass of Earth (through the orbit of the Moon), the mass of the Sun (through all the planets), the mass of Jupiter (through its moons), and also the mass of many stars (which are frequently in binary systems). The seed of this technique goes all the way back to Johannes Kepler, the 17th century astronomer who formulated three laws of planetary motion. 
We only need his third law, which in combination with Isaac Newton's law of gravity yields a very simple relationship between the average distance between a satellite and the object it's orbiting (usually labeled a), the length of time an orbit takes (labeled P), and the mass of the object being orbited (M). In equation form, Kepler's third law is

P² = a³ / M

(with P in years, a in astronomical units, and M in units of the Sun's mass), which isn't that hard to understand, even if your math allergies are strong:

1. The larger the mass of the object being orbited, the less time it will take a satellite to complete an orbit of a certain size;
2. If two satellites are orbiting the same object at different distances, the satellite that is farther away will take more time to complete its orbit;
3. If you can measure the size of an orbit and the time to complete an orbit, you have the mass of the object being orbited!

In the case of most planets or other objects with natural satellites, Kepler's third law is the best means we have of determining mass. In the case of moonless Mercury and Venus, the first truly accurate mass measurements were made by robotic probes, which played the role of artificial satellites; the Dawn mission will perform the same measurement for Vesta as it orbits the asteroid over the next year, then repeat the process for the largest asteroid, Ceres (also considered a dwarf planet, along with Pluto).

To summarize the story so far: without a direct way to take the mass of astronomical objects like planets, asteroids, and so forth, we rely on a detailed understanding of satellite motion to find the mass from motion. Keep that idea in mind as we turn our attention to….

Small Stuff: Using Magnetic Fields to Measure Mass

Gravity is the force of nature that holds the Solar System together, and keeps moons orbiting around their host body. On microscopic scales, other forces dominate, notably the electromagnetic force, which is responsible for holding atoms together. Mass is still going to play a role in resisting change of motion (definition 1 from above), but there won't be a set of Kepler's laws to guide us. Instead, let's look at how an electrically-charged particle behaves in a magnetic field. The figure shows a schematic view of a large magnet, and the motion of an electron within that field: the electron follows a circular orbit! The diameter of the orbit depends on how strong the magnetic field is…and the mass of the electron. If you put a proton into this setup, you will also get a circular orbit, but because the proton is much more massive than an electron, it will have a larger orbit for the same magnetic field, since it's that much harder to make it change its path of motion. It will also orbit in the opposite direction, since it's a positive charge, as opposed to the negatively-charged electron, which is a simple way to distinguish positive from negative. (Of course, you need an experiment to measure the electric charge independently of mass, but such things do exist. You may even have performed a classic version in high school or college: the Millikan oil-drop experiment.) What I've described here is just a skeleton experiment; realistic experiments (carrying names like mass spectrometers and bubble chambers) necessarily have more detailed procedures to get everything right, just as I glossed over exactly how space probes measure distance and time.
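Before moving on, here's a quick numerical illustration of the orbital method described above (a back-of-the-envelope sketch added for illustration, using rounded textbook values for the Moon's orbit rather than numbers from the post). In SI units Kepler's third law rearranges to M = 4π²a³/(GP²), and a few lines of R give the Earth's mass:

G <- 6.674e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
a <- 3.844e8          # Moon's average orbital distance, in meters (approx.)
P <- 27.32 * 86400    # Moon's sidereal orbital period, in seconds (approx.)
M <- 4 * pi^2 * a^3 / (G * P^2)
M                     # about 6.0e24 kg, close to the accepted 5.97e24 kg

(The small excess is mostly because this simple version ignores the Moon's own mass.)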
High-energy particle experiments have other ways of measuring mass as well, but things can be complicated if a particle is neutral — as with neutrinos, whose mass we still haven't determined except to say that it's much smaller than any other measured particle mass.

Farther Afield

Because mass can't be determined directly, it's a difficult physical property to measure, no matter how fundamental it is. To make it worse, things can get tricky when interactions between objects are strong. The mass of a proton inside a nucleus is not the same as its mass when it is free, for example — part of the proton's mass gets changed into energy (using Einstein's famous E = m c^2 equation) that is used to bind the nucleus together. Another challenge is that the Standard Model, the most widely-accepted theory for particles and interactions, has no way to predict the masses of elementary particles from first principles, so we don't have a theoretical prediction with which to compare our experimental results. That's all a subject for another day! Mass dictates the evolutionary path of a star or a black hole on the astronomical scale, and relates to a lot of the interestingly strange quantum properties on the smallest size scale. Two of the most important questions anyone can ask of a scientist are "how do we know? how can we measure?" Think on that as you watch the news of the Dawn probe and the Large Hadron Collider.

21 Responses to "Finding Mass Without a Scale"

1. September 4, 2011 at 09:41
Awesome read to start the day, thanks for it. I have a simpleton`s question, `p` in Kepler`s third law of motion is regularly meassured in seconds, am I wrong?

□ September 4, 2011 at 20:36
Thanks for the question! What units you use in Kepler's 3rd law depends on which form you are using. The form I teach in introductory astronomy is P^2 = a^3/M, where P is in years, a is in astronomical units, and M is mass in units of the Sun's mass. If you want to derive Kepler's 3rd law from Newton's law of gravity, you will get a version with P in seconds, but you'll pick up a few extra factors like Newton's constant G and (inevitably) π. Does this make sense?

2. September 6, 2011 at 11:33
Yes it does, since we are talking -in Kepler`s 3rd law- about the translation of Earth and other heavenly-bodies around the Sun it makes complete sense that it `P` is measured in years, which is the actual time of one full rotation around the Sun. I missed that one by lots! Hehe, now… since you touch the subject, how come one can derive Kepler`s principles for his third law of motion from Newtons equations, I mean… was it this way how Kepler himself arrived at his conclusions or did he, after making his conclusions, eventually saw fit to prove his theory by applying them to Newton`s equations? I apologize if my inquiry seems a tad squared… And also, would corroborate, G stands for gravity, right? Is that it? Or is it something completely different? If it is, would you be so kind as to explain to me this constant? Or share a link with me that does? Thanks. Your blog is great, keep up the awesome job of divulging the facts.

□ September 6, 2011 at 19:23
Kepler died before Newton was born, actually. Kepler derived his three laws from careful astronomical observations and mathematical analysis. One of the early triumphs of Isaac Newton's physics was that he could obtain Kepler's laws from his newly-formulated law of gravity and the laws of motion (which bear his name, though a lot of credit is also owed to Galileo).
In fact, there was a great debate in Newton's day about whether planetary orbits were circular (Galileo's view, following Copernicus) or elliptical (following Kepler), so Newton showing that his law of gravitation led directly to elliptical orbits was a vindication of Kepler. (And yes, G is for gravitation: it's known as Newton's gravitational constant, and its value is a measure of the strength of gravity.)

So, here's the chronology in brief: Copernicus proposed a Sun-centered Solar System in the early 16th century, with circular planetary orbits. Galileo endorsed the Copernican view when he wrote his classic works in the 17th century, but was in communication with his contemporary Kepler, who showed that elliptical orbits fit the data better and also could predict the size of an orbit based on how long it took. Newton was born the year Galileo died, and assembled the work of Kepler, Galileo, Descartes, and others into a coherent whole, showing how physics and astronomy were related subjects.

I'm glad you're enjoying the blog!

3. September 7, 2011 at 11:06
Man, you are the best, thanks for this thorough explanation of how these ideas came to fruition. I am grateful for your honesty and dedication. I salute your good-hearted and deeply intelligent

□ September 7, 2011 at 15:40
Thank you for your kind words — I greatly appreciate them!
{"url":"http://galileospendulum.org/2011/07/21/finding-mass-without-a-scale/","timestamp":"2014-04-19T09:24:38Z","content_type":null,"content_length":"90630","record_id":"<urn:uuid:d8100b50-9562-469f-bac4-93208d29fc4c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
What is game theory and what are some of its applications? Saul I. Gass, professor emeritus at the University of Maryland's Robert H. Smith School of Business, explains. Game: A competitive activity involving skill, chance, or endurance on the part of two or more persons who play according to a set of rules, usually for their own amusement or for that of spectators ( The Random House Dictionary of the English Language,1967). Consider the following real-world competitive situations: missile defense, sales price wars for new cars, energy regulation, auditing tax payers, the TV show "Survivor," terrorism, NASCAR racing, labor- management negotiations, military conflicts, bidding at auction, arbitration, advertising, elections and voting, agricultural crop selection, conflict resolution, stock market, insurance, and telecommunications. What do they have in common? A basic example helps to illustrate the point. After learning how to play the game tick-tack-toe, you probably discovered a strategy of play that enables you to achieve at least a draw and even win if your opponent makes a mistake and you notice it. Sticking to that strategy ensures that you will not lose. This simple game illustrates the essential aspects of what is now called game theory. In it, a game is the set of rules that describe it. An instance of the game from beginning to end is known as a play of the game. And a pure strategy--such as the one you found for tick-tack-toe--is an overall plan specifying moves to be taken in all eventualities that can arise in a play of the game. A game is said to have perfect information if, throughout its play, all the rules, possible choices, and past history of play by any player are known to all participants. Games like tick-tack-toe, backgammon and chess are games with perfect information and such games are solved by pure strategies. But whereas you may be able to describe all such pure strategies for tick-tack-toe, it is not possible to do so for chess, hence the latter's age-old intrigue. Games without perfect information, such as matching pennies, stone-paper-scissors or poker offer the players a challenge because there is no pure strategy that ensures a win. For matching pennies you have two pure strategies: play heads or tails. For stone-paper-scissors you have three pure strategies: play stone or paper or scissors. In both instances you cannot just continually play a pure strategy like heads or stone because your opponent will soon catch on and play the associated winning strategy. What to do? We soon learn to try to confound our opponent by randomizing our choice of strategy for each play (for heads-tails, just toss the coin in the air and see what happens for a 50-50 split). There are also other ways to control how we randomize. For example, for stone-paper-scissors we can toss a six-sided die and decide to select stone half the time (the numbers 1, 2 or 3 are tossed), select paper one third of the time (the numbers 4 or 5 are tossed) or select scissors one sixth of the time (the number 6 is tossed). Doing so would tend to hide your choice from your opponent. But, by mixing strategies in this manner, should you expect to win or lose in the long run? What is the optimal mix of strategies you should play? How much would you expect to win? This is where the modern mathematical theory of games comes into play. Games such as heads-tails and stone-paper-scissors are called two-person zero-sum games. 
Zero-sum means that any money Player 1 wins (or loses) is exactly the same amount of money that Player 2 loses (or wins). That is, no money is created or lost by playing the game. Most parlor games are many-person zero-sum games (but if you are playing poker in a gambling hall, with the hall taking a certain percentage of the pot to cover its overhead, the game is not zero-sum). For two-person zero-sum games, the 20th century's most famous mathematician, John von Neumann, proved that all such games have optimal strategies for both players, with an associated expected value of the game. Here the optimal strategy, given that the game is being played many times, is a specialized random mix of the individual pure strategies.

The value of the game, denoted by v, is the value that a player, say Player 1, is guaranteed to at least win if he sticks to the designated optimal mix of strategies no matter what mix of strategies Player 2 uses. Similarly, Player 2 is guaranteed not to lose more than v if he sticks to the designated optimal mix of strategies no matter what mix of strategies Player 1 uses. If v is a positive amount, then Player 1 can expect to win that amount, averaged out over many plays, and Player 2 can expect to lose that amount. The opposite is the case if v is a negative amount. Such a game is said to be fair if v = 0. That is, both players can expect to win 0 over a long run of plays. The mathematical description of a zero-sum two-person game is not difficult to construct, and determining the optimal strategies and the value of the game is computationally straightforward. We can show that heads-tails is a fair game and that both players have the same optimal mix of strategies that randomizes the selection of heads or tails 50 percent of the time for each. Stone-paper-scissors is also a fair game and both players have optimal strategies that employ each choice one third of the time. Not all zero-sum games are fair, although most two-person zero-sum parlor games are fair games. So why do we then play them? They are fun, we like the competition, and, since we usually play for a short period of time, the average winnings could be different than 0. Try your hand at the following game that has a v = 1/5.

The Skin Game: Two players are each provided with an ace of diamonds and an ace of clubs. Player 1 is also given the two of diamonds and Player 2 the two of clubs. In a play of the game, Player 1 shows one card, and Player 2, ignorant of Player 1's choice, shows one card. Player 1 wins if the suits match, and Player 2 wins if they do not. The amount (payoff) that is won is the numerical value of the card of the winner. But, if the two deuces are shown, the payoff is zero. [Here, if the payoffs are in dollars, Player 1 can expect to win $0.20. This game is a carnival hustler's (Player 1) favorite; his optimal mixed strategy is to never play the ace of diamonds, play the ace of clubs 60 percent of the time, and the two of diamonds 40 percent of the time.]

The power of game theory goes way beyond the analysis of such relatively simple games, but complications do arise. We can have many-person competitive situations in which the players can form coalitions and cooperate against the other players; many-person games that are nonzero-sum; games with an infinite number of strategies; and two-person nonzero-sum games, to name a few. Mathematical analysis of such games has led to a generalization of von Neumann's optimal solution result for two-person zero-sum games called an equilibrium solution.
An equilibrium solution is a set of mixed strategies, one for each player, such that each player has no reason to deviate from that strategy, assuming all the other players stick to their equilibrium strategy. We then have the important generalization of a solution for game theory: Any many-person non-cooperative finite strategy game has at least one equilibrium solution. This result was proven by John Nash and was pictured in the movie, A Beautiful Mind. The book (A Beautiful Mind, by Sylvia Nasar; Simon & Schuster, 1998) provides a more realistic and better-told story. By now you have concluded that the answer to the opening question on competitive situations is "game theory." Aspects of all the cited areas have been subjected to analysis using the techniques of game theory. The web site www.gametheory.net lists about 200 fairly recent references organized into 20 categories. It is important to note, however, that for many competitive situations game theory does not really solve the problem at hand. Instead, it helps to illuminate the problem and offers us a different way of interpreting the competitive interactions and possible results. Game theory is a standard tool of analysis for professionals working in the fields of operations research, economics, finance, regulation, military, insurance, retail marketing, politics, conflict analysis, and energy, to name a few. For further information about game theory see the aforementioned web site and http://william-king.www.drexel.edu/top/eco/game/game.html.
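As a closing numerical aside (an editorial addition, not part of the article): the Skin Game's advertised value of 1/5 is easy to check once Player 1's payoffs are tabulated from the stated rules. The matrix below is my reading of those rules, with rows for Player 1 showing the ace of diamonds, ace of clubs, or two of diamonds, and columns for Player 2 showing the ace of diamonds, ace of clubs, or two of clubs:

payoff <- rbind(c( 1, -1, -2),   # Player 1 shows the ace of diamonds
                c(-1,  1,  1),   # Player 1 shows the ace of clubs
                c( 2, -1,  0))   # Player 1 shows the two of diamonds
p1 <- c(0, 0.6, 0.4)             # never the ace of diamonds; ace of clubs 60%, two of diamonds 40%
as.vector(p1 %*% payoff)         # 0.2, 0.2, 0.6 against Player 2's three pure strategies

Whatever Player 2 does, the hustler's mix earns at least 0.2 per play on average — the value v = 1/5 quoted in the text.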
{"url":"http://www.scientificamerican.com/article/what-is-game-theory-and-w/","timestamp":"2014-04-16T08:10:34Z","content_type":null,"content_length":"64637","record_id":"<urn:uuid:35d07247-5b69-49ac-a633-78b6f99321bc>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Precalculus Archive | October 01, 2011 | Chegg.com

1. Look at this website. Use it to investigate the function f(x) = x². (We'll come back to f(x) = e^x later.)
a. What can you use this tool to demonstrate? (Use words such as limit, slope, derivative, secant,…)
2. Look at this website. Work through it, to the end (to where you choose the graph of the derivative).
a. I think you will find an error. What is it?
b. Would you recommend this website to other students? Explain why or why not.
{"url":"http://www.chegg.com/homework-help/questions-and-answers/precalculus-archive-2011-october-01","timestamp":"2014-04-18T04:18:17Z","content_type":null,"content_length":"26299","record_id":"<urn:uuid:ba60cce0-42d4-4def-8f77-c12470f467d1>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Engineering Data Analysis (with R and ggplot2) – a Google Tech Talk given by Hadley Wickham

It appears that just days ago, Google Tech Talk released a new, one hour long, video of a presentation (from June 6, 2011) made by one of the R community's more influential contributors, Hadley Wickham. This seems to be one of the better talks to send a programmer friend who is interested in getting into R.

Talk abstract

Data analysis, the process of converting data into knowledge, insight and understanding, is a critical part of statistics, but there's surprisingly little research on it. In this talk I'll introduce some of my recent work, including a model of data analysis. I'm a passionate advocate of programming that data analysis should be carried out using a programming language, and I'll justify this by discussing some of the requirement of good data analysis (reproducibility, automation and communication). With these in mind, I'll introduce you to a powerful set of tools for better understanding data: the statistical programming language R, and the ggplot2 domain specific language (DSL) for visualisation.

The video

More resources
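For readers who want an immediate taste of the ggplot2 grammar mentioned in the abstract, here is a minimal example (added for illustration) using the mpg dataset that ships with the package:

library(ggplot2)
# engine displacement on x, highway mileage on y, coloured by class of car
ggplot(mpg, aes(x = displ, y = hwy, colour = class)) +
  geom_point() +
  geom_smooth(method = "loess", se = FALSE)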
{"url":"http://www.r-statistics.com/2011/06/engineering-data-analysis-with-r-and-ggplot2-a-google-tech-talk-given-by-hadley-wickham/","timestamp":"2014-04-20T00:38:38Z","content_type":null,"content_length":"54842","record_id":"<urn:uuid:75ac9b76-a8b6-4b9d-b5b9-d234c908b4b9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Revista Brasileira de Ensino de Física
Print version ISSN 1806-1117
Rev. Bras. Ensino Fís. vol.34 no.1 São Paulo Jan./Mar. 2012

ARTIGOS GERAIS

Impedance of rigid bodies in one-dimensional elastic collisions
Impedância de corpos rígidos em colisões elásticas unidimensionais

Janilo Santos^I,^1; Bruna P.W. de Oliveira^II; Osman Rosso Nelson^I
^I Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, Natal, RN, Brazil
^II Department of Physics and Astronomy, University of Southern California, Los Angeles, CA, USA

In this work we study the problem of one-dimensional elastic collisions of billiard balls, considered as rigid bodies, in a framework very different from the classical one presented in text books. Implementing the notion of impedance matching as a way to understand efficiency of energy transmission in elastic collisions, we find a solution which frames the problem in terms of this conception. We show that the mass of the ball can be seen as a measure of its impedance and verify that the problem of maximum energy transfer in elastic collisions can be thought of as a problem of impedance matching between different media. This approach extends the concept of impedance, usually associated with oscillatory systems, to systems of rigid bodies.
Keywords: impedance, energy transmission, elastic collisions.

Neste trabalho estudamos o problema de colisões elasticas unidimensionais de bolas de bilhar, consideradas como corpos rígidos, dentro de uma abordagem muito diferente da abordagem classica apresentada nos livros textos. Implementando a noção de casamento de impedancia como uma maneira de entender eficiencia de transmissão de energia em colisões elásticas, nos encontramos uma solução que enquadra o problema em termos deste conceito. Mostramos que a massa da bola pode ser vista como uma medida de sua impedancia e verificamos que o problema de maxima transferencia de energia em colisões elasticas pode ser pensado como um problema de casamento de impedância entre meios diferentes. Esta abordagem amplia o conceito de impedância, usualmente associado a sistemas oscilatíorios, para sistemas de corpos rígidos.
Palavras-chave: impedância, transmissao de energia, colisoes elasticas.

1. Introduction

A good teacher knows the value of analogy and universality when explaining difficult concepts. A hard problem can be much simpler to elucidate when students have been exposed to a similar problem. For instance, students understand electrical forces better after they are acquainted with gravitational forces. Terms such as energy become gradually more familiar as they are encountered in a variety of contexts. In this paper we aim to introduce and enlarge the concept of impedance to undergraduates and advanced high school students by investigating energy transfer in mechanical collisions and tracing a parallel with the propagation of light in electromagnetic systems. We define the characteristic impedance of a system as the ratio between a force-like quantity and a velocity-like quantity [1]. From this definition we derive an expression for the mechanical impedance of a billiard ball, which tells how to enhance the energy transfer from one mass to another in elastic collisions. Most importantly, we investigate how impedance matching appears in mechanical systems and we compare our results with the well-known problem of impedance matching in optical systems.
2. Transmission of kinetic energy in a head-on elastic collision

Our mechanical system consists of the one-dimensional elastic non-relativistic collision between two or three particles with different masses. This is simply the popular textbook problem of one-dimensional elastically colliding billiard balls. We observe how much kinetic energy is transmitted from one ball to the other during the collision. Before we introduce the idea of impedance in mechanical systems, let us use the conservation laws of linear momentum and kinetic energy in elastic collisions to find the fraction of transmitted energy from one object to the other.

We consider three rigid billiard balls of masses $m_1$, $m_2$, and $m_3$. Let us assume that ball 1 has a finite speed and both balls 2 and 3 are at rest before the collision. After the first collision between $m_1$ and $m_2$, part of the kinetic energy from ball 1 has been transmitted to ball 2, which now has a velocity in the same direction as the initial velocity of ball 1 (see Fig. 1). Using momentum and kinetic energy conservation, one obtains, for the fraction of kinetic energy transmitted from the first to the second ball,

$$T_{12} \equiv \frac{K_{2f}}{K_{1i}} = \frac{4\mu_{12}}{(1+\mu_{12})^2}, \qquad (1)$$

where we define $\mu_{12} \equiv m_1/m_2$, $K_{1i}$ is the initial kinetic energy of ball 1 and $K_{2f}$ is the kinetic energy of ball 2 after the collision. The fraction of energy that remains in the first ball, which we consider as a "reflected" energy, is given by $R_{12} = K_{1f}/K_{1i} = (\mu_{12}-1)^2/(\mu_{12}+1)^2$, where $K_{1f}$ is the kinetic energy of the first ball after the collision. Analogously, in the second collision between balls 2 and 3, the fraction of kinetic energy that is transferred to the third ball is

$$T_{23} \equiv \frac{K_{3f}}{K_{2f}} = \frac{4\mu_{23}}{(1+\mu_{23})^2}, \qquad (2)$$

where $\mu_{23} \equiv m_2/m_3$ and $K_{3f}$ is the kinetic energy of $m_3$ after the second collision. The fraction of energy transferred from the first to the third ball in the process is given by $T_{13} = K_{3f}/K_{1i} = T_{12}\,T_{23}$, which can be written, using Eqs. (1) and (2), as

$$T_{13} = \frac{16\,\mu_{13}\,\mu_{23}^2}{(\mu_{23}+\mu_{13})^2\,(1+\mu_{23})^2}, \qquad (3)$$

where we define $\mu_{13} \equiv m_1/m_3$ and $\mu_{12}$ has been replaced with the equivalent expression $\mu_{13}/\mu_{23}$. From this equation we see that, for any fixed value of $\mu_{13}$, there are many values of $\mu_{23}$ which give different fractions of transmitted kinetic energy from the first to the third ball. We compare it with the configuration when the intermediate ball $m_2$ is removed, in which case the transferred energy is given by $T_{13} = 4\mu_{13}/(1+\mu_{13})^2$. Equating this with Eq. (3) we find two roots: $\mu_{23} = 1$ and $\mu_{23} = \mu_{13}$. The plot for this configuration is shown in Fig. 2, where we examine the behavior of (3) for two particular values of the ratio $\mu_{13}$. We observe that, for each $\mu_{13}$, there exists a range of values between $\mu_{23} = 1$ and $\mu_{13}$, such that more energy is transmitted in the presence of $m_2$ than when this intermediate mass is absent.

In order to proceed further, we ask ourselves whether this special range of values can be enlarged such that a maximum amount of kinetic energy can be transferred from the first to the third ball. Indeed, fixing $m_1$ and $m_3$, this can be obtained by taking $dT_{13}/d\mu_{23} = 0$. Investigating the second derivative, we find that $T_{13}$ has a maximum at $\mu_{23} = \sqrt{\mu_{13}}$, that is, $m_2 = \sqrt{m_1 m_3}$, and is given by

$$T_{13}^{\max} = \frac{16\,\mu_{13}}{(1+\sqrt{\mu_{13}})^4}. \qquad (4)$$

This answers our question about the value of the intermediate mass: when $m_2$ is equal to the geometric mean of $m_1$ and $m_3$, the transmitted kinetic energy is a maximum. Fig. 3 shows the behavior of Eq. (4) for several values of the ratio $\mu_{13}$ and compares it with the configuration where the intermediate ball $m_2$ is absent.
We observe that, with an intermediate ball of mass $m_2 = \sqrt{m_1 m_3}$, more kinetic energy is transmitted from $m_1$ to $m_3$ than when the intermediate ball is absent, and when $\mu_{13} = 1$, that is, $m_1 = m_3$, the presence of $m_2$ is irrelevant (the transmission coefficient is equal to unity). A similar calculation for partially elastic collisions between n masses was carried out by J.B. Hart and R.B. Herrmann [2]. We expand on their results by emphasizing the analogy with impedance matching in the following sections. If this is done in class as a demonstration, the students will be faced with the question that arises from the results: Why does the presence of the intermediate ball facilitate the transmission of energy? Wouldn't it be more reasonable to expect that the presence of an extra ball would reduce the transmission of kinetic energy? This question, as we will see in the following sections, is more easily answered if it is introduced in the context of impedance matching.

3. Impedance matching

We know from electromagnetism that the transfer of energy through the interface between different media depends on their respective values of impedance Z. For an electromagnetic wave traveling from, say, medium 1 to medium 3, the coefficients of reflection (r) and transmission (t), known as Fresnel coefficients [3], are associated with the fraction of reflected energy $R$ and transmitted energy $T$ such that $R = r^2$ and $T_{13} = (n_3/n_1)\,t^2$, where $n_1$ and $n_3$ are the indices of refraction of the media 1 and 3 respectively (and $r = 1 - t_{13}$). Although in optical systems the coefficient r is given in terms of the indices of refraction as $r = (n_3 - n_1)/(n_3 + n_1)$, more generally it can be expressed in terms of the impedances of the media as $r = (Z_3 - Z_1)/(Z_3 + Z_1)$, where $Z_i$ is the impedance of medium i. Since the sum of the reflected and transmitted parts has to be unity, we obtain $t_{13} = 2Z_1/(Z_3 + Z_1)$. Therefore, when the two media have the same impedance, all energy is transmitted and $t_{13} = 1$, $r_{13} = 0$.

This problem is similar to the mechanical problem we described in the previous section if we add an intermediate medium with impedance $Z_2$. Once again, we are interested in the energy transfer in the problem and we can ask the question: What is the value of $Z_2$ for which the transmission of energy from medium 1 to medium 3 is maximum? In order to solve this problem, we note that in this configuration the fraction of energy transmitted from medium 1 to medium 2 is $T_{12} = 4Z_1 Z_2/(Z_1 + Z_2)^2$ and the fraction transmitted from medium 2 to medium 3 is $T_{23} = 4Z_2 Z_3/(Z_2 + Z_3)^2$. Thus, the transmission coefficient from medium 1 to medium 3, $T_{13} = T_{12} T_{23}$, is given by

$$T_{13} = \frac{16\, Z_1 Z_2^2 Z_3}{(Z_1 + Z_2)^2 (Z_2 + Z_3)^2}. \qquad (5)$$

The maximum transmission ($dT_{13}/dZ_2 = 0$) occurs for $Z_2 = \sqrt{Z_1 Z_3}$: the value of $Z_2$ that allows for maximum energy transfer from medium 1 to medium 3 is the geometric mean of $Z_1$ and $Z_3$, which represents the so-called impedance matching. This derivation can be found in many advanced textbooks in electromagnetism, acoustics and optics [4]; in introductory texts of physics usually impedance matching is only briefly mentioned in the study of electric circuits [5,6]. It is worth mentioning that impedance matching is also the concept behind the anti-reflective coatings found in eyeglasses, binoculars, and other lenses. Notice that the value found for the matching impedance $Z_2$ resembles our previous result for the intermediate mass $m_2$ in section 2 ($m_2 = \sqrt{m_1 m_3}$).
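A quick numerical check of Eq. (5) and of the matching condition (an editorial sketch in R, not part of the paper) may be useful for classroom demonstrations:

T13 <- function(Z2, Z1, Z3) 16 * Z1 * Z2^2 * Z3 / ((Z1 + Z2)^2 * (Z2 + Z3)^2)
Z1 <- 1; Z3 <- 9   # two mismatched impedances (or, equally, masses m1 and m3)
opt <- optimize(T13, interval = c(0.01, 100), Z1 = Z1, Z3 = Z3, maximum = TRUE)
opt$maximum                    # close to sqrt(Z1 * Z3) = 3, the geometric mean
opt$objective                  # the matched transmission, 0.5625 for these values
4 * Z1 * Z3 / (Z1 + Z3)^2      # only 0.36 with the intermediate medium removed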
4. Impedance of a rigid billiard ball

We now return to the concept of impedance as the ratio between a force-like quantity and a velocity-like quantity [1] in order to find out what would play the role of impedance in a mechanical system such as rigid billiard balls. As investigated in Section 2, in these collisions the force-like quantity is not simply the force F due to the collision, but the integrated effect of this force during the collision time $\Delta t = t_f - t_i$. This is the impulse J that the target ball receives from the incident one. Here, $t_i$ and $t_f$ are the initial and final time of collision, respectively. Therefore, in the general case of a frontal collision between two balls in which the target ball is at rest, we obtain

$$Z \equiv \frac{J}{V_f} = m. \qquad (6)$$

In Eq. (6) $V_f$ is the response of the ball to the impulse J. We ascribe the impedance $Z = m$ to a rigid billiard ball, considered as a particle of mass m. This explains why the presence of the intermediate ball facilitates the transmission of energy in the elastic collisions studied in Sec. 2. The choice $m_2 = \sqrt{m_1 m_3}$ is precisely the impedance matching condition for media with impedances $Z_1 = m_1$ and $Z_3 = m_3$.

5. Conclusion

In a very well-known problem in classical mechanics, one aligns three rigid balls of different masses $m_1$, $m_2$, and $m_3$. The value of $m_2$, a function of $m_1$ and $m_3$, is to be determined such that when the one-dimensional collisions between these objects are elastic, the transmission of kinetic energy from the first ball to the last ball is maximized. This problem is easily solved using the laws of energy and linear momentum conservation, and we verify that the presence of an intermediate ball enhances rather than suppresses the transmission of energy. In this paper we present an explanation for this problem by proposing an extension of the concept of impedance, usually associated with oscillatory systems, to a rigid billiard ball. We have shown that in the case of one-dimensional elastic collisions, the mass of a particle can be seen as a measure of its impedance. Once this is assumed, we verify that for maximum energy transfer the intermediate mass must be chosen such that it matches the impedances of the first and third mass, each considered as a different medium with their respective impedances. This can be easily explored in the classroom, either by a computer simulation or an actual experiment (an experimental device has been proposed by Hart and Herrmann [2]). Once students are exposed to the idea behind impedance matching with a simple classical collision problem, this can be expanded into a discussion of impedance of electric circuits, acoustics, and optical media.

Author Bruna P.W. Oliveira would like to thank N.T. Jacobson for his useful comments and review of the manuscript. J. Santos thanks the financial support from CNPq and also thanks Prof. Mario K. Takeya for helpful discussions about some ideas presented in this article.

[1] F.S. Crawford, Waves: Berkeley Physics Course - V. 3 (McGraw-Hill, New York, 1968).
[2] J.B. Hart and R.B. Herrmann, Am. J. Phys. 36, 46 (1968).
[3] J.R. Reitz and F.J. Milford, Foundations of Electromagnetic Theory (Addison-Wesley, Boston, 1967).
[4] F. Graham Smith and J.H. Thomson, Optics (John Wiley & Sons, New York, 1971), chap. 3.
[5] H.D. Young and R.A. Freedman, University Physics (Addison Wesley, Boston, 2004), 11th ed.
[6] D. Halliday, R. Resnick and J. Walker, Fundamentals of Physics - Extended (John Wiley & Sons, New York, 1997), 5th ed.
Received 15/7/2011; Accepted 29/8/2011; Published 27/2/2012
1 E-mail: janilo@dfte.ufrn.br.
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S1806-11172012000100005&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-18T11:01:41Z","content_type":null,"content_length":"40886","record_id":"<urn:uuid:385504b1-b39e-4c4d-9854-f0dc3443f41b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Puzzles, Groups, and Groupoids

Over at Good Math, Bad Math, MarkCC has a nice post introducing groupoids which uses the fifteen puzzle as an example. I like this example a lot, and I thought it would be interesting to expand on it a bit. So I'm going to tell you:
1. Why the Rubik's Cube is a finite group,
2. Why the fifteen puzzle is a finite groupoid, and
3. How to solve the fifteen puzzle.
I'm not going to assume any knowledge of groups or groupoids, but if you don't know much group theory, you'll have to skip over certain parts of the second half.

The Rubik's Cube

You've probably played with a Rubik's cube at least once: Each face of the cube is cut into nine pieces, which are colored red, orange, yellow, green, blue, or white. The goal is to rotate the sides of the cube until all of the faces are a solid color. The first thing to realize about a Rubik's Cube is that the squares in the centers of the six faces are fixed. They turn in place when you rotate the sides, but otherwise they don't really move. In fact, if you take a Rubik's cube apart, you will see that these six squares are connected to three axles along which the faces rotate: Because the center squares are fixed, we can assign names to the six sides of the cube. For example, the front of the cube can be the face with a red square in the center, the top of the cube can be the face with the green square in the center, and so forth. Understanding this idea is the first step in understanding most solutions.

Moves and Transformations

I'm not going to go into detail on exactly how to solve a Rubik's cube. Instead, I'd like to explain why the Rubik's cube fits into a part of mathematics called group theory. First we need some terminology. When manipulating the cube, a move is any 90-degree clockwise rotation of one of the faces. There are six possible moves, which we will label according to the face that was rotated:
Left, Right, Top, Bottom, Front, Back
Any sequence of moves gives a transformation of the cube. For example, one way to transform the cube is to first rotate the Front face, then rotate the Right face, and then rotate the Back face twice:
Front · Right · Back · Back
Two transformations are equal if they always have the same effect on the cube. For example, if you want to rotate the front and back faces, it doesn't matter which order you do it in, so:
Front · Back = Back · Front
Also, if you rotate any face four times, you always get back to where you started:
Left · Left · Left · Left = Doing Nothing
Note that "Doing Nothing" is considered a transformation of the cube. It is called the identity transformation.

The Rubik's Cube Group

In total, there are 43,252,003,274,489,856,000 different possible transformations of the Rubik's cube, and together they form a mathematical object known as a group. From a mathematician's point of view, understanding the Rubik's cube is the same as understanding its transformation group. Roughly speaking, a group is any collection of transformations of an object, subject to the following requirements:
1. If you compose two transformations (i.e. perform one and then the other), the result is also a transformation.
2. Any transformation can be undone by some other transformation.
To clarify the idea of a group, I'd like to give two more examples. The first is the sixteen possible transformations of an octagon: Each stop sign above illustrates one transformation.
The first illustrates the identity, or "Do Nothing" transformation; the others on the top row represent rotations, while those on the bottom row represent reflections. Together, these sixteen transformations form a group, namely the dihedral group of the octagon. Dihedral groups are important in chemistry, where they describe the symmetries of certain crystals. Another example of a group is the set of all possible rotations of a sphere. Because you can rotate a sphere about any axis, and through any angle between 0 and 360 degrees, this group consists of infinitely many different transformations. The rotation group is very important in physics for studying angular momentum and the motion of rigid bodies. The theory of groups is a large and important subject in modern mathematics, with many applications to physics, chemistry (especially the study of crystals), and so forth. Most undergraduate math majors take a course in group theory, but often the theme doesn't come through clearly enough: group theory is nothing less than the mathematical study of symmetry.

The Fifteen Puzzle

The picture below shows the fifteen puzzle. Fifteen numbered squares are arranged on a 4×4 grid, with one square missing. Usually the squares are jumbled, and the goal is to slide the squares around until you manage to put them in numerical order: You can play the fifteen puzzle online here. If you want to solve the fifteen puzzle, the most important thing to realize is that the blank square is the one that moves. For example, if you want to move a certain piece to the right, the thing to do is move the blank square until it lies to the right of the piece, and then move the piece. Try using this technique to put the 1, 2, and 3 into the correct positions in the online game. (After you position the 1, 2, and 3, it's a bit harder to position the 4 correctly. See the solution below.)

Mathematically, the fifteen puzzle is similar to the Rubik's Cube: the goal is to arrive at a certain position by performing the correct sequence of moves. However, unlike the Rubik's cube, the available moves depend on the current position. For example, if the blank square is in the upper-left corner, it can only be moved right or down. The following picture shows the sixteen possible positions of the blank square and the moves between them:

Transformations of the Fifteen Puzzle

You can transform the fifteen puzzle by performing a sequence of moves. For example, if the blank square starts in the lower right (position 16), you can move it Up, Left, Up, Right, and Down. This is called a transformation:
16 → 12: Up, Left, Up, Right, Down
The "16 → 12" indicates that the transformation begins at position 16, and ends at position 12. This transformation has the following effect on nearby pieces: Two transformations are equal if they have the same starting and ending positions, and they affect the numbered pieces in the same way. In total, there are 167,382,319,104,000 different transformations of the fifteen puzzle.

The Transformation Groupoid

Unlike the transformations of the Rubik's cube, transformations of the fifteen puzzle cannot always be composed. Specifically, you can only compose two transformations if the ending position of the first is the same as the starting position of the second. For this reason, the transformations of the fifteen puzzle are not a group — instead they form what's called a groupoid. Roughly speaking, a groupoid consists of:
1. A set of possible positions, and
A collection of transformations. Each transformation has a starting position and an ending position. The transformations are subject to the following requirements: 1. If you compose two transformations (i.e. perform one and then the other), the result is also a transformation. However, you can only compose two transformations if the ending position of the first is the same as the starting position of the second. 2. Any transformation can be undone by some other transformation. A group is a special case of a groupoid: groups are groupoids that have only one position. The following picture illustrates the difference: Formal Definition For those of you who like rigorous mathematics, I should probably give a formal definition of a groupoid. Just skip over this part if you're not used to formal mathematical language. Definition. A groupoid consists of: 1. A set P of positions, 2. A set G of transformations, 3. A pair of functions start: G → P and end: G → P, and 4. A partially defined binary operation G × G → G, subject to the following requirements: 1. The product g · h is defined if and only if end(g) = start(h). 2. If g · h and h · k are both defined, then g · (h · k) = (g · h) · k. 3. For every position p ∈ P, there exists an identity e(p) ∈ G, which starts and ends at p. This element satisfies g · e(p) = g for any g that ends at p, and e(p) · h = h for any h that starts at p. 4. For any g ∈ G starting at p and ending at q, there exists an inverse h starting at q and ending at p, such that g · h = e(p) and h · g = e(q). If you know what a category is, a groupoid is just a category where every morphism has an inverse. However, I always feel like this statement is a little bit misleading: usually when someone says "category" I think of something like "topological spaces and continuous functions" or "modules and homomorphisms". When someone says "groupoid", I think of the fifteen puzzle. In this respect, groupoids are much more like groups than they are like categories. By the way, I once had a friend who knew what a groupoid was, but not a category. When I explained the definition of a category, his face lit up, and he said, "Oh, it's just a monoidoid!" The Fifteen Puzzle Group Now that I've explained why the fifteen puzzle is a groupoid, I'd like to backpedal and explain how it's still possible to view it as a group. In fact, it's possible to view any groupoid as a group: the secret is to ignore all but one position. Let me start by explaining it for the three puzzle: This puzzle has only four positions for the blank square, and the only productive activity is to move it around in a circle: As you can see, a complete revolution of the blank square rotates the positions of three numbered pieces. If we think of this as a single move, then our understanding of the three puzzle becomes a lot simpler: all you have to do to solve the puzzle is rotate the pieces into the correct place. By focusing on the position where the blank square is in the lower-right corner, we have simplified the groupoid into a group with three elements (namely the identity, the rotation 1 → 2 → 3 → 1, and the rotation 1 ← 2 ← 3 ← 1). Now let's do the five puzzle: If we focus on the position where the blank square is in the center bottom, there are two obvious moves available: we can move the blank square around the left half of the puzzle, or around the right half. The first rotates the pieces in positions 1, 2, and 3, while the second rotates the pieces in positions 3, 4, and 5. 
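As a quick computational sanity check (my own sketch in Python — not part of the original post), the two rotations can be written as permutations of {1, 2, 3, 4, 5} and everything they generate can be enumerated by brute force:

# Enumerate the group generated by the two "five puzzle" rotations.
# A permutation is stored as a tuple p, where p[i] is the image of i+1.
def compose(p, q):
    # "Do q first, then p."
    return tuple(p[q[i] - 1] for i in range(len(q)))

identity = (1, 2, 3, 4, 5)
left_rotation = (2, 3, 1, 4, 5)    # the 3-cycle 1 -> 2 -> 3 -> 1
right_rotation = (1, 2, 4, 5, 3)   # the 3-cycle 3 -> 4 -> 5 -> 3

group = {identity}
frontier = [identity]
while frontier:                    # breadth-first closure under the generators
    new_elements = []
    for g in frontier:
        for generator in (left_rotation, right_rotation):
            h = compose(generator, g)
            if h not in group:
                group.add(h)
                new_elements.append(h)
    frontier = new_elements

def is_even(p):
    # Parity from the number of inversions.
    inversions = sum(1 for i in range(5) for j in range(i + 1, 5) if p[i] > p[j])
    return inversions % 2 == 0

print(len(group))                      # prints 60
print(all(is_even(p) for p in group))  # prints True

Running it prints 60 and True: the two 3-cycles generate exactly 60 permutations, all of them even, which is just what the next paragraph asserts.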
These generate a group of 60 transformations, all of which start and end with the blank square in the center bottom position. You can learn about this group by playing the five puzzle (set the applet to two rows and three columns). It really is helpful to keep the blank square in the bottom center, and to use the two rotations as basic moves. Incidentally, if you know about permutation groups, observe that the five puzzle is really just the group of permutations of the set {1, 2, 3, 4, 5} generated by the rotations 1 → 2 → 3 → 1 and 3 → 4 → 5 → 3. These two permutations generate the alternating group $A_{5}$, i.e. the group of all even permutations. In general, given any groupoid G of transformations, you can get a group from G by focusing on all of the transformations that start and end at a certain position. It doesn't matter which position you pick — different positions give isomorphic groups. (Strictly speaking, this is only true if the groupoid is connected, meaning that there is at least one transformation between any two positions.) For the fifteen puzzle, any fixed position for the blank square gives you a group of 15!/2 = 653,837,184,000 different transformations. This is the alternating group $A_{15}$. Unfortunately, the fifteen puzzle doesn't have a simple set of "moves" based at a certain position like the five puzzle, so we'll need to look elsewhere for a good strategy. How to Solve the Fifteen Puzzle Our strategy will involve the solutions to the three puzzle and the five puzzle. If you haven't figured out the five puzzle yet, you should play using this applet until you understand how it works. (Remember: start with the blank square in the bottom center, and use the two rotations to move the numbered pieces.) OK, here's the solution to the fifteen puzzle: 1. First put the 1 and 2 into the correct places. 2. Now move the 3, the 4, and the blank square close to the upper-right corner, like this: Move the blank square around the red box to put the 3 and 4 into place. 3. Use the same process to place the 5, 6, 7, and 8. 4. The final two rows are more tricky. To start, move the 9 and 13 near the lower left corner. It doesn't matter exactly where: Play the five puzzle inside the red box to put the 9 and 13 into place. 5. Finally, play the five puzzle in the lower right to place the remaining pieces: That's it. It's not the fastest solution possible, but it should let you solve the fifteen puzzle consistently within two or three minutes. Enjoy! Tags: math.GR John Armstrong Says: January 27, 2008 at 1:13 pm | Reply Those who are interested in the Rubik's Cube from a group theory perspective might like my series on the subject (arranged, as WordPress does, in reverse chronological order). Jim Belk Says: January 27, 2008 at 4:04 pm | Reply Indeed. There is also a series of posts at neverendingbooks on the 15-puzzle, the related Conway M(13) puzzle, and a game called Mathieu's blackjack. For those interested in groupoids, I should mention the wonderful AMS Notices article by Alan Weinstein entitled Groupoids: Unifying Internal and External Symmetry. Simon Says: January 27, 2008 at 9:53 pm | Reply "That's it. It's not the fastest solution possible, but it should let you solve the fifteen puzzle consistently within two or three minutes." Assuming a solution exists. The configuration of the 15-puzzle has a parity, 0 for an even permutation of 1…15 or 1 for an odd permutation. This parity can not be changed by any move, thus if the puzzle starts in an odd permutation then it can not be solved. 
This is the basis for the $1000 prize offered by Sam Loyd in 1880 to the first person who could solve the 14-15 puzzle. (Which was sold with all numbers in order except 14-15… obviously the prize was never successfully claimed) lieven Says: January 28, 2008 at 8:06 am | Reply @jim : very nice post! also, you are more in tune with what students want than i was when i mentioned the 15-puzzle in a group theory course and explained the connection with A(15). all they wanted to knw was how to solve the puzzle… i had simliar experiences with sudoku, they dont want to hear about the symmetries of solutions but rather how to solve and construct sudokus. needless to say that my attemp to use the rubik-cube group as an illustration to the Jordan-Holder theorem also failed… probably ill have to rethink my courses. btw. thanks for mentioning the neverendingbooks-series! Aaron F. Says: January 28, 2008 at 11:33 pm | Reply Oooh, cool! The Rubik’s cube has just become my favorite example of a group of transformations. ^_^ Traditional examples, like symmetry groups, have always disappointed me. The symmetry object is usually presented without color, making it hard to see that the transformations are actually doing anything to it, and is usually such a pointless shape that it’s hard to see why you’d want to do anything to it. The Rubik’s cube, on the other hand, is brightly colored, and lots of people have played with it before. The Rubik’s cube might be an especially good example for non-mathematicians in freshman-seminar-type classes. The “math as fun toy” angle is something that I think should be played up a lot more, and the cube could also help students get a feel for the experimental side of mathematics. Is the Rubik’s group commutative? Does it have any non-trivial subgroups? The answers are just a few twists John Armstrong Says: January 29, 2008 at 12:13 am | Reply Does it have any non-trivial subgroups? Tons! In fact, the solution (inspired by Jeff Adams of $E_8$ fame) I present in the above-linked series of posts works its way down through a tower of nested subgroups, each preserving the structure we’ve already solved, until we get down to the trivial subgroup and the solved cube. John Baez Says: January 30, 2008 at 12:52 am | Reply “Usually when someone says “category” I think of something like “topological spaces and continuous functions” or “modules and homomorphisms”.” That used to be true for me too. But that’s because the people who first taught us category theory did it in a really obnoxious way… sort of like teaching group theory and starting with this example: “the group of all permutations of the class of all sets” – hard to visualize and mired in set-theoretic subtleties. When we think of a category, we should think of something like the 15 puzzle but where we’re not allowed to reverse some of the moves! Jim Belk Says: January 30, 2008 at 3:14 am | Reply I partially agree with you. There certainly do seem to be a lot of situations where it helps to think of categories as primarily algebraic objects, i.e. as “monoidoids”. On the other hand, it certainly helps in understanding fields like algebraic topology or commutative algebra to consider large categories such as Top or Mod(R). Indeed, so many mathematicians study these fields that that there is certainly some merit to focusing on “large” categories in an introduction for graduate students. The real problem is that “large category” and “small category” are essentially two different concepts with the same name. 
Thinking about large categories certainly doesn't help if you're trying to understand the meaning of the word "groupoid". In some contexts, it doesn't even help you understand what is meant by the word "category"! John Armstrong Says: January 30, 2008 at 1:03 pm | Reply Here's the one I always like to start with: matrices. Someone can be a little shaky on thinking of all vector spaces and linear transformations, but they probably have a good idea what a matrix is. So take the objects to be natural numbers and the morphisms from $m$ to $n$ to be $m\times n$ matrices, with matrix multiplication as composition. I think it splits the difference between the abstraction of the big categories and the toylike nature of the small ones. Micromegas Says: January 31, 2008 at 5:39 am | Reply "But that's because the people who first taught us category theory did it in a really obnoxious way…" Shouldn't dwarfs on the shoulders on giants be a little less arrogant? quotes of the day | neverendingbooks Says: February 1, 2008 at 11:38 am | Reply [...] a comment over at The Everthing Seminar Shouldn't dwarfs on the shoulders on giants be a little less [...] John Baez Says: February 10, 2008 at 3:28 am | Reply If the people who first taught me category theory had been "giants" in the field, I might have gotten interested in it sooner.
{"url":"http://cornellmath.wordpress.com/2008/01/27/puzzles-groups-and-groupoids/","timestamp":"2014-04-16T13:34:33Z","content_type":null,"content_length":"82703","record_id":"<urn:uuid:99683307-0ae3-4150-96ca-33d1ab58eb11>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Journal of Chemical Theory and Computation (JCTC) Published while none of the authors were employed at the CMM. The nature of the multicenter, long bond in ditetracyanoethylene dianion complex [TCNE]₂²⁻ is elucidated using high level ab initio Valence Bond (VB) theory coupled with Quantum Monte Carlo (QMC) methods. This dimer is the prototype of the general family of pancake-bonded dimers with large interplanar separations. Quantitative results obtained with a compact wave function in terms of only six VB structures match the reference CCSD(T) bonding energies. Analysis of the VB wave function shows that the weights of the VB structures are not compatible with a covalent bond between the π* orbitals of the fragments. On the other hand, these weights are consistent with a simple picture in terms of two resonating bonding schemes, one displaying a pair of interfragment three-electron σ bonds and the other displaying intrafragment three-electron π bonds. This simple picture explains at once (1) the long interfragment bond length, which is independent of the countercations but typical of three-electron (3-e) CC σ bonds, (2) the interfragment orbital overlaps which are very close to the theoretical optimal overlap of 1/6 for a 3-e σ bond, and (3) the unusual importance of dynamic correlation, which is precisely the main bonding component of 3-e bonds. Moreover, it is shown that the [TCNE]₂²⁻ system is topologically equivalent to the square C₄H₄²⁻ dianion, a well-established aromatic system. To better understand the role of the cyano substituents, the unsubstituted diethylenic Na⁺₂[C₂H₄]₂²⁻ complex is studied and shown to be only metastable and topologically equivalent to a rectangular C₄H₄²⁻ dianion, devoid of aromaticity.
{"url":"http://molmod.ugent.be/journals/journal-chemical-theory-and-computation-jctc","timestamp":"2014-04-21T07:05:10Z","content_type":null,"content_length":"60185","record_id":"<urn:uuid:9916ca96-31c7-4845-9a7f-9c35c1328f4e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply I hope this picture is more clear: i49.tinypic.com/r8iic2.jpg There are 4 parameters: Alpha, Beta, Gamma, and the circle-with-a-line-through symbol. Then a, a*, b, b*, g, g*, r and e. There are meanings behind the letters, but there are no other links between these variables apart from those in the equations. The question is: Derive an equation for 'r' which does NOT contain a*, e, or r. and Derive an equation for 'e' which does NOT contain a*, e, or r. This is the whole chapter completed if you guys can help me with this! I have an answer for 'e', which can hopefully be checked against one of your answers, but for 'r' I keep going around in circles and getting r=r, 0=0 or some equivalent statement but no expression! I look forward to hearing your replies, thank you all!
{"url":"http://www.mathisfunforum.com/post.php?tid=19086&qid=257178","timestamp":"2014-04-19T07:07:12Z","content_type":null,"content_length":"16992","record_id":"<urn:uuid:a739a4da-9395-48c4-b8f9-afac04103727>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Improvement on the Decay of Crossing Numbers Cerný, Jakub and Kyncl, Jan and Tóth, Géza (2008) Improvement on the Decay of Crossing Numbers. In: Graph Drawing 15th International Symposium, GD 2007, September 24-26, 2007, Sydney, Australia, pp. 25-30 (Official URL: http://dx.doi.org/10.1007/978-3-540-77537-9_5). Full text not available from this repository. We prove that the crossing number of a graph decays in a "continuous fashion" in the following sense. For any $\varepsilon>0$ there is a $\delta>0$ such that for $n$ sufficiently large, every graph $G$ with $n$ vertices and $m \ge n^{1+\varepsilon}$ edges has a subgraph $G'$ of at most $(1-\delta)m$ edges and crossing number at least $(1-\varepsilon)\,\mathrm{cr}(G)$. This generalizes the result of J. Fox and Cs. Tóth.
{"url":"http://gdea.informatik.uni-koeln.de/825/","timestamp":"2014-04-21T14:52:11Z","content_type":null,"content_length":"21414","record_id":"<urn:uuid:86487957-b581-4922-874d-056e674e9fb9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: How do you do 2/5x+3/7=1-4/7x ?
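No worked answer survives on the page, so here is one (mine, not from the thread), reading 2/5x and 4/7x as (2/5)x and (4/7)x:

$$\frac{2}{5}x + \frac{3}{7} = 1 - \frac{4}{7}x.$$

Multiplying both sides by 35, the least common multiple of 5 and 7, gives $14x + 15 = 35 - 20x$, so $34x = 20$ and $x = \frac{20}{34} = \frac{10}{17}$. Substituting back, both sides equal $\frac{79}{119}$, which checks out.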
{"url":"http://openstudy.com/updates/503fea41e4b0ece102e0f195","timestamp":"2014-04-21T02:17:00Z","content_type":null,"content_length":"34719","record_id":"<urn:uuid:d66d39a2-7904-40b3-9198-6572e5597757>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Section: LAPACK routine (version 2.0) (l) Updated: 12 May 1997 Local index Up DSTEQR2 - i a modified version of LAPACK routine DSTEQR COMPZ, N, D, E, Z, LDZ, NR, WORK, INFO ) CHARACTER COMPZ INTEGER INFO, LDZ, N, NR DOUBLE PRECISION D( * ), E( * ), WORK( * ), Z( LDZ, * ) DSTEQR2 is a modified version of LAPACK routine DSTEQR. DSTEQR2 computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix using the implicit QL or QR method. DSTEQR2 is modified from DSTEQR to allow each ScaLAPACK process running DSTEQR2 to perform updates on a distributed matrix Q. Proper usage of DSTEQR2 can be gleaned from examination of ScaLAPACK's PDSYEV. COMPZ (input) CHARACTER*1 = 'N': Compute eigenvalues only. = 'I': Compute eigenvalues and eigenvectors of the tridiagonal matrix. Z must be initialized to the identity matrix by PDLASET or DLASET prior to entering this subroutine. N (input) INTEGER The order of the matrix. N >= 0. D (input/output) DOUBLE PRECISION array, dimension (N) On entry, the diagonal elements of the tridiagonal matrix. On exit, if INFO = 0, the eigenvalues in ascending order. E (input/output) DOUBLE PRECISION array, dimension (N-1) On entry, the (n-1) subdiagonal elements of the tridiagonal matrix. On exit, E has been destroyed. Z (local input/local output) DOUBLE PRECISION array, global dimension (N, N), local dimension (LDZ, NR). On entry, if COMPZ = 'V', then Z contains the orthogonal matrix used in the reduction to tridiagonal form. On exit, if INFO = 0, then if COMPZ = 'V', Z contains the orthonormal eigenvectors of the original symmetric matrix, and if COMPZ = 'I', Z contains the orthonormal eigenvectors of the symmetric tridiagonal matrix. If COMPZ = 'N', then Z is not referenced. LDZ (input) INTEGER The leading dimension of the array Z. LDZ >= 1, and if eigenvectors are desired, then LDZ >= max(1,N). NR (input) INTEGER NR = MAX(1, NUMROC( N, NB, MYPROW, 0, NPROCS ) ). If COMPZ = 'N', then NR is not referenced. WORK (workspace) DOUBLE PRECISION array, dimension (max(1,2*N-2)) If COMPZ = 'N', then WORK is not referenced. INFO (output) INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value > 0: the algorithm has failed to find all the eigenvalues in a total of 30*N iterations; if INFO = i, then i elements of E have not converged to zero; on exit, D and E contain the elements of a symmetric tridiagonal matrix which is orthogonally similar to the original matrix. This document was created by man2html, using the manual pages. Time: 21:45:20 GMT, April 16, 2011
{"url":"http://www.makelinux.net/man/3/D/dsteqr2","timestamp":"2014-04-21T05:20:45Z","content_type":null,"content_length":"10497","record_id":"<urn:uuid:727b3e0b-ab6d-4d37-80c7-b55e4b3a045d>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Mental FlossHow Far Can You Drive on Empty? Gas lines are currently stretched across New Jersey and other states hit by Hurricane Sandy. If you're struggling to find a place to fill up, you might be asking yourself this very question, which we originally posted last year. The more fun answer is “Find out the hard way!” Justin Davis runs a site called TankOnEmpty.com that lets drivers record just how far they’ve driven certain types of cars after their empty lights came on. You can poke around and see how your car has fared. The results are a little less than scientific, though. Even for a car with a large number of data points, the estimates aren’t super-precise. The Honda Civic, for example, has 248 entries and an average range of 44.38 miles after the light comes on, but the standard deviation of the data is almost 24 miles. Of course, the major question in play asks when the car’s fuel light comes on in the first place. Sure, driving conditions and number of passengers will affect your car’s range after the light comes on, but if you can pinpoint how much gas is left in the tank when the warning appears, you can at least ballpark what sort of range you’ve got left. Click and Clack of Car Talk fame have estimated that most cars’ “empty” lights come on once the gas level dips below an eighth of a tank or so, but they have also advocated driving until the light comes on, then immediately stopping to fill all the way up, and then comparing how much fuel your car took with the tank’s capacity published in your owner’s manual. Once you repeat this process a few times, you should have a pretty good estimate of how much gas is left. 20/20. Stossel got behind the wheel of his minivan and drove until he ran out of gas. He ended up making it 65 miles after his gas dial claimed the car was empty, including 40 miles after his van’s computerized estimate of its remaining fuel range hit zero. What about you? Have you ever pushed the limits of your car’s tank, like Cosmo Kramer did on that Seinfeld episode?
{"url":"http://mentalfloss.com/node/12968/atom.xml","timestamp":"2014-04-18T18:37:53Z","content_type":null,"content_length":"5493","record_id":"<urn:uuid:6e81872e-e001-4891-95de-938de7c760f1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Vandell's free cash flow (FCF) is million per year and is expected to grow at a constant rate of 5% a year, its beta is 1.4. 1. 175037 Vandell's free cash flow (FCF) is million per year and is expected to grow at a constant rate of 5% a year, its beta is 1.4. Vandell's free cash flow (FCF) is million per year and is expected to grow at a constant rate of 5% a year, its beta is 1.4. What is the value of Vandell's operations? If Vandell has $10.82 millions in debt, What is the current value of Vandell's stock? (Hi nt Use the corporate valuation model of ch-15) Use the corporate tax rate of 40%. Problem 25-2 Merger Valuation: Hastings estimates that if it acquires Vandells, interest payments will be $1,500,000 per year for 3 years, after which the current target capital structure of 30%debt will be maintained. Interest in thee fourth year will be $1.472 million , after which interest and the tax shield will grow at 5%. Synergies will cause the free cash flows to be $2.5 million, $2.9 million, $3.4 million, and then $3.57, in years 1 unlevered value of Vandell to Hastings Corporation? Assume Vandell now has $10.82 million in debt. Problem 25-5 Merger Analysis: Maraston Marble Corporation is considering a merger with the Conroy Concrete Company. Conroy is publicly traded company, and its beta is 1.30. Conroy has been barely profitable, so it has paid an average of only 20% in taxes during the last several years. In addition, it uses little debt, having a target ratio of just 25% , with the cost of debt 9%. If the acquisition were made, Marston would operate Conroy as a separate, wholly owned subsidiary. Marston would pay taxes on a consolidate basis, and the tax rate would therefore increase to 35%. Marston also would increase the debt capitalization in the Conroy subsidiary to wD = 40% for a total of $22.27 million in debt by the end of year 4 and pay 9.5% on the debt. Marston's acquisition department estimates that Conroy, if acquired, would generate the following free cash flows and interest expenses (in millions of dollars) in Year 1-5: YEAR Free Cash Flows Interest Expense 1 $1.2 $1.30 2 1.50 1.7 3 1.75 2.8 4 2.00 2.1 5 2.12 ? In Year 5 Conroy's interest expense would be based on its beginning of year (that is, the end of Year 4) debt, and in subsequent years both interest expense and free cash flows are projected to grow at a rate of 6%. These cash flows include all acquisition effects. Marston's cost of equity is 10.5%, its beta is 1.0, and its cost of debt is 9.5%. The risk free rate is 6%, and the market risk premium is 4.5%. A- What is the value of Conroy's unlevered operation, and what is the value of Conroy's tax shield under the proposed merger and financing arrangements? B- What is the dollar value of Conroy's operations? If Conroy has $10 million in debt outstanding, how much would Marston be willing to pay for Conroy?
{"url":"https://brainmass.com/business/discounted-cash-flows-model/175037","timestamp":"2014-04-17T00:50:17Z","content_type":null,"content_length":"30708","record_id":"<urn:uuid:66de4ada-5ff6-47fe-8543-6159005d8e0c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig identity for sqrt(2) February 3rd 2011, 09:36 AM Trig identity for sqrt(2) Upon checking an intergral (that I had worked out by hand) using mathcad(some math software I have) I noticed a simplification of part of my intergral the computer had performed and couldnt work out how it was made. it is -sec(6pi/8) = sqrt(2) I was wondering if anyone knows how one gets from one to the other ? I have even checked with my my tutor and he isnt sure either ! is there some identity I am not aware of ? February 3rd 2011, 09:45 AM Archie Meade Upon checking an intergral (that I had worked out by hand) using mathcad(some math software I have) I noticed a simplification of part of my intergral the computer had performed and couldnt work out how it was made. it is -sec(6pi/8) = sqrt(2) I was wondering if anyone knows how one gets from one to the other ? I have even checked with my my tutor and he isnt sure either ! is there some identity I am not aware of ? $\displaystyle\ -sec\left(\frac{6{\pi}}{8}\right)=-\frac{1}{cos\left(\frac{6{\pi}}{8}\right)}=\frac{1 }{cos\left(\frac{2{\pi}}{8}\right)}$ since $cos(\pi-A)=-cosA$ $\displaystyle\frac{1}{cos\left(\frac{\pi}{4}\right )}=\frac{1}{\left(\frac{1}{\sqrt{2}}\right)}=\sqrt {2}$ February 3rd 2011, 09:59 AM $\dfrac{6\pi}{8} = \dfrac{3\pi}{4}$... a unit circle angle. since ... $\cos\left(\dfrac{3\pi}{4}\right) = -\dfrac{\sqrt{2}}{2}$ $\sec\left(\dfrac{3\pi}{4}\right) = -\dfrac{2}{\sqrt{2}} = -\sqrt{2}$ $-\sec\left(\dfrac{3\pi}{4}\right) = \sqrt{2}$ February 3rd 2011, 10:24 AM Thanks very much to both of you!!
{"url":"http://mathhelpforum.com/calculus/170115-trig-identity-sqrt-2-a-print.html","timestamp":"2014-04-17T10:48:33Z","content_type":null,"content_length":"8260","record_id":"<urn:uuid:62ce461f-2f82-4966-a5b0-7bdef2a996d6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Los Altos Hills, CA SAT Math Tutor Find a Los Altos Hills, CA SAT Math Tutor ...I know the subject matter of finance well, including the mathematics of finance up through stochastic calculus for Black-Scholes / option pricing. I am an excellent tutor for GMAT preparation, specifically for the quantitative portion of the exam that covers data analysis, reasoning skills, patt... 22 Subjects: including SAT math, calculus, geometry, statistics ...Upon graduation, while working as a researcher at UC Davis, I began working with students on the SAT and ACT part-time. Since then the passion to have a positive impact on test takers college aspirations has remained and I have always made time for it regardless of where my primary career has ta... 10 Subjects: including SAT math, ASVAB, ACT Math, SAT reading ...So, I am able to connect with these students and keep them interested while tutoring the subject needed. I have been working successfully with students of special needs for years now. I run my own school and have often enrolled students with special needs and teach them right along with everyone else. 35 Subjects: including SAT math, reading, chemistry, English ...In addition, I've prepared many students for the SAT 1, SAT 2, and ACT standardized tests including some who achieved perfect scores. If my experience as an educator has taught me anything, it has taught me that every student is different: different personalities, different motivations, differen... 14 Subjects: including SAT math, chemistry, calculus, physics ...Additionally, A higher level of differential equations was completed with a B. I spent 4 years in taekwondo and received a black belt through a west coast martial arts program. I've been through college admission test prep for for UC essays and private school personal statements. 32 Subjects: including SAT math, chemistry, English, statistics Related Los Altos Hills, CA Tutors Los Altos Hills, CA Accounting Tutors Los Altos Hills, CA ACT Tutors Los Altos Hills, CA Algebra Tutors Los Altos Hills, CA Algebra 2 Tutors Los Altos Hills, CA Calculus Tutors Los Altos Hills, CA Geometry Tutors Los Altos Hills, CA Math Tutors Los Altos Hills, CA Prealgebra Tutors Los Altos Hills, CA Precalculus Tutors Los Altos Hills, CA SAT Tutors Los Altos Hills, CA SAT Math Tutors Los Altos Hills, CA Science Tutors Los Altos Hills, CA Statistics Tutors Los Altos Hills, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/los_altos_hills_ca_sat_math_tutors.php","timestamp":"2014-04-20T13:49:32Z","content_type":null,"content_length":"24453","record_id":"<urn:uuid:7cb06404-4982-40f1-92de-f2ba0a8ed35f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximum Likelihood Estimator / Best Estimator .. Related to survey.. Help needed :( December 12th 2012, 09:50 PM #1 Oct 2012 Maximum Likelihood Estimator / Best Estimator .. Related to survey.. Help needed :( Some respondents give false responses to survey questions. Denote the proportion of the members of a particular population that smoked more than 50 cigarettes last week by p. A random sample of n members of this population is taken, and each person in the sample is asked “Did you smoke more than 50 cigarettes last week?” If a person really smoked more than 50 cigarettes, the probability that he will give a truthful answer to this question is 1-x[1]. If a person did not smoke more than 50 cigarettes, the probability that he will give a truthful answer is 1-x[2]. From past data, x [1] and x[2] are known, with 0<x[1]<0.5, 0<x[2]<0.5. a) For a sample of size one, find the likelihood function if the answer is “yes” and find the likelihood function if the answer is “no.” b) For a random sample of size n, find the likelihood function and sufficient statistics. c) Find the maximum likelihood estimator for p. d) Assume that x[1]=0.1, x[2]=0, and there is one “yes” answer in a random sample of size 10. What is your best estimate of p and why? e) Consider the same scenario as in (d), but assume that x[1] is unknown (0<x[1]<1). In this case, what would be your best estimate of p and why? Re: Maximum Likelihood Estimator / Best Estimator .. Related to survey.. Help needed Hey drinkingwater. Can you show us what you have tried? (Hint: the likelihood for sample size 1 is just the probability of that thing happening, so in other words its just the PDF where you substitute in a given x value corresponding to the sample December 13th 2012, 08:02 PM #2 MHF Contributor Sep 2012
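A sketch of the likelihood setup, worked out here rather than taken from the thread (so treat it as a starting point, not an official answer). Conditioning on whether a respondent actually smoked more than 50 cigarettes:

$$P(\text{yes}) = p(1-x_1) + (1-p)x_2, \qquad P(\text{no}) = p\,x_1 + (1-p)(1-x_2).$$

For a sample of size $n$ with $Y$ "yes" answers, the likelihood is $L(p) = \left[p(1-x_1)+(1-p)x_2\right]^{Y}\left[p\,x_1+(1-p)(1-x_2)\right]^{n-Y}$, so $Y$ is a sufficient statistic. Writing $\theta = P(\text{yes}) = x_2 + p(1-x_1-x_2)$, the MLE of $\theta$ is $Y/n$, which gives $\hat{p} = \dfrac{Y/n - x_2}{1-x_1-x_2}$ (truncated to $[0,1]$ if it falls outside). For part (d), with $x_1=0.1$, $x_2=0$, $Y=1$ and $n=10$, this is $\hat{p} = 0.1/0.9 = 1/9$.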
{"url":"http://mathhelpforum.com/statistics/209713-maximum-likelihood-estimator-best-estimator-related-survey-help-needed.html","timestamp":"2014-04-18T09:11:03Z","content_type":null,"content_length":"33637","record_id":"<urn:uuid:b4928cf2-eb36-48dc-a694-60ee236f706f>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Meeting Details For more information about this meeting, contact Mary Anne Raymond. Title: Foliations and their characteristic classes Seminar: Slow Pitch Seminar Speaker: Dmitry Fuchs, UC Davis A k-dimensional foliation on a manifold M is as an integrable field of k-dimensional subspaces of tangent spaces of M (where k < dim M). Integrability means that every point of M belongs to a k-dimensional submanifold of M everywhere tangent to the planes of the field. These manifolds, called leaves, locally look like families of parallel k-planes; however, globally their behaviour is less decent. (Examples will be shown.) In the 70-s, characteristic classes of foliations were discovered; the simplest of them, the so called Godbillon-Vey class, is a 3-dimensional cohomology class of a manifold with a codimension 1 (dimension dim M -1) foliation. Although the definition of this class is extremely simple, its geometric meaning remains very much unclear. There are exciting unsolved problems which are very easy to formulate but, apparently, not so easy to solve. The talk will be elementary. The most advanced notion used will be that of a diffrential form. Room Reservation Information Room Number: MB106 Date: 02 / 19 / 2008 Time: 05:15pm - 06:15pm
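For reference (this note is added here and is not part of the announcement): if a codimension-1 foliation is defined by a nowhere-vanishing 1-form $\alpha$, the integrability condition $\alpha \wedge d\alpha = 0$ lets one write $d\alpha = \alpha \wedge \beta$ for some 1-form $\beta$, and the Godbillon-Vey class is the class of $\beta \wedge d\beta$ in $H^3(M;\mathbb{R})$; it does not depend on the choices of $\alpha$ and $\beta$.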
{"url":"http://www.math.psu.edu/calendars/meeting.php?id=1859","timestamp":"2014-04-19T00:08:37Z","content_type":null,"content_length":"3989","record_id":"<urn:uuid:32f29ca8-82a5-4a3c-8ff9-47920c6a4d90>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Integral March 19th 2010, 07:50 PM #1 Trig Integral $\int sin^2(2x)cos^3(2x) dx$ $\int sin^2(2x)cos^2(2x)cos(2x) dx$ $\int sin^2(2x)(1-sin^2(2x))cos(2x) dx$ let $u = sin(2x)$$du = 2cos(2x)dx \rightarrow \frac{1}{2}du = cos(2x)dx$ $\frac{1}{2}\int u^2(1-u^2)du = \frac{1}{2} \int u^2 - u^4 du$ $\frac{1}{2}\left[\frac{u^3}{3} - \frac{u^5}{5}\right] + C \rightarrow \frac{1}{2}\left[\frac{5u^3 - 3u^5}{15}\right] + C$ $=\frac{1}{6}sin^3(2x) - \frac{1}{10}sin^5(2x) + C$ I thought I did everything right but wolfram got a different answer than me. Is what I did here correct? Hello, VitaX! Your work is correct. What was wolfram's answer? Wolfram is telling me that both solutions are equal. You can probably get from one solution to the other using a lot of double angle formulas and the like. In the meantime, rest assured that your solution is correct. March 19th 2010, 08:00 PM #2 Super Member May 2006 Lexington, MA (USA) March 19th 2010, 08:06 PM #3 March 19th 2010, 09:02 PM #4 Senior Member Jan 2010
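The thread's answer can also be verified mechanically; here is a short check (my own addition, assuming SymPy is available):

import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(2*x)**2 * sp.cos(2*x)**3
proposed = sp.sin(2*x)**3/6 - sp.sin(2*x)**5/10   # the antiderivative found above

# Differentiating the proposed answer should give back the integrand.
print(sp.simplify(sp.diff(proposed, x) - integrand))   # prints 0

Since the difference simplifies to zero, the antiderivative (up to the constant C) is correct.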
{"url":"http://mathhelpforum.com/calculus/134654-trig-integral.html","timestamp":"2014-04-18T19:57:44Z","content_type":null,"content_length":"39496","record_id":"<urn:uuid:560c6198-0848-4346-a9dd-60fca03386c4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of application visualisation system System dynamics is an approach to understanding the behaviour of complex systems over time. It deals with internal feedback loops and time delays that affect the behaviour of the entire system. What makes using system dynamics different from other approaches to studying complex systems is the use of loops and stocks and flows . These elements help describe how even seemingly simple systems display baffling System dynamics is an aspect of systems theory as a method for understanding the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system — the many circular, interlocking, sometimes time-delayed relationships among its components — is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory social dynamics . It is also claimed that because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts. Topics in systems dynamics The elements of system dynamics diagrams are feedback, accumulation of flows into stocks and time delays. As an illustration of the use of system dynamics, imagine an organisation that plans to introduce an innovative new durable consumer product. The organisation needs to understand the possible market dynamics in order to design marketing and production plans. Causal loop diagrams A causal loop diagram is a visual representation of the feedback loops in a system. The causal loop diagram of the new product introduction may look as follows: There are two feedback loops in this diagram. The positive reinforcement (labeled R) loop on the right indicates that the more people have already adopted the new product, the stronger the word-of-mouth impact. There will be more references to the product, more demonstrations, and more reviews. This positive feedback should generate sales that continue to grow. The second feedback loop on the left is negative reinforcement (or "balancing" and hence labeled B). Clearly growth can not continue forever, because as more and more people adopt, there remain fewer and fewer potential adopters. Both feedback loops act simultaneously, but at different times they may have different strengths. Thus one would expect growing sales in the initial years, and then declining sales in the later Stock and flow diagrams The next step is to create what is termed a stock and flow diagram. A stock is the term for any entity that accumulates or depletes over time. A flow is the rate of change in a stock. In our example, there are two stocks: Potential adopters and Adopters. There is one flow: New adopters. For every new adopter, the stock of potential adopters declines by one, and the stock of adopters increases by one. The real power of system dynamics is utilised through simulation. Although it is possible to perform the modeling in a , there is a variety of software packages that have been optimised for this. The steps involved in a simulation are: • Define the problem boundary • Identify the most important stocks and flows that change these stock levels • Identify sources of information that impact the flows • Identify the main feedback loops • Draw a causal loop diagram that links the stocks, flows and sources of information • Write the equations that determine the flows • Estimate the parameters and initial conditions. 
These can be estimated using statistical methods, expert opinion, market research data or other relevant sources of information. • Simulate the model and analyse results The equations for the causal loop example are: $Adopters = int_\left\{0\right\} ^\left\{t\right\} mbox\left\{New adopters \right\},dt$$mbox\left\{Potential adopters\right\} = int_\left\{0\right\} ^\left\{t\right\} mbox\left\{-New adopters \right \},dt$$mbox\left\{New adopters\right\}=mbox\left\{Innovators\right\}+mbox\left\{Imitators\right\}$$mbox\left\{Innovators\right\}=p cdot mbox\left\{Potential adopters\right\}$$mbox\left\{Imitators\ right\}=q cdot mbox\left\{Adopters\right\} cdot mbox\left\{Probability that contact has not yet adopted\right\}$$mbox\left\{Probability that contact has not yet adopted\right\}=frac\left\{mbox\left\ {Potential adopters\right\}\right\}\left\{mbox\left\{Potential adopters \right\} + mbox\left\{ Adopters\right\}\right\}$ Simulation results The simulation results show that the behaviour of the system would be to have growth in adopters that follows a classical s-curve shape. The increase in adopters is very slow initially, then exponential growth for a period, followed ultimately by saturation. System dynamics has found application in a wide range of areas, for example systems, which usually interact strongly with each other. System dynamics have various "back of the envelope" management applications. They are a potent tool to: • Teach system thinking reflexes to persons being coached • Analyze and compare assumptions and mental models about the way things work • Gain qualitative insight into the workings of a system or the consequences of a decision • Recognize archetypes of dysfunctional systems in everyday practice Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies. System dynamics has been used to investigate resource dependencies, and resulting problems, in product development. . See also Related subjects Related fields Related scientists External links
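As a concrete illustration (my own sketch — the parameter values below are made-up assumptions, not taken from the entry), the adopter equations can be integrated numerically with a simple Euler loop in Python:

# Euler integration of the innovator/imitator adoption model described above.
p, q = 0.03, 0.4              # innovation and imitation coefficients (assumed)
potential, adopters = 10000.0, 0.0
dt, total_time = 0.1, 30.0

t = 0.0
while t < total_time:
    prob_not_adopted = potential / (potential + adopters)
    innovators = p * potential
    imitators = q * adopters * prob_not_adopted
    new_adopters = innovators + imitators
    potential -= new_adopters * dt
    adopters += new_adopters * dt
    t += dt

print(round(adopters))        # nearly the whole pool of 10000 has adopted

Plotting adopters against time reproduces the s-curve described above: slow growth at first, a steep middle phase as the word-of-mouth loop takes over, and saturation as the pool of potential adopters empties.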
{"url":"http://www.reference.com/browse/application+visualisation+system","timestamp":"2014-04-16T08:40:28Z","content_type":null,"content_length":"84313","record_id":"<urn:uuid:e3e7bf83-c1c4-4094-9c40-f6947639157d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] Surface Area - check my work? March 10th 2009, 07:25 PM #1 [SOLVED] Surface Area - check my work? Find the area of the surface obtained by rotating the following curve about the x-axis: $y = x^3$, $0 \leq x \leq 3$ Here is my work, which I'm pretty sure is correct, but wrong in Webassign: $2\pi \int\limits^{3}_{0} (x^3) \sqrt{1+9x^4} dx$ $\frac{\pi}{18} \int\limits^{729}_{1} \sqrt{u} du$ $= \frac{\pi}{18} \biggl[\frac{2}{3} u^{\frac{3}{2}} \biggr]^{729}_{1}$ $= \frac{\pi}{27} \biggl[729^{\frac{3}{2}} - 1 \biggr]$ Can someone please check my work and let me know where the error is? Thanks!! Molly Find the area of the surface obtained by rotating the following curve about the x-axis: $y = x^3$, $0 \leq x \leq 3$ Here is my work, which I'm pretty sure is correct, but wrong in Webassign: $2\pi \int\limits^{3}_{0} (x^3) \sqrt{1+9x^4} dx$ $\frac{\pi}{18} \int\limits^{7{\color{red}30}}_{1} \sqrt{u} du$ $= \frac{\pi}{18} \biggl[\frac{2}{3} u^{\frac{3}{2}} \biggr]^{7{\color{red}30}}_{1}$ $= \frac{\pi}{27} \biggl[7{\color{red}30}^{\frac{3}{2}} - 1 \biggr]$ Can someone please check my work and let me know where the error is? Thanks!! Molly Note my corrections in red. March 10th 2009, 07:32 PM #2
{"url":"http://mathhelpforum.com/calculus/78057-solved-surface-area-check-my-work.html","timestamp":"2014-04-16T15:04:41Z","content_type":null,"content_length":"37486","record_id":"<urn:uuid:7d66f0ec-6cf8-46b7-a241-7f567ee2ee37>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Alameda ACT Tutor Find an Alameda ACT Tutor ...I am also available to tutor adults who are preparing for the GRE, LSAT, or wish to learn a second language. I'm fluent in French and am currently tutoring a number of kids in French immersion and French American schools in a variety of subjects, including math. I also teach people how to excel at standardized tests. 48 Subjects: including ACT Math, reading, Spanish, English ...I have been teaching math and computer science as a tutor and lecturer in multiple countries at high schools, boarding schools, and universities. I am proficient in teaching math at any high school, college, or university level.Algebra 1 is the first math class that introduces equations with var... 41 Subjects: including ACT Math, calculus, geometry, statistics ...Previously, I graduated from Caltech with a BS in Electrical Engineering, a BS in Business Economics and Management, and an MS in Electrical Engineering with a 4.1/4.3 GPA. I love, love, love math, and am very enthusiastic about teaching others! I have been a Teaching Assistant for undergrad and graduate level probability and statistics courses both at Caltech and Cal. 27 Subjects: including ACT Math, chemistry, physics, calculus ...In fact, the lowest grade any of my regular private students has received was a B+. I have helped a lot of students on their math section of the SATs. Generally speaking, my students average an increase of around 100-150 points on their math section alone. Imagine going from 550, a slightly higher than average score, to about a 700, an amazing score. 27 Subjects: including ACT Math, chemistry, calculus, physics ...PreCalculus is a combination of reviewing Alg. II topics and introducing new ones. It is an important time to practice skills and develop depth in critical thinking! 13 Subjects: including ACT Math, calculus, statistics, geometry
{"url":"http://www.purplemath.com/Alameda_ACT_tutors.php","timestamp":"2014-04-20T21:42:33Z","content_type":null,"content_length":"23663","record_id":"<urn:uuid:d3ed8723-74c4-41b3-a176-e527dbc19641>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
Percolation theory is the study of the connectivity of networks. If you take a piece of paper and punch small holes in it at random positions, it will remain connected if the density of holes is small. If you punch so many holes that most of the paper has been punched away, the paper will fall apart into small clusters. There is a phase transition in percolation, where the paper first falls apart. Let p be the probability that a given spot in the paper has been punched away. There is a critical probability p[c] below which the paper is still connected from top to bottom, and above which the paper has fallen into small pieces (say, if it is being held along the top edge). Learning Goals Science: You will learn about percolation transitions on networks, and of universality of critical phenomena near the percolation critical point. Computation: You will first use breadth-first search techniques to compute the set of connected components (clusters) in an undirected graph. Then the scaling exercise introduces scaling and critical phenomena methods for studying the phase transtion for percolation. By now, you should have completed the Introduction to Networks exercise , and should have working code to create undirected graphs. If you've already done the Small world networks exercise (this is a prerequisite), then the concept of breadth-first searches should be clear to you. Creating bond percolation networks 1. Consult the material in Percolation Computation Exercise for an overview of the exercise and details on writing algorithms. 2. Download the file PercolationHints.py from the course webpage, and save it as "Percolation.py". It is easiest if you store this file in the same directory as the networks files you worked with earlier (Networks.py and NetGraphics.py), since those files will be needed for this exercise. [If you need those files, you can get them from the following links Answers file for network algorithms and Network graphics software -- be careful not to overwrite the files you previously made, it would be a shame to loose all that hard work.] 3. Build a bond percolation network on a 2D square lattice with periodic boundary conditions. Define a function MakeSquareBondPercolation that takes two arguments, the size of the lattice in each dimension (L), and the probability that any given bond between nearest neighbors exists (p). (As with ring graphs, the easiest way to implement periodic boundary conditions is to use the Python modular arithmetic operator %: see the python documentation.) This routine will create an instance of the UndirectedGraph class defined in the Introduction to Networks exercise. Finding connected components in networks 1. You will now write functions for finding clusters (connected components) in undirected graphs. This will involve writing and debugging the following routines: FindClusterFromNode and 2. Use the graphics software provided in NetGraphics.DrawSquareNetworkBonds to visualize the different clusters in the percolation network. Creating site percolation networks In bond percolation, bonds are created (or cut) at random. In site percolation, sites are occupied (or removed) at random, and bonds in the corresponding graph are inferred if two neighboring sites are occupied. The same class can be used in both cases, just with different algorithms for populating such graphs. 1. Build a site percolation network on a 2D triangular lattice with periodic boundary conditions. 
Define a function MakeTriangularSitePercolation that takes two arguments, the size of the lattice in each dimension (L), and the probability that any given site is occupied (p). 2. Test your cluster finding function on the site percolation lattice. (It should work with no modifications if you've written it correctly.) 3. Use the graphics software provided in NetGraphics.DrawTriangularNetworkSites to visualize the different clusters in the percolation network. Scaling analyses of percolation networks 1. Consult the material in Percolation Scaling Exercise (Exercise 12.12) for an overview of the critical phenomena of percolation, the use of scaling theory to understand such phenomena, and specifics of computations to be performed. 2. (Exercise 12.12(b)) For both bond percolation on the square lattice and site percolation on the triangular lattice, calculate the cluster size distribution n(S) for L=400 and p=p_c=0.5. Plot log (n(S)) versus log(S) for both, along with the theoretical result n(S) ~ S^(-187/91). Do the data show evidence of power-law cluster size distributions? 3. (Exercise 12.12(c)) For both bond percolation on the square lattice and site percolation on the triangular lattice, calculate the fraction of nodes that are part of the largest cluster, for L=400 and p=p_c+2^(-n) for n from roughly 0 to 9. Plot log(P(p)) versus p-p_c, and compare with the theoretical prediction P(p) ~ (p_c-p)^(5/36). 4. (Exercise 12.12(d,e,f,g)) Explore finite size scaling in bond and site percolation, using scaling collapses to relate simulation data for different system size L and bond/site fraction p. When you try to import a module in Python (e.g., via import somemodule ), the interpreter looks through a sequence of directories to find the specified module. This sequence (a Python list) is stored in the variable : you can see what is in the default search path by first importing the sys module ( import sys ) and then printing the path ( print sys.path ). You will see that an empty directory ( ) is in the path; this corresponds to your current directory. For this reason, it is often easiest to put a group of related files together in the same directory, such as is recommended here. You will see that there are also directories in . These indicate where various built-in and third-party packages are installed. You can extend the default search path in a couple of different ways. One is to define the shell variable PYTHONPATH. Any directories placed in that list will be appended (i.e., stuck at the end) of sys.path. In the bash shell, for example, one could type at the command line or place in the ~/.bashrc file, a command like: export PYTHONPATH=~/mypythonlibrary and anything Python files in that directory would be accessible for import. If you find yourself developing source files that you reuse for a number of different projects, creating a central repository like this might be useful. (Such a repository can be hierarchically structured as well.) The other way to augment the path is to add to the sys.path variable directly within your Python code: e.g., sys.path.append('~/ mypythonlibrary') will add that directory to your path, but only for the currently running session. Dynamic typing and node types in UndirectedGraphs When we defined our class, we did not specify what sorts of objects could be used as nodes in a graph. 
Because Python is dynamically typed, we do not need to declare object types when defining functions and methods; instead, the only requirement (based on what specific operations that we call on objects passed as nodes) is that nodes be storeable as keys in a dictionary. Most Python objects can be so stored, except for mutable types like lists. In the small worlds exercise, we used integers as node identifiers (keys). If we were playing the Kevin Bacon game connecting different actors to one another, we would use actors' names (character strings) as nodes (although, somewhat more efficiently, we might want to construct a bipartite graph connecting actors' names to the names of movies that they appear in). To study percolation on a lattice, it is most straightforward to use a tuple of lattice indices (i,j) as a node identifier. Because of dynamic typing, we can do this without rewriting our UndirectedGraph class. By contrast, in statically typed languages, we would need to define some sort of Node class or interface, from which other more specific node classes would be derived, in order to make our graph class broadly applicable to a variety of different types of nodes. While static typing does have the advantage of catching certain sorts of programming errors (e.g., if we pass an object that is not derived from type Node), in many cases the additional programming and design required to support flexible and expressive code is excessive. The multiplot software is available at: Scaling collapse software (MultiPlot) Conceptually, scaling collapses are extremely straightforward. In a family of x-y data sets, the x axis in each set is scaled by one formula, and the y-axis by another, where the formulas depend on parameters that distinguish each data set from one another. The finite-size scaling exercises in Percolation Scaling Exercise describe the types of scaling formulas appropriate for percolation. While conceptually simple, computing scaling collapses requires managing multiple data sets, their associated parameter values, and the functional form of the desired scaling collapse. To address this complexity, we have written the MultiPlot module to facilitate scaling collapses, although the collapses can certainly be done without using MultiPlot. The key to the MultiPlot module is to store families of data sets in a Python dictionary, where the keys to the dictionary are the parameter values for each data set, and the values are scipy arrays containing the raw, unscaled data. The x-axis data and y-axis data are stored separately in different dictionaries. By using Python's capabilities for dynamic code evaluation, we can encode (nearly) arbitrary functional forms for scaling collapses in Python strings, rather than having to hardcode specific functional forms. For example, let xdata and ydata describe data that were generated for different values of parameters A and B, say, for A = 100, 150 and 200, and B = 2,4, and 6. Then xdata would be a dictionary keyed on (A,B) tuples: xdata[(100,2)] = # some data xdata[(100,4)] = # more data xdata[(100,6)] = # more data xdata[(150,2)] = # etc. and ydata would contain the y-axis data from the same parameter values. In MultiPlot, we can specify the functional form of the collapse as a function of the parameters A and B.
{"url":"http://pages.physics.cornell.edu/~myers/teaching/ComputationalMethods/ComputerExercises/Percolation/Percolation.html","timestamp":"2014-04-21T09:35:57Z","content_type":null,"content_length":"17316","record_id":"<urn:uuid:fac8ac9d-99b5-47f4-9d47-7d8e9e2c3631>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
I think MathsIsFun summed up that solution perfectly: It's right, as far as I can tell, but it just annoys me when things can't be simplified and you have to leave them in a mess like that. The best alternative I could come up with is: h = (p-a)/2 - a²/2(p-a) But that's not really any simpler. Maybe a little bit.
{"url":"http://www.mathisfunforum.com/post.php?tid=2599&qid=25586","timestamp":"2014-04-17T12:38:39Z","content_type":null,"content_length":"20205","record_id":"<urn:uuid:d3b6c241-d8b4-416b-ad80-0f9b0d9462c0>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Edgewater, NJ Statistics Tutor Find an Edgewater, NJ Statistics Tutor ...As a full time Statistician, I continue to use SAS almost on a daily basis. I took Biostatistics in the graduate level during my Master's program and received an A. It dealt with applying statistical methods in biology and medicine. 18 Subjects: including statistics, calculus, algebra 1, algebra 2 ...Do you have a student who has difficulties with writing such as generating or getting ideas onto paper, organizing writing and grammatical problems? Do you have a student whose life you would like to enrich with piano lessons? If you answered yes to any of these questions, I can help. 30 Subjects: including statistics, English, piano, reading ...I have helped many math students raise their grades dramatically in short periods of time. I accomplish this by focusing on improving a student's problem solving ability, a skill that is not often taught well in school. I have worked with students with learning disabilities as well as gifted students taking advanced classes or classes beyond their grade level. 34 Subjects: including statistics, calculus, writing, GRE ...I have an advanced Regents diploma so I would be willing to help students with Regents exams. I have taken approximately two years of university-level math courses (one year of Calculus, one year of statistics), I am completing a minor in biology so I have two years+ of coursework in biology. I also have a concentration in chemistry. 23 Subjects: including statistics, chemistry, physics, writing ...I also have majored and I am currently minoring in mathematics and have taken several mathematics courses. All the mathematics material on the GED exam I have covered several times over in my coursework. I am majoring in accounting, economics and finance at Queens College Cuny, three majors that make up the business field. 9 Subjects: including statistics, writing, accounting, GED Related Edgewater, NJ Tutors Edgewater, NJ Accounting Tutors Edgewater, NJ ACT Tutors Edgewater, NJ Algebra Tutors Edgewater, NJ Algebra 2 Tutors Edgewater, NJ Calculus Tutors Edgewater, NJ Geometry Tutors Edgewater, NJ Math Tutors Edgewater, NJ Prealgebra Tutors Edgewater, NJ Precalculus Tutors Edgewater, NJ SAT Tutors Edgewater, NJ SAT Math Tutors Edgewater, NJ Science Tutors Edgewater, NJ Statistics Tutors Edgewater, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/Edgewater_NJ_Statistics_tutors.php","timestamp":"2014-04-17T15:53:18Z","content_type":null,"content_length":"24351","record_id":"<urn:uuid:59a26b54-d772-472c-8155-2113cfc634ee>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Side Lengths of a Scalene Triangle Date: 6/2/96 at 19:31:27 From: Anonymous Subject: Finding side lengths of a scalene triangle This following question was on a very good university entrance exam in Brazil, in 1993. It states that: "Two observers on points A and B of a national park see a beginning fire on point C. Knowing that the angles CAB=45 degrees, ABC=105 degrees and that the distance between points A and B is of 15 kilometers, determine the distances between B and C, and between A and Although there was no illustration in the original question, one is roughly drawn below: /^\<this angle measures 105 degrees 15 km> / \ / \ / \ / \ A /_ _ _ _ _ _ _ _ _\C this is 45 degrees this is, consequently, 30 degrees I tried to use the equation of areas: A= side*side*sin(of angle between the sides)/2 but I didn't get any results. Date: 6/3/96 at 10:33:54 From: Doctor Pete Subject: Re: Finding side lengths of a scalene triangle The fact that angle BAC is 45 degrees and ACB is 30 degrees was very suggestive to me, so I drew the perpendicular from point B to side AC, which meets at point D. Then BD = AD = AB/sqrt(2) = 15/sqrt(2), since triangle ABD is 45-45-90 and thus isosceles. Also, triangle BCD is 30-60-90, so BC = 2BD = 30/sqrt(2), and CD = sqrt(3)*BD = 15*sqrt(3/2). Therefore the lengths we wish to find BC = 15*sqrt(2), AC = AD+CD = 15/sqrt(2)+15*sqrt(3/2) = 15(1+sqrt(3))/sqrt(2). Alternatively, you could use the Law of Sines, which states sin(A) sin(B) sin(C) ------ = ------ = ------ , a b c where A, B, C are angles and a, b, c are the lengths of the sides they subtend (are opposite to). So side AB is "c" in the above equation. sin(A) and sin(C) are easy to find; they are 1/sqrt(2) and 1/2, sin(B) = sin(105) = sin(45+60) = sin(45)cos(60)+cos(45)sin(60) = (1/sqrt(2))*(sqrt(3)/2)+(1/sqrt(2))*(1/2) = (1+sqrt(3))/(2*sqrt(2)). So we have 1/sqrt(2) (1+sqrt(3))/(2*sqrt(2)) 1/2 --------- = ----------------------- = --- , a b 15 so b = 15(1+sqrt(3))/sqrt(2) = AC, and a = 15*sqrt(2) = BC, which agrees with our previous results. In general, area considerations are a poor way of obtaining relations between angles and sides, because they are often very complicated and often come in a form that requires knowing the lengths of more than one side. If you know a lot of angles, a better approach is to think of the Law of Sines or the Law of Cosines (c^2 = a^2+b^2-2*a*b*cos(C)). Notice that the values of the angles were special because they allowed the first solution I gave. In general, given a side and two angles, you must use the Law of Sines to find the other lengths. -Doctor Pete, The Math Forum Check out our web site! http://mathforum.org/dr.math/
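As a quick numerical cross-check of the answers above (added here for illustration; it was not part of the original exchange), the Law of Sines computation takes only a few lines of Python:

from math import sin, radians, sqrt

A, B, C = 45.0, 105.0, 30.0     # the three angles
c = 15.0                        # side AB, which is opposite angle C

BC = c * sin(radians(A)) / sin(radians(C))   # side opposite A
AC = c * sin(radians(B)) / sin(radians(C))   # side opposite B

print(BC, 15 * sqrt(2))                   # both print about 21.21 km
print(AC, 15 * (1 + sqrt(3)) / sqrt(2))   # both print about 28.98 km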
{"url":"http://mathforum.org/library/drmath/view/51719.html","timestamp":"2014-04-17T01:11:39Z","content_type":null,"content_length":"8299","record_id":"<urn:uuid:d753bdbf-d392-471c-9992-5e5adf217193>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
005b: Sums of n squares - A Collection of Algebraic Identities Return to Index IV. Some Identities of Squares 1. Euler-Aida Ammei Identity 2. Brahmagupta-Fibonacci Two-Square Identity 3. Euler Four-Square Identity 4. Degen-Graves-Cayley Eight-Squares Identity 5. V. Arnold’s Perfect Forms 6. Lagrange’s Identity 7. Difference of Two Squares Identity (Update, 10/26/09): The Lebesgue Polynomial Identity is given by, (a^2+b^2-c^2-d^2)^2 + (2ac+2bd)^2 + (2ad-2bc)^2 = (a^2+b^2+c^2+d^2)^2 We can generalize this to the form, x[1]^2+x[2]^2+… +x[m]^2 = (y[1]^2+y[2]^2+… +y[n]^2)^2 It can be proven there are polynomial identities with integer coefficients for all m = n (discussed in Sums of Three Squares). But what about when m < n? It turns out we can generalize the one by Lebesgue by looking at its underlying structure. Note that this can be expressed as, (p[1]^2+p[2]^2+q[1]^2+q[2]^2)^2 - (p[1]^2+p[2]^2-q[1]^2-q[2]^2)^2 = 4(p[1]^2+p[2]^2)(q[1]^2+q[2]^2) (eq.1) Let {a,b} = {p[1]^2+p[2]^2, q[1]^2+q[2]^2} and this reduces to the basic Difference of Two Squares Identity, (a+b)^2 - (a-b)^2 = 4ab a special case of Boutin's Theorem also given at the end of this section which generalizes it to a sum and difference of kth powers. Since (p[1]^2+p[2]^2)(q[1]^2+q[2]^2) = r[1]^2+r[2]^2 by the Bramagupta-Fibonacci Two-Square Identity, then the RHS of eq.1 can be expressed as the sum of two squares, which explains the Lebesgue Identity. But there are the higher Euler Four-Square Identity and Degen Eight-Square Identity. (See also the article, Pfister's 16-Square Identity.) Thus, let {a,b} = {p[1]^2+p[2]^2+p[3]^2+p[4]^2, q[1]^2+q[2]^2+q[3]^2+q[4]^2} and we get, (p[1]^2+p[2]^2+p[3]^2+p[4]^2+q[1]^2+q[2]^2+q[3]^2+q[4]^2)^2 - (p[1]^2+p[2]^2+p[3]^2+p[4]^2-q[1]^2-q[2]^2-q[3]^2-q[4]^2)^2 = 4(p[1]^2+p[2]^2+p[3]^2+p[4]^2)(q[1]^2+q[2]^2+q[3]^2+q[4]^2) Since the RHS can be expressed as four squares, this gives an identity for the square of 8 squares as the sum of five squares. Using the Degen Eight-Square, this gives one for the square of 16 squares as nine squares. In general, we can have a Theorem 3 to complement the first two in Sums of Three Squares. Theorem 3: Given x[1]^2+x[2]^2+… +x[m]^2 = (y[1]^2+y[2]^2+… +y[n]^2)^2, where m ≤ n, one can identically express the square of n squares as m squares for: I. All m = n. II. For m < n, with n even (v > 1): a) {m,n} = {2v±1, 2v} b) {m,n} = {4v-3, 4v} c) {m,n} = {8v-7, 8v} III. For m < n, with n odd (v > 1): a) For n = 4v-1, then m = 4v-3. b) For n = {8v-1, 8v-3, 8v-5}, then m = 8v-7. Thus, for example, (y[1]^2+y[2]^2+… +y[16]^2)^2 can be identically expressed as m squares for m = {1, 9, 13, 15, 16}. (Q. Is this the most number of m when m ≤ n?) However, when m < n, and after eliminating the possibility of (x[1]^2+x[2]^2+x[3]^2)^2 expressed as two non-zero squares, two cases are still unresolved: n = 5 and n = 8v-7. Whether or not there are identities for these remains to be seen. Proof: One simply uses the difference of two squares identity: (a+b)^2 + (a-b)^2 = 4ab (eq.1) Let a = (p[1]^2+p[2]^2+p[3]^2+p[4]^2+ ... +p[2u]^2). Disregarding the square numerical factor, the RHS of eq.1 becomes (by distributing in pairs) as, b(p[1]^2+p[2]^2) + b(p[3]^2+p[4]^2) + ... Let b = (q[1]^2+q[2]^2). Since by the Bramagupta-Fibonacci Identity (x[1]^2+x[2]^2)(y[1]^2+y[2]^2) = z[1]^2+z[2]^2, then the RHS can be expressed as 2u squares. Thus, {m,n} = {2u+1, 2u+2} or, equivalently, {2v-1, 2v}. 
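(A computational aside, not part of the original text: identities like these are easy to mistype, so it is worth checking them symbolically. The short sympy script below confirms the Lebesgue identity and eq.1 quoted above; the proof of Theorem 3 resumes in the next paragraph.)

from sympy import symbols, expand

a, b, c, d, p1, p2, q1, q2 = symbols('a b c d p1 p2 q1 q2')

# Lebesgue identity
lhs = (a**2 + b**2 - c**2 - d**2)**2 + (2*a*c + 2*b*d)**2 + (2*a*d - 2*b*c)**2
assert expand(lhs - (a**2 + b**2 + c**2 + d**2)**2) == 0

# eq.1 with P = p1^2 + p2^2 and Q = q1^2 + q2^2
P, Q = p1**2 + p2**2, q1**2 + q2**2
assert expand((P + Q)**2 - (P - Q)**2 - 4*P*Q) == 0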
However, if the identity is not used on one pair which is instead expressed as four squares, then the RHS has 2u+2 squares. Thus {m,n} = {2u+3, 2u+2} or, {2v+1, 2v}, proving part (Ia) of Theorem 3. Similarly, let a = (p[1]^2+p[2]^2+p[3]^2+p[4]^2+ ... +p[4u]^2). Disregarding again the square numerical factor, the RHS (by distributing fourwise) is, b(p[1]^2+p[2]^2+p[3]^2+p[4]^2) + b(p[5]^2+p[6]^2+p[7]^2+p[8]^2) + ... Let b = (q[1]^2+q[2]^2+q[3]^2+q[4]^2). Since by the Euler Four-Square Identity (x[1]^2+x[2]^2+x[3]^2+x[4]^2)(y[1]^2+y[2]^2+y[3]^2+y[4]^2) = z[1]^2+z[2]^2+z[3]^2+z[4]^2, then the RHS can be expressed as 4u squares. Thus, {m,n} = {4u+1, 4u+4}, or {4v-3, 4v}, proving part (Ib). The last part is proven using the last such identity (by the Hurwitz Theorem) namely, the Degen Eight-Square Identity. Finally, part II of the Theorem can be proven by simply setting the appropriate number of variables y[i] as equal to zero. For example, we can reduce the {5,8} Identity to a {5,7} by letting one of the eight squares as zero to get, (a^2+b^2+c^2+d^2-e^2-f^2-g^2)^2 + 4(ae-bf-cg)^2 + 4(af+be-dg)^2 + 4(ag+ce+df)^2 + 4(bg-cf+de)^2 = (a^2+b^2+c^2+d^2+e^2+f^2+g^2)^2 and so on. (End update.) 1. Euler-Aida Ammei Identity Theorem: “The square of the sum of n squares is itself a sum of n squares.” (x[1]^2-x[2]^2-…-x[n])^2 + å(2x[1]x[n])^2 = (x[1]^2+x[2]^2+…+x[n]^2)^2 (a^2-b^2)^2 + (2ab)^2 = (a^2+b^2)^2 (a^2-b^2-c^2)^2 + (2ab)^2 + (2ac)^2 = (a^2+b^2+c^2)^2 (a^2-b^2-c^2-d^2)^2 + (2ab)^2 + (2ac)^2 + (2ad)^2 = (a^2+b^2+c^2+d^2)^2 and so on. Note that these can be alternatively expressed as the basic identity, (x[1]^2-x[0])^2 + (2x[1])^2x[0]^ = (x[1]^2+x[0])^2 for arbitrary x[1] and x[0], where for the theorem it was set x[0] = x[2]^2+x[3]^2+…+x[n]^2. By letting x[0] = x[2]^2, one can see this basic identity is essentially the formula for Pythagorean triples. A stronger result by M. Moureaux is that, “The kth power of the sum of n squares, for k a power of 2, is itself a sum of n squares.” After some time with Mathematica, I observed that there seems to be this beautifully consistent pattern in the identities based on a certain algebraic form. Let the summation å be from m = {2 to n}, then, (a)^2 + å(2x[1]x[n])^2 = (x[1]^2+x[2]^2+…+x[n]^2)^2 (b)^2 + å(4ax[1]x[n])^2 = (x[1]^2+x[2]^2+…x[n]^2)^4, (c)^2 + å(8abx[1]x[n])^2 = (x[1]^2+x[2]^2+…x[n]^2)^8, (d)^2 + å(16abcx[1]x[n])^2 = (x[1]^2+x[2]^2+…x[n]^2)^16, (e)^2 + å(32abcdx[1]x[n])^2 = (x[1]^2+x[2]^2+…x[n]^2)^32, and so on, where a = -(x[1]^2-x[2]^2-…-x[n]) and b = (4x[1]^4+4ax[1]^2-a^2), c = (4a^4+4a^2b-b^2), d = (4b^4+4b^2c-c^2), e = (4c^4+4c^2d-d^2), etc. It seems there is this “recurrence relation” involving the algebraic form 4u^4+4u^2v-v^2. I have no proof this in fact goes on, though it would be odd if the pattern stops. (And the algebraic form factors over √2, which is only appropriate since we are dealing with powers of two.) To generalize the Euler-Aida Ammei identity: For what k is the kth power of n diagonal quadratic forms identically a sum of like form? Or, c[1]w[1]^2 + c[2]w[2]^2 + …+ c[n]w[n]^2 = (c[1]x[1]^2 + c[2]x[2]^2 + …+ c[n]x[n]^2)^k It turns out for the monic case, or when c[1] = 1, the answer is for all positive integer k. In a previous section, it was proven that, “the kth power of the sum of n squares (x[1]^2+x[2]^2+… x[n]^ 2)^k is itself the sum of n squares”. It takes only a very small modification of the proof to generalize this. Theorem 1. 
“The expression (c[1]x[1]^2 + c[2]x[2]^2 + …+ c[n]x[n]^2)^k, for c[1 ]= 1, is identically a sum of n squares of like form for all positive integer n and k.” Proof: (Piezas) To recall, let the expansion of the complex number (a±bi)^k be, U+Vi = (a+bi)^k; U-Vi = (a-bi)^k where U,V are expressions in the arbitrary a,b. Their product, or norm, is, U^2+V^2 = (a^2+b^2)^k Since b can be factored out in V, or V = V[1]b, if we let {a,b} = {p[1],√p[0]}, then, U^2 + p[0]V[1]^2 = (p[1]^2+p[0])^k All these are familiar. The slightly different step is that since p[0] is arbitrary, one can choose it to be the sum of squares of form p[0] = c[2]p[2]^2+c[3]p[3]^2+…+c[n]p[n]^2 and distributing terms, we get, U^2 + c[2]p[2]^2V[1]^2 + … + c[n]p[n]^2V[1]^2 = (p[1]^2 + c[2]p[2]^2 +…+ c[n]p[n]^2)^k thus proving the kth power of the right hand side of the eqn is a sum of squares of like form. (End proof.) For k = 2 and all c[i] = 1, this is of course the Euler-Aida Ammei identity and the inspiration for the proof. For k = 3, this is, (p[1]^3-3p[0]p[1])^2 + p[0](3p[1]^2-p[0])^2 = (p[1]^2+p[0])^3 for p[0] = c[2]p[2]^2 +…+ c[n]p[n]^2 and so on. For the more general non-monic case (c[1]y[1]^2 + …+ c[n]y[n]^2)^k, some particular identities are also known for the case k=3, J. Neuberg ax[1]^2+bx[2]^2+cx[3]^2 = (ap^2+bq^2+cr^2)^3 {x[1],[ ]x[2], x[3]} = {p(y-2z), q(y-2z), ry}, if y = 4(ap^2+bq^2)-z, z = ap^2+bq^2+cr^2 G. de Longchamps ax[1]^2+bx[2]^2+cx[3]^2+dx[4]^2 = (ap^2+bq^2+cr^2+ds^2)^3 {x[1],[ ]x[2], x[3], x[4]} = {p(y-2z), q(y-2z), ry, sy}, if y = 4(ap^2+bq^2)-z, z = ap^2+bq^2+cr^2+ds^2 This author observed that this can be generalized as, ax[1]^2+bx[2]^2+cx[3]^2+dx[4]^2+ex[5]^2 = (ap^2+bq^2+cr^2+ds^2+et^2)^3 {x[1],[ ]x[2], x[3], x[4], x[5]} = {p(y-2z), q(y-2z), ry, sy, ty}, if y = 4(ap^2+bq^2)-z, z = ap^2+bq^2+cr^2+ds^2+et^2 and so on for n variables by simply modifying z. A more systematic approach is given below. Theorem 2. “The non-monic expression (c[1] x[1]^2 + c[2]x[2]^2 + …+ c[n]x[n]^2)^k is identically a sum of n squares of like form for all positive integer n and odd k.” Proof: The proof is a variation of the one above. One simply solves the equation, U^2+c[1]V^2 = (x^2+c[1]y^2)^k, by equating its linear factors, U+V√-c[1] = (x+y√-c[1])^k, U-V√-c[1] = (x-y√-c[1])^k, and easily solving for U,V as expressions in x,y. Since, for odd k, x can be factored out in U, or U = U[1]x, if we let {x,y} = {√p[0], p[1]} then, p[0]U[1]^2 + c[1]V^2 = (p[0]+c[1]p[1]^2)^k, where p[0] can then be set as the sum of squares p[0] = c[2]p[2]^2 +…+ c[n]p[n]^2, giving, c[1]V^2 + (c[2]p[2]^2 +… + c[n]p[n]^2)U[1]^2 = (c[1]p[1]^2 + c[2]p[2]^2 +…+ c[n]p[n]^2)^k proving that, for odd k, the right hand side is identically the sum of squares of like form. (End of proof.) For k = 3, this is, c[1](c[1]p[1]^3-3p[0]p[1])^2 + p[0](3c[1]p[1]^2-p[0])^2 = (c[1]p[1]^2+p[0])^3 for p[0] = c[2]p[2]^2 +…+ c[n]p[n]^2, and so on for all odd k. 2. Brahmagupta-Fibonacci Two-Square Identity (ac+bd)^2 + (ad-bc)^2 = (a^2+b^2)(c^2+d^2) This can be generalized as, (ac+nbd)^2 + n(ad-bc)^2 = (a^2+nb^2)(c^2+nd^2) From the Two-Square we can derive the Euler-Lebesgue Three-Square, (a^2+b^2-c^2-d^2)^2 + (2ac+2bd)^2 + (2ad-2bc)^2 = (a^2+b^2+c^2+d^2)^2 This can be generalized by the Fauquembergue n-Squares Identity. It is a bit difficult to convey with limited notation but in one form can be seen as, (a^2+b^2-c^2-d^2+x)^2 + (2ac+2bd)^2 + (2ad-2bc)^2 + 4x(c^2+d^2) = (a^2+b^2+c^2+d^2+x)^2 where x is arbitrary and can be chosen as any sum of n squares. 
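(Another computational aside, not in the original: the k = 3 identities of Theorems 1 and 2 and Neuberg's cubic identity can be checked mechanically with sympy, as below. The specializations of the Fauquembergue form for particular choices of x continue after this.)

from sympy import symbols, expand

p0, p1, c1, a, b, c, p, q, r = symbols('p0 p1 c1 a b c p q r')

# Theorem 1, k = 3 (monic case)
assert expand((p1**3 - 3*p0*p1)**2 + p0*(3*p1**2 - p0)**2 - (p1**2 + p0)**3) == 0

# Theorem 2, k = 3 (non-monic case)
assert expand(c1*(c1*p1**3 - 3*p0*p1)**2 + p0*(3*c1*p1**2 - p0)**2 - (c1*p1**2 + p0)**3) == 0

# Neuberg's cubic identity
z = a*p**2 + b*q**2 + c*r**2
y = 4*(a*p**2 + b*q**2) - z
x1, x2, x3 = p*(y - 2*z), q*(y - 2*z), r*y
assert expand(a*x1**2 + b*x2**2 + c*x3**2 - z**3) == 0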
Note that for x = 0 this reduces to the Euler-Lebesgue. For the case x as a single square this gives, after minor changes in (a^2+b^2+c^2-d^2-e^2)^2 + (2ad+2ce)^2 + (2ae-2cd)^2 + (2bd)^2 + (2be)^2 = (a^2+b^2+c^2+d^2+e^2)^2 distinct from the Euler-Aida Ammei identity for n = 5 which is given by, (a^2-b^2-c^2-d^2-e^2)^2 + (2ab)^2 + (2ac)^2 + (2ad)^2 + (2ae)^2 = (a^2+b^2+c^2+d^2+e^2)^2 For x = e^2+f^2, it results in seven squares whose sum is the square of six squares: (a^2+b^2+c^2+d^2-e^2-f^2)^2 + (2ae+2df)^2 + (2af-2de)^2 + (2be)^2 + (2bf)^2 + (2ce)^2 + (2cf)^2 = (a^2+b^2+c^2+d^2+e^2+f^2)^2 and so on for other x. 3. Euler Four-Square Identity (a^2+b^2+c^2+d^2) (e^2+f^2+g^2+h^2) = u[1]^2 + u[2]^2 + u[3]^2 + u[4]^2 u[1] = ae-bf-cg-dh u[2] = af+be+ch-dg u[3] = ag-bh+ce+df u[4] = ah+bg-cf+de Note that a cubic version, in fact, is possible, (x[1]^3+x[2]^3+x[3]^3+x[4]^3) (y[1]^3+y[2]^3+y[3]^3+y[4]^3) = z[1]^3+ z[2]^3+z[3]^3+ z[4]^3, to be discussed later. Also, by Lagrange's Identity discussed below, the product can be expressed as the sum of seven squares, (a^2+b^2+c^2+d^2) (e^2+f^2+g^2+h^2) = (ae+bf+cg+dh)^2 + (af-be)^2 + (ag-ce)^2 + (ah-de)^2 + (bg-cf)^2 + (bh-df)^2 + (ch-dg)^2 A more general version for squares was also given by Lagrange as, (a^2+mb^2+nc^2+mnd^2) (p^2+mq^2+nr^2+mns^2) = x[1]^2+mx[2]^2+nx[3]^2+mnx[4]^2 x[1] = ap-mbq-ncr+mnds, x[2] = aq+bp-ncs-ndr, x[3] = ar+mbs+cp+mdq, x[4] = as-br+cq-dp In analogy to the Three-Square, we can also find a Five-Square (by yours truly), (a^2+b^2+c^2+d^2-e^2-f^2-g^2-h^2)^2 + (2u[1])^2 + (2u[2])^2 + (2u[3])^2 + (2u[4])^2 = (a^2+b^2+c^2+d^2+e^2+f^2+g^2+h^2)^2 with the u[i] as defined above. Hence this is another case of a square of n squares expressed in less than n squares. And, in analogy to Fauquembergue’s n squares, another kind of n squares identity can be derived from the Five-Square as, (a^2+b^2+c^2+d^2-e^2-f^2-g^2-h^2+x)^2 + (2u[1])^2 + (2u[2])^2 + (2u[3])^2 + (2u[4])^2 + 4x(e^2+f^2+g^2+h^2) = (a^2+b^2+c^2+d^2+e^2+f^2+g^2+h^2+x)^2 where x again can be any number of squares. For x a square, this can give a 9-square identity. 4. Degen-Graves-Cayley Eight-Squares Identity (DGC) (a^2+b^2+c^2+d^2+e^2+f^2+g^2+h^2) (m^2+n^2+o^2+p^2+q^2+r^2+s^2+t^2) = v[1]^2+v[2]^2+v[3]^2+v[4]^2+v[5]^2+v[6]^2+v[7]^2+v[8]^2 v[1] = am-bn-co-dp-eq-fr-gs-ht v[2] = bm+an+do-cp+fq-er-hs+gt v[3] = cm-dn+ao+bp+gq+hr-es-ft v[4] = dm+cn-bo+ap+hq-gr+fs-et v[5] = em-fn-go-hp+aq+br+cs+dt v[6] = fm+en-ho+gp-bq+ar-ds+ct v[7] = gm+hn+eo-fp-cq+dr+as-bt v[8] = hm-gn+fo+ep-dq-cr+bs+at For convenience, let {a, b,…t} = {a[1], a[2],…a[16]}. This can also give a Nine-Square Identity (distinct from the one in the previous section) as, (a[1]^2+…+ a[8]^2- a[9]^2-…- a[16]^2)^2 + (2v[1])^2 + …+ (2v[8])^2 = (a[1]^2+ a[2]^2 +… + a[16]^2)^2 and the DGC n-Squares identity, (a[1]^2+…+ a[8]^2- a[9]^2-…- a[16]^2 + x)^2 + (2v[1])^2 + …+ (2v[8])^2 + 4x(a[9]^2 +… + a[16]^2) = (a[1]^2+ a[2]^2 +… + a[16]^2 + x)^2 Since the DGC is the last bilinear n-squares identity, these two should also be the last of their kind. (Update, 10/26/09): Just like the Two-Square and Four-Square, the Eight-Square Identity can be generalized. 
For arbitrary {u, v}, (a^2+ ub^2+c^2+ud^2+ve^2+uvf^2+vg^2+uvh^2) (m^2+un^2+o^2+up^2+vq^2+uvr^2+vs^2+uvt^2) = x[1]^2+ux[2]^2+x[3]^2+ux[4]^2+vx[5]^2+uvx[6]^2+vx[7]^2+uvx[8]^2 x[1] = am-bnu-co-dpu-eqv-fruv-gsv-htuv x[2] = bm+an+do-cp+fqv-erv-hsv+gtv x[3] = cm-dnu+ao+bpu+gqv+hruv-esv-ftuv x[4] = dm+cn-bo+ap+hqv-grv+fsv-etv x[5] = em-fnu-go-hpu+aq+bru+cs+dtu x[6] = fm+en-ho+gp-bq+ar-ds+ct x[7] = gm+hnu+eo-fpu-cq+dru+as-btu x[8] = hm-gn+fo+ep-dq-cr+bs+at (End update) 5. V. Arnold’s Perfect Forms Let {a,b,c} be in the integers. Given the equation (au^2+buv+cv^2)(ax^2+bxy+cy^2) = az[1]^2+bz[1]z[2]+cz[2]^2. If for any integral integral {u,v,x,y} one can always find integral {z[1], z[2]}, then the binary quadratic form F(a,b,c) is defined as a perfect form. Theorem 1: “The product of three binary quadratic forms F(a,b,c) is of like form.” Proof: (an[1]^2+bn[1]n[2]+cn[2]^2)(au^2+buv+cv^2)(ax^2+bxy+cy^2) = az[1]^2+bz[1]z[2]+cz[2]^2 z[1] = u(n[3]x+cn[2]y)+cv(n[2]x-n[1]y) z[2] = v(an[1]x+n[4]y)-au(n[2]x-n[1]y) and {n[3], n[4 ]} = {an[1]+bn[2], bn[1]+cn[2]} from which immediately follows, Corollary:“If there are integers {n[1], n[2]} such that F(a,b,c) = 1, then F(a,b,c) is a perfect form.” Note that if F(a,b,c) is monic, the soln {x,y} = {1,0} immediately implies this form is perfect. But by dividing z[1], z[2] with c, a, respectively, and modifying the expressions for {n[3], n[4]} will result in a second theorem, Theorem 2: “If there are integers {n[1], n[2], n[3], n[4]} such that an[1]^2+bn[1]n[2]+cn[2]^2 = ac, n[3] = (an[1]+bn[2])/c, n[4] = (bn[1]+cn[2])/a, then F(a,b,c) is a perfect form.” Proof: (an[1]^2+bn[1]n[2]+cn[2]^2)(au^2+buv+cv^2)(ax^2+bxy+cy^2) = (az[1]^2+bz[1]z[2]+cz[2]^2)(ac) z[1] = v(n[1]x+n[4]y)-u(n[2]x-n[1]y) z[2] = u(n[3]x+n[2]y)+v(n[2]x-n[1]y) and {n[3], n[4 ]} = {(an[1]+bn[2])/c, (bn[1]+cn[2])/a}. The expressions are essentially the same as in Theorem 1 but have been divided by c,a. This second class is relevant to quadratic discriminants d with class number h(d) = 3m. For imaginary fields with h(-d) = 3, there are sixteen fundamental d, all of which have its associated F(a,b,c) as perfect forms. For brevity, only the first three will be given and in the format {a,b,c}, {n[1], n[2], n [3], n[4]}: d = 23; {2,1,3}, {1,1,1,2} d = 31; {2,1,4}, {2,0,1,1} d = 59; {3,1,5}, {-2,1,-1,1} For real fields with h(d) = 3, there are forty-two d in the Online Encyclopedia of Integer Sequences, all of which also have F(a,b,c) as perfect. However, while most of the n[i] for negative d with h(d) = 3,6 were only single digits, for positive d these can get quite large. For ex, d = 2857; {2,51,-32}, {3326866, -127404, -4879, 86873547} Q: Any other theorems regarding the product of two or three binary quadratic forms? We can generalize this somewhat and go to diagonal n-nary quadratic forms (one without cross terms), F(a[1],a[2],…a[n]):= a[1]x[1]^2 + a[2]x[2]^2 +…+ a[n]x[n]^2 If we consider the equation, (a[1]x[1]^2 + a[2]x[2]^2 +…+ a[n]x[n]^2)(a[1]y[1]^2 + a[2]y[2]^2 +…+ a[n]y[n]^2) = a[1]z[1]^2 + a[2]z[2]^2 +…+ a[n]z[n]^2 then for what constants {a[1], a[2],…a[n]} is there such that the product of two diagonal n-nary quadratic forms is of like form? Most of the results have been limited to the special case of all a[i ]= 1 and n = 2,4,8, namely the Brahmagupta-Fibonacci, Euler, and Degen-Graves identities discussed above. The first can be generalized to the form {1,p}, the second to {1, p, q, pq} by Lagrange, and the third to {1,1, p,p, q,q, pq, pq}. 
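(A brute-force illustration, added here and not a proof: for the d = 23 form F(2,1,3) listed above one can check numerically that products of represented values are again represented, which is exactly what being a perfect form demands.)

from math import isqrt, sqrt

def f(x, y):
    return 2*x*x + x*y + 3*y*y            # the form 2x^2 + xy + 3y^2

small = sorted({f(x, y) for x in range(-4, 5) for y in range(-4, 5)})
max_prod = small[-1] ** 2

# f(x,y) >= lam_min*(x^2 + y^2), where lam_min is the smallest eigenvalue of the
# Gram matrix [[2, 1/2], [1/2, 3]], so any N <= max_prod that is represented at
# all is represented with |x|, |y| <= sqrt(N/lam_min).
lam_min = (5 - sqrt(2)) / 2
B = isqrt(int(max_prod / lam_min)) + 1
represented = {f(x, y) for x in range(-B, B + 1) for y in range(-B, B + 1)}

assert all(m * n in represented for m in small for n in small)
print("every product of two represented values is again represented")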
Ramanujan in turn generalized Lagrange's Four-Square Theorem and found 54 {a,b,c,d} such that ax[1]^2+bx[2]^2+cx[3]^2+dx[4]^2 can represent all positive integers, namely,
{1,1,1,v}; v = 1-7
{1,1,2,v}; v = 2-14
{1,1,3,v}; v = 3-6
{1,2,2,v}; v = 2-7
{1,2,3,v}; v = 3-10
{1,2,4,v}; v = 4-14
{1,2,5,v}; v = 6-10
which is the complete list. (Note: Incidentally, it would have been expected that the last would be for v = 5-10. What is the smallest positive integer not expressible by the form {1,2,5,5}?) All 54 are then perfect quaternary forms since, needless to say, the product of two positive integers is always a positive integer. For the first case {1,1,1,1} this is just Euler's four-square identity and the z[i] have a bilinear expression in terms of the x[i] and y[i]. It might be interesting to know if the other 53 {a,b,c,d} have similar formulas for their z[i]. In general, for a[i] not all equal to unity, what other results are there for the product of two n-nary quadratic forms, especially for n not a power of two?
Update, 9/21/09: Turns out the form {1,2,5,5} cannot express the number 15. See "The 15 and 290 Theorems" by Conway, Schneeberger, and Bhargava.
6. Lagrange's Identity
A faintly similar identity to the sum-product of n squares given previously is,
(x[1]y[1] + … + x[n]y[n])^2 + Σ (x[k]y[j] - x[j]y[k])^2 = (x[1]^2 + … + x[n]^2)(y[1]^2 + … + y[n]^2)
for 1 ≤ k < j ≤ n. For n=3, this has 4 addends,
(x[1]y[1]+x[2]y[2]+x[3]y[3])^2 + (x[1]y[2] - x[2]y[1])^2 + (x[1]y[3] - x[3]y[1])^2 + (x[2]y[3] - x[3]y[2])^2 = (x[1]^2+x[2]^2+x[3]^2)(y[1]^2+y[2]^2+y[3]^2)
while n=4 already involves 7 addends, and is an alternative way to express Euler's Four-Square Identity. A general identity was found by A. Cauchy and J. Young. The case n=3 was also rediscovered by T. Weddle while studying the semi-axes of an ellipsoid.
7. Difference of Two Squares Identity
A difference-product identity on the other hand is given by,
(x[1]^2 + … + x[n]^2 + (y[1]^2 + … + y[n]^2))^2 - (x[1]^2 + … + x[n]^2 - (y[1]^2 + … + y[n]^2))^2 = 4(x[1]^2 + … + x[n]^2)(y[1]^2 + … + y[n]^2)
Proof: Let a = x[1]^2 + … + x[n]^2, b = y[1]^2 + … + y[n]^2, then this is just the basic identity, (a+b)^2 - (a-b)^2 = 4ab. (End proof.)
If the right hand side of the identity is non-trivially expressible as the sum of n squares, as is the case for n = 2,4,8, this automatically implies a square of 2n squares expressible as the sum of n+1 squares, thus explaining the Three, Five, Nine-Square Identities above. (The case n = 1 just gives the formula for Pythagorean triples.)
Let,
(a^2+b^2+c^2+d^2+e^2+f^2)^2 - (a^2+b^2+c^2-d^2-e^2-f^2)^2 = 4(a^2+b^2+c^2)(d^2+e^2+f^2)
(a^2+b^2+c^2+d^2+e^2+f^2+g^2+h^2)^2 - (a^2+b^2+c^2+d^2-e^2-f^2-g^2-h^2)^2 = 4(a^2+b^2+c^2+d^2)(e^2+f^2+g^2+h^2)
and together with Lagrange's Identity for n = 3,4 applied on the RHS respectively, this can be used to prove that the square of 6 squares can be expressed as the sum of five squares,
(a^2+b^2+c^2-d^2-e^2-f^2)^2 + 4(ad+be+cf)^2 + 4(ae-bd)^2 + 4(af-cd)^2 + 4(bf-ce)^2 = (a^2+b^2+c^2+d^2+e^2+f^2)^2
and provide an alternative soln to expressing the square of 8 squares as a sum of 8 squares,
(a^2+b^2+c^2+d^2-e^2-f^2-g^2-h^2)^2 + 4((ae+bf+cg+dh)^2 + (af-be)^2 + (ag-ce)^2 + (ah-de)^2 + (bg-cf)^2 + (bh-df)^2 + (ch-dg)^2) = (a^2+b^2+c^2+d^2+e^2+f^2+g^2+h^2)^2
since the Euler-Aida Ammei gives it as,
(a^2-b^2-c^2-d^2-e^2-f^2-g^2-h^2)^2 + 4a^2(b^2+c^2+d^2+e^2+f^2+g^2+h^2) = (a^2+b^2+c^2+d^2+e^2+f^2+g^2+h^2)^2
Note 1: By not using the Euler-Aida Ammei identity, are there always alternative solns to the square of n squares as the sum of n squares?
Note 2: For what n is the square of n squares identically the sum of less than n squares? (It can be for n = 4, 8, 16 (as 3, 5, 9 squares, respectively). In fact, by setting appropriate variables equal to zero, it is the case for all n ≤ 16 other than n = 3, 5, 9.) (See update at start of this section.)
A. Boutin
Boutin's Identity: Σ ± (x[1] ± x[2] ± … ± x[k])^k = k! 2^(k-1) x[1]x[2]…x[k]
where the exterior sign is the product of the interior signs. (Or, the term is negative if there is an odd number of negative interior signs.) The case k=2 gives the well-known,
(a+b)^2 - (a-b)^2 = 4ab
while for k = 3,4,
(a+b+c)^3 - (a-b+c)^3 - (a+b-c)^3 + (a-b-c)^3 = 24abc
(a+b+c+d)^4 - (a-b+c+d)^4 - (a+b-c+d)^4 - (a+b+c-d)^4 + (a-b-c+d)^4 + (a-b+c-d)^4 + (a+b-c-d)^4 - (a-b-c-d)^4 = 192abcd
and so on for other kth powers. The case k=3 then implies that,
(x[1]^3+x[2]^3+…+x[n]^3) (y[1]^3+y[2]^3+…+y[n]^3) = z[1]^3+ z[2]^3+ z[3]^3+ z[4]^3
or, "The product of two sums of n cubes is the sum of four cubes." Proof: Simply let a = x[1]^3+x[2]^3+…+x[n]^3, b = y[1]^3+y[2]^3+…+y[n]^3, and c = 9 in Boutin's Identity for k = 3.
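(Two final computational checks, added for illustration and not part of the original page: Boutin's identity for k = 3 and k = 4, and the smallest positive integer missed by the quaternary form {1,2,5,5} mentioned in the update above.)

from itertools import product
from math import factorial, isqrt
from sympy import symbols, expand, Mul

def boutin(k):
    a = symbols(f'a1:{k + 1}')                  # a1, ..., ak
    total = 0
    for signs in product((1, -1), repeat=k - 1):
        sign = 1
        for s in signs:
            sign *= s                           # exterior sign = product of interior signs
        term = a[0] + sum(s * x for s, x in zip(signs, a[1:]))
        total += sign * term**k
    return expand(total - factorial(k) * 2**(k - 1) * Mul(*a))

assert boutin(3) == 0 and boutin(4) == 0        # the 24abc and 192abcd cases above

def represented(n):                              # by w^2 + 2x^2 + 5y^2 + 5z^2
    r = range(isqrt(n) + 1)
    return any(w*w + 2*x*x + 5*y*y + 5*z*z == n
               for w in r for x in r for y in r for z in r)

print(next(n for n in range(1, 100) if not represented(n)))   # prints 15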
{"url":"https://sites.google.com/site/tpiezas/005b/","timestamp":"2014-04-17T22:12:28Z","content_type":null,"content_length":"122283","record_id":"<urn:uuid:63e996ca-d33e-4861-98fa-b9c816ca6261>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Flower Mound Prealgebra Tutor Find a Flower Mound Prealgebra Tutor ...In my career I have used and taught many mathematical concepts. I have gone to the military's ASVAB web site and have no problems with any of the AFQT subjects tested, including word knowledge and paragraph comprehension. So I will be very happy to assist with your preparation for the exam. 15 Subjects: including prealgebra, chemistry, calculus, ASVAB ...I have learned to study in a certain way, to not procrastinate (this one keeps me from getting overwhelmed, I am able to study better and produce much better results!), and I have also learned to ask for the help I need in order to be successful! While, being a student with LD's and ADHD makes f... 17 Subjects: including prealgebra, chemistry, geometry, biology ...Whether you are in elementary, secondary or college, my knowledge and skill set will be a valuable asset in preparing you for a successful and productive academic career! Outside of classes I took, I have a lot of experience with genetics in the practical setting of the laboratory. My research at SMU focused on genetic pathways. 30 Subjects: including prealgebra, reading, chemistry, English ...I find that this makes geology more meaningful to many students, and any good teacher will tell you that we love teaching material that students find personally meaningful. In order to solve problems of probability, students need to feel comfortable manipulating fractions and setting up sample s... 15 Subjects: including prealgebra, chemistry, geometry, algebra 1 ...As a high school and college instuctor, I have had dozens of students develop and deliver oral presentations to their classmates. I have guided many of these students through this process in private and in small groups. I had difficulties with presenting in front of others in years past, and understand the fears that many people have of speaking in front of others. 25 Subjects: including prealgebra, English, reading, writing
{"url":"http://www.purplemath.com/Flower_Mound_prealgebra_tutors.php","timestamp":"2014-04-18T14:06:31Z","content_type":null,"content_length":"24359","record_id":"<urn:uuid:b30b6938-51fc-43e4-8643-7d72247aceaf>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
(c) 2006 Wolfgang Schlage 1. Symmetrical Encryption Methods Public-Key Cryptography deals with "asymmetrical" encryption methods. To contrast, let's first look at symmetrical methods: Most people have a vague idea how symmetrical encryption works: A message, for example an email, is made unreadable with the help of a key, e.g., a number unknown to outsiders. Only those who know the key are able to make the email readable again. Thus, it is important to make the key only available to those who should have access to the e-mail. Passwords and “your personal secret code” work with methods of this type. As encryption and decryption work with the same key, we call these methods symmetrical. An important problem of symmetrical encryption methods is: If one cannot send the message to the recipient without fearing it may be intercepted (that’s the reason why we encrypt it in the first place), how do we get the key undetected to the recipient? Send another email with the key? (This is called the “key exchange” problem.) Of course: One can meet ahead of time and agree on a key for every day of the coming year. Or one can send the keys for the next 100 emails in a letter. But these are complicated ways, and the administration and secret storage of all these keys is then another 2. Asymmetrical or Public-Key Encryption Methods In the 1970s, mathematicians discovered the possibility of asymmetrical cryptography: 1. Encryption and decryption are done with the help of a key pair, not a single key. 2. One key of the key pair encrypts the message, which can only be decrypted by the other key of the key pair, and by no other key. (As we use different keys for encryption and decryption, we call this method asymmetrical.) 3. That also works in reverse: What the second key has encrypted can only be decrypted by the first. Interestingly enough, one cannot decrypt the message with the key that encrypted it, one needs the other key. 4. One cannot compute one key from the knowledge of the other. (For all practical purposes, that is true. Theoretically it is possible; and cryptographers will tell you things such as: Provided one could use all the computing power of the whole planet, one would need the time equivalent to the age of the universe (or so) to compute the other key, given the mathematical algorithms known today. But even if they were mistaken, and it took only half the age of the universe or a hundredth of that time--for all practical purposes there is no way to get the other key from the knowledge of the first.) This opens up the following possibility: I generate such a key pair for myself. I send one of the keys (which will be my "public" key from now on) to everyone who wants to send me encrypted messages, e.g., secure emails. Whoever wants to send me an email uses this key for the encryption. I can send this key through an open email or on postcards. I can also deposit this key at a publicly accessible place, e.g. on a "key server" on the Internet, with the instruction: "Hi, this is my public key. Whoever wants to send me a secure email, please use this key." But I keep the other key of the key pair strictly to myself, STRICTLY TO MYSELF. This is my "private" or "secret" key. When someone wants to send me a secure email, he (she) uses my public key for encryption and sends it to me; as only the owner of the secret key is able to decrypt it, only I can read it; it does not matter that my public key is public knowledge. 
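The arithmetic behind this can be seen in a toy example. The following deliberately tiny RSA key pair (a few lines of Python with numbers far too small to be secure; real keys run to 1024 bits and more, and real systems add padding) illustrates the principle that what one key of the pair encrypts, only the other key can undo. The discussion of why this solves the key-exchange problem continues below.

p, q = 61, 53                 # two small primes (toy sizes only!)
n = p * q                     # 3233, part of both keys
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent, 2753 (needs Python 3.8+)

message = 65                  # any integer smaller than n
ciphertext = pow(message, e, n)            # encrypt with the public key
assert pow(ciphertext, d, n) == message    # only the private key recovers it

signature = pow(message, d, n)             # "encrypt" with the private key instead...
assert pow(signature, e, n) == message     # ...and anyone can check it with the public key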
People who have never seen me or met me or whom I do not know can send me such an email, because a secret key exchange is not necessary. They just have to get my public key somehow. (Caution: Not even the sender of such an email is able to decrypt it; the sender may want to make a copy for himself *before* the encryption takes place.) This way, the problem of the key exchange is solved. 3. Cryptographic Signatures Asymmetric cryptography also opens the possibility of signing emails and other messages cryptographically or, as it is sometimes called, "electronically" or “digitally”: Usually, when we get an email, we trust that the person named as the sender actually sent the email. But any hacker worth his money is able to forge the sender's email address. Asymmetric encryption enables us to do the following: I can send my email twice: 1. In plain text. 2. Encrypted, and this time encrypted with my "secret" (!) key. (Just as a reminder: The encrypted form of this email can only be decrypted with my public key.) To check if this email is from me, people have to: 1. Take my public key and decrypt the encrypted email with it. 2. If the decrypted email is the same as the plain text email, both messages have to be from me, because only the owner of my secret key (and that is I) has the means to encrypt something that can be read with my public key. If it had been encrypted by another key, a "decryption" with my public key would only result in gibberish. Thus, the sender of the email is authenticated. Thus, a hacker can send an email to my bank, saying, "Please transfer $100,000.00" to John Smith (John Smith being the hacker, of course). But (unless the hacker has been able to hack my secret key beforehand), the hacker would not be able to encrypt this email in a way that, decrypted with my public key, it says the same. 4. Encrypting and Signing One can combine encrypting and signing: 1. One writes an email. 2. One encrypts this email with one's private key. 3. One takes (a) and (b), combines them in one email, and encrypts the resulting package with the public key of the recipient. 4. This is sent to the recipient. 5. The recipient decrypts the email with his/her secret key and gets (a) and (b) 6. The recipient uses the public key of the sender, decrypts (b) and checks if (a) and (b) are the same. If yes, 7. Bingo! 5. Details, Details If you want to know what you are doing while using public-key cryptography, you do not have to know more than discussed in 1. through 4. The computer reliably does the rest. However, some details are worth mentioning: Asymmetrical Methods There are a number of asymmetrical or public-key encryption methods (which are based on mathematically closely related principles): • RSA (named after Rivest, Shamir, Adleman, the inventors - or should one say discoverers?) • DH (Diffie-Hellman), • ElGamal • there are some more. Symmetrical Methods Symmetrical encryption methods use the same key for encryption and decryption. The advantage compared to asymmetrical methods is that they encrypt and decrypt considerably faster. The following are well known symmetrical encryption methods: • DES (meanwhile, 2005, outdated and insecure: It takes 3 hours to crack a DES message) • RC4 (with a 40-bit key: insecure. There is a screensaver on the Internet that you can freely download that cracks other people's RC4 messages for you while you are on your coffee break. 
It takes a total computing time of about a week or so to crack this key) • Triple-DES • AES (another name for this method is "Rijndael") with different key lengths, e.g., AES-128, AES-192, AES-256 • Blowfish • Twofish • CAST5 • IDEA (there is a patent on IDEA, which is the reason why it is not used everywhere) The methods from Triple-DES onward are considered safe at this time. (With accelerating computing speeds of new computers, previously safe methods are becoming unsafe over time.) Mixing Symetrical and Asymmetrical Methods in Practice In practical public-key cryptography, as for example in PGP, symmetrical and asymmetrical methods are used in combination in order to use the advantages of each method, namely, the faster encryption/ decryption times of symmetrical methods and the capability of signing and ease of key exchange of the asymmetric methods. Both methods are combined like this: 1. Alice wants to send a secure email to Bob. Alice chooses a symmetrical encryption method, e.g, AES with a key length of 128 bits. 2. Alice's Computer has a random number generator that generates the AES key of 128 bits. 3. Alice encrypts her email using the AES encryption algorithm with the key generated by her computer. She uses this key only this one time. 4. Alice encrypts the AES key (and the information that she used AES) with Bob's asymmetrical public key (e.g., with an RSA key with a length of 1024 bits). 5. Alice puts the AES-encrypted original email and the asymmetrically encrypted AES key in a data packet, the encrypted email, and sends it to Bob. 6. Bob uses his asymmetrical secret key to get the 128-bit AES key that Alice had used for her original email. 7. Bob uses the 128-bit AES key to decrypt Alice's original email. Key Lengths The longer a key, the safer it is, i.e., the more computing power and computing time one needs to crack the encryption. Symmetrical and asymmetrical keys have very different key lengths to provide the same amount of security. For example, a symmetrical key of 128 bits corresponds roughly to an asymmetrical key of 2304 bits key length. Thus, if someone claims that a key length of 256 is safe (or unsafe), one has to clarify which method is spoken about. Hash functions condense a message of any length into a unique definite single number of fixed length. (Sometimes this resulting hash number is called a "message digest.") The idea is that knowing the hash number does not give anyone any hint regarding the original message and that a slightly modified message, e.g., by adding just a blank space anywhere in the text, will result in a totally different hash number. Whoever has the hash number has no way (within a reasonable amount of time) to find a reasonable message that corresponds to this number, but whoever has the original message can compute the corresponding hash value easily. If two people want to know if two messages are the same, they can, instead of comparing the messages themselves, calculate the hash numbers of these messages: if the hash numbers are the same, it is very, very, very, ... (you can a lot more "verys" here) unlikely that the messages are different. In practical applications, the above-mentioned cryptographic signatures use such a hash number. 
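As an aside (not from the original article), the "message digest" idea is easy to see with Python's standard hashlib module: the digest has a fixed length, and it changes completely when the message changes even slightly. A list of the well-known hash algorithms follows below.

import hashlib

m1 = b"Please transfer $100.00 to John Smith"
m2 = b"Please transfer $100.00 to John Smith "   # one extra trailing space

d1 = hashlib.sha256(m1).hexdigest()
d2 = hashlib.sha256(m2).hexdigest()

print(d1)          # always 64 hex characters for SHA-256
print(d2)          # a completely different-looking digest
print(d1 == d2)    # False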
Well-Known Hash-Algorithms are: • MD (stands for Message Digest) in the variants MD2, MD4, MD5 (all these are meanwhile not considered a hundred per cent safe any more) • SHA (Secure Hash Algorithm), in the versions SHA, SHA-1, SHA-2 256, SHA-2 384, SHA-2 512 (SHA and SHA-1 are also not considered 100% safe any more) • RIPEMD-160 (developed by the European Union). Two Standards of Asymmetrical Cryptography: PGP and X.509 As far as I know, there are currently two camps which, although they use the same encryption methods, use different implementations of them: the PGP standard und the X.509 certificate standard. 1. PGP PGP stands for "Pretty Good Privacy". The developers of PGP had a sense of mission and made the PGP programs publicly accessible from the very beginning (also outside the U.S.), as they felt that everyone had the right to privacy. Non-commercial users could even download the PGP programs for free and only commercial users had to license them. One of the key figures was Phil Zimmermann, who developed this standard in the 1990s. He got into a long and expensive legal struggle with the US federal government. The US government was of the opinion that encryption was of high military importance, and that someone who distributes encryption methods, especially outside the U.S., is legally equivalent to an arms dealer. The U.S. government finally dropped the charges against Phil Zimmermann, but only after he had incurred high costs for his legal defense. He became a hero of the privacy movement. From the very beginning, PGP developers published the source code of the core encryption software and have not claimed a copyright on it. (However, specific applications that use this basic encryption software are copyrighted.) This has led to a "PGP community", and other programs were developed that use the same encryption standard and that, at least theoretically, can all exchange secret emails with each other (There can be incompatibilities with certain methods due to minor variations and between different versions of the same program). Other programs using the PGP standard are, among others, OpenPGP and GPG (also called GnuPG). The commercial version of PGP has changed hands a number of times and is owned today by the PGP Corporation. Their newest version is PGP 9.5 (as of Spring 2007). Private users can use a slightly limited version for free (download it from www.PGP.com); those who want full functionality or are commercial users have to buy the program. GPG and OpenPGP are available in public license, i.e., one can use these programs commercially or non-commercially without charge. 2. X.509 (Certificates) X.509 is a standard of the computer industry and is used for Internet browser encryption and Internet browser security. A "X.509 Certificate" is a public key tied to an identity of a person, corporation, website, or email address. The certificate authorities issue these certificates and guarantee that the certificate's owner (to be more exact: the owner of the corresponding private key) is really the one the certificate says he/she is. Internet browsers (Microsoft's Internet Explorer, Netscape, and all the others), some email-programs, and some other programs can work with these certificates. The certificates use the same encryption methods as PGP, but the data formats are different. Programs that work with PGP keys cannot (i.e., only with a special "translation") work with X.509 certificates and vice versa. 
PGP 9.0, however, is able to use X.509 certificates; this is one of its new features. PGP and X.509 standards are in competition in the field of emails and messaging; it is not clear at the moment which one (or none or both?) will prevail in the end. 7. InstantCrypt InstantCrypt uses GPG as its encryption engine, which uses the PGP standard. Thus, InstandCrypt should be capable of exchanging mails with the other PGP programs (e.g., any application using GPG, OpenPGP, PGP 9.0 and PGP's previous versions), but this has not been extensively tested. 8. Literature Bruce Schneier: Applied Cryptography. Protocols, Algorithm, and Source Code in C (2nd ed). New York: Wiley, 1996 (about US-$ 60) is book for people interested in reading more about the logical and mathematical foundations of modern cryptology. It is, in my opinion, a wonderful book and not too difficult to read for the mathematically interested.
{"url":"http://instantcrypt.com/how_public_key_encryption_works-introduction.php","timestamp":"2014-04-21T12:08:09Z","content_type":null,"content_length":"20252","record_id":"<urn:uuid:9c2a4fc8-4308-4200-937a-1a4de8b743ca>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Clay Mathematics Institute The original Clay Mathematics Institute icon took the form of the sculpture "Figureight Knot Complement vii/ CMI" by sculptor Helaman Ferguson. The CMI commissioned the sculpture in early 1999, and the sculptor himself unveiled the granite masterpiece on May 10, 1999. Smaller polished bronze versions have been presented in recognition of the annual Clay Research Awards. The bronze replicas were crafted using the lost wax process for molds made from the original. The master sculpture also served as a model for the larger sculpture carved in Inner Mongolian Black Granite and located at the CMI office in Oxford. The figureight knot is presented as an esker curve winding with no self-intersections on a double torus. The complement of the figureight knot has the structure of a double quotient group, one of the groups being discrete. The mathematical object which the sculpture represents is the orbifold X given as a quotient of three-dimensional hyperbolic space by a discrete group action, as described by the equations in the image, which are inscribed on the larger granite sculpture. The current logo of the CMI echoes the esker curve. Note by Marc Lackenby The first line says that the figure-eight knot complement (i.e. S^3 \ 4[1]) is a quotient of hyperbolic 3-space by a discrete group Γ of isometries. Viewing them as 2x2 matrices, an explicit generating set is given in the bottom line. Lines 2-5 give a formula for the volume of this hyperbolic 3-manifold. Lines 3-5 say that where v[i] has the formula in line 2. This formula arises as follows. The figure-eight knot complement is obtained from two regular ideal tetrahedra. So, we just need a formula for the volume of such an ideal tetrahedron. It is where θ[1], θ[2] and θ[3] are the dihedral angles of the tetrahedron and Λ is the "Lobachevsky function". In this case θ[1] = θ[2] = θ[3 ]= π/3. The Lobachevsky function is defined as an infinite sum, but in this case, because we're dealing with θ[i] = π/3, you get a periodicity and you get just the five terms v[1 ],..., v[5]. Milnor wrote a short chapter in Thurston's notes about this, which can be found here
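For readers without the inscribed equations: the standard formula (as in the Milnor chapter cited above) gives the volume of an ideal hyperbolic tetrahedron with dihedral angles θ[1], θ[2], θ[3] as Λ(θ[1]) + Λ(θ[2]) + Λ(θ[3]), where Λ(θ) = -∫₀^θ log|2 sin t| dt, with the Fourier series Λ(θ) = (1/2) Σ_{n≥1} sin(2nθ)/n². Since the figure-eight knot complement is two regular ideal tetrahedra with all angles π/3, its volume is 2·3Λ(π/3). A few lines of Python (an illustrative check, not part of the original note) evaluate this numerically:

from math import pi, sin

def lobachevsky(theta, terms=200000):
    # Fourier series for the Lobachevsky function
    return 0.5 * sum(sin(2 * n * theta) / n**2 for n in range(1, terms + 1))

volume = 2 * 3 * lobachevsky(pi / 3)   # two regular ideal tetrahedra
print(volume)                          # about 2.02988, the volume of the figure-eight knot complement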
{"url":"http://www.claymath.org/about-cmi/icon","timestamp":"2014-04-19T04:26:31Z","content_type":null,"content_length":"22458","record_id":"<urn:uuid:21d34a35-0880-4e9e-9ffe-81eea9359273>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Eisenstein's irreducibility criterion
This result is very useful for producing examples of irreducible polynomials.
Theorem Let f = a[n]x^n + a[n-1]x^(n-1) + ... + a[0] be a nonconstant polynomial with integer coefficients and let p be a prime number. Suppose that
• p does not divide a[n]
• p|a[n-1],...,a[0]
• p^2 doesn't divide a[0].
Then f is an irreducible polynomial in Q[x].
Examples
• x^6 - 30x^5 + 6x^4 - 18x^3 + 12x^2 - 6x + 12 is irreducible in Q[x] by Eisenstein with p=3 (note that we can't use p=2 :)
• x^n - 2 is irreducible by Eisenstein with p=2.
• Consider f(x) = x^3 - 3x - 1. We can't apply Eisenstein directly, but consider f(x+1). (Obviously if f(x+1) is irreducible then so is f(x).) We have f(x+1) = (x+1)^3 - 3(x+1) - 1 = x^3 + 3x^2 - 3. By Eisenstein (p=3) we deduce that f(x) is irreducible.
In fact Eisenstein's criterion is a special case of a more general result.
Theorem Let R be a unique factorization domain with field of fractions K. Let f = a[n]x^n + a[n-1]x^(n-1) + ... + a[0] be a nonconstant polynomial in R[x]. Let p be a prime in R. Suppose that
• p does not divide a[n]
• p|a[n-1],...,a[0]
• p^2 doesn't divide a[0].
Then f is an irreducible polynomial in K[x].
Proof of Eisenstein's criterion: Firstly, we can assume that f is primitive, for if we write f = ch, with c the content and h primitive, then since p doesn't divide a[n] it doesn't divide c. It follows quickly that h also satisfies the conditions of the criterion. Finally, if h is irreducible then so is f. By Gauss's Lemma, if f fails to be irreducible in K[x] then it has a factorization f = f[1]f[2] in R[x] so that f[1], f[2] both have degree < deg f. Let's say that f[1] = c[0] + ... + c[r]x^r and f[2] = d[0] + ... + d[s]x^s. Now a[0] = c[0]d[0] and a[0] is divisible by p but not by p^2. Thus one of c[0] and d[0] is divisible by p and the other is not. WLOG p|c[0] but p doesn't divide d[0]. Now p does not divide a[n] = c[r]d[s], so it doesn't divide c[r]. Let k be the smallest integer such that p does not divide c[k]. Thus, k>0 and k<=r. Now a[k] = c[0]d[k] + ... + c[k]d[0]. We know that p|a[k] and p|c[0],...,c[k-1], so it follows that p|c[k]d[0]. But p doesn't divide either of the two terms in this product, and so this contradicts the primeness of p, completing the proof.
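(An illustrative helper, not part of the original write-up: the criterion is mechanical enough to check by machine. Coefficients are listed from a[0] up to a[n].)

def eisenstein(coeffs, p):
    a0, an = coeffs[0], coeffs[-1]
    return (an % p != 0
            and all(c % p == 0 for c in coeffs[:-1])
            and a0 % (p * p) != 0)

# x^6 - 30x^5 + 6x^4 - 18x^3 + 12x^2 - 6x + 12: works with p=3, fails with p=2 (since 4 divides 12)
f = [12, -6, 12, -18, 6, -30, 1]
print(eisenstein(f, 3), eisenstein(f, 2))          # True False

# x^3 - 3x - 1 fails directly, but the shifted polynomial x^3 + 3x^2 - 3 succeeds with p=3
print(eisenstein([-1, -3, 0, 1], 3), eisenstein([-3, 0, 3, 1], 3))   # False True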
{"url":"http://everything2.com/title/Eisenstein%2527s+irreducibility+criterion","timestamp":"2014-04-18T01:16:39Z","content_type":null,"content_length":"28946","record_id":"<urn:uuid:483695a8-1707-42a9-9d61-b4f228b4a90b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
An introduction to the decibel. While modern test instruments can compute decibels for us, we should know what a decibel is and how to manually calculate gain or loss.We all have a direct, personal interest in the decibel (dB) since our ears respond to all sound in a logarithmic fashion; thus our ears' response can be described using decibels, since they also are logarithmically based units (using a base of 10). But, this While modern test instruments can compute decibels for us, we should know what a decibel is and how to manually calculate gain or loss. We all have a direct, personal interest in the decibel (dB) since our ears respond to all sound in a logarithmic fashion; thus our ears' response can be described using decibels, since they also are logarithmically based units (using a base of 10). But, this isn't the only reason to be knowledgeable about decibels. The decibel is also used in electrical measurements throughout our industry in innumerable ways. For example, the ability of an isolation transformer or a power-line filter to reduce (attenuate) electrical noise over some range of frequencies is but one example of this, since the performance is generally described using decibels. Another example is how much of a signal is lost over a transport path such as a coaxial cable or similar metallic path. Also, the gain of an amplifier is generally expressed in decibels. Even though we now enjoy having dB computed for us by sophisticated test instruments such as solid-state oscilloscopes, we do need to know what the dB is and how to work manually with the dB. This will prevent us from being snowed by what our instrument displays when we push the button. The basics The dB is typically expressed in relation to the electrical unit it's to be used with. For example, dBV and dBmV are used for decibels expressed in terms of voltage or millivolts; dBA and dBmA are used for decibels expressed in terms of amperes and milliamperes; and dBW is used for decibels expressed in terms off watts. Oddly enough, dB expressed in terms of mW is simply abbreviated as dBm, and the "W" is just left off. (It's very important to express dB in this manner if confusion is to be avoided.) Prefixing dB with a minus sign (-) means a loss, and either no sign or a positive one (+) means a gain. Logarithmic approach A lot of signal processes are nonlinear and actually are best described as being logarithmic in nature. Hence, if you try to use simple ratios to describe them, the results get either unrealistically spread-out at one end and are bunched-up at the other end of a chart or graph. This makes the information very hard to use. An example of this problem can be cited using the ear again. When a linear resistance potentiometer is used as an audio volume control, all of the control is confined to the last few degrees of shaft rotation and the majority of the rotation doesn't appear to have much effect at all. However, when a logarithmically tapered control is used, the adjustment of volume is nearly uniform throughout the full rotational range of the potentiometer. Another example is graphing a signal on linear marked graph paper. Because the signal is logarithmic in nature, its graph here either will be confusing due to the curvature of the plot or unusable due to the contraction and expansion of the data plot (similar to that encountered with the volume control and sound). Using logarithmic marked graph paper allows the "curved" plot to be generated using straight or almost straight lines. 
This makes for easier interpretation and general usage. An example of this is the charging or discharging of a capacitor over time. General equations A description of the general equation for determining the dB of gain (+dB) or loss (-dB) for any set of two voltages on a path of equal impedance is shown below. dB =20 log ([E.sub.1]/[E.sub.2]) (equation 1) Solving this form of equation is not difficult with a calculator that has a "log" key on it. All you have to do is first compute the ratio of the input to output voltage ([E.sub.1]/[E.sub.2]), press the "log" key, and then multiply the whole thing by 20. This gives the result in terms of -dB or +dB depending upon whether signal was lost or added to in the circuit. You can substitute current (I) for voltage using this equation if you desire. Power calculations are a different matter. They're calculated in much the same manner as above, except the equation uses a factor of 10 as opposed to 20. Shown below is the basic power equation for computing dB in a circuit of the same impedance. dB = 10 log ([P.sub.1]/[P.sub.2]) (equation 2) The values of dB (calculated using equations 1 and 2) based on varying ratios of power and voltage/current are shown in the accompanying table. You might want to commit a few of the really important relationships in dB to memory since you may want to quickly estimate something. For example, the following common values should be remembered. * Voltage or current: Doubling or halving of voltage (or current) is a [+ or -]6 dB change. A numerical ratio of 10 is 20 dB, 100 is 40 dB, 1000 is 60 dB, and so on. * Power: Doubling or halving of power is a [+ or -]3 dB change. A numerical ratio of 2 is 3 dB, 4 is 6 dB, 10 is 10 dB, 100 is 20 dB, and so on. The nice thing about dB notation is that gains and losses in any given circuit are simply added and subtracted arithmetically to find the final value of gain or loss. Then, with a little algebra or with the anti-log key (10x) on your calculator, you determine the voltage, current, or power ratio from the resulting answer. Example 1 Let's see how the gains and losses of a given signal transport (losses) and amplifier (gain) system work together to produce a given output from a specified input. Suppose you know the [+ or -]dB at any point in a system. You then can compute the voltage or current ratio. This is a simple algebra problem that's easy to do with a calculator. Let's say that the above "system" has 12 dB of gain. How do we compute the output voltage (or current) based upon knowing the input and the dB of gain? The answer lies in the following equation. ([E.sub.1]/[E.sub.2]) = [10.sup.(dB/20)] (equation 3) This is the basic equation for computing the voltage ratio in a circuit of the same impedance when the [+ or -]dB is known. To use it, we first divide the known dB value by 20. The result is the power we wish to raise 10 to. Using our trusty calculator, we push the Xy key. The resulting value is the ratio between the input voltages. Getting back to our example, we have 12 dB of gain after the signal is "processed and transported" from the source to the load. Using equation 3 and the procedure previously described, we find that this 12dB works out to a ratio of 3.98:1. This value times 10 = 39.8V output. The ratio for power is computed in the same manner as in equation 3 except that 10 is used in place of 20 in the exponent's divisor. 
This is shown in the following basic equation for computing the power ratio in a circuit of the same impedance when the ±dB is known.

P1/P2 = 10^(dB/10) (equation 4)

Warren H. Lewis is President of Lewis Consulting Services, Inc., San Juan Capistrano, Calif. and Honorary Chairman of EC&M's Harmonics and Power Quality Steering Committee.
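The four equations above are easy to script. A minimal Python sketch (the function names and the example values are illustrative, not from the article):

import math

def db_from_voltage_ratio(v_out, v_in):
    """Equation 1: dB = 20 log10(E1/E2); positive = gain, negative = loss."""
    return 20.0 * math.log10(v_out / v_in)

def db_from_power_ratio(p_out, p_in):
    """Equation 2: dB = 10 log10(P1/P2)."""
    return 10.0 * math.log10(p_out / p_in)

def voltage_ratio_from_db(db):
    """Equation 3: E1/E2 = 10^(dB/20)."""
    return 10.0 ** (db / 20.0)

def power_ratio_from_db(db):
    """Equation 4: P1/P2 = 10^(dB/10)."""
    return 10.0 ** (db / 10.0)

# Example 1 from the text: 12 dB of gain applied to a 10 V input.
ratio = voltage_ratio_from_db(12.0)             # about 3.98
print(round(ratio, 2), round(10.0 * ratio, 1))  # 3.98  39.8

Because the dB values of successive stages simply add, a chain of gains and losses can be totalled with a plain sum before converting back to a ratio with the last two functions.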
{"url":"http://ecmweb.com/content/introduction-decibel","timestamp":"2014-04-16T09:48:52Z","content_type":null,"content_length":"127193","record_id":"<urn:uuid:51d33f1f-c99d-43f1-ac73-960a9c244bd9>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
This problem invites pupils to develop a systematic way of working and offers the opportunity for discussion in pairs, small groups and the whole class. For some learners, having a 'real' context might provide motivation to solve the problem.
Possible approach
Of course, you may wish to introduce this problem in the context of the scores of a local event, rather than the Olympics. This may help many pupils engage in solving the problem. Whatever the context, at first you could invite pupils to guess what the half-time score of the match might have been. Take a few suggestions and then ask them to try to find all the possibilities. Give time for pupils to work in pairs on the task and look out for those children who are beginning to work in a systematic way. After a suitable length of time, draw the whole group together and ask them how they are making sure they don't miss out any possibilities. You may wish to ask certain pairs to share their ways of working with the whole group. It might be handy to suggest that each different possibility is written on a separate strip of paper, as this might aid later discussions. Give them longer to work on the problem, then bring everyone together once more to discuss findings. You could ask each pair how many different possible scores they think there are - they are unlikely to agree! This is where having the scores written on strips is useful, as you can stick them on the board, or ask members of the class to hold them, then invite everyone to sort them or re-order them. In this way, a system is imposed on the scores and any missing ones can be identified quickly. You can then challenge pairs to find the possibilities for the $3 - 3$ match, using a similar system. The experience of working on the $4 - 2$ result all together should give them more confidence to tackle the second match in their pairs.
Key questions
How do you know that may have been a half time score?
How can you be sure that you have found ALL the possible half time scores?
Suppose the final score was a draw, what then?
Possible extension
You could ask "If there are $24$ possible different half time scores, what could the final score have been?".
Possible support
Some pupils may prefer to start with games where there are fewer goals, for example $0 - 1$, $1 - 0$, $1 - 1$ etc so that there are fewer possible half time scores.
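For the teacher's own checking, the counting can be verified with a few lines of Python (this assumes the $4 - 2$ final score discussed in the notes; for a final score of a-b there are (a+1)(b+1) possibilities):

def half_time_scores(final_home, final_away):
    """Every (home, away) score that could have stood at half time."""
    return [(h, a) for h in range(final_home + 1) for a in range(final_away + 1)]

scores = half_time_scores(4, 2)
print(len(scores))   # 15 possibilities for a 4-2 final score
print(scores)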
{"url":"http://nrich.maths.org/7408/note?nomenu=1","timestamp":"2014-04-17T18:26:30Z","content_type":null,"content_length":"6602","record_id":"<urn:uuid:e7bca566-449f-45a1-b97a-c731900507cb>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
glDrawPixels with dynamic "array" [Archive] - OpenGL Discussion and Help Forums
12-06-2003, 05:16 PM
I was hoping someone could help me understand (or at least agree with me) why I am having problems using a dynamic array with glDrawPixels. I have no problem with glDrawPixels when declaring a static 3D array with predefined dimensions. Is the reason I sometimes get a null reference due to the fact that my dynamic "array" isn't necessarily contiguous, whereas the static predefined one is? Here is my code...

class FrameBuffer
{
    GLfloat*** theBuffer;   // pointer-to-pointer-to-pointer, not a single block
    int length, width, depth;
public:
    FrameBuffer(int x, int y, int colors);
};

FrameBuffer::FrameBuffer(int x, int y, int colors)
{
    width = x;
    length = y;
    depth = colors;
    // Each 'new' below is a separate heap allocation, so the pixel data ends
    // up scattered across memory rather than stored as one contiguous block.
    theBuffer = new GLfloat**[width];
    for (int i = 0; i < width; i++)
    {
        theBuffer[i] = new GLfloat*[length];
        for (int j = 0; j < length; j++)
        {
            theBuffer[i][j] = new GLfloat[depth];
            for (int k = 0; k < depth; k++)
                theBuffer[i][j][k] = 0.0f;
        }
    }
}
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-128776.html","timestamp":"2014-04-21T07:23:40Z","content_type":null,"content_length":"4829","record_id":"<urn:uuid:6f4e8aa5-901b-4238-94fa-35c2ddd60340>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Regression analysis for interval-valued data,” in Data Analysis, Classification, and Related , 2007 "... This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute variou ..." Cited by 20 (14 self) Add to MetaCart This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties. - Journal of Symbolic Data Analysis , 2003 "... ..." - Statistica Applicata [Italian Journal of Applied Statistics , 2005 "... Real world data analysis is often affected by different type of errors as: measurement errors, computation errors, imprecision related to the method adopted for estimating the data (parameters). The uncertainty in the data, which is strictly connected to the above errors, may be treated by consideri ..." Cited by 3 (0 self) Add to MetaCart Real world data analysis is often affected by different type of errors as: measurement errors, computation errors, imprecision related to the method adopted for estimating the data (parameters). The uncertainty in the data, which is strictly connected to the above errors, may be treated by considering, rather than a single value for each data, the interval of values in which it may fall: the interval data. This kind of data representation imposes a new formulation of the classical statistical methods in the case that interval-valued variables are considered. Accordingly, purpose of the present work is to develop suitable statistical methods for: obtaining a synthesis of the data, analysing the variability in the data and the existing relations among interval-valued variables. The proposed solutions are based on the following assessments: – The developed statistics for interval-valued variables are intervals. – Statistical methods for interval-valued variables embrace classical statistical methods as special cases. – The proposed interval solutions do not contain redundant elements with respect to a given criterion. In the present work particular interest is devoted to the proof of the properties of the proposed techniques and to the comparison of the obtained results with those already existing in the literature. - Conf. on Principles and Practice of Knowledge Discovery in Databases, PPKDD-2000 , 2000 "... 
The data descriptions of the units are called "symbolic" when they are more complex than the standard ones due to the fact that they contain internal variation and are structured. Symbolic data happen from many sources, for instance in order to summarise huge Relational Data Bases by their under ..." Cited by 1 (0 self) Add to MetaCart The data descriptions of the units are called "symbolic" when they are more complex than the standard ones due to the fact that they contain internal variation and are structured. Symbolic data happen from many sources, for instance in order to summarise huge Relational Data Bases by their underlying concepts. "Extracting knowledge" means getting explanatory results, that why, "symbolic objects" are introduced and studied in this paper. They model concepts and constitute an explanatory output for data analysis. Moreover they can be used in order to define queries of a Relational Data Base and propagate concepts between Data Bases. We define "Symbolic Data Analysis" (SDA) as the extension of standard Data Analysis to symbolic data tables as input in order to find symbolic objects as output. In this paper we give an overview on recent development on SDA. We present some tools and methods of SDA and introduce the SODAS software prototype (issued from the work of 17 teams of nine countries involved in an European project of EUROSTAT). 1 - In Principles and Practice of knowledge discovery in databases , 2000 "... In this article we propose an algorithm for Principal Components Analysis when the variables are histogram type. This algorithm also works if the data table has variables of interval type and histogram type mixed. If all the variables are interval type it produces the same output as the one produced ..." Cited by 1 (0 self) Add to MetaCart In this article we propose an algorithm for Principal Components Analysis when the variables are histogram type. This algorithm also works if the data table has variables of interval type and histogram type mixed. If all the variables are interval type it produces the same output as the one produced by the algorithm of the Centers Method propose in [5, Cazes, Chouakria, Diday and Schektman (1997)]. 1 The algorithm In this algorithm we use the idea proposed in [9, Diday (1998)]. We represent each histogram--individual by a succession of k interval--individuals (the first one included in the second one, the second one included in the third one and so on) where k is the maximum number of modalities taken by some variable in the input symbolic data table. Instead of representing the histograms in the factorial plane, we are going to represent the Empirical Distribution Function F Y defined, in [3, Bock and Diday (2000)] associated with each histogram. In other words if we have an histogram variable Y on a set E = {a 1 , a 2 , . . .} of objects with domain Y represented by the mapping Y (a) = (U(a), # a ), for a # E, where # a is frequency distribution, then in the algorithm we will use the function F (x) = # i / # i #x # i instead of the histogram. Definition 1. Let X = (x ij ) i=1,2,...,m, j=1,2,...,n be a symbolic data table with variables type continuous, interval and histogram, and let be k = max{s, where s is the number of modalities of Y j , j = 1, 2, . . . , n} where Y j is a variable of histogram type 1 . We define the vector--succession of intervals associated with each cell of X as: 1 If all the variables are interval type then k = 1. 1. 
if x ij = [a, b] then the vector--succession of intervals associated is: x # ij = # # # # # [a, b] [a... "... Abstract. This paper aims to adapt clusterwise regression to interval-valued data. The proposed approach combines the dynamic clustering algorithm with the center and range regression method for interval-valued data in order to identify both the partition of the data and the relevant regression mode ..." Add to MetaCart Abstract. This paper aims to adapt clusterwise regression to interval-valued data. The proposed approach combines the dynamic clustering algorithm with the center and range regression method for interval-valued data in order to identify both the partition of the data and the relevant regression models, one for each cluster. Experiments with a car interval-valued data set show the usefulness of combining both approaches. "... We introduce a new approach to regression with imprecisely observed data, combining likelihood inference with ideas from imprecise probability theory, and thereby taking different kinds of uncertainty into account. The approach is very general: it provides a uniform theoretical framework for regress ..." Add to MetaCart We introduce a new approach to regression with imprecisely observed data, combining likelihood inference with ideas from imprecise probability theory, and thereby taking different kinds of uncertainty into account. The approach is very general: it provides a uniform theoretical framework for regression analysis with imprecise data, where all kinds of relationships between the variables of interest may be considered and all types of imprecisely observed data are allowed. Furthermore, we propose a regression method based on this approach, where no parametric distributional assumption is needed and likelihood-based interval estimates of quantiles of the residuals distribution are used to identify a set of plausible descriptions of the relationship of interest. Thus, the proposed regression method is very robust and yields a set-valued result, whose extent is determined by the amounts of both kinds of uncertainty involved in the regression problem with imprecise data: statistical uncertainty and indetermination. In addition, we apply our robust regression method to an interesting question in the social sciences by analyzing data from a social survey. As result we obtain a large set of plausible relationships, reflecting the high uncertainty inherent in the analyzed data set. "... Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Cardiologists are interested in determining whether the type of hospital pathway followed by a patient is predictive of survival. The study objecti ..." Add to MetaCart Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Cardiologists are interested in determining whether the type of hospital pathway followed by a patient is predictive of survival. The study objective was to determine whether accounting for hospital pathways in the selection of prognostic factors of one-year survival after acute myocardial infarction �AMI � provided a more informative analysis than that obtained by the use of a standard regression tree analysis �CART method�. Information on AMI was collected for 1095 hospitalized patients over an 18-month period. 
The construction of pathways followed by patients produced symbolic-valued observations requiring a symbolic regression tree analysis. This analysis was compared with the standard CART analysis using patients as statistical units described by standard data selected TIMI score as the primary predictor variable. For the 1011 �84, resp. � patients with a lower �higher � TIMI score, the pathway variable did not appear as a diagnostic variable until the third �second � stage of the tree construction. For an ecological analysis, again TIMI score was the first predictor variable. However, in a symbolic regression tree
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1978215","timestamp":"2014-04-20T06:55:20Z","content_type":null,"content_length":"33934","record_id":"<urn:uuid:ff3b806d-54f1-4164-a58f-fbc4f159448f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Twin primes
Mathematicians have conjectured since Euclid's time that there are infinite pairs of prime numbers separated from each other by 2. Despite the fact that primes are separated on average by bigger gaps as numbers increase, evidence suggests that primes continue to appear as "twin primes" (green triangles) no matter how high you go. The illustration above highlights prime numbers, counting from 1 at upper left to 300 at lower right. Below, prime numbers are shown, counting from 1 at upper left to 625 at lower right. But for mathematicians, suggestive evidence isn't good enough. For at least a century, they've labored to prove the twin prime conjecture. A major advance came this spring, when University of New Hampshire mathematician Yitang Zhang showed that there are infinitely many primes separated by some number smaller than 70 million. That may be a lot bigger than their eventual goal of 2, but by the end of July mathematicians had already whittled that limit down to 5,414.
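The picture described above is easy to reproduce. A short Python sketch (illustrative only, not from the article) lists the twin-prime pairs up to 300, the range of the first illustration:

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:n + 1:p] = [False] * len(range(p * p, n + 1, p))
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = set(primes_up_to(300))
twins = [(p, p + 2) for p in sorted(primes) if p + 2 in primes]
print(twins)   # (3, 5), (5, 7), (11, 13), ... up to (281, 283)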
{"url":"https://www.sciencenews.org/article/twin-primes","timestamp":"2014-04-17T07:29:05Z","content_type":null,"content_length":"74343","record_id":"<urn:uuid:61cb023d-2938-427f-8472-9a5259565ebc>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Study about the irreversible behaviour of an electrode. 2. Understand the mechanism of electron transfer to an electrode. 3. Determine the current density. 4. Verifying Tafel plot. Tafel's name is an adjective in the language of all trained electrochemists, yet not too many of them would even know his first name. The fame of the "Tafel law" and "Tafel line" overshadow Tafel's claim to fame as one of the founders of modern electrochemistry. Until 1893, Tafel had lectured organic chemistry, but after 1893 he lectured physical and general chemistry. By German tradition, this would include lots of electrochemistry, and lots of experimentation. With strychnine reduction, Tafel had truly turned electrochemist. A careful observer, Tafel soon was able to summarize his major and rather far-reaching general deductions from his experimental work. Professor Julius Tafel, around 1905. (Courtesy Chemical Institute, Wurzburg University) Tafel equation and Tafel plots Tafel equation governs the irreversible behaviour of an electrode. To understand this we can consider the general mechanism of electron transfer to an electrode. Consider an electrolyte in which an inert or noble electrode is kept immersed. It is called working electrode, ( [ ]respectively and they are very low. An inert electrolyte is also present to minimise IR drop. Along with At the thermodynamic equilibrium of the system no net current flows across The equilibrium mentioned above is dynamic. Though no net current flows across the electrodes, both reduction and oxidation takes place at equal rate, so that the composition of the electrolyte does not change. The dynamic flow of electrons or charge in both directions can be written in terms of current densities as follows. The equilibrium situation at an electrode is characterised by equilibrium potential and exchange current density. For the reaction to have practical significance, a net current should flow and a net reaction either oxidation or reduction should occur. For this the kinetic aspect of the system must be considered. It is to be recalled that thermodynamics fixes the direction and kinetics determines the rate. For this let as apply an external potential to [ ](by applying external potential more positive than ( To summarize the situation, at the equilibrium potential, Negative to Positive to The famous Butler-Volmer equation is expressed as: From this equation, it can be understand that the measured current density is a function of (i) over potential ( Transfer coefficients are not independent variables. In general, For many reaction, Equation (4) indicates that the current density at any over potential is the sum of cathodic and anodic current densities. At the extreme condition of over potential being highly negative. Cathodic current density increases while anodic current density becomes negligible. At this stage, the first term in Butler-Volmer equation (4) becomes negligible. The equation can be written as: When the over potential is higher than above 52 mV, this equation shows that the increase in current is exponential with over potential. The current also depends on Equation 7 is called cathodic Tafel equation. Similarly at positive over potentials higher than 52 mV anodic current density is much higher than cathodic and the cathodic current density becomes negligible. Hence Equation (9) is called anodic Tafel equation. Experimental Determination of I and Tafel Plot The test electrode is kept immersed in its salt solution. 
The solution should be very dilute so that the concentration near the surface of the electrode does not differ too much from the bulk concentration. A calomel electrode is kept very close to the test electrode. An inert electrode is also taken which serves as the counter electrode. A DC potential is applied across the test and the counter electrodes, making the test electrode negative. This establishes a potential across the test and the reference electrodes which is read by a very sensitive voltmeter connected in the circuit. From this value the rest potential is subtracted to get the applied potential component on the test electrode. An ammeter connected in series reads the current passing through the circuit. The applied potential is increased which increases the over potential on the cathode (test electrode is made more negative) and the corresponding current value is measured (ammeter reading). In this way the current values are taken for several over potential values making test electrode more and more negative. The log values of these current values are plotted against the over potential on one side. In the next step the test electrode is connected to the positive terminal and the counter to negative. As done earlier the current is measured for various over potential values and plotted against them on the other side of the graph. Significance of Tafel Plots 1. The point of intersection on the Y axis of the extrapolated graph gives the value of I[0], the exchange current density, which is otherwise very difficult to determine. It the current passing at equilibrium conditions and a very low value. 2. The transfer coefficients can be determined; from the anodic slope, [ ]and from cathodic slope [ ]can be determined. This value is very important in industrial practice. This determines the potential that is to be applied to affect the desired rate of reduction or oxidation. 3. Knowing the value of transfer coefficient for a reaction the number of electrons ' 4. The effect of As cathodic transfer coefficient value increases reduction is favoured and oxidation is not favoured and vice versa for anodic transfer coefficient. The transfer coefficients depend on the pH of the medium; in acidic conditions (low pH) reduction is favoured which is revealed by an increase in
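For reference, the relations described above can be written out explicitly in the usual notation (i = net current density, i0 = exchange current density, η = overpotential, αa and αc = anodic and cathodic transfer coefficients, and F, R, T as usual). These are the standard textbook forms corresponding to the Butler-Volmer equation and the two Tafel limits discussed in the text:

i = i0 [ exp( αa F η / (R T) ) − exp( − αc F η / (R T) ) ]        (Butler-Volmer)
η = (2.303 R T / (αa F)) · log10( i / i0 )                        (anodic Tafel line, large positive η)
η = −(2.303 R T / (αc F)) · log10( |i| / i0 )                     (cathodic Tafel line, large negative η)

Plotting log|i| against η therefore gives two straight branches at overpotentials beyond roughly ±52 mV; their slopes give the transfer coefficients, and extrapolating both branches back to η = 0 gives log i0, which is exactly how the exchange current density is read off a Tafel plot as described above.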
{"url":"http://amrita.vlab.co.in/?sub=2&brch=190&sim=605&cnt=1","timestamp":"2014-04-16T19:25:56Z","content_type":null,"content_length":"51842","record_id":"<urn:uuid:fe9d59da-6a75-41d4-bf4e-8060e6c679a2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Density of Doléans exponentials in L2 and Wiener measure
up vote 0 down vote favorite
Assume that W is the classical Wiener space C([0,1],R), denote by $\mu$ the Wiener measure, and denote by $\mu_s$ the image of $\mu$ under the mapping $T: W \to W$ such that $T(w)= \sqrt{s}\, w$. Denote by $W_t$ the coordinate functional defined by $W_t(w)=w_t$, denote by $F_t$ the Borel sigma-field generated by $W_t$, and define the stochastic integral as usual. It is a classical result that the linear span of the set $e^{\int u_t \, dW_t -\int \frac{u^2}{2} dt}$, $u$ in $L^2[0,1]$, is dense in $L^2(\mu)$. My question is: is it true that the linear span of the set $e^{\int \frac{u_t}{s} dW_t -\int \frac{u^2}{2s}dt}$, $u$ in $L^2[0,1]$, is dense in $L^2(\mu_s)$? I want to draw your attention to the fact that the expectation of $e^{\int \frac{u_t}{s} dW_t -\int \frac{u^2}{2s}dt}$ is 1 under $\mu_s$, but not that of $e^{\int \frac{u_t}{s} dW_t -\int \frac{u^2}{2s^2}dt}$; moreover $e^{\int \frac{u_t}{s} dW_t -\int \frac{u^2}{2s}dt}$ is a weight that makes it possible to use the Cameron-Martin theorem (or even the Girsanov theorem) on the space $(W,F,\mu_s)$.
fa.functional-analysis stochastic-calculus measure-theory ca.analysis-and-odes
Syd, I think you have an interesting question in there, but your grammar and syntax are very hard to parse. Could you please edit your question and make it more clear? Thanks. – Tom LaGatta Aug 6 '10 at 1:41
Ok, it's done. – Syd L Aug 6 '10 at 8:37
{"url":"http://mathoverflow.net/questions/34668/density-of-dolean-exponentials-in-l2-and-wiener-measure","timestamp":"2014-04-16T13:33:12Z","content_type":null,"content_length":"49116","record_id":"<urn:uuid:3b81c748-8231-4fc4-b50e-6d176d1ef5d3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Regularity of sparse Fourier transforms up vote 5 down vote favorite Suppose $F$ has discrete Fourier transform $(a_n)$ where $a_n=0$ unless $n=2^k$ for some $k > 0$, in which case $a_n=1/k$ (or $a_n=1/k^2$ if you want: I'm happy with anything polynomial). What sort of regularity conditions does $F$ have? Is it Holder continuous, or not? To be explicit: $$ F(x)=\sum_{k=1}^\infty k^{-2} \exp(ix2^k) $$ for example. More generally, I'm interested in two dimensional (discrete) Fourier transforms: is there a good reference for this sort of thing? fourier-analysis fa.functional-analysis add comment 3 Answers active oldest votes If $0 < \alpha < 1/2$ then a continuous function on the circle is $\operatorname{Lip}_\alpha$ only if the Fourier coefficients satisfy $a_n = {\rm O}( n^{-\alpha})$; this is in Katznelson's book (Chapter I, Corollary 4.6) for instance. [EDIT (2013-07-10): at the time I thought this was "iff" but a comment points out that I misremembered; in any case, for lacunary series such as the one in the question, a lot more is known than in the general case; see e.g. Katznelson Chapter V for the basics.] up vote 7 down vote accepted So the function you defined above isn't going to be Hölder continuous for any positive exponent, even though it's clearly continuous (absolutely convergent Fourier series). Off the top of my head, I don't know of any particularly good source for the higher-dimensional stuff. Great! Will have a look at Katznelson, but at the very least, this gives me an intuition about what's going on. – Matthew Daws Nov 8 '09 at 14:32 1 @Yemon, could you tell me on which page I can find the proof of the statement you stated at the beginning? I found the statement that Lip$_\alpha$ implies Fourier coefficient decays like $O(1/n^{\alpha})$, but I couldn't find the converse. Thank you. – Syang Chen Jul 15 '12 at 8:25 @SyangChen my mistake - thanks for pointing this out – Yemon Choi Jul 10 '13 at 22:25 add comment Gian Maria Dall'Ara's comment is the solution. This function that you describe is a (typical) example of a continuous function that's nowhere differentiable. In fact, suppose that you have an integrable function $F$ such that $\hat F(n) = a _ n $ whenever $n=\lambda _ k$ and zero otherwise, where we assume that the sequence $\lambda_k$ is lacunary in the sense of Hadamard (i.e ${\ lambda _ {k+1}} / {\lambda _ k}\geq c$). If the function $F$ is differentiable at some point then $a _ {\lambda _k}=o(\frac{1}{\lambda _ k})$ (actually i have the impression that the proof of this fact uses the weaker assumption that $F$ is Lipschitz continuous at some point). More generally, if you replace differentiability of the function $F$ with $\alpha$-Hölder continuity (in a neighborhood of zero say) for $0<\alpha <1$ then you conclude that $a _ {\lambda _k}=O(\frac{1}{\lambda _ k ^\alpha})$. So your function is not $\alpha$- Hölder either. Remark 1: The contrary is also true since $a _ {\lambda _k}=o(\frac{1}{\lambda _ k})$ implies that $F$ is differentiable at any point of the circle where the partial sums converge to the function. I have some doubts about the precise hypothesis needed here. I'm not sure if you need your function to have only positive spectrum, but your function here does anyways. up vote Remark 2: You can look in Grafakos book for example, or of course, in Zygmund's trigonometric series (that would be my first reference for this type of problems). Katznelson has also a lot of 4 down information. But I know that Grafakos book contains these results for sure. 
Remark 3: So your function is nowhere differentiable and is not Hölder continuous either. However it has other nice properties. For example, it belongs to any $L^p$ for $1\leq p <\infty$ and the $L^p$ norm is comparable to the $L^2$ norm (here note that the lacunary gaps force the Littlewood-Paley pieces of the function to behave as independent random variables). On top of that, using kolmogorov's result on lacunary Fourier series you get an easy a.e convergence result of the partial sums to the function (something which is still true for $L^2$ functions in general, but several scales deeper and more difficult to prove). Remark 4: Finally, your function has only positive frequences and belongs to $L^p$ on the circle, hence it belongs to the Hardy space $H^p$ on the circle. I don't know if you can use that in your problem, but it is a strong property. add comment One of the first examples (historically) of nowhere differentiable continuous functions was given by $a_{2^n} = 2^n$ and $0$ otherwise. Taking tensor powers of this function you get very irregular functions of the kind you want. By very irregular here I mean nowhere differentiable (and so at least not in $\operatorname{Lip}_1$, but maybe you can get much more). In any case up vote 3 these Fourier series are called lacunary (à la Hadamard) and there should be a lot of literature about them. down vote @AndrewStacey I took the liberty of fixing your fix and operatornaming something while at it – Yemon Choi Jul 10 '13 at 17:06 add comment Not the answer you're looking for? Browse other questions tagged fourier-analysis fa.functional-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/4625/regularity-of-sparse-fourier-transforms","timestamp":"2014-04-16T13:59:42Z","content_type":null,"content_length":"65097","record_id":"<urn:uuid:045b3696-8648-4537-a834-ece846c2b7b5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Bifurcation Diagram of the Hénon Map
The Hénon map is the two-dimensional map described by the equations x_{n+1} = 1 − a x_n² + y_n, y_{n+1} = b x_n. The plot shows the last 36 coordinates after iterations of the Hénon map using the initial point. To obtain an interesting region of periodic and chaotic behavior, we use values of a between 0.91 and 1.41 with b fixed. Changing these values will give very similar results. You can click anywhere in the diagram to see a magnified inset in the lower-left corner. Using the lowest values for the controls will let you quickly find an appropriate region to inspect. Larger values will show a cleaner, more detailed plot. The idea for the Locator-magnifier was taken from Ed Pegg Jr.
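A comparable picture can be produced with a few lines of Python. The parameter choices below (b = 0.3, starting point (0, 0), 1000 iterations) are the common textbook values and are assumptions, not the Demonstration's exact settings:

import numpy as np
import matplotlib.pyplot as plt

def henon_tail(a, b=0.3, n_iter=1000, n_keep=36, x0=0.0, y0=0.0):
    """Iterate the Henon map and return the last n_keep x-coordinates."""
    x, y = x0, y0
    tail = []
    for i in range(n_iter):
        x, y = 1.0 - a * x * x + y, b * x
        if abs(x) > 1e6:          # orbit diverged; stop early
            break
        if i >= n_iter - n_keep:
            tail.append(x)
    return tail

a_values = np.linspace(0.91, 1.41, 800)
for a in a_values:
    xs = henon_tail(a)
    plt.plot([a] * len(xs), xs, ",k")
plt.xlabel("a"); plt.ylabel("x"); plt.title("Henon map bifurcation diagram")
plt.show()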
{"url":"http://www.demonstrations.wolfram.com/BifurcationDiagramOfTheHenonMap/","timestamp":"2014-04-20T09:29:42Z","content_type":null,"content_length":"43630","record_id":"<urn:uuid:0a3ac90f-d825-4ff8-9a7b-ba137c71a858>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
A feasible directions algorithm for nonlinear complementarity problems and applications in mechanics
February 2009, Volume 37, Issue 5, pp 435-446

Complementarity problems are involved in mathematical models of several applications in engineering, economy and different branches of physics. We mention contact problems and dynamics of multiple bodies systems in solid mechanics. In this paper we present a new feasible direction algorithm for nonlinear complementarity problems. This one begins at an interior point, strictly satisfying the inequality conditions, and generates a sequence of interior points that converges to a solution of the problem. At each iteration, a feasible direction is obtained and a line search performed, looking for a new interior point with a lower value of an appropriate potential function. We prove global convergence of the present algorithm and present a theoretical study about the asymptotic convergence. Results obtained with several numerical test problems, and also applications in mechanics, are described and compared with other well known techniques. All the examples were solved very efficiently with the present algorithm, employing always the same set of parameters.
Keywords: Feasible direction algorithm; Interior point algorithm; Nonlinear complementarity problems; Variational formulations in mechanics
Author Affiliations: 1. COPPE, Mechanical Eng. Prog., Federal University of Rio de Janeiro, Caixa Postal 68503, 21945-970, Rio de Janeiro, Brazil; 2. Department of Mathematics, UFJF, ICE Campus Universitário, Federal University of Juiz de Fora, CEP 36036-330, Juiz de Fora-MG, Brazil
{"url":"http://link.springer.com/article/10.1007%2Fs00158-008-0252-5","timestamp":"2014-04-20T21:20:59Z","content_type":null,"content_length":"57032","record_id":"<urn:uuid:952b9fde-be4d-44d1-bcfe-d58807bccb10>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Sensors 2013, 13(1), 848-864; doi:10.3390/s130100848 (Sensors, ISSN 1424-8220; MDPI, Basel, Switzerland)

Article
Hybrid Radar Emitter Recognition Based on Rough k-Means Classifier and Relevance Vector Machine

Zhutian Yang ^1, Zhilu Wu ^1, Zhendong Yin ^1,*, Taifan Quan ^1 and Hongjian Sun ^2

^1 School of Electronics and Information Technology, Harbin Institute of Technology, Harbin 150001, China; E-Mails: deanzty@gmail.com (Z.Y.); wuzhilu@hit.edu.cn (Z.W.); quantf@hit.edu.cn (T.Q.)
^2 Department of Electronic Engineering, King's College London, Strand, London, WC2R 2LS, UK; E-Mail: hongjian.sun@kcl.ac.uk
* Author to whom correspondence should be addressed; E-Mail: zgczr2005@yahoo.com.cn; Tel.: +86-451-8641-8284 (ext. 193); Fax: +86-451-8640-3135.

Received: 17 September 2012; in revised form: 11 December 2012 / Accepted: 27 December 2012 / Published: 11 January 2013

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Abstract: Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for recognizing radar emitter signals. In this paper, a hybrid recognition approach is presented that classifies radar emitter signals by exploiting the different separability of samples. The proposed approach comprises two steps, namely the primary signal recognition and the advanced signal recognition. In the former step, a novel rough k-means classifier, which comprises three regions, i.e., certain area, rough area and uncertain area, is proposed to cluster the samples of radar emitter signals. In the latter step, the samples within the rough boundary are used to train the relevance vector machine (RVM). Then RVM is used to recognize the samples in the uncertain area; therefore, the classification accuracy is improved. Simulation results show that, for recognizing radar emitter signals, the proposed hybrid recognition approach is more accurate, and presents lower computational complexity than traditional approaches.

Keywords: hybrid recognition; rough boundary; uncertain boundary; computational complexity

1. Introduction

Radar emitter recognition is a critical function in radar electronic support systems for determining the type of radar emitter [1]. Emitter classification based on a collection of received radar signals is a subject of wide interest in both civil and military applications. For example, in battlefield surveillance applications, radar emitter classification provides an important means to detect targets employing radars, especially those from hostile forces. In civilian applications, the technology can be used to detect and identify navigation radars deployed on ships and cars used for criminal activities [2]. This technology can be also applied in navigation radars for detecting ships and estimating their sizes [3], focusing on future classification stages [4]. The recent proliferation and complexity of electromagnetic signals encountered in modern environments greatly complicates the recognition of radar emitter signals [1]. Traditional recognition methods are becoming inefficient against this emerging issue [5].
Many new radar emitter recognition methods were proposed, e.g., intra-pulse feature analysis [6], stochastic context-free grammar analysis [1], and artificial intelligence analysis [7–11]. In particular, the artificial intelligence analysis approach has attracted much attention. Artificial intelligence techniques have been also successfully applied when working with radars for other purposes, such as clutter reduction stages [12], in target detection stages [13,14] and in target tracking stages [15]. Among the artificial intelligence approaches, the neural network and the support vector machine (SVM) are widely used for radar emitter recognition. In [8], Zhang et al. proposed a method based on the rough sets theory and radial basis function (RBF) neural network. Yin et al. proposed a radar emitter recognition method using the single parameter dynamic search neural network [9]. However, the prediction accuracy of the neural network approaches is not high and the application of neural networks requires large training sets, which may be infeasible in practice. Compared to the neural network, the SVM yields higher prediction accuracy while requiring less training samples. Ren et al.[2] proposed a recognition method using fuzzy C-means clustering SVM. Lin et al. proposed to recognize radar emitter signals using the probabilistic SVM [10] and multiple SVM classifiers [11]. These proposed SVM approaches can improve the accuracy of recognition. Unfortunately, the computational complexity of SVM increases rapidly with the increasing number of training samples, so the development of classification methods with high accuracy and low computational complexity is becoming a focus of research. Recently, a general Bayesian framework for obtaining sparse solutions to regression and classification tasks named relevance vector machine (RVM) was proposed. RVM is attracting more and more attention in many fields, including radar signal analysis [16,17]. Classifiers can be categorized into linear classifiers and nonlinear classifiers. A linear classifier can classify linear separable samples, but cannot classify linearly inseparable samples efficiently. A nonlinear classifier can classify linearly inseparable samples; nevertheless it usually has a more complex structure than a linear classifier and the computational complexity of the nonlinear classifier will be increased when processing linearly separable samples. In practice, the radar emitter signals consist of both linearly separable samples and linearly inseparable samples, which makes classification challenging, so in an ideal case, linearly separable samples should are classified by linear classifiers, while only these linearly inseparable samples are classified by the nonlinear classifier. However in the traditional recognition approach, only one classifier is used; thus, it is difficult to classify all radar emitter signal samples. In this paper, a hybrid recognition method based on the rough k-means theory and the RVM is proposed. To deal with the drawback of the traditional recognition approaches, we apply two classifiers to recognize linearly separable samples and linearly inseparable samples, respectively. Samples are firstly recognized by the rough k-means classifier, while linearly inseparable samples are picked up and further recognized by using RVM in the advanced recognition. This approach recognizes radar emitter signals accurately and has a lower computational complexity. The rest of the paper is organized as follows. 
In Section 2, a novel radar emitter recognition model is proposed. In Section 3, the primary recognition is introduced. In Section 4, the advanced recognition is introduced. In Section 5, the computational complexity of this approach is analyzed. The performance of the proposed approach is analyzed in Section 6, and conclusions are given in Section 7. A combination of multiple classifiers is a powerful solution for difficult pattern recognition problems. Thinking about the structure, a combined classifier can be divided into serial and concurrent. A serial combined classifier usually has a simple structure and is easy to establish. In serial combined classifiers, the latter classifier makes the samples rejected by the former its training samples. Thus in designing it, the key is choosing the complementary classifiers and determining the rejected samples. In this section, a hybrid radar emitter recognition approach that consists of a rough k-means classifier in the primary recognition and a RVM classifier in the advanced recognition is proposed. This approach is based on the fact that in the k-means clustering, the linearly inseparable samples are mostly at the margins of clusters, which makes it difficult to determine which cluster they belong to. To solve this problem, in our approach a linear classifier and a nonlinear classifier are applied to form a hybrid recognition method. In the proposed approach, the rough k-means classifier, which is linear, is applied as the primary recognition. It can classify linearly separable samples and pick up those linearly inseparable samples to be classified in the advanced recognition. In the rough k-means algorithm, there are two areas in a cluster, i.e., certain area and rough area. But in the rough k-means classifier proposed in this paper, there exist three areas, i.e., certain area, rough area and uncertain area. For example, in two dimensions, a cluster is depicted in Figure 1. Training samples are clustered first. At the edge of the cluster, there is an empty area between the borderline and the midcourt line of the two cluster centers. We name this area as the uncertain area. In clustering, there is no sample in the uncertain area. When the clustering is completed, these clusters will be used as the minimum distance classifier. When unknown samples are classified, samples are distributed into the nearest cluster. However linearly inseparable samples are usually far from cluster centers and out of the cluster probably, i.e., in the uncertain area. Thus after distributed into their nearest clusters, the unknown samples in the uncertain area will be recognized by the advanced recognition using a nonlinear classifier. For those unknown samples in the certain area and rough area, the primary recognition outputs final results. After sorting and feature extraction, radar emitter signals are described by pulses describing words. Radar emitter recognitions are based on these pulses describing words. The process of the hybrid radar emitter recognition approach is shown in Figure 2. Based on the pulses describing words, we can obtain an information sheet of radar emitter signals. By using rough sets theory, the classification rules are extracted. These classification rules are the basis of the initial centers of the rough k-means classifier. More specifically, they determine the initial centers and the number of clusters. 
After that, the known radar emitter signal samples are clustered by the rough k-means while the rough k-means classifier in the primary recognition is built, as described in the next section. The samples in the margin of a cluster are affected easily by noises and even out of the cluster boundary, which will cause confusions in recognition of unknown samples. Thus, the samples in the margin of a cluster are picked up to be used as the training data for the RVM in the advanced recognition. In recognition, the unknown samples to be classified are recognized firstly by the rough k-means classifier. The uncertain sample set, which is rejected by the primary recognition, is classified by the RVM in the advanced recognition. In the advanced recognition, RVM will recognize these unknown samples based on the training samples, i.e., the samples in the rough areas. More specifically, the samples which are the rough samples affected by the noise, will be recognized. And other samples will be rejected by the advanced recognition. Based on the process of the recognition approach described above, the accuracy of the hybrid recognition is a superposition of two parts, i.e., the accuracy of the primary recognition and the accuracy of the advanced recognition. The samples that the primary recognition rejects are classified by the advanced recognition. So the estimate of recognition accuracy can be given by: A total = A primary + R primary × A advancedwhere A[total], A[primary], A[advanced], and R[primary] denote the accuracy of the hybrid recognition, the accuracy of the primary recognition, the accuracy of the advanced recognition, and the reject rate of the primary classifier, respectively. As mentioned above, a classifier based on the rough k-means is proposed as the primary recognition. Rough k-means is a generation of k-means algorithm, which is one of the most popular iterative descent clustering algorithms [18]. The basic idea of k-means algorithm is to make the samples have high similarity in a class, and low similarity among classes. However k-means clustering algorithm has the following problems: The number of clusters in the algorithm must be given before clustering. The k-means algorithm is very sensitive to the initial center selection and can easily end up with a local minimum solution. The k -means algorithm is also sensitive to isolated points. To overcome the problem of isolated points, Pawan and West proposed the rough k-means algorithm [19]. The rough k-means can solve the problems of nondeterminacy in clustering and reduce the effect of isolated samples efficiently, but it still requires initial centers and the number of clusters as priors. In this paper, we propose to determine the number and initial centers of clusters based on rough sets theory. In rough sets theory, an information system can be expressed by a four-parameters group [20]: S = {U, R, V, f}. U is a finite and non-empty set of objects called the universe, and R = C ∪ D is a finite set of attributes, where C denotes the condition attributes and D denotes the decision attributes. V = ∪v[r], (r ∈ R) is the domain of the attributes, where v[r] denotes a set of values that the attribute r may take. f: U × R → V is an information function. The equivalence relation R partitions the universe U into subsets. Such a partition of the universe is denoted by U/R = E[1], E [2],…, E[n], where E[i] is an equivalence class of R. 
If two elements u, v ∈ U belong to the same equivalence class E ⊆ U/R, u and v are indistinguishable, denoted by ind(R). If ind(R) = ind(R–r), r is unnecessary in R. Otherwise, r is necessary in R. Since it is not possible to differentiate the elements within the same equivalence class, one may not obtain a precise representation for a set X ⊆ U. The set X, which can be expressed by combining sets of some R basis categories, is called set defined, and the others are rough sets. Rough sets can be defined by upper approximation and lower approximation. The elements in the lower bound of X definitely belong to X, and elements in the upper bound of X belong to X possibly. The upper approximation and lower approximation of the rough set R can be defined as follows [20]: R _ ( X ) = ∪ { Y ∈ U R : Y ⊆ X } R ¯ ( X ) = ∪ { Y ∈ U R : Y ∩ X ≠ ⊘ }where Ṟ(X) represents the set that can be merged into X positively, and R̄(X) represents the set that is merged into X possibly. In the radar emitter recognition, suppose Q is the condition attribute, namely, the pulse describing words for classification, P is the decision attribute, namely, the type of radar emitter, and the U is the set of radar emitter samples. The information systems decided by them are U/P = {[x][P]|x ∈ U} and U/Q = {[y][P]|y ∈ U}. If for any [x][P] ∈ (U/P): Q ¯ ( [ x ] P ) = Q ( [ x ] P ) = [ x ] Pthen P is dependent on Q completely, that is to say when disquisitive radar emitter sample is some characteristic of Q, it must be some characteristic of P. P and Q are of definite relationship. Otherwise, P and Q are of uncertain relationship. The dependent extent of knowledge P to knowledge Q is defined by: γ Q = P O S P ( Q ) / | U |where POS[P](Q) = ∪Q̱(x) and 0 ≤ γ[Q] ≤ 1. The value of γ [Q] reflects the dependent degree of P to Q. γ[Q] = 1 shows P is dependent on Q completely; γ[Q] close to 1 shows P is dependent on Q highly; γ[Q] = 0 shows P is independent of Q and the condition attribute Q is redundancy for classification. Due to the limitation of length, rough sets theory is introduced briefly here. And the details of rough sets are introduced in reference [20]. After discretization and attribute reduction, the classification rules are extracted. Using this approach, the initial centers are computed based on the classification rules of rough sets. The process can be described as follows: Classification rules are obtained based on the rough sets theory. The mean value of every class is obtained. The clustering number equals to the number of rules and define the mean values as the initial clustering centers: t p = ∑ x ∈ X p x card ( X p )where X[p] denotes the set of samples in the classification rule p of the rough sets theory. In rough k-means algorithm upper approximation and lower approximation are introduced. The improved cluster center is given by [19]: C j = { ω lower × ∑ v ∈ A _ ( x ) v j | A _ ( x ) | + ω upper × ∑ v ∈ ( A ¯ ( x ) − A _ ( x ) ) v j | A ¯ ( x ) − A _ ( x ) | if A ¯ ( x ) − A _ ( x ) ≠ ⊘ ω lower × ∑ v ∈ A _ ( x ) v j | A _ ( x ) | otherwisewhere the parameters ω[lower] and ω[upper] are lower and upper subject degrees of x relative to the clustering center. For each object vector v, d(x, t[i]) (1 ≤ i≤ I) denotes the distance between the center of cluster t[i] and the sample. The lower and upper subject degrees of x relative to its cluster is based on the value of d(x,t[i])−d[min](x), where d[min](x) = min[i][∈[1], [I][]]d(x,t[i]). 
If, for a sample x, the value of d(x, t_i) − d_min(x) ≥ λ, then x is subject to the lower approximation of its cluster, where λ denotes the threshold for determining the upper and lower approximations; otherwise, x is subject to the upper approximation. The comparative degree can be determined by the number of elements in the lower approximation set and the upper approximation set, as follows:

$\frac{\omega_{lower}(i)}{\omega_{upper}(i)} = \frac{|\overline{A}(X_i)|}{|\underline{A}(X_i)|}$ (with $\underline{A}(X_i) \neq \emptyset$), and $\omega_{lower}(i) + \omega_{upper}(i) = 1$

In Equation (7), the parameter λ determines the lower and upper subject degrees of X_k relative to a given cluster. If the threshold λ is too large, the lower approximation set will be empty, while if the threshold λ is too small, the boundary area will have little effect. The threshold λ can be determined as follows: (1) compute the Euclidean distance of every object to the K cluster centers and form the distance matrix D(i, j); (2) compute the minimum value d_min(i) in every row of the matrix D(i, j); (3) compute the distance between every object and the other class centers d(i), and d_t(i, j) = d(i) − d_min(i); (4) obtain the minimum non-zero value d_s(i) in every row; (5) λ is obtained from the minimum value d_s(i).

In the training process of the rough k-means classifier, we need to calculate the cluster center, the rough boundary R_ro and the uncertain boundary R_un of every cluster. After clustering, the center of a cluster and the farthest sample from the center of the cluster are determined. The area between the rough boundary and the uncertain boundary (R_ro < d_x < R_un) is defined as the rough area, where d_x denotes the distance from a sample to the center. In the training, if a training sample is in the rough area, it will be used to train the RVM in the advanced recognition. The uncertain boundary threshold R_un is defined by:

$R_{un} = \max(d_x)$

where max(d_x) is the distance from the farthest sample to the center. The rough radius R_ro can be defined by:

$R_{ro} = \delta \cdot R_{un}$

where the scale factor δ ∈ [0.7, 0.9] generally; in this paper, δ = 0.8. In a cluster, the area beyond the uncertain boundary (d_x > R_un) is the uncertain area. When unknown samples are recognized, they are assigned to the nearest cluster. If d_x > R_un, these samples are further recognized by the advanced recognition; for the other unknown samples, the result of the primary recognition is final.

In addition, the accuracy of the primary recognition is related to the radii of the clusters, and rough k-means clustering can lessen the radii of the clusters effectively. A comparison of the radii of the rough k-means cluster and the k-means cluster is shown in Figure 3. As shown in Figure 3, the radius of the k-means cluster is the distance from the cluster center to the farthest isolated sample. In the rough k-means, the cluster center is the average of the lower approximation center and the upper approximation center. The upper approximation center is near the farthest sample, so the cluster radius of the rough k-means, R_r, is obviously less than the k-means radius R. As the radius is shortened, the probability that an uncertain sample is recognized as a certain sample during recognition of unknown samples is reduced. Therefore, the accuracy of the primary recognition is increased.

The relevance vector machine (RVM), a sparse Bayesian modeling approach proposed by Tipping [21], enables sparse classification by linearly weighting a small number of fixed basis functions from a large dictionary of potential candidates.
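Before turning to the RVM, the boundary computation and the accept/reject rule of the primary recognition described above can be sketched as follows (illustrative Python only; δ = 0.8 follows the paper, while all function and variable names are ours):

```python
import numpy as np

# Sketch of the primary-recognition decision rule: fit R_ro and R_un for a cluster
# from its training samples, then accept an unknown sample if it falls inside the
# uncertain boundary of its nearest cluster, otherwise pass it to the RVM stage.

def fit_boundaries(cluster_samples, center, delta=0.8):
    """Return (R_ro, R_un) for one cluster from its training samples."""
    d = np.linalg.norm(np.asarray(cluster_samples, dtype=float)
                       - np.asarray(center, dtype=float), axis=1)
    r_un = float(d.max())      # uncertain boundary: farthest training sample
    r_ro = delta * r_un        # rough boundary
    return r_ro, r_un

def primary_decision(x, centers, r_un):
    """Assign x to the nearest cluster; return (label, nearest_index),
    where label is None if x must be passed to the advanced recognition."""
    d = [float(np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(c, dtype=float)))
         for c in centers]
    i = int(np.argmin(d))
    if d[i] > r_un[i]:
        return None, i         # rejected: handled by the advanced (RVM) stage
    return i, i                # accepted: the primary label is final
```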
A significant advantage of the RVM over the support vector machine is that its kernel function does not need to satisfy Mercer's condition [22–24]. In classification, the output function y(x) is defined by:

$y(x, \omega) = \sigma(\omega^T \phi(x))$

where σ(z) = 1/(1 + e^{−z}) and ω denotes the weights. Each weight is assumed to follow a zero-mean Gaussian conditional probability with variance $\alpha_i^{-1}$. For two-class classification, the likelihood function is defined by:

$P(t \mid \omega) = \prod_{n=1}^{N} \sigma\{y(x_n, \omega)\}^{t_n} \,[1 - \sigma\{y(x_n, \omega)\}]^{1 - t_n}$

where t_n ∈ {0, 1} denotes the target value. Seeking the maximum posterior probability estimate is equivalent to seeking the mode of the posterior, namely μ_MP. Due to:

$P(\omega \mid t, \alpha) = \frac{P(t \mid \omega)\, P(\omega \mid \alpha)}{P(t \mid \alpha)}$

the maximum posterior probability estimate with respect to ω is found by maximizing:

$\log\{P(\omega \mid t, \alpha)\} = \log\{P(t \mid \omega)\} + \log\{P(\omega \mid \alpha)\} - \log\{P(t \mid \alpha)\} = \sum_{n=1}^{N} [\,t_n \log y_n + (1 - t_n)\log(1 - y_n)\,] - \frac{1}{2}\omega^T A \omega + C$

where y_n = σ{y(x_n, ω)} and C denotes a constant. Similarly, the marginal likelihood function can be given by:

$P(t \mid \alpha) = \int P(t \mid \omega)\, P(\omega \mid \alpha)\, d\omega \approx P(t \mid \omega_{MP})\, P(\omega_{MP} \mid \alpha)\, (2\pi)^{M/2} |\Sigma|^{1/2}$

Suppose $\hat{t} = \Phi\omega_{MP} + B^{-1}(t - y)$; the Gaussian approximation of the posterior distribution then has mean $\mu_{MP} = \Sigma\Phi^T B \hat{t}$ and variance $\Sigma = (\Phi^T B \Phi + A)^{-1}$. The logarithm of the approximate marginal likelihood function is given by:

$\log p(t \mid \alpha) = -\frac{1}{2}\{N \log(2\pi) + \log|C| + \hat{t}^T C^{-1} \hat{t}\}$ (16)

where $C = B + \Phi A^{-1} \Phi^T$.

A fast marginal likelihood maximisation for sparse Bayesian models is proposed in reference [21], which can reduce the learning time of the RVM effectively. To simplify the forthcoming expressions, it is defined that:

$s_i = \phi_i^T C_{-i}^{-1} \phi_i$, $q_i = \phi_i^T C_{-i}^{-1} t$

It is shown that Equation (16) has a unique maximum with respect to α_i:

$\alpha_i = \frac{s_i^2}{q_i^2 - s_i}$ if $q_i^2 > s_i$, and $\alpha_i = \infty$ if $q_i^2 \le s_i$

The marginal likelihood maximization algorithm proceeds as follows:
(1) Initialize with a single basis vector φ_i, setting, from Equation (20), $\alpha_i = \frac{\|\phi_i\|^2}{\|\phi_i^T t\|^2 / \|\phi_i\|^2 - \sigma^2}$.
(2) Compute Σ and μ (which are scalars initially), along with the initial values of s_m and q_m for all M bases φ_m.
(3) Select a candidate basis vector φ_i from the set of all M.
(4) Compute $\theta_i = q_i^2 - S_i$.
(5) If θ_i > 0 and α_i < ∞, re-estimate α_i. If θ_i > 0 and α_i = ∞, add φ_i to the model with the updated α_i. If θ_i ≤ 0 and α_i < ∞, delete φ_i from the model and set α_i = ∞.
(6) Recompute and update Σ, μ, s_m and q_m, where $s_m = \frac{\alpha_m S_m}{\alpha_m - S_m}$, $q_m = \frac{\alpha_m Q_m}{\alpha_m - S_m}$, $S_m = \phi_m^T B \phi_m - \phi_m^T B \Phi \Sigma \Phi^T B \phi_m$ and $Q_m = \phi_m^T B \hat{t} - \phi_m^T B \Phi \Sigma \Phi^T B \hat{t}$.
(7) If converged, terminate the iteration; otherwise go to step (3).

The fast marginal likelihood maximisation for sparse Bayesian models is described in detail in [21,22].

The computational complexity of the approach proposed in this paper consists of two parts, namely the computational complexity of the primary recognition and the computational complexity of the advanced recognition. In the training of the primary recognition, the samples are clustered using the rough k-means. The computational complexity of the rough k-means is O(dmt), where d, m and t denote the dimension of the samples, the number of training samples and the number of iterations, respectively. In this paper, the optimal initial centers are determined by analyzing the knowledge rules of the training sample set based on rough set theory, instead of by iteration.
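Returning to the fast marginal likelihood maximisation summarised above, the per-basis add/delete/re-estimate decision (steps 4 and 5) can be sketched as below. This is only a simplified illustration of that decision, not the full algorithm of [21,22]: it omits the updates of Σ, μ, s_m and q_m that follow every change, and the function name is ours.

```python
import numpy as np

# Decision for one candidate basis function in the fast marginal likelihood
# maximisation: theta = q^2 - s determines whether the basis is added, deleted,
# or has its hyperparameter alpha re-estimated.

def update_alpha(alpha, s, q, in_model):
    """Return (new_alpha, action) for one candidate basis.

    s, q     -- sparsity and quality factors s_i, q_i for the basis
    alpha    -- current hyperparameter (np.inf if the basis is excluded)
    in_model -- whether the basis is currently included in the model
    """
    theta = q**2 - s
    if theta > 0 and in_model:
        return s**2 / (q**2 - s), "re-estimate"
    if theta > 0 and not in_model:
        return s**2 / (q**2 - s), "add"
    if theta <= 0 and in_model:
        return np.inf, "delete"
    return np.inf, "skip"

# Example: a basis with q^2 > s is worth keeping, one with q^2 <= s is pruned.
print(update_alpha(np.inf, s=0.5, q=2.0, in_model=False))  # (~0.071, 'add')
print(update_alpha(1.0,    s=1.5, q=1.0, in_model=True))   # (inf, 'delete')
```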
With the initial centers obtained from the rough-set rules rather than by iteration, the computational complexity of the primary recognition is O(dm). The RVM is used as the advanced recognition in our approach. The computational complexity of the RVM does not depend on the dimension of the samples, but is related to the number of samples. The computational complexity of RVM training is discussed with respect to the complexity of the quadratic programming: RVM training has a computational complexity of less than O(m′^3), where m′ denotes the number of training samples for the RVM in the advanced recognition [22]. In conclusion, the computational complexity of our hybrid recognition is O(dm) + O(m′^3). In general, O(dm) ≪ O(m′^3), so the training computational complexity of the hybrid recognition can be regarded as O(m′^3). In practice, m′ is not larger than the total number of training samples m [22], and m′ decreases as m is reduced. In the primary recognition, the training samples are differentiated and only a part of them, namely the uncertain samples, are used for RVM training. Therefore, the proposed approach has a lower computational cost than the RVM.

The validity and efficiency of the proposed approach are demonstrated by simulations. In the first simulation, radar emitter signals are recognized. The pulse describing words of the radar emitter signal include the radio frequency (RF), the pulse repetition frequency (PRF), the antenna rotation rate (ARR) and the pulse width (PW); the type of radar emitter is the recognition result. Two hundred and seventy groups of data are generated from the original radar information above for training, and the recognition accuracy is calculated as an average over 200 random generations of the data set. A second simulation is adopted to test the generalization of the hybrid recognition on the Iris data set. The Iris data set contains 150 patterns belonging to three classes; there are 50 examples for each class and each input is a four-dimensional real vector [25]. The recognition accuracy and computational complexity are compared with those of SVM and RVM. This simulation consists of two parts. In the first part, all 150 samples are used for training and the same 150 samples are used to test the training accuracy. In the second part, 60 random samples are used to train the classifiers and the other 90 samples are used to test the generalization. The simulations are run on a personal computer equipped with a Pentium(R) Dual 2.2 GHz processor and 2 GB of RAM.

An information sheet of the radar emitter signals is built, as shown in Table 1. Nine known radar emitter signals are used to test the proposed approach; training and test samples are random generations of the data set shown in Table 1. The data in the information table must be changed into discrete values, because continuous values cannot be processed by rough sets theory. There are many methods for data discretization; here the equivalent width method [20] is applied. In our paper, the attributes are divided into three intervals, and the attribute values in the same interval have the same discrete value. In discretization, samples with the same discrete condition attribute values are merged into one discrete sample (one row) in Table 2. A, B, C and d denote the attributes RF, PRF, PW and type, respectively. After that, the dependent extent of the radar type on each attribute is computed using Equation (3). The degrees of attribute importance can be calculated as σ_D(A) = 1/2, σ_D(B) = 3/8 and σ_D(C) = 0.
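A rough Python sketch of these two preprocessing steps, equal-width discretization into three intervals and the rough-set dependency degree, is given below (our own illustration; the toy dependency example at the end is synthetic and will not reproduce the paper's σ_D values, which depend on the authors' exact discretization of all attributes):

```python
import numpy as np
from collections import defaultdict

# (i) Equal-width discretization of one continuous attribute into three intervals.
def equal_width_discretize(values, n_bins=3):
    values = np.asarray(values, dtype=float)
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    return np.digitize(values, edges[1:-1])   # bin indices 0 .. n_bins-1

# (ii) Dependency degree gamma = |POS| / |U|: fraction of samples whose
# condition-attribute value determines the decision attribute unambiguously.
def dependency_degree(condition, decision):
    classes = defaultdict(set)
    for c, d in zip(condition, decision):
        classes[c].add(d)
    pos = sum(1 for c in condition if len(classes[c]) == 1)
    return pos / len(condition)

# Discretize the RF column of Table 1 (values from the paper):
rf = [8799, 8847, 8755, 8890, 8875, 8804, 8850, 9460, 9436]
print(equal_width_discretize(rf))             # e.g. [0 0 0 0 0 0 0 2 2]

# Synthetic dependency example: values 1 and 2 determine the decision, value 0 does not.
cond = [0, 0, 1, 1, 2]
dec = [1, 2, 3, 3, 1]
print(dependency_degree(cond, dec))           # 0.6
```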
As the dependent extent of the radar type on the attribute C (PW) is 0, the attribute C is unnecessary for classification and is removed. After the redundant attributes and repeated samples are removed, the knowledge rules are obtained. Table 3 shows these rules, where '-' denotes an arbitrary value. As shown in Table 3, six rules are extracted, which means that the 270 samples from three types of radar emitter can be classified into six subclasses. Based on these knowledge rules, the initial clustering centers are obtained using Equation (6). The known radar emitter samples are then clustered by the rough k-means starting from these initial cluster centers. The cluster centers, rough boundaries and uncertain boundaries of the primary recognition are computed; the information of the clusters is shown in Table 4. The rough k-means classifier is thus built and the rough samples are picked out, and the RVM in the advanced recognition is trained using these rough samples.

In the recognition of unknown samples, some important quantities are computed in the simulation. The accuracy, error rate and reject rate of the primary recognition are 86%, 2.5% and 11.5%, respectively. The accuracy of the advanced recognition is 91.3%. Thus, the estimate of the accuracy can be computed as A_total = 86% + 11.5% × 91.3% ≈ 96.5%. The proposed method is compared with the RBF-SVM, the probabilistic SVM radar recognition approach studied by Li et al. in [10], and the RVM studied by Tipping [22]. The training accuracy, training time and recognition accuracy are shown in Table 5. As shown in Table 5, the four approaches all achieve high training accuracies. The training accuracy of the approach proposed in this paper reaches 99.5%, which indicates that this approach fits the training samples well. The accuracy of the hybrid recognition proposed in this paper is 96.5%, which is higher than that of the existing methods, i.e., 94.0%, 93.5% and 94.0%. The accuracy of the hybrid recognition obtained from the simulation experiments accords with the theoretical value, i.e., 96.5%. Moreover, the SVM approaches need less training time than the RVM, and the training time of the proposed hybrid recognition is the shortest of the four approaches, i.e., 2.1 s. The hybrid recognition trains faster because of its lower computational complexity; the training computational complexities of the approaches are analyzed below.

In the first part of the second experiment, all 150 samples are used for training and testing, so the training accuracy of the hybrid recognition is tested. In addition, the recognition accuracy and computational complexity of the hybrid recognition are compared with those of SVM and RVM. The results are shown in Table 6. From Table 6, we can see that the proposed approach has a higher training accuracy than SVM and RVM: the hybrid recognition proposed in this paper achieves a training accuracy of 99.33%, which is higher than those of the other approaches, i.e., 98.00% and 98.67%. In the second part, 60 random samples from the Iris data set are used to train the classifiers and the other 90 samples are used to test the generalization. The recognition accuracy and computational complexity of the hybrid recognition are again compared with those of SVM and RVM; the results are shown in Table 7. The recognition accuracy of the proposed approach is 96.67%, which is higher than those of the other approaches. This indicates that the hybrid recognition has not only a high training accuracy but also good generalization.
In addition, we compare the training computational complexities of SVM, RVM and the proposed approach. The computational complexity of SVM is O(m^3), and the computational complexity of RVM is also O(m^3). The computational complexity of the proposed approach is O(m′^3), where m′ denotes the number of training samples for the RVM in the advanced recognition of the hybrid recognition. When 150 samples are used as training samples, all of them are used to train the SVM and the RVM, namely m = 150, so the time complexities of the classical SVM and RVM are O(150^3). In our approach, the training samples are clustered in the primary recognition and only the rough samples are used to train the RVM in the advanced recognition. More specifically, there are 71 training samples for the RVM in the advanced recognition, i.e., m′ = 71, so its computational complexity is O(71^3). Similarly, when 60 samples are used as training samples, all of them are used to train the SVM and the RVM, while 36 training samples are picked out for the RVM in the advanced recognition of the hybrid recognition, i.e., m = 60 and m′ = 36. So in the second part, the computational complexity of SVM and RVM is O(60^3), while the computational complexity of the proposed approach is O(36^3). From this comparison, we can see that the computational complexity of the hybrid recognition is obviously lower than those of RVM and SVM. Theoretically, a lower computational complexity leads to less computation time. The actual calculation time of each algorithm is tested and the results are shown in Table 7; the training calculation time of the proposed hybrid recognition is obviously less than those of SVM and RVM.

Compared with the SVM, a distinct advantage of the RVM is its sparse structure. Although the computational complexity of RVM training is close to that of the SVM, the discrimination process of the RVM is more succinct and rapid than that of the SVM. The proposed hybrid recognition approach inherits this superiority from the RVM: the recognition time of the proposed approach is close to that of the RVM and less than that of the SVM.

In this paper, a hybrid recognition method has been proposed to recognize radar emitter signals. The hybrid classifier consists of a rough k-means classifier (a linear classifier) and an RVM (a nonlinear classifier). Based on the linear separability of the sample to be classified, the sample is handled by the suitable classifier. Thus, for a radar emitter sample set containing both linearly separable samples and linearly inseparable samples, the approach can achieve a higher accuracy. A linear classifier based on the rough set and the rough k-means has been proposed, i.e., the rough k-means classifier. The rough k-means clustering can reduce the radius of the clusters and increase the accuracy of the primary recognition. The initial centers for the rough k-means are computed based on the rough set, which reduces the computational complexity of the rough k-means clustering. The rough k-means classifier can classify linearly separable samples efficiently and pick out the linearly inseparable samples. These linearly inseparable samples are processed by the RVM in the advanced recognition; therefore, the training samples for the RVM in the advanced recognition are reduced. Simulation results have shown that the proposed approach can achieve a higher accuracy, a lower computational complexity and less computation time when compared with existing approaches.
The hybrid recognition approach in this paper is suitable for the classification of radar emitter signals containing both linearly separable and linearly inseparable samples. However, in situations where only linearly separable or only linearly inseparable samples are present, the benefit of the hybrid approach will not be significant. We acknowledge that our hybrid recognition approach relies on the fact that the linearly inseparable samples which reduce the accuracy of clustering lie mostly at the edges of the clusters. From Equation (1), we know that if linearly inseparable samples appear frequently in the center region instead of at the edges, the accuracy of recognition will be reduced. How to solve these problems is the focus of our future work.

This work was supported by a grant from the National Natural Science Foundation of China (grant number: 61102084).

References

1. Latombe, G.; Granger, E.; Dilkes, F.A. Fast learning of grammar production probabilities in radar electronic support. 2010, 46, 1037–1041.
2. Ren, M.Q.; Cai, J.Y.; Zhu, Y.Q.; He, M.H. Radar emitter signal classification based on mutual information and fuzzy support vector machines. In Proceedings of the International Conference on Software Process, Beijing, China, 26–29 October 2008; pp. 1641–1646.
3. Vicen-Bueno, R.; Carrasco-Alvarez, R.; Rosa-Zurera, M.; Nieto-Borge, J.C.; Jarabo-Amores, M.P. Artificial neural network-based clutter reduction systems for ship size estimation in maritime radars. 2010, 2010, 1–15.
4. Zwicke, P.E.; Kiss, I. A new implementation of the Mellin transform and its application to radar classification of ships. 1983, doi:10.1109/TPAMI.1983.4767371.
5. Bezousek, P.; Schejbal, V. Radar technology in the Czech Republic. 2004, 19, 27–34.
6. Zhang, G.X.; Hu, L.Z.; Jin, W.D. Intra-pulse feature analysis of radar emitter signals. 2004, 23, 477–480.
7. Swiercz, E. Automatic classification of LFM signals for radar emitter recognition using wavelet decomposition and LVQ classifier. 2011, 119, 488–494.
8. Zhang, Z.C.; Guan, X.; He, Y. Study on radar emitter recognition signal based on rough sets and RBF neural network. In Proceedings of the 8th International Conference on Machine Learning and Cybernetics, Baoding, China, 12–15 July 2009; pp. 1225–1230.
9. Yin, Z.; Yang, W.; Yang, Z.; Zuo, L.; Gao, H. A study on radar emitter recognition based on SPDS neural network. 2011, 10, 883–888, doi:10.3923/itj.2011.883.888.
10. Li, L.; Ji, H.; Wang, L. Specific radar emitter recognition based on wavelet packet transform and probabilistic SVM. In Proceedings of the IEEE International Conference on Information and Automation, Zhuhai, China, 22–24 June 2009; pp. 1283–1288.
11. Li, L.; Ji, H. Combining multiple SVM classifiers for radar emitter recognition. In Proceedings of the 6th International Conference on Fuzzy Systems and Knowledge Discovery, Yantai, China, 14–16 August 2010; pp. 140–144.
12. Vicen-Bueno, R.; Carrasco-Alvarez, R.; Rosa-Zurera, M.; Nieto-Borge, J.C. Sea clutter reduction and target enhancement by neural networks in a marine radar system. 2009, 9, 1913–1936, doi:10.3390/s90301913.
13. Vicen-Bueno, R.; Carrasco-Alvarez, R.; Jarabo-Amores, M.P.; Nieto-Borge, J.C.; Rosa-Zurera, M. Ship detection by different data selection templates and multilayer perceptrons from incoherent maritime radar data. 2011, 5, 144–154, doi:10.1049/iet-rsn.2010.0001.
14. Vicen-Bueno, R.; Carrasco-Alvarez, R.; Jarabo-Amores, M.P.; Nieto-Borge, J.C.; Alexandre-Cortizo, E. Detection of ships in marine environments by square integration mode and multilayer perceptrons. 2011, 60, 712–724, doi:10.1109/TIM.2010.2078330.
15. Perlovsky, L.I.; Deming, R.W. Neural networks for improved tracking. 2007, 18, 1854–1857, doi:10.1109/TNN.2007.903143.
16. Torrione, P. Texture features for antitank landmine detection using ground penetrating radar. 2007, 45, 2374–2382, doi:10.1109/TGRS.2007.896548.
17. Kovvali, N.; Carin, L. Analysis of wideband forward looking synthetic aperture radar for
sensing land mines. 2004, 39, RS4S08.
18. Chen, Y.; Yang, J.; Trappe, W.; Martin, R.P. Detecting and localizing identity-based attacks in wireless and sensor networks. 2010, 59, 2418–2434, doi:10.1109/TVT.2010.2044904.
19. Lingras, P.; West, C. Interval set clustering of web users with rough k-means. 2004, 23, 5–16, doi:10.1023/B:JIIS.0000029668.88665.1a.
20. Walczak, B.; Massart, D.L. Rough sets theory. 1999, 47, 1–16, doi:10.1016/S0169-7439(98)00200-7.
21. Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. 2001, 1, 211–244.
22. Tipping, M.E. Fast marginal likelihood maximisation for sparse Bayesian models. In Proceedings of the 9th International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 3–6 January 2003.
23. Wong, P.K.; Xu, Q.; Vong, C.M.; Wong, H.C. Rate-dependent hysteresis modeling and control of a piezostage using online support vector machine and relevance vector machine. 2012, 59, 1988–2001.
24. Xu, Q.; Wong, P.K. Hysteresis modeling and compensation of a piezostage using least squares support vector machines. 2011, 21, 1239–1251, doi:10.1016/j.mechatronics.2011.08.006.
25. Anand, R.; Mehrotra, K.; Mohan, C.K.; Ranka, S. Efficient classification for multiclass problems using modular neural networks. 1995, 6, 2747–2756.

Figure 1. Regions of the rough k-means classifier: the certain, the rough and the uncertain areas. Linearly separable samples are usually near the center, while linearly inseparable samples are usually far from the center.

Figure 2. Flow chart of the hybrid radar emitter recognition approach proposed in this paper. First, samples are recognized by the primary recognition, which classifies the linearly separable samples and picks out the linearly inseparable samples to be classified in the advanced recognition using the relevance vector machine.

Figure 3. The radius of a cluster in rough k-means is shorter than that in k-means.

Table 1. Information of known radar emitter signals.
No. | RF (MHz) | PRF (Hz) | PW (us) | Type
1 | 8,799 | 1,500 | 0.1 | 1
2 | 8,847 | 750 | 0.5 | 1
3 | 8,755 | 620 | 0.5 | 2
4 | 8,890 | 580 | 0.5 | 2
5 | 8,875 | 585 | 0.5 | 2
6 | 8,804 | 750 | 0.1 | 1
7 | 8,850 | 1,500 | 0.5 | 1
8 | 9,460 | 1,300 | 0.25 | 3
9 | 9,436 | 1,600 | 0.15 | 3

Table 2. Continuous values are changed into discrete information by using the equivalent width method.
No. | A | B | C | d

Table 3. Classification rules extracted based on rough sets theory. These rules are the basis of the choice of the initial centers in the rough k-means clustering.
No. | A | B | d
1 | - | 1 | 2

Table 4. Centers, rough boundary radii and uncertain boundary radii of the clusters.
Cluster | Center | R_ro | R_un
1 | (8882.5, 582.5) | 63 | 142
2 | (8,755, 620) | 70 | 128
3 | (8,827, 750) | 56 | 119
4 | (8,799, 1,500) | 37 | 41
5 | (8,850, 1,500) | 34 | 45
6 | (9,448, 1,450) | 398 | 607

Table 5. Training accuracy, training time and recognition accuracy of the radar emitter recognition approaches.
Recognition Approach | Training Accuracy | Training Time (s) | Recognition Accuracy
RBF-SVM | 99.5% | 3.1 | 94.0%
PSVM | 99.0% | 3.4 | 93.5%
RVM | 99.0% | 4.6 | 94.0%
Method in this paper | 99.5% | 2.1 | 96.5%

Table 6. In the first part of experiment 2, the recognition accuracy on the Iris data set and the computational complexity are compared among the three approaches.
Approach | Accuracy | m or m′ | Computational Complexity | Training Time (s)
SVM | 98.00% | 150 | O(150^3) | 0.9
RVM | 98.67% | 150 | O(150^3) | 1.2
Hybrid recognition | 99.33% | 71 | O(71^3) | 0.6

Table 7. In the second part of experiment 2, the recognition accuracy on the Iris data set and the computational complexity are compared among the three approaches.
Approach | Accuracy | m or m′ | Computational Complexity | Training Time (s)
SVM | 93.33% | 60 | O(60^3) | 0.13
RVM | 94.44% | 60 | O(60^3) | 0.14
Hybrid recognition | 96.67% | 36 | O(36^3) | 0.04
{"url":"http://www.mdpi.com/1424-8220/13/1/848/xml","timestamp":"2014-04-19T02:00:20Z","content_type":null,"content_length":"105082","record_id":"<urn:uuid:48e7edf3-88cc-4e74-8fda-1cb9350511a0>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
mass spectra - the M+1 peak What causes the M+1 peak? What is an M+1 peak? If you had a complete (rather than a simplified) mass spectrum, you will find a small line 1 m/z unit to the right of the main molecular ion peak. This small peak is called the M+1 peak. In questions at this level (UK A level or its equivalent), the M+1 peak is often left out to avoid confusion - particularly if you were being asked to find the relative formula mass of the compound from the molecular ion peak. The carbon-13 isotope The M+1 peak is caused by the presence of the ^13C isotope in the molecule. ^13C is a stable isotope of carbon - don't confuse it with the ^14C isotope which is radioactive. Carbon-13 makes up 1.11% of all carbon atoms. If you had a simple compound like methane, CH[4], approximately 1 in every 100 of these molecules will contain carbon-13 rather than the more common carbon-12. That means that 1 in every 100 of the molecules will have a mass of 17 (13 + 4) rather than 16 (12 + 4). The mass spectrum will therefore have a line corresponding to the molecular ion [^13CH[4]]^+ as well as [^12CH[4]]^+. The line at m/z = 17 will be much smaller than the line at m/z = 16 because the carbon-13 isotope is much less common. Statistically you will have a ratio of approximately 1 of the heavier ions to every 99 of the lighter ones. That's why the M+1 peak is much smaller than the M+ peak. Using the M+1 peak What happens when there is more than 1 carbon atom in the compound? Imagine a compound containing 2 carbon atoms. Either of them has an approximately 1 in 100 chance of being ^13C. There's therefore a 2 in 100 chance of the molecule as a whole containing one ^13C atom rather than a ^12C atom - which leaves a 98 in 100 chance of both atoms being ^12C. That means that the ratio of the height of the M+1 peak to the M+ peak will be approximately 2 : 98. That's pretty close to having an M+1 peak approximately 2% of the height of the M+ peak.
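As a back-of-the-envelope check of these percentages (not part of the original page; the 1.11% abundance figure is taken from the text above and other isotopes such as deuterium are ignored), the expected M+1 to M+ ratio for a molecule with a given number of carbon atoms can be computed like this:

```python
# Rough illustration: expected height ratio of the M+1 peak to the M+ peak for a
# molecule containing n_c carbon atoms, using the ~1.11% natural abundance of
# carbon-13 and ignoring contributions from other isotopes (e.g. 2H).

P13 = 0.0111  # approximate natural abundance of carbon-13

def m_plus_1_ratio(n_c):
    p_all_12 = (1 - P13) ** n_c                    # probability every carbon is 12C
    p_one_13 = n_c * P13 * (1 - P13) ** (n_c - 1)  # probability exactly one carbon is 13C
    return p_one_13 / p_all_12

for n_c in (1, 2, 6):
    print(n_c, round(100 * m_plus_1_ratio(n_c), 1), "%")
# ~1.1 % for methane, ~2.2 % for a 2-carbon compound, ~6.7 % for benzene
```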
{"url":"http://www.chemguide.co.uk/analysis/masspec/mplus1.html","timestamp":"2014-04-18T23:44:48Z","content_type":null,"content_length":"7803","record_id":"<urn:uuid:24a6ddc6-2d1b-4979-a644-26d7be67f06d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Physics from Classical Physics with an epistemic restriction yoda jedi and this one: ...However unlike pilot-wave theory, the model is stochastic, the wave function is not physically real and the Born’s statistics is valid for all time by construction. Moreover, the construction is unique given the classical Lagrangian or Hamiltonian. Finally, assuming that |λ| fluctuates around ~ with a very small yet finite width, then the model predicts small correction to the prediction of quantum mechanics. This might lead to precision test of quantum mechanics against our hidden variable You might guess the first thing I look at in a paper like this. "It is then imperative to ask how our model will deal with Bell’s no-go theory. Since our model reproduces the prediction of quantum mechanics for specific distribution of λ, then for this case, it must violate Bell inequality which implies that it is non-local in the sense of Bell [11], or there is no global Kolmogorovian space which covers all the probability spaces of the incompatible measurement in EPR-type of experiments [12], or both. We believe that this question can be discussed only if we know the physical origin of the the general rules of replacement postulated in Eq. (7). To this end, a discussion on the derivation of the rules from Hamilton-Jacobi theory with a random constraint is given some where else [13]." [13] includes a reference to the work of De Raedt et al, as well as others. So basically he ignores the issue. Not sure how he expects that to fly, since the use of Bell is to dig out these issues BEFORE the remainder of the theory is examined closely. Since there is no explicit non-local or non-realistic agent identified in the theory, how can it be internally consistent and agree to QM? Bell says it won't.
{"url":"http://www.physicsforums.com/showthread.php?t=611383","timestamp":"2014-04-19T19:44:32Z","content_type":null,"content_length":"49773","record_id":"<urn:uuid:da8d6580-c560-4648-b01b-68b0e3f91551>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
A much needed proof

#1 (November 6th 2007, 11:56 PM; joined Nov 2007):
Given an integer n > 1, prove that we can find an integer k > 1 and integers a_1, a_2, ..., a_k > 1 such that a_1 + a_2 + ... + a_k = n(1/a_1 + 1/a_2 + ... + 1/a_k). Does this need some complex theorem to start with? Much thanks!

#2 (November 7th 2007, 09:52 AM; Global Moderator, joined Nov 2005, New York City):
Say a_k = x; then on the LHS we have k*x and on the RHS we have n*k/x. So we require that x^2 = n, so choose x = sqrt(n) for all a_1, ..., a_k.

#3 (November 8th 2007, 01:39 PM):
This means n = +/- sqrt(a_k), therefore k > 1 must apply? Similarly for a_1 + a_2 + ... + a_k = n(1/a_1 + 1/a_2 + ... + 1/a_k)?

#4 (November 10th 2007, 10:05 AM; Grand Panjandrum, joined Nov 2005):
No, it means that if you set $a_1=a_2= \dots =a_k=\sqrt{n}$ then the left hand side equals $k\sqrt{n}$ and the right hand side is equal to $n \left(\frac{k}{\sqrt{n}}\right)=k\sqrt{n}$, so the condition of the problem is satisfied by these values (at least it is when $n$ is a perfect square; otherwise the $a$'s are not integers and we still need a solution).
{"url":"http://mathhelpforum.com/math-topics/22192-much-needed-proof.html","timestamp":"2014-04-17T09:40:38Z","content_type":null,"content_length":"40326","record_id":"<urn:uuid:e5c90b48-0444-4093-891e-79c8c25a0bc4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
@Article{RePEc-ecm-emetrp-v-61-y-1993-i-4-p-821-56, author = {Andrews, D.W.K}, title = {Tests for parameter instability and structural change with unknown change point}, year = {1993}, month = {July}, URL = {http://ideas.repec.org/a/ecm/emetrp/v61y1993i4p821-56.html}, abstract = {This paper considers tests for parameter instability and structural change with unknown change point. The results apply to a wide class of parametric models that are suitable for estimation by generalized method of moments procedures. The asymptotic distributions of the test statistics considered here are nonstandard because the change point parameter only appears under the alternative hypothesis and not under the null. The tests considered here are shown to have nontrivial asymptotic local power against all alternatives for which the parameters are nonconstant. The tests are found to perform quite well in a Monte Carlo experiment reported elsewhere. Copyright 1993 by The Econometric Society.}, journal = {Econometrica}, volume = {61}, number = {4}, pages = {821-56} } @Article{RePEc-eee-reveco-v-11-y-2002-i-1-p-101-115, author = {Arize, A.C}, title = {Imports and exports in 50 countries: Tests of cointegration and structural breaks}, year = {2002}, month = {April}, URL = {http://ideas.repec.org/a/eee/reveco/v11y2002i1p101-115.html}, abstract = {No abstract is available for this item.}, journal = {International Review of Economics \& Finance}, volume = {11}, number = {1}, pages = {101-115} } @Article{RePEc-eee-asieco-v-14-y-2003-i-3-p-465-487, author = {Baharumshah, A.Z and Lau, E and Fountas, S}, title = {On the sustainability of current account deficits: Evidence from four ASEAN countries}, year = {2003}, month = {June}, URL = {http://ideas.repec.org/a/eee/asieco/ v14y2003i3p465-487.html}, abstract = {This paper examines the sustainability of the current account imbalance for four ASEAN countries (Indonesia, Malaysia, the Philippines and Thailand) over the 1961-1999 period. To this end, we utilize the intertemporal budget constraint (IBC) model to explain the behavior of the current account in these countries. The analysis is based on various unit root and cointegration procedures including those allowing for a structural break to deal with the major shortcomings of previous studies. The empirical results indicate clearly that for all countries, except Malaysia, current account deficits were not on the long-run steady state in the pre-crisis (1961-1997) era. This leads us to conclude that the current accounts of these countries were unsustainable and did not move towards external account equilibrium. Moreover, the persistent current account deficits might serve as a leading indicator of financial crises. In contrast, we find strong comovement between inflows and outflows in Indonesia, the Philippines and Thailand in the period including the post-crisis years, while Malaysia was on an unsustainable path. This is because macroeconomic performance of most of the ASEAN countries has changed dramatically since the onset of the Asian crisis in mid-1997. The evidence suggests that action to prevent large appreciations should have been taken prior to the 1997 crisis. 
(This abstract was borrowed from another version of this item.)}, journal = {Journal of Asian Economics}, volume = {14}, number = {3}, pages = {465-487} } @Article{bahmani1994imports, author = {Bahmani-Oskooee, M}, title = {Are imports and exports of Australia cointegrated?}, year = {1994}, journal = {Journal of Economic Integration}, volume = {9}, number = {4}, pages = {525{--}533} } @Article{RePEc-jae-japmet-v-18-y-2003-i-1-p-1-22, author = {Bai, J and Perron, P}, title = {Computation and analysis of multiple structural change models}, year = {2003}, URL = {http:// ideas.repec.org/a/jae/japmet/v18y2003i1p1-22.html}, abstract = {In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical applications of the procedures. We first address the problem of estimation of the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most least-squares operations of order O(T 2) for any number of breaks. Our method can be applied to both pure and partial structural change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program. Copyright {\textcopyright} 2002 John Wiley \& Sons, Ltd.}, journal = {Journal of Applied Econometrics}, volume = {18}, number = {1}, pages = {1-22} } @Article{RePEc-ecm-emetrp-v-66-y-1998-i-1-p-47-78, author = {Bai, J and Perron, P}, title = {Estimating and testing linear models with multiple structural changes}, year = {1998}, month = {January}, URL = {http://ideas.repec.org/a/ecm/emetrp/v66y1998i1p47-78.html}, abstract = {This paper develops the statistical theory for testing and estimating multiple change points in regression models. The rate of convergence and limiting distribution for the estimated parameters are obtained. Several test statistics are proposed to determine the existence as well as the number of change points. A partial structural change model is considered. The authors study both fixed and shrinking magnitudes of shifts. In addition, the models allow for serially correlated disturbances (mixingales). An estimation strategy for which the location of the breaks need not be simultaneously determined is discussed. Instead, the authors' method successively estimates each break point.}, journal = {Econometrica}, volume = {66}, number = {1}, pages = {47-78} } @Article{bineau2007imports, author = {Bineau, Y}, title = {Are imports and exports cointegrated: the case of bulgaria between 1967 and 2004}, year = {2007}, journal = {South East European Journal of Economics and Business}, volume = {2}, number = {2}, pages = {53{--}56} } @Article {tang2005malaysian, author = {Cheong, T.T}, title = {Are Malaysian exports and imports cointegrated? 
A comment}, year = {2005}, journal = {Sunway Academic Journal}, volume = {2}, pages = {101{--}107} } @Article{RePEc-eee-jimfin-v-29-y-2010-i-3-p-442-459, author = {Christopoulos, D and Le{\'{o}}n-Ledesma, M.A}, title = {Current account sustainability in the US: What did we really know about it?}, year = {2010}, month = {April}, URL = {http://ideas.repec.org/a/eee/jimfin/v29y2010i3p442-459.html}, abstract = {We analyze the sustainability of the US current account (CA) deficit by means of unit-root tests. First, we argue that there are several reasons to believe that the CA may follow a non-linear mean-reversion behavior under the null of stationarity. Using a non-linear ESTAR model we can reject the null of non-stationarity favoring the sustainability hypothesis. Second, we ask whether unit-root tests are a useful indicator of sustainability by comparing in-sample results for the 1960-2004 period to the developments observed up to the end of 2008. We find that the non-linear model outperforms the linear and random walk models in terms of forecast performance. The large shocks to the CA observed in the last five years induced a faster speed of mean reversion, ensuring the necessary adjustment to meet the inter-temporal budget constraint.}, journal = {Journal of International Money and Finance}, volume = {29}, number = {3}, pages = {442-459}, keywords = {Current account sustainability Stationarity Non-linear models} } @Article {RePEc-ecm-emetrp-v-49-y-1981-i-4-p-1057-72, author = {Dickey, D.A and Fuller, W.A}, title = {Likelihood ratio statistics for autoregressive time series with a unit root}, year = {1981}, month = {June}, URL = {http://ideas.repec.org/a/ecm/emetrp/v49y1981i4p1057-72.html}, abstract = {No abstract is available for this item.}, journal = {Econometrica}, volume = {49}, number = {4}, pages = {1057-72} } @Article{Enders, author = {Enders, W}, title = {Applied econometric time series}, year = {2005}, journal = {Willy Series in Probability and Statistics, Second edition} } @Article {RePEc-ecm-emetrp-v-55-y-1987-i-2-p-251-76, author = {Engle, R.F and Granger, C.W.J}, title = {Co-integration and error correction: Representation, estimation, and testing}, year = {1987}, month = {March}, URL = {http://ideas.repec.org/a/ecm/emetrp/v55y1987i2p251-76.html}, abstract = {The relationship between cointegration and error correction models, first suggested by Granger, is here extended and used to develop estimation procedures, tests, and empirical examples. A vector of time series is said to be cointegrated with cointegrating vector a if each element is stationary only after differencing while linear combinations a8xt are themselves stationary. A representation theorem connects the moving average , autoregressive, and error correction representations for cointegrated systems. A simple but asymptotically efficient two-step estimator is proposed and applied. Tests for cointegration are suggested and examined by Monte Carlo simulation. A series of examples are presented. Copyright 1987 by The Econometric Society.}, journal = {Econometrica}, volume = {55}, number = {2}, pages = {251-76} } @Article{erbaykal2008turkey, author = {Erbaykal, E and Karaca, O}, title = {Is Turkey{'}s foreign deficit sustainable? Cointegration relationship between exports and imports}, year = {2008}, journal = {International Research Journal of Finance and Economics}, volume = {14}, pages = {177{--}181} } @Article{doi-10.1080-10168739900000004, author = {Fountas, S and Wu, J.-L}, title = {Are the U.S. 
current account deficits really sustainable?}, year = {1999}, URL = {http://www.tandfonline.com/doi/abs/10.1080/10168739900000004}, abstract = {We have tested for a long-run relationship between four U.S. Export measures and analogous import measures (measured in nominal and real terms, levels and deflated by GNP) in the 1967-1994 period using quarterly data. Using various econometric tests that include standard Engle-Granger cointegration tests and two tests that allow for test-determined breaks in the cointegrating relationship, we have shown that the hypothesis of no long-run relationship between exports and imports cannot be rejected. This finding contrasts sharply with earlier literature and carries the important policy implication that US current account deficits are not sustainable. [F30]}, journal = {International Economic Journal}, volume = {13}, number = {3}, pages = {51-58}, doi = {10.1080/10168739900000004} } @Article{RePEc-fip-fedder-y-1996-i-qiv-p-10-20, author = {Gould, D.M and Ruffin, R.J}, title = {Trade deficits: Causes and consequences}, year = {1996}, URL = {http://ideas.repec.org/a/fip/fedder/y1996iqivp10-20.html}, abstract = {According to conventional wisdom, trade balances reflect a country's competitive strength-the lower the trade deficit, the stronger the country's industries and the higher its rate of economic growth. In this article, David Gould and Roy Ruffin review the history of the conventional wisdom and empirically examine whether large overall trade deficits or bilateral trade imbalances are associated with lower rates of economic growth. They find that, once the fundamental determinants of growth have been accounted for, trade imbalances have little effect on rates of economic growth.}, journal = {Economic and Financial Policy Review}, number = {Q IV}, pages = {10-20}, keywords = {Deficit financing ; Free trade} } @Article{RePEc-oup-ecinqu-v-29-y-1991-i-3-p-429-45, author = {Hakkio, C.S and Rush, M}, title = {Is the budget deficit \"too large?\"}, year = {1991}, month = {July}, URL = {http://ideas.repec.org/a/oup/ecinqu/v29y1991i3p429-45.html}, abstract = {Yes, specifically, the authors find that recently spending and taxing policies of the government{--}if continued{--}violate the government's intertemporal budget constraint. As a result, government spending must be reduced and/or tax revenues must be increased. These conclusions are based on tests of whether government spending and revenue are cointegrated. In addition to examining real spending and revenue, the authors also normalize these variables by real GNP and population. For a growing economy, these normalized measures are perhaps more pertinent. The authors also test and find support for the hypothesis that deficits have become a problem only in recent years. Copyright 1991 by Oxford University Press.}, journal = {Economic Inquiry}, volume = {29}, number = {3}, pages = {429-45} } @Techreport{RePEc-got-iaidps-111, author = {Herzer, D and Nowak-Lehmann, F.D}, title = {Are exports and imports of Chile cointegrated?}, year = {2005}, month = {Jul}, URL = {http://ideas.repec.org/p/got/iaidps/111.html}, abstract = {This study examines the long-run relationship between Chilean exports and imports during the 1975-2004 period using unit root tests and cointegration techniques that allow for endogenously determined structural breaks. The results indicate that there exists a long-run equilibrium between exports and imports in Chile, despite the balance-of-payments crisis of 1982-83. 
This finding implies that Chile\'s macroeconomic policies have been effective in the long-run and suggests that Chile is not in violation of its international budget constraint.}, institution = {Ibero-America Institute for Economic Research}, publication\ _type = {type}, number = {111}, keywords = {Exports; imports; cointegration; structural break; Chile} } @Techreport{RePEc-ipe-ipetds-1154, author = {Hollauer, G and de Mendon{\c{c}}a, M.A.A}, title = {Testing Brazilians` imports and exports co-integration with monthly data for 1996-2005}, year = {2006}, month = {Jan}, URL = {http://ideas.repec.org/p/ipe/ipetds/1154.html}, abstract = {The goal of this paper is to test the Husted model and to inspect the long-runsustainability of Brazilian current account in a very specific period of time (1996-2005) by the use of monthly data. We have tested the inter-temporal budgetconstraints (IBC) condition via unit root test with structural break and co-integrationthrough Gregory-Hansen test in a 117 long nominal, GDP normalized and CPInormalized series for the Brazilian economy. The results indicated that the pure IBCcondition does not hold for the Brazilian economy. However, there is co-integrationamong the series used in this work and the balance of accounts is sustainable.}, institution = {Instituto de Pesquisa Econ{\^{o}}mica Aplicada - IPEA}, publication\_type = {type}, number = {1154} } @Article {RePEc-tpr-restat-v-74-y-1992-i-1-p-159-66, author = {Husted, S}, title = {The emerging U.S. current account deficit in the 1980s: A cointegration analysis}, year = {1992}, month = {February}, URL = {http://ideas.repec.org/a/tpr/restat/v74y1992i1p159-66.html}, abstract = {This paper seeks to understand the recent history of U.S. external imbalances by identifying the \"long-run tendency\" of the U.S. current account balance and investigating its behavior. The procedure that is adopted is to estimate cointegrating regressions between U.S. exports and imports of goods and services. Estimates from cointegrating regressions between several measures of U.S. exports and imports show that up to about the end of 1983 the U.S. current account tended toward zero. Since that time, there has been an apparent structural shift resulting in a long-run tendency for a deficit in excess of $100 billion per year. Copyright 1992 by MIT Press.}, journal = {The Review of Economics and Statistics}, volume = {74}, number = {1}, pages = {159-66} } @Article{RePEc-eee-dyncon-v-12-y-1988-i-2-3-p-231-254, author = {Johansen, S}, title = {Statistical analysis of cointegration vectors}, year = {1988}, URL = {http://ideas.repec.org/a/eee/dyncon/v12y1988i2-3p231-254.html}, abstract = {No abstract is available for this item.}, journal = {Journal of Economic Dynamics and Control}, volume = {12}, number = {2-3}, pages = {231-254} } @Article{RePEc-bla-obuest-v-52-y-1990-i-2-p-169-210, author = {Johansen, S and Juselius, K}, title = {Maximum likelihood estimation and Inference on cointegration {--}With applications to the demand for money}, year = {1990}, month = {May}, URL = {http://ideas.repec.org/a/bla/obuest/v52y1990i2p169-210.html}, abstract = {This paper gives a systematic application of maximum likelihood inference concerning cointegration vectors in non-stationary vector valued autoregressive time series models with Gaussian errors, where the model includes a constant term and seasonal dummies. The hypothesis of cointegration is given a simple parametric form in terms of cointegration vectors and their weights. 
The relation between the constant term and a linear trend in the non-stationary part of the process is discussed and related to the weights. Tests for the presence of cointegration vectors, both with and without a linear trend in the non-stationary part of the process are derived. Then estimates and tests under linear restrictions on the cointegration vectors and their weights are given. The methods are illustrated by data from the Danish and the Finnish economy on the demand for money. Copyright 1990 by Blackwell Publishing Ltd}, journal = {Oxford Bulletin of Economics and Statistics}, volume = {52}, number = {2}, pages = {169-210} } @Article{RePEc-prg-jnlpep-v-2005-y-2005-i-1-id-254-p-82-88, author = {Kalyoncu, H}, title = {Sustainability of current account for Turkey: Intertemporal solvency approach}, year = {2005}, URL = {http://ideas.repec.org/a/prg/jnlpep/v2005y2005i1id254p82-88.html}, abstract = {This paper examines sustainability of current account for Turkey during the period 1987:Q1 - 2002:Q4. Using the usual intertemporal borrowing constraint, I have tested for a long-run relationship between Turkey exports and imports (measured in real terms to real gross domestic product) using quarterly data. In my empirical analysis of the sustainability of current account for Turkey, cointegration approaches have been used. Empirical results suggest that there exists a unique long-run or equilibrium relationship among real exports and imports and their percentage to real GDP and their estimated cointegration factor (b) is very close to 1. The empirical findings suggest that the current account of Turkey is sustainable in the long-run.}, journal = {Prague Economic Papers}, volume = {2005}, number = {1}, pages = {82-88}, keywords = {sustainability; intertemporal budget constraint; current account deficits} } @Article{Katircioglu200917, author = {Katircioglu, S.T}, title = {Revisiting the tourism-led-growth hypothesis for Turkey using the bounds test and Johansen approach for cointegration}, year = {2009}, URL = {http://www.sciencedirect.com/science/article/pii/S0261517708000794}, abstract = {This paper empirically revisits and investigates the tourism-led-growth (TLG) hypothesis in the case of Turkey by employing the bounds test and Johansen approach for cointegration using annual data from 1960{\textendash}2006. Although Gunduz and Hatemi-J (2005; Is the tourism-led growth hypothesis valid for Turkey? Applied Economics Letters. 12, 499{\textendash}504) support the TLG hypothesis for Turkey (suggesting unidirectional causation from tourism to economic growth) by making use of the leveraged bootstrap causality tests, and Ongan and Demiroz (2005; The contribution of tourism to the long-run Turkish economic growth. Ekonomick{\'{y}} {\v{c}}asopis [Journal of Economics]. 53(9), 880{\textendash}894.) suggest bidirectional causality between international tourism and economic growth in Turkey, this study does not find any cointegration between international tourism and economic growth in Turkey. 
Therefore, unlike the findings of Gunduz and Hatemi-J (2005) and Ongan and Demiroz (2005), this study rejects the TLG hypothesis for the Turkish economy since no cointegration was found and error correction mechanisms plus causality tests cannot be run for further steps in the long term.}, journal = {Tourism Management}, volume = {30}, number = {1}, pages = {17 - 20}, keywords = {Tourism-Led Growth}, doi = {10.1016/j.tourman.2008.04.004}, issn = {0261-5177} } @Article{RePEc-eee-econom-v-54-y-1992-i-1-3-p-159-178, author = {Kwiatkowski, D and Phillips, P.C.B and Schmidt, P and Shin, Y}, title = {Testing the null hypothesis of stationarity against the alternative of a unit root : How sure are we that economic time series have a unit root?}, year = {1992}, URL = {http://ideas.repec.org/a/eee/econom/v54y1992i1-3p159-178.html}, abstract = {The standard conclusion that is drawn from this empirical evidence is that many or most aggregate economic time series contain a unit root. However, it is important to note that in this empirical work the unit root is set up as the null hypothesis testing is carried out ensures that the null hypothesis is accepted unless there is strong evidence against it. Therefore, an alternative explanation for the common failure to reject a unit root is simply that most economic time series are not very informative about whether or not there is a unit root; or, equivalently, that standard unit root tests are not very powerful against relevant alternatives. (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.) (This abstract was borrowed from another version of this item.)}, journal = {Journal of Econometrics}, volume = {54}, number = {1-3}, pages = {159-178} } @Article {RePEc-tpr-restat-v-85-y-2003-i-4-p-1082-1089, author = {Lee, J and Strazicich, M.C}, title = {Minimum lagrange multiplier unit root test with two structural breaks}, year = {2003}, month = {November}, URL = {http://ideas.repec.org/a/tpr/restat/v85y2003i4p1082-1089.html}, abstract = {The endogenous two-break unit root test of Lumsdaine and Papell is derived assuming no structural breaks under the null. Thus, rejection of the null does not necessarily imply rejection of a unit root per se, but may imply rejection of a unit root without break. Similarly, the alternative does not necessarily imply trend stationarity with breaks, but may indicate a unit root with breaks. In this paper, we propose an endogenous two-break Lagrange multiplier unit root test that allows for breaks under both the null and alternative hypotheses. As a result, rejection of the null unambiguously implies trend stationarity. {\textcopyright} 2003 President and Fellows of Harvard College and the Massachusetts Institute of Technology.}, journal = {The Review of Economics and Statistics}, volume = {85}, number = {4}, pages = {1082-1089} } @Article{RePEc-tpr-restat-v-79-y-1997-i-2-p-212-218, author = {Lumsdaine, R.L and Papell, D.H}, title = {Multiple trend breaks and the unit-root hypothesis}, year = {1997}, month = {May}, URL = {http://ideas.repec.org/a/tpr/restat/ v79y1997i2p212-218.html}, abstract = {Ever since Nelson and Plosser (1982) found evidence in favor of the unit-root hypothesis for 13 long-term annual macro series, observed unit - root behavior has been equated with persistence in the economy. Perron (1989) questioned this interpretation, arguing instead that the \"observed\" behavior may indicate failure to account for structural change. 
Zivot and Andrews (1992) restored confidence in the unit-root hypothesis by incorporating an endogenous break point into the specification. By allowing for the possibility of two endogenous break points, we find more evidence against the unit-root hypothesis than Zivot and Andrews, but less than Perron. {\textcopyright} 1997 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology}, journal = {The Review of Economics and Statistics}, volume = {79}, number = {2}, pages = {212-218} } @Incollection{Mackinnon1991, author = {Mackinnon, J.G}, title = {Critical values for cointegration tests, in long-run economic relationships}, year = {1991}, editor = {Engle, R.F and Granger, C.W.J.}, publication\_type = {type}, publisher = {Oxford university press}, address = {Oxford}, pages = {267-276} } @Article{RePEc-jae-japmet-v-11-y-1996-i-6-p-601-18, author = {MacKinnon, J.G}, title = {Numerical distribution functions for unit root and cointegration tests}, year = {1996}, month = {Nov.-Dec.}, URL = {http://ideas.repec.org/a/jae/japmet/v11y1996i6p601-18.html}, abstract = {This paper employs response surface regressions based on simulation experiments to calculate distribution functions for some well-known unit root and cointegration test statistics. The principal contributions of the paper are a set of data files that contain estimated response surface coefficients and a computer program for utilizing them. This program, which is freely available via the Internet, can easily be used to calculate both asymptotic and finite-sample critical values and P-values for any of the tests. Graphs of some of the tabulated distribution functions are provided. An empirical example deals with interest rates and inflation rates in Canada. Copyright 1996 by John Wiley \& Sons, Ltd.}, journal = {Journal of Applied Econometrics}, volume = {11}, number = {6}, pages = {601-18} } @Article {RePEc-aea-jecper-v-16-y-2002-i-3-p-131-152, author = {Mann, C.L}, title = {Perspectives on the U.S. current account deficit and sustainability}, year = {2002}, month = {Summer}, URL = {http:// ideas.repec.org/a/aea/jecper/v16y2002i3p131-152.html}, abstract = {The US current account has been in deficit for 25 years, accumulating a negative net international investment position of some $20 trillion dollars. This article reviews three frameworks for analyzing the causes of large US external imbalances: The NIPA; income, relative-prices, and trade flows, and portfolio allocation of cross-border capital flows. It presents two approaches to considering how long these imbalances can persist. Rather than focusing on the current account as a share of U.S. GDP, understanding the behavior of the global investors and their investment choices is key to the trajectory of the current account and the US dollar.}, journal = {Journal of Economic Perspectives}, volume = {16}, number = {3}, pages = {131-152} } @Article{RePEc-taf-applec-v-37-y-2005-i-17-p-1979-1990, author = {Narayan, P.K}, title = {The saving and investment nexus for China: Evidence from cointegration tests}, year = {2005}, URL = {http://ideas.repec.org/a/taf/applec/v37y2005i17p1979-1990.html}, abstract = {The saving and investment nexus as postulated by Feldstein and Horioka (FH) (1980) is revisited. The saving investment correlation for China is estimated over the periods 1952-1998 and 1952-1994, the latter culminating in a period of fixed exchange rate regime. 
Amongst the key results, it is found that saving and investment are correlated for China for both the period of the fixed exchange rate and the entire sample period. With high saving-investment correlation, the results suggest that the Chinese economy is in conformity with the FH hypothesis. This is a valid outcome, for in China capital mobility was fairly restricted over the 1952-1994 period as indicated by the relatively low foreign direct investment.}, journal = {Applied Economics}, volume = {37}, number = {17}, pages = {1979-1990} } @Article{RePEc-ecm-emetrp-v-69-y-2001-i-6-p-1519-1554, author = {Ng, S and Perron, P}, title = {LAG length selection and the construction of unit root tests with good size and power}, year = {2001}, month = {November}, URL = {http://ideas.repec.org/a/ecm/emetrp/ v69y2001i6p1519-1554.html}, abstract = {It is widely known that when there are errors with a moving-average root close to - 1, a high order augmented autoregression is necessary for unit root tests to have good size, but that information criteria such as the \"AIC\" and the \"BIC\" tend to select a truncation lag (\"k\") that is very small. We consider a class of Modified Information Criteria (\"MIC\") with a penalty factor that is sample dependent. It takes into account the fact that the bias in the sum of the autoregressive coefficients is highly dependent on \"k\" and adapts to the type of deterministic components present. We use a local asymptotic framework in which the moving-average root is local to - 1 to document how the \"MIC\" performs better in selecting appropriate values of \"k\". In Monte-Carlo experiments, the \"MIC\" is found to yield huge size improvements to the \"DF-super-GLS\" and the feasible point optimal \"P-sub-T\" test developed in Elliott, Rothenberg, and Stock (1996). We also extend the \"M\" tests developed in Perron and Ng (1996) to allow for \"GLS\" detrending of the data. The \"MIC\" along with \"GLS\" detrended data yield a set of tests with desirable size and power properties. Copyright The Econometric Society.}, journal = {Econometrica}, volume = {69}, number = {6}, pages = {1519-1554} } @Article{1995, author = {Ng, S and Perron, P}, title = {Unit root tests in ARMA models with data-dependent methods for the selection of the truncation lag}, year = {1995}, URL = {http://www.jstor.org/stable/2291151}, abstract = {We analyze the choice of the truncation lag in the context of the Said-Dickey test for the presence of a unit root in a general autoregressive moving average model. It is shown that a deterministic relationship between the truncation lag and the sample size is dominated by data-dependent rules that take sample information into account. In particular, we study data-dependent rules that are not constrained to satisfy the lower bound condition imposed by Said-Dickey. Akaike's information criterion falls into this category. The analytical properties of the truncation lag selected according to a class of information criteria are compared to those based on sequential testing for the significance of coefficients on additional lags. The asymptotic properties of the unit root test under various methods for selecting the truncation lag are analyzed, and simulations are used to show their distinctive behavior in finite samples. 
Our results favor methods based on sequential tests over those based on information criteria, because the former show less size distortions and have comparable power.}, journal = {Journal of the American Statistical Association}, volume = {90}, number = {429}, pages = {268-281}, issn = {01621459} } @Article{RePEc-eaa-ijaeqs-v-5-y2008-i-1\_6, author = {Perera, N and Varma, R}, title = {An empirical analysis of sustainability of trade deficit: Evidence from Sri Lanka}, year = {2008}, URL = {http://ideas.repec.org/a/eaa/ijaeqs/v5y2008i1\_6.html}, abstract = {In this paper, the long-run relationship between Sri Lankan exports and imports during the period 1950 to 2006 is examined using unit root tests and cointegration techniques that allow for an endogenously determined structural break. The results failed to support the existence of a long-run equilibrium between exports and imports in Sri Lanka. This finding questions the effectiveness of Sri Lanka{'}s current long-term macroeconomic policies and suggests that Sri Lanka is in violation of its international budget constraint.}, journal = {International Journal of Applied Econometrics and Quantitative Studies}, volume = {5}, number = {1}, pages = {79-92}, keywords = {Trade Deficit; Unit root; Structural Breaks; Cointegration; Sri Lanka} } @Article{RePEc-eee-econom-v-80-y-1997-i-2-p-355-385, author = {Perron, P}, title = {Further evidence on breaking trend functions in macroeconomic variables}, year = {1997}, month = {October}, URL = {http://ideas.repec.org/a/eee/econom/v80y1997i2p355-385.html}, abstract = {No abstract is available for this item.}, journal = {Journal of Econometrics}, volume = {80}, number = {2}, pages = {355-385} } @Article{1990, author = {Perron, P}, title = {Testing for a unit root in a time series with a changing mean}, year = {1990}, URL = {http://www.jstor.org/stable/1391977}, abstract = {This study considers testing for a unit root in a time series characterized by a structural change in its mean level. My approach follows the "intervention analysis" of Box and Tiao (1975) in the sense that I consider the change as being exogenous and as occurring at a known date. Standard unit-root tests are shown to be biased toward nonrejection of the hypothesis of a unit root when the full sample is used. Since tests using split sample regressions usually have low power, I design test statistics that allow the presence of a change in the mean of the series under both the null and alternative hypotheses. The limiting distribution of the statistics is derived and tabulated under the null hypothesis of a unit root. My analysis is illustrated by considering the behavior of various univariate time series for which the unit-root hypothesis has been advanced in the literature. This study complements that of Perron (1989), which considered time series with trends.}, journal = {Journal of Business \& Economic Statistics}, volume = {8}, number = {2}, pages = {pp. 153-162}, issn = {07350015} } @Article {RePEc-bes-jnlbes-v-10-y-1992-i-4-p-467-70, author = {Perron, P and Vogelsang, T.J}, title = {Testing for a unit root in a time series with a changing mean: Corrections and extensions}, year = {1992}, month = {October}, URL = {http://ideas.repec.org/a/bes/jnlbes/v10y1992i4p467-70.html}, abstract = {This note provides a correction to the treatment of the asymptotic distribution of tests for a unit root for the additive outlier model presented in Perron (1990). 
It is shown that the tests, as stated for that case, have asymptotic distributions that depend on the correlation structure of the data even if the appropriate order of the autoregression is selected. The authors present a simple modification that yields statistics with the same asymptotic distributions (free of nuisance parameters) as stated earlier.}, journal = {Journal of Business \& Economic Statistics}, volume = {10}, number = {4}, pages = {467-70} } @Incollection{Pesaran1999, author = {Pesaran, M.H and Shin, Y}, title = {An autoregressive distributed lag modelling approach to cointegration analysis}, year = {1999}, booktitle = {Econometrics and Economic Theory in the 20th Century: The Ragnar Frisch Centennial Symposium}, editor = {Strom, S.}, publication\_type = {type}, publisher = {Cambridge University Press}, address = {Cambridge}, chapter = {11} } @Article {RePEc-jae-japmet-v-16-y-2001-i-3-p-289-326, author = {Pesaran, M.H and Shin, Y and Smith, R.J}, title = {Bounds testing approaches to the analysis of level relationships}, year = {2001}, URL = {http://ideas.repec.org/a/jae/japmet/v16y2001i3p289-326.html}, abstract = {This paper develops a new approach to the problem of testing the existence of a level relationship between a dependent variable and a set of regressors, when it is not known with certainty whether the underlying regressors are trend- or first-difference stationary. The proposed tests are based on standard F- and t-statistics used to test the significance of the lagged levels of the variables in a univariate equilibrium correction mechanism. The asymptotic distributions of these statistics are non-standard under the null hypothesis that there exists no level relationship, irrespective of whether the regressors are I(0) or I(1). Two sets of asymptotic critical values are provided: one when all regressors are purely I(1) and the other if they are all purely I(0). These two sets of critical values provide a band covering all possible classifications of the regressors into purely I(0), purely I(1) or mutually cointegrated. Accordingly, various bounds testing procedures are proposed. It is shown that the proposed tests are consistent, and their asymptotic distribution under the null and suitably defined local alternatives are derived. The empirical relevance of the bounds procedures is demonstrated by a re-examination of the earnings equation included in the UK Treasury macroeconometric model. Copyright {\textcopyright} 2001 John Wiley \& Sons, Ltd.}, journal = {Journal of Applied Econometrics}, volume = {16}, number = {3}, pages = {289-326} } @Article {PHILLIPS01061988, author = {Phillips, P.C.B and Perron, P}, title = {Testing for a unit root in time series regression}, year = {1988}, URL = {http://biomet.oxfordjournals.org/content/75/2/ 335.abstract}, abstract = {This paper proposes new tests for detecting the presence of a unit root in quite general time series models. Our approach is nonparametric with respect to nuisance parameters and thereby allows for a very wide class of weakly dependent and possibly heterogeneously distributed data. The tests accommodate models with a fitted drift and a time trend so that they may be used to discriminate between unit root nonstationarity and stationarity about a deterministic trend. The limiting distributions of the statistics are obtained under both the unit root null and a sequence of local alternatives. 
The latter noncentral distribution theory yields local asymptotic power functions for the tests and facilitates comparisons with alternative procedures due to Dickey \& Fuller. Simulations are reported on the performance of the new tests in finite samples.}, journal = {Biometrika}, volume = {75}, number = {2}, pages = {335-346}, doi = {10.1093/biomet/75.2.335} } @Article{Pindyck1991, author = {Pindyck, R.S and Rubinfeld, D.L}, title = {Models and economic forecasts}, year = {1991}, journal = {McGraw-Hill Inc.} } @Article {RePEc-ora-journl-v-1-y-2009-i-1-p-163-168, author = {Ramona, D and Razvan, S}, title = {Analysis of the Romanian current account sustainability}, year = {2009}, month = {May}, URL = {http://ideas.repec.org/a/ora/journl/v1y2009i1p163-168.html}, abstract = {This paper explores the sustainability of the Romanian current account. For this purpose we test the stationarity and cointegration of the monthly credit and debit transactions of the current account. The results show that these time series have unit roots in levels}, journal = {Annals of Faculty of Economics}, volume = {1}, number = {1}, pages = {163-168}, keywords = {Romanian Current Account; Sustainability; Cointegration} } @Techreport{RePEc-nbr-nberwo-2772, author = {Stock, J.H and Watson, M.W}, title = {A probability model of the coincident economic indicators}, year = {1988}, month = {Nov}, URL = {http://ideas.repec.org/p/nbr/nberwo/2772.html}, abstract = {The Index of Coincident Economic Indicators, currently compiled by the U.S. Department of Commerce, is designed to measure the state of overall economic activity. The index is constructed as a weighted average of four key macroeconomic time series, where the weights are obtained using rules that date to the early days of business cycle analysis. This paper presents an explicit time series model (formally, a dynamic factor analysis or \"single index\" model) that implicitly defines a variable that can be thought of as the overall state of the economy. Upon estimating this model using data from 1959-1987, the estimate of this unobserved variable is found to be highly correlated with the official Commerce Department series, particularly over business cycle horizons. Thus this model provides a formal rationalization for the traditional methodology used to develop the Coincident Index. Initial exploratory exercises indicate that traditional leading variables can prove useful in forecasting the short-run growth in this series.}, institution = {National Bureau of Economic Research, Inc}, publication\_type = {type}, number = {2772} } @Article{RePEc-eee-ecolet-v-72-y-2001-i-2-p-219-224, author = {Wu, J.-L and Chen, S.-L and Lee, H.-Y}, title = {Are current account deficits sustainable?: Evidence from panel cointegration}, year = {2001}, month = {August}, URL = {http://ideas.repec.org/a/eee/ecolet/v72y2001i2p219-224.html}, abstract = {No abstract is available for this item.}, journal = {Economics Letters}, volume = {72}, number = {2}, pages = {219-224} } @Article{RePEc-bes-jnlbes-v-10-y-1992-i-3-p-251-70, author = {Zivot, E and Andrews, D.W.K}, title = {Further evidence on the great crash, the oil-price shock, and the unit-root hypothesis}, year = {1992}, month = {July}, URL = {http://ideas.repec.org/a/bes/jnlbes/v10y1992i3p251-70.html}, abstract = {Perron (1989) has carried out tests of the unit root hypothesis against the alternative hypothesis of trend stationarity with a break in the trend occurring at the Great Crash of 1929 or at the 1973 oil price shock. 
Here a variation of Perron's test is considered in which the break point is estimated rather than fixed. The asymptotic distribution of the \ "estimated break point\" test statistic is determined and the data considered by Perron are reanalyzed. The authors find less evidence against the unit root hypothesis than Perron finds for many of the data series, but stronger evidence against it for several of the series.}, journal = {Journal of Business \& Economic Statistics}, volume = {10}, number = {3}, pages = {251-70} }
{"url":"http://www.economics-ejournal.org/economics/journalarticles/2012-46/references/@@export","timestamp":"2014-04-20T03:54:40Z","content_type":null,"content_length":"43196","record_id":"<urn:uuid:a201fd4f-f585-44b6-9d90-b4a39e049407>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - How do we get angular dependent wave function from only radially dependent potentials Remember that electrons can have angular momentum in the atom! If angular momentum is 0, of course we get a spherically symmetric probability distribution. But if the electron has angular momentum it must be, in some sense, rotating, so we have to pick an axis. Our probability distribution should be cylindrically symmetric, but it won't be spherical
{"url":"http://www.physicsforums.com/showpost.php?p=3736919&postcount=6","timestamp":"2014-04-17T21:31:31Z","content_type":null,"content_length":"7129","record_id":"<urn:uuid:40a92ea0-5398-4f2e-9ebf-4ea1d77113c1>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
There is now a 4th edition of this book Vector Calculus 4th edition 3rd edition ISBN 9780971576636 802 pages, hardcover, smythe-sewn binding, $78, June 2007 Course adoptions at Cal State East Bay, Cornell University, Harvard University, the Harvard-Westlake School,the Hotchkiss School, Indiana University, Indiana University-Purdue University Indianapolis, UC San Diego, the University of Minnesota, the University of Montana, the University of Chicago Laboratory Schools, the University of South Dakota, Washington University in St. Louis, the Wheeler School MAA Review: "When reading this book, I constantly was aware of the fact that I would have benefited immensely if I had gotten my hands on it when I was an undergraduate... I was very impressed with the depth, clarity and ambition of this book. It respects its readers, it assumes that they are intelligent and naturally curious about beautiful mathematics. Then it provides them with all the tools necessary to learn multivariable calculus, linear algebra and basic analysis." — Gizem Karaali, assistant professor of Mathematics at Pomona College Read entire review Praise from readers: "I've begun reading the book and have quickly come to truly love its approach. As a high school calculus teacher, I've read many a linear algebra text and many multivariate calculus books as well. A former student of mine, currently an undergraduate ... at Harvard, recommended yours... and it has lived up to all the praise he has heaped upon it. I truly value the approach you take to try to foster true understanding (including the amazingly helpful margin notes that dot each chapter), the obvious deep thought you put into the order in which all the material is presented, and the clear desire it seems the authors have to truly share their expertise in the subject in a way most likely for the target audience to fully benefit from it." — Ross Lipsky Valley Stream South High School Mathematics Department "Thank you for this gold mine! (and for the cheap price too, especially for European people!)" — Marc Chambon, graduate student in inorganic chemistry and chemical engineering, Université Via Domitia, Perpignan, France "I am having a great time reading the third edition of your book (and attempting all the problems). It's quite user friendly without being sappy... I particularly enjoyed problem A1.5 - I thought I'd never see how Peano did it. Part c. is worth the answer book alone." —Lewis Robinson, retired M. D. indulging a lifelong taste for mathematics Preface (excerpt in html, with link to complete preface in pdf) To order (for books shipped to the United States) To order (for books shipped to other countries) Student Solution Manual for 3rd edition Math programs used in the book Table of contents (in html) Look inside this book (sample pages, mostly in pdf) Errata (latest posting March 23, 2008) Readers praise first two editions "Superb on all counts" - review in CHOICE (review of 1st edition) "A real gem" - review of 2nd edition, MAA Monthly 3 new sections: • Eigenvectors and eigenvalues Section 2.7 on eigenvectors, eigenvalues, and diagonalization bypasses the determinant; a few pages in chapter 4 connect this discussion with determinants and the characteristic function. • Integration and curvature Section 5.4 discusses the Gauss map. 
• Electromagnetism and differential forms (Section 6.9) Other changes compared to the 2nd edition: • New examples Notably an extended example on checking boundary points when looking for critical points of functions • Uniqueness of row echelon form The first and second editions left the proof of uniqueness in theorem 2.1.8 as an exercise. Now this important result is in the text. • New exercises Including an exercise proving a version of Kantorovich's theorem with a weaker continuity condition on the derivatives; this has implications for the inverse and implicit function • New proof of the generalized Stokes's theorem • Programs posted on web No more laborious typing! The three programs used in the book are available for cut-and-paste at math programs.
{"url":"http://matrixeditions.com/UnifiedApproach3rd.html","timestamp":"2014-04-16T10:11:09Z","content_type":null,"content_length":"29553","record_id":"<urn:uuid:49d1c021-e004-4ced-82ed-7d763b476ce9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Ordinary Differential Equations

24.1 Ordinary Differential Equations

The function lsode can be used to solve ODEs of the form

    dx/dt = f (x, t)

using Hindmarsh's ODE solver LSODE.

    [x, istate, msg] = lsode (fcn, x_0, t)
    [x, istate, msg] = lsode (fcn, x_0, t, t_crit)

Solve the set of differential equations

    dx/dt = f (x, t),    x(t_0) = x_0

The solution is returned in the matrix x, with each row corresponding to an element of the vector t. The first element of t should be t_0 and should correspond to the initial state of the system x_0, so that the first row of the output is x_0.

The first argument, fcn, is a string, inline, or function handle that names the function f to call to compute the vector of right hand sides for the set of equations. The function must have the form

    xdot = f (x, t)

in which xdot and x are vectors and t is a scalar.

If fcn is a two-element string array or a two-element cell array of strings, inline functions, or function handles, the first element names the function f described above, and the second element names a function to compute the Jacobian of f. The Jacobian function must have the form

    jac = j (x, t)

in which jac is the matrix of partial derivatives

                 | df_1  df_1       df_1 |
                 | ----  ----  ...  ---- |
                 | dx_1  dx_2       dx_N |
                 |                       |
                 | df_2  df_2       df_2 |
                 | ----  ----  ...  ---- |
          df_i   | dx_1  dx_2       dx_N |
    jac = ---- = |                       |
          dx_j   |  .     .     .    .   |
                 |  .     .     .    .   |
                 |  .     .     .    .   |
                 |                       |
                 | df_N  df_N       df_N |
                 | ----  ----  ...  ---- |
                 | dx_1  dx_2       dx_N |

The second and third arguments specify the initial state of the system, x_0, and the initial value of the independent variable t_0.

The fourth argument is optional, and may be used to specify a set of times that the ODE solver should not integrate past. It is useful for avoiding difficulties with singularities and points where there is a discontinuity in the derivative.

After a successful computation, the value of istate will be 2 (consistent with the Fortran version of LSODE). If the computation is not successful, istate will be something other than 2 and msg will contain additional information.

You can use the function lsode_options to set optional parameters for lsode.

See also: daspk, dassl, dasrt.

Query or set options for the function lsode. When called with no arguments, the names of all available options and their current values are displayed. Given one argument, return the value of the corresponding option. When called with two arguments, lsode_options sets the option opt to value val. Options include

"absolute tolerance"
    Absolute tolerance. May be either vector or scalar. If a vector, it must match the dimension of the state vector.

"relative tolerance"
    Relative tolerance parameter. Unlike the absolute tolerance, this parameter may only be a scalar. The local error test applied at each integration step is

        abs (local error in x(i)) <= ...
            rtol * abs (y(i)) + atol(i)

"integration method"
    A string specifying the method of integration to use to solve the ODE system. Valid values are

    "adams"
    "non-stiff"
        No Jacobian used (even if it is available).
    "bdf"
    "stiff"
        Use stiff backward differentiation formula (BDF) method. If a function to compute the Jacobian is not supplied, lsode will compute a finite difference approximation of the Jacobian.

"initial step size"
    The step size to be attempted on the first step (default is determined automatically).

"maximum order"
    Restrict the maximum order of the solution method. If using the Adams method, this option must be between 1 and 12. Otherwise, it must be between 1 and 5, inclusive.

"maximum step size"
    Setting the maximum stepsize will avoid passing over very large regions (default is not specified).

"minimum step size"
    The minimum absolute step size allowed (default is 0).
"step limit" Maximum number of steps allowed (default is 100000). Here is an example of solving a set of three differential equations using lsode. Given the function ## oregonator differential equation function xdot = f (x, t) xdot = zeros (3,1); xdot(1) = 77.27 * (x(2) - x(1)*x(2) + x(1) \ - 8.375e-06*x(1)^2); xdot(2) = (x(3) - x(1)*x(2) - x(2)) / 77.27; xdot(3) = 0.161*(x(1) - x(3)); and the initial condition x0 = [ 4; 1.1; 4 ], the set of equations can be integrated using the command t = linspace (0, 500, 1000); y = lsode ("f", x0, t); If you try this, you will see that the value of the result changes dramatically between t = 0 and 5, and again around t = 305. A more efficient set of output points might be t = [0, logspace(-1, log10(303), 150), \ logspace(log10(304), log10(500), 150)]; See Alan C. Hindmarsh, ODEPACK, A Systematized Collection of ODE Solvers, in Scientific Computing, R. S. Stepleman, editor, (1983) for more information about the inner workings of lsode. An m-file for the differential equation used above is included with the Octave distribution in the examples directory under the name oregonator.m.
{"url":"http://www.gnu.org/software/octave/doc/interpreter/Ordinary-Differential-Equations.html","timestamp":"2014-04-19T01:32:48Z","content_type":null,"content_length":"12096","record_id":"<urn:uuid:29052288-cff0-496b-b1b8-6f1237e9bd52>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Week 15, Sunday CS 70 Reading Quiz -- Week 15, Sunday Please fill out this quiz, and press the "Submit" button at the end. Don't collaborate with anyone on quiz exercise solutions. Please answer all questions. SID: [No spaces and no dashes.] Login ID : 1. 130 CS70 students take the final exam. Suppose that all of these exams are shuffled thoroughly and then distributed to the class, one exam per student. Use Chebyshev's inequality to find an upper bound on the probability that at least five students receive their own exams. (More formally, as discussed in the lecture notes: upper-bound the probability that a random permutation of 130 items has at least five fixed points. The lecture notes show that the expected value and variance of the number of fixed points is 1; you may use this freely, without proof.) 2. Is the set of prime numbers countable? Why or why not? 3. What did you find difficult or confusing about the reading or the lectures, and what would you most like to see explained better? If nothing was difficult or confusing, and you understand the material pretty well, tell us what you found most interesting. Please be as specific as possible. CS 70 home page
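(A reader's note on question 1, not an official solution: writing X for the number of fixed points, the lecture notes give E[X] = Var(X) = 1, so Chebyshev's inequality gives

    P(X >= 5) = P(X - E[X] >= 4) <= P(|X - E[X]| >= 4) <= Var(X) / 4^2 = 1/16.)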
{"url":"http://www-inst.eecs.berkeley.edu/~cs70/sp08/quizzes/quiz15/","timestamp":"2014-04-19T02:32:01Z","content_type":null,"content_length":"2190","record_id":"<urn:uuid:553d842c-ed7d-4a53-bcd3-50bf6d6f2ec5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Anomaly detection in gene expression via stochastic models of gene regulatory networks • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information BMC Genomics. 2009; 10(Suppl 3): S26. Anomaly detection in gene expression via stochastic models of gene regulatory networks The steady-state behaviour of gene regulatory networks (GRNs) can provide crucial evidence for detecting disease-causing genes. However, monitoring the dynamics of GRNs is particularly difficult because biological data only reflects a snapshot of the dynamical behaviour of the living organism. Also most GRN data and methods are used to provide limited structural inferences. In this study, the theory of stochastic GRNs, derived from G-Networks, is applied to GRNs in order to monitor their steady-state behaviours. This approach is applied to a simulation dataset which is generated by using the stochastic gene expression model, and observe that the G-Network properly detects the abnormally expressed genes in the simulation study. In the analysis of real data concerning the cell cycle microarray of budding yeast, our approach finds that the steady-state probability of CLB2 is lower than that of other agents, while most of the genes have similar steady-state probabilities. These results lead to the conclusion that the key regulatory genes of the cell cycle can be expressed in the absence of CLB type cyclines, which was also the conclusion of the original microarray experiment study. G-networks provide an efficient way to monitor steady-state of GRNs. Our method produces more reliable results then the conventional t-test in detecting differentially expressed genes. Also G-networks are successfully applied to the yeast GRNs. This study will be the base of further GRN dynamics studies cooperated with conventional GRN inference algorithms. Identifying the key features and dynamics of gene regulatory networks (GRNs) is an important step towards understanding behaviours of biological systems. Thanks to the development of high-throughput technology, the amount of microarray gene expression data has greatly increased, and numerous mathematical models attempt to explain gene regulations using gene networks [1,2]. Once a network structure is inferred, its dynamics needs to be considered. However, most methods focus on the inference of network structure which only provides a snapshot of a given dataset. Probabilistic Boolean Networks (PBNs) represent the dynamics of GRNs [3], but PBNs are limited by the computational complexity of the related algorithms [4]. In [5], a new approach to the steady-state analysis of GRNs based on G-Network theory [6,7] is proposed, while G-Networks were firstly applied to GRNs with simplifying assumptions concerning gene expression in [8]. However, the G-Network approach also exhibits specific difficulties because of the large number of parameters that are needed to compute their steady-state solution. Thus, in this study we reduce the number of model parameters on the basis of biological assumptions and focus on estimating two parameters in particular: the total input rate and steady-state probability of a A G-Network is a probabilistic queuing network having special customers which include positive and negative "customers", signals and triggers [6,7]. It was originally developed also as a model of stochastic neuronal networks [9] with "negative and positive signals or spikes" which represent inhibition and excitation. 
In terms of GRNs, a queue is a "place" in which mRNAs are stored, and an mRNA can be considered to be a "customer" of the G-Network. The positive and negative signals are interpreted as the protein activities such as transcription factors, inducers and repressors. Note that the customers or signals of the G-Network can be any biological molecules. However, in our study, we focus on behaviours of mRNAs because the available GRN data are usually mRNA expressions. Each queue has an input and service rates which represent a transcription and degradation processes, respectively. Our interest is to estimate the steady-state probability that a queue is busy, which corresponds to the probability that an mRNA is present, and we are also interested in the total mRNA input rate of each queue. To evaluation the accuracy of the proposed method, we generated a simple simulation dataset by using the stochastic gene expression models processed with the widely accepted Gillespie algorithm [10,11]. We also examine a real biological dataset obtained from the cell cycle of the budding yeast [12]. Although queueing theory is a common computational tool, G-Networks are an essential departure from queueing theory; in particular conventional queues could not be possibly applied to GRNs because the notion of inhibition does not exist in queueing theory but was introduced by G-Network theory. There are two other essential novelties in our work. First, our approach enables us to obtain the steady-state of GRNs with only polynomial computational complexity due to the product form solution of G-Networks; the computational cost due to large memory space and non-polynomial computational complexity are basic limitations in conventional methods such as PBN. Also our method can provide more reliable measures to detect differentially expressed genes in microarray analysis (as shown in our simulation study). G-networks and gene regulatory networks The GRN model used in this study is the probabilistic gene regulatory model introduced in [5]. In this model, let K[i](t) be integer-valued random variables which represent a quantity (mRNA) of the gene i at time t. If the K[i](t) is zero, the gene i cannot interact with other genes. Then we have the following Probabilities, where Λ[i ]is the total input rate (sum of transcription rate, λ[i ]and increment rate of mRNAs come from outside of system, I[i]), μ[i ]is the service rate (e.g. Degradation rate of mRNAs). o(Δt) → 0 as t → 0. Let r[i ]is representing the activity (signal process) rate of each gene i. Then 1/r[i ]is the average time between successive interactions of gene i with other genes. If the ith gene interacts with other genes, the following events occur: • With probability P^+ (i, j), gene i activates gene j; when this happens, K[i](t) is depleted by 1 and K[j](t) is increased by 1 • With probability P^- (i, j), gene i inhibits gene j; when this happens, both K[i](t) and K[j](t) are depleted by 1 • With probability Q(i, j, l) gene i joins with gene j to act upon gene l in excitatory mode, as a result of which both K[i](t) and K[j](t) are reduced by 1, while K[l](t) is increased by 1 • With probability d[i], which is defined as follow, the signal of gene i exits the system so K[i](t) is depleted by 1 Let's define a random process K(t) = [K[1](t), ..., K[n](t)], t ≥ 0 and an n-vector of non-negative integers k = [k[1], ..., k[n]]. The P (k, t) is the probability that K(t) takes k at time t, P (k, t) = P (K(t) = k). 
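The two displayed formulas referred to in this paragraph (the transition probabilities following "the following Probabilities" and the definition of d_i) were rendered as images in the source page and did not survive extraction. A hedged reconstruction from the surrounding definitions and the usual G-Network conventions, not recovered text, is:

    P[ K_i(t + Δt) = k_i + 1 | K_i(t) = k_i ] = Λ_i Δt + o(Δt)
    P[ K_i(t + Δt) = k_i - 1 | K_i(t) = k_i ] = μ_i Δt + o(Δt),   for k_i > 0,

and, for the departure probability,

    d_i = 1 - Σ_j [ P^+(i, j) + P^-(i, j) + Σ_l Q(i, j, l) ],

i.e. d_i is the probability that an interaction of gene i is routed outside the system rather than to another gene.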
Then the probability that K(t) have k at time t + Δt is defined by where ith element is k[i ]+ 1 (k[i ]- 1) and I(x) is indicator function which is 1 if the condition, x, is satisfied or 0 other wise. The first and second terms describe the increment and decrement of the length of queue i, respectively. Third term is the probability that the gene i is activated but nothing is happened except queue i lose one mRNA. From fourth to sixth terms are the probabilities that gene i is activated and interacts with gene j. The rest terms of (1) represent the probabilities that the interaction of gene i and gene j affect the gene l (length of lth queue). Divide (1) by Δt and introduce the equilibrium probability distribution of the system P(k) = lim[t → ∞ ]P (k, t) then we obtain following dynamic behaviour, Now, let's consider following equations, Where q[i ](= r[i ]+ i is expressed in steady-state. Using (2) and (3), E. Gelenbe showed the following product form is satisfied [5,7]. where for any subset I n such that q[m]<1 for each m I, and I{m[1], ..., m[|I|]}. Results and discussion Simple gene regulatory networks using stochastic gene expression model In order to assess our G-Network model, we construct a simple GRN structure and generate the expression data using a synthetic stochastic gene expression model [13,14]. This stochastic gene expression model has several important features such as protein dimerization [15] and time delay for protein signalling [13]. Figure Figure11 shows the simulated network structure which is based on the following basic principles: the number of proteins per cell chases the number of mRNAs which in turn chases the number of active genes [14]. Figure Figure22 depicts the assumptions of our model and (5)~(11) give the corresponding processes (RPo: RNA open complex, Pro: promoter, R: mRNA, P: protein monomer, PP: protein dimmer, 0: degradation, t: time, and Δt: time increment): Simple gene regulatory network structure. The simulation study performed with the four gene GRN structure. Each gene inhibits its neighbor gene. Assumptions for the stochastic gene expressions. There are total 10 processes (Transcription, Translation, mRNA degradation, Dimerization, Monomerization, Monomer degradation, Dimer degradation, Time delay for protein activation, DNA-protein association/disassociation) ... where i, j A, B, C, D} in Figure Figure1.1. In addition, we assume that proteins such as transcription factors and repressors require accumulation times for their activation [11,13], and use the modified Gillespie algorithm to generate the expression data [10,11]. The cell growth rate and cell volume are fixed, and we consider five cells. Detailed parameters are summarized in Table Table11 with their references. Parameters of stochastic gene expression model The transcription process in (5) follows an exponential distribution with transcription initiation rate λ[2 ][16]. The translation processes are given in (6) and include direct competition between the ribosome binding site and the RNAse-E binding site which degrade the mRNAs. Thus the translation process follows a geometric distribution with probability p and busting size b = p(1 - p) [13,16]. T[D ]is the average time interval between successive competitions, and the number of surviving mRNAs n[2 ]in the population after transcription is blocked with n[2 ]= n[2,0 ]T[half ]= -(log(2)/log(p ))T[D ][13]. Thus the translation initiation rate, λ[3 ]= 1/T[D], can be computed. 
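A note on the displayed equations in the G-Network subsection above: equations (1)-(4) were images in the source page and are missing from this text. The central statement, the product-form result (4), is given here as a reconstruction from standard G-Network theory (an assumption that the model inherits the classical result, not text recovered from the page):

    P(k) = Π_{i=1..n} (1 - q_i) q_i^{k_i},

where each steady-state probability q_i lies in (0, 1) and solves the nonlinear traffic equations summarized in (2)-(3). Each queue length is then marginally geometric, which matches the later remark in the Methods section that the likelihood built from (4) has the form of a geometric likelihood.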
The protein dimer association and disassociation rates are k[a2 ]and k[d2], respectively, as shown in (7) and (8) [ 17]. We also consider the DNA-protein association and disassociation rates (k[a1 ]and k[d2 ]in (9) and (10), respectively) [18]. The degradation rate of mRNA and of proteins are obtained by using the half-life of each molecule (11) [16,17]. We generate three sets of expression data (Dataset 1, 2, and 3); each dataset has two groups, the normal and the case group. These groups are obtained with the same parameter values except for the transcription initiation rate of G[A ]in case group is 0.0012 sec^-1 which is half of the transcription rate in normal group, 0.0025 sec^-1. Both groups are simulated during 3000 seconds. In order to compare these two groups, we perform not only the G-Network analysis but also the t-test which is widely used to find differentially expressed genes in microarray analysis. Datasets 1 and 2 consist of 50 samples each which are drawn from all the data points. In Dataset 1, the expression of G[A ]is significantly different (p-value of t-test <0.01 in Table Table2)2) while the difference of the G [A ]expression in Dataset 2 is not significant. The third dataset consists of 500 samples which are randomly chosen from the original observations. Steady-state probability and total income rate of dataset showing significant p-value of G[A] Table Table22 summarizes the results of the three datasets. In the case groups of Datasets 1 and 2, both the q[A ]and Λ[A ]have the lowest values among the four nodes while the t-test of the G[A ] expression in Dataset 2 shows that it is not significant (p-value = 0.202). In the small sample results (Datasets 1 and 2), our method provides consistent results with large sample analysis (Dataset 3). The ratios (case/normal) also show that the q[A ]and Λ[A], in the case group, are smaller than one while the other ratios stay around one. In Dataset 3, the p-value of G[B ]is significant along with that of G[A ]because the expression of G[A ]directly affects the expression of G[B]. However, G[B ]is not the causal gene in this study. Our G-Network analysis reveals that only G[A ]has lower q and Λ values than other nodes including G[B]. All these results concur with the simulation data generated with one half of the normal transcription rate. Modeling cell cycle gene regulatory networks in budding yeast The cell cycle regulated transcription and its overall controls have been studied in detail for budding yeast [19]. Recent developments in high-throughput microarray techniques help to reveal many of yeast genes controlling the cell cycle [20] which consists of four distinct phases: Gap1 (G1), Synthesis (S), Gap2 (G2), and Mitosis (M). The cells grow during their G1 and G2 phases and their DNA is replicated during the S phase. In the M phase, cell growth stops and the cell divides into two daughter cells that include nuclear division. Many genes are involved with specific cell cycle phases, but the number of key regulators that are responsible for the control of the cell cycle process is much smaller. Thus, based on published information, we build a cell cycle GRN with the key regulators in budding yeast as shown in Figure Figure3,3, although the relationships that contribute to the true regulatory network structure of the cell cycle still remain uncertain. 
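Returning briefly to the simulation study described above: the expression data are generated with Gillespie's direct-method stochastic simulation algorithm over reactions of the kind listed in (5)-(11). The sketch below is a minimal, self-contained Python illustration of that style of simulation for a single gene with a repressible promoter; the rate constants are placeholders, not the values from Table 1, and the paper's full model (four genes, dimerization, activation delays) is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical rate constants (per second); placeholders for illustration only.
    k_tx, k_tl = 0.0025, 0.05        # transcription / translation initiation
    d_m, d_p = 0.005, 0.001          # mRNA / protein degradation
    k_on, k_off = 0.001, 0.01        # repressor-promoter binding / unbinding

    def gillespie(t_end=3000.0):
        t, free_promoter, mrna, protein, repressor = 0.0, 1, 0, 0, 20
        trajectory = []
        while t < t_end:
            # propensities of the competing reactions (direct method)
            a = np.array([
                k_tx * free_promoter,              # transcription (promoter free)
                k_tl * mrna,                       # translation
                d_m * mrna,                        # mRNA degradation
                d_p * protein,                     # protein degradation
                k_on * free_promoter * repressor,  # repressor binds the promoter
                k_off * (1 - free_promoter),       # repressor unbinds
            ])
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)         # exponential waiting time
            r = rng.choice(6, p=a / a0)            # which reaction fires
            if r == 0:
                mrna += 1
            elif r == 1:
                protein += 1
            elif r == 2:
                mrna -= 1
            elif r == 3:
                protein -= 1
            elif r == 4:
                free_promoter, repressor = 0, repressor - 1
            else:
                free_promoter, repressor = 1, repressor + 1
            trajectory.append((t, mrna, protein))
        return trajectory

With that aside noted, the text now turns to the budding-yeast cell-cycle network, whose true regulatory structure, as remarked above, remains uncertain.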
Therefore we simplify the cell cycle network structure by selecting thirteen key regulatory genes (the gray circles in Figure Figure3)3) and connect the genes without regard to the transcriptional and post-transcriptional processes. Figure Figure44 shows the reconstructed regulatory network structure. Cell cycle regulatory network structure in budding yeast. The genes are represented by circles. Complex molecules consisted of two more proteins are represented by a white rectangle. The gray and black boxes are transcription and post-transcription processes, ... Cell cycle regulatory network structure with selected 13 genes. Each node represents a queue. Signals are transferred through the edges. Solid and dashed lines are positive and negative interactions, The activity of cyclin-dependent kinases (CDKs) plays an important role in controlling periodic events during cell cycle. Some studies of cell cycle with high-throughput technologies have suggested alternative regulation models of periodic transcription [20]. D. Olando et., al. [12] measured the transcription levels of cell cycle related genes with the use of Yeast 2.0 oligonucleotide array and determined the manner in which transcription factor networks contribute to CDKs and to global regulation of the cell-cycle transcription process. This microarray dataset is used in our study with the cell cycle network structure of Figure Figure4;4; it consists of two groups: one group is obtained from wild-type (WT) cells and the other is from cyclin-mutant (CM) cells which are disrupted for all S-phase and mitotic cyclins (mutate clb1, 2, 3, 4, 5, and 6). The microarray data consist of a total of 30 data points taken over 270 minutes. We subdivide it into five states (groups), each consisting of 6 data points. The expression levels are transformed by taking the natural logarithm. Figure Figure55 depicts the transformed expression profiles of the 13 genes with 5 states. The black and gray solid lines are the expression profiles from WT and CM cells, respectively, and S1, S2, ..., S5 represent the five states. It is obvious that the profiles of CLB2 are different between WT and CM cells because the CM dataset is designed to monitor the cell cycle processes without the clb cyclines. Expression profiles of selected 13 genes. The black and gray lines represent the wild-type (WT) and clb-mutant (CM) groups' expression levels. Table Table33 summarizes the steady-state probabilities of 13 genes in the cell cycle GRN. All genes have similar steady-state probabilities in the WT and CM cell groups except for CLB2 in the CM group, which has a lower steady-state probability than the elements of the WT group: as shown in Table Table3,3, the ratio of CM/WT is smaller than one (bold letter). This smaller probability can be explained by considering the experimental design of the CM dataset which is obtained without clb cyclines. Also, the original study of this dataset suggested alternative cell cycle regulatory pathways in [12] which had revealed that over 70% of the cell cycle related genes were expressed periodically without the clb cyclines. In our results, the steady-state probabilities of the CM group are consistent with that of the WT group. These results draw the same conclusion as the original study, i.e. that the steady-state of the 12 genes does not entirely depend on the expression of CLB2. Table Table44 shows the estimated total input rate of the 13 genes. These results also show that only the input rates of CLB2 decrease in the CM group. 
Steady-state probability of the 13 genes in cell cycle GRNs Estimated total input rate of the 13 genes in cell cycle GRNs This paper has used the G-Network approach [5-8] to model GRNs. Two model parameters, the steady-state probability, q[i], and the total input rate, Λ[I], are estimated by determining the boundary of Λ[i ]and using a grid search. We first use simulated gene expression data generated on the basis of a stochastic gene expression model. Two groups (normal and case) of expression data are examined. These two groups are exactly the same except for one parameter, the transcription initiation rate. We have observed that the G-Network based method is able to detect the abnormally expressed genes, while the t-test produces false positives. Then, using real data, we have observed that the steady-state probability of CLB2 is lower than that of other agents and concluded that the key genes of cell cycle regulation can be expressed without the clb cyclines; this result is consistent with the original experimental study. However, the unchanged steady-state probabilities in all the five states may need to be considered, because the cell cycle has four phases (G1, S, G2, M) and expressions of genes involved with a specific phase are expected to be different from those in other phases. Also the small decrease rate and relatively large total input rates of CLB2 may require a more careful analysis of the G-Network approach in relation to cell cycle GRN structure. The manner in which we have used G-Network models in this paper did not currently include simultaneous interactions with three or more nodes. However this is not really a limiting effect of the model, since it suffices to include chain representations of dependencies in the G-Network model as has been done for neuronal networks [9] to cover excitatory and inhibitory effects that involve three or more nodes, and in fact random chains of nodes of any length. Although in this study the probabilities that gene i affect gene j, P^+ (i, j) and P^- (i, j) in (3), are fixed at the value one, we think that the conventional reverse engineering GRN methods using the "Ensemble" method [21] can provide these probabilities more accurately for an improved steady-state analysis of GRNs. In conclusion, our study has illustrated the use of G-Networks as a new approach for the steady-state analysis of GRNs, and has shown their usefulness in obtaining quantities such as the effective transcription rate and the steady-state probabilities, using them to detect differentially expressed genes, thus introducing a new approach which differs from more conventional microarray analysis methods. Future research will investigate the ensemble approaches to GRNs [21] based on the G-Network methodology in [5], which will allow to infer GRN structures, and also to monitor their steady-state behaviour. Once a GRN structure is determined, it is necessary to estimate the total input rate (Λ[i]) of ith queue and its steady-state probability, (q[i]). For the simplicity, the probabilities, P^+ (i, j), P ^- (i, j), and Q(i, j, l) in (3) are set to be one. Then, it can be rewritten as follows In (12), the Λ[i ]and R[i ]is the total input (Λ[i ]= λ[i ]+ I[i]) and total output rates (R[i ]= r[i ]+ μ[i]), respectively. i positively and i negatively. We fix the r[i ]as the number of out degrees of gene i and the degradation rate of mRNA, i, as a constant (Table (Table1)1) because the total output rate, R[i ]is not our interest. 
Therefore, we need to estimate two parameters, the total input rate, Λ[i], and the steady-state probability, q[i]. Let [i], which is larger than zero. The lower bound of total input is regarded as an initial transcription rate without any external input. In this study, we use 16]. The upper bound of Λ[i ] where the probabilities Let q[i]. Then where x[ij ]is the observed expression level (number of mRNAs) of ith gene at the jth observation and max(x[ij]) is the maximum value among all observed values of ith gene. Let Λ[iu ]is a value of total input rate between the lower bound and the upper bound of Λ[i ](q[i ]can be obtained numerically by solving (12) with the [iu]. Once the steady-state probability is determined, the log-likelihood of the given model can be computed by using (4) which is the same form of the likelihood of geometric distribution. It is known that the log-likelihood of geometric distribution is convex so we choose appropriate value Λ[i ]which maximizes the log-likelihood function. For each value of total input, Λ[iu ](q[iu], with initial value, L[iu], which is used to choose the optimal I total input rate, Note that the q[iu ]is a numerical solution of (12) with initial value, [iu]. In order to compute i, q[i], can be obtained by updating its value iteratively until the d^(t) <δ where d^(t) is the difference between tth iteration. In this study, δ is 0.0001. Competing interests The authors declare that they have no competing interests. Authors' contributions Haseong Kim developed the data analysis techniques including synthetic data generation and tested the models on the data. He wrote the first draft of the paper. E. Gelenbe developed the G-Network models and the specific application of these models to GRNs. He rewrote the paper for submission, and then finalised the accepted paper in preparation for its Other papers from the meeting have been published as part of BMC Bioinformatics Volume 10 Supplement 15, 2009: Eighth International Conference on Bioinformatics (InCoB2009): Bioinformatics, available online at http://www.biomedcentral.com/1471-2105/10?issue=S15. Some of this research has been supported by the EU FP7 DIESIS Project. This article has been published as part of BMC Genomics Volume 10 Supplement 3, 2009: Eighth International Conference on Bioinformatics (InCoB2009): Computational Biology. The full contents of the supplement are available online at http://www.biomedcentral.com/1471-2164/10?issue=S3. Articles from BMC Genomics are provided here courtesy of BioMed Central • MedGen Related information in MedGen • PubMed PubMed citations for these articles Your browsing activity is empty. Activity recording is turned off. See more...
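The estimation procedure described in the Methods section above (bracket the total input rate between its lower and upper bounds, solve the traffic equation (12) numerically for q_i at each candidate value, and keep the value that maximizes the log-likelihood implied by the product form, each marginal being geometric) can be sketched as follows. Since equation (12) itself is missing from this extraction, its exact form is left to a user-supplied function; all names below are illustrative assumptions rather than the paper's notation.

    import numpy as np

    def geometric_loglik(x, q):
        # log-likelihood of observed counts x under the geometric marginal
        # P(K = k) = (1 - q) * q**k implied by the product form (4)
        x = np.asarray(x, dtype=float)
        return np.sum(x * np.log(q) + np.log(1.0 - q))

    def solve_q(lam, traffic_map, q0=0.5, delta=1e-4, max_iter=10000):
        # fixed-point iteration q <- traffic_map(lam, q); traffic_map encodes
        # equation (12), whose exact form depends on the network structure
        q = q0
        for _ in range(max_iter):
            q_new = traffic_map(lam, q)
            if abs(q_new - q) < delta:   # the paper's stopping rule, delta = 1e-4
                return q_new
            q = q_new
        return q

    def estimate_total_input(x, lam_lo, lam_hi, traffic_map, n_grid=200):
        # grid search over the total input rate, keeping the value that
        # maximizes the geometric log-likelihood of the observations
        best_lam, best_ll = lam_lo, -np.inf
        for lam in np.linspace(lam_lo, lam_hi, n_grid):
            q = min(max(solve_q(lam, traffic_map), 1e-9), 1.0 - 1e-9)
            ll = geometric_loglik(x, q)
            if ll > best_ll:
                best_lam, best_ll = lam, ll
        return best_lam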
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2788379/?tool=pubmed","timestamp":"2014-04-17T04:08:37Z","content_type":null,"content_length":"105043","record_id":"<urn:uuid:f7282367-38ce-42d0-bc46-f41e77fda863>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics and Statistics Colloquium; Department of Mathematics and Statistics; Wright State University; Dayton, Ohio; January 7, 2000 We describe an approach to the dynamics of stochastic systems with finite memory using multiplicative cocycles in Hilbert space. We introduce the notion of hyperbolicity for stationary solutions of the stochastic differential system. We then establish the existence of smooth stable and unstable manifolds in a neighborhood of a hyperbolic stationary solution. The stable and unstable manifolds are stationary and asymptotically invariant under the stochastic semiflow. The proof uses ideas from infinite-dimensional multiplicative ergodic theory and interpolation arguments.
{"url":"http://opensiuc.lib.siu.edu/math_misc/21/","timestamp":"2014-04-19T13:14:37Z","content_type":null,"content_length":"20778","record_id":"<urn:uuid:c99b927b-a904-4f51-afc1-579c4e426242>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Is anything known about flow along vector fields over complete normed fields? up vote 1 down vote favorite In Bourbaki "Variétés différentielles et analytiques" there is a statement (without proof) that for a vector field on smooth manifold over complete normed field (characteristic zero) there is a local unique flow along it. Is it true or not? Is there any proof for this statement? differential-equations p-adic-analysis smooth-manifolds 2 No idea. What are the basic definitions here for smooth manifold and vector field over one of your complete normed fields? Over the reals, there is no difference between defining a vector as a derivation of functions at a point or as an equivalence class of curves going through the same point. – Will Jagy Sep 6 '11 at 20:18 I'm trying to get rid of the ambiguous [differential-equations] tag, usually in favor of [ca.analysis-and-odes] or [ap.analysis-of-pdes] but neither alternative seems to apply in this case. If you have an idea for a substitute that applies to this case, I will gladly create it for you. – François G. Dorais♦ Sep 7 '11 at 2:19 @Will: you know the definition of smooth functions, you can check the Implicit function theorem for complete normed fields (proof in Rudin's book on calculus is good for it), so you have a definition of smooth manifold. Over p-adic numbers (and Witt vectors over $\mathbb F_{p^n}$) there is no difference betweeb derivations and vector fields too. Maybe someone know the proof for p-adic numbers? I didn't find any such a statement (and the contrary too) in textbooks (like the Dwork's one) on p-adic differential equations. – zroslav Sep 7 '11 at 9:02 3 zroslav: maybe you should be less quick to assume everyone knows what a smooth function is over a complete normed field besides R. Although I don't have Bourbaki's book in front of me I would guess that in it what's defined geometrically over complete normed fields besides R is not the concept of a smooth manifold using infinitely differentiable functions, but rather an analytic manifold using functions that are given by power series. – KConrad Sep 7 '11 at 11:48 1 In fact, there is a notion of smoothness for the p-adic case introduced by Schikhof in his book "Ultrametric Calculus", and there are papers by Ludkovsky (Lyudkovskij) who develops a theory of manifolds based on Schikhof's definition. However I did not study those papers in detail. – Anatoly Kochubei Sep 8 '11 at 6:06 show 2 more comments 1 Answer active oldest votes In the p-adic case, the standard theory deals not with smooth functions and manifolds but with analytic ones. A complete exposition including necessary results from analysis is given up vote 1 down by J.-P. Serre, "Lie algebras and Lie groups", New York, Benjamin, 1965. Yes, the analytic case is rather simpler – zroslav Sep 8 '11 at 7:01 add comment Not the answer you're looking for? Browse other questions tagged differential-equations p-adic-analysis smooth-manifolds or ask your own question.
{"url":"http://mathoverflow.net/questions/74681/is-anything-known-about-flow-along-vector-fields-over-complete-normed-fields","timestamp":"2014-04-16T08:05:47Z","content_type":null,"content_length":"57587","record_id":"<urn:uuid:d35e71ba-d80e-48da-a8d0-106d41cc2877>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
numeric-prelude-0.0.4: An experimental alternative hierarchy of numeric type classes Source code Contents Index Polynomials and rational functions in a single indeterminate. Polynomials are represented by a list of coefficients. All non-zero coefficients are listed, but there may be extra '0's at the end. Usage: Say you have the ring of Integer numbers and you want to add a transcendental element x, that is an element, which does not allow for simplifications. More precisely, for all positive integer exponents n the power x^n cannot be rewritten as a sum of powers with smaller exponents. The element x must be represented by the polynomial [0,1]. In principle, you can have more than one transcendental element by using polynomials whose coefficients are polynomials as well. However, most algorithms on multi-variate polynomials prefer a different (sparse) representation, where the ordering of elements is not so fixed. If you want division, you need Number.Ratios of polynomials with coefficients from a Algebra.Field. You can also compute with an algebraic element, that is an element which satisfies an algebraic equation like x^3-x-1==0. Actually, powers of x with exponents above 3 can be simplified, since it holds x^3==x+1. You can perform these computations with Number.ResidueClass of polynomials, where the divisor is the polynomial equation that determines x. If the polynomial is irreducible (in our case x^3-x-1 cannot be written as a non-trivial product) then the residue classes also allow unrestricted division (except by zero, of course). That is, using residue classes of polynomials you can work with roots of polynomial equations without representing them by radicals (powers with fractional exponents). It is well-known, that roots of polynomials of degree above 4 may not be representable by radicals. Functor T C T C a b => C a (T b) (C a, C a b) => C a (T b) (Eq a, C a) => Eq (T a) (C a, Eq a, Show a, C a) => Fractional (T a) (C a, Eq a, Show a, C a) => Num (T a) Show a => Show (T a) (Arbitrary a, C a) => Arbitrary (T a) (C a, C a) => C (T a) C a => C (T a) C a => C (T a) C a => C (T a) C a => C (T a) (C a, C a) => C (T a) (C a, C a) => C (T a) (C a, C a) => C (T a) fromCoeffs :: [a] -> T a Source coeffs :: T a -> [a] Source showsExpressionPrec :: (Show a, C a, C a) => Int -> String -> T a -> String -> String Source evaluate :: C a => T a -> a -> a Source evaluateCoeffVector :: C a v => T v -> a -> v Source Here the coefficients are vectors, for example the coefficients are real and the coefficents are real vectors. evaluateArgVector :: (C a v, C v) => T a -> v -> v Source Here the argument is a vector, for example the coefficients are complex numbers or square matrices and the coefficents are reals. compose :: C a => T a -> T a -> T a Source compose is the functional composition of polynomials. It fulfills eval x . eval y == eval (compose x y) equal :: (Eq a, C a) => [a] -> [a] -> Bool Source add :: C a => [a] -> [a] -> [a] Source sub :: C a => [a] -> [a] -> [a] Source negate :: C a => [a] -> [a] Source horner :: C a => a -> [a] -> a Source Horner's scheme for evaluating a polynomial in a ring. hornerCoeffVector :: C a v => a -> [v] -> v Source Horner's scheme for evaluating a polynomial in a module. hornerArgVector :: (C a v, C v) => v -> [a] -> v Source shift :: C a => [a] -> [a] Source Multiply by the variable, used internally. 
unShift :: [a] -> [a]
mul :: C a => [a] -> [a] -> [a]
  mul is fast if the second argument is a short polynomial; MathObj.PowerSeries.** relies on that fact.
scale :: C a => a -> [a] -> [a]
divMod :: (C a, C a) => [a] -> [a] -> ([a], [a])
tensorProduct :: C a => [a] -> [a] -> [[a]]
tensorProductAlt :: C a => [a] -> [a] -> [[a]]
mulShear :: C a => [a] -> [a] -> [a]
mulShearTranspose :: C a => [a] -> [a] -> [a]
progression :: C a => [a]
differentiate :: C a => [a] -> [a]
integrate :: C a => a -> [a] -> [a]
integrateInt :: (C a, C a) => a -> [a] -> [a]
  Integrates if it is possible to represent the integrated polynomial in the given ring. Otherwise undefined coefficients occur.
fromRoots :: C a => [a] -> T a
alternate :: C a => [a] -> [a]
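To make the representation and the evaluation functions above concrete, here is a small usage sketch. It is illustrative only and not taken from the package's documentation; it assumes the module is imported qualified as MathObj.Polynomial, that coefficient lists run from the constant term upward (as the representation notes above imply), and that Integer carries the required numeric-prelude class instances.

```haskell
import qualified MathObj.Polynomial as Poly

-- p(x) = 164 + 2*x + x^2, coefficients listed from the constant term upward
p :: Poly.T Integer
p = Poly.fromCoeffs [164, 2, 1]

-- x itself, the transcendental element described in the usage notes above
x :: Poly.T Integer
x = Poly.fromCoeffs [0, 1]

main :: IO ()
main = do
  print (Poly.coeffs (p + x))        -- expected [164,3,1], via the Num instance of T
  print (Poly.evaluate p 2)          -- 164 + 2*2 + 2^2 = 172
  print (Poly.horner 2 [164, 2, 1])  -- the same value, computed on a raw coefficient list
```

Note that several functions on this page (add, mul, divMod, horner) work on raw coefficient lists rather than on the T wrapper, so they can be used directly when the newtype is not needed.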
{"url":"http://hackage.haskell.org/package/numeric-prelude-0.0.4/docs/MathObj-Polynomial.html","timestamp":"2014-04-20T01:06:43Z","content_type":null,"content_length":"29649","record_id":"<urn:uuid:a6d12497-e2f4-4425-91f1-3dc991c76f10>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Towards an Optimal Bit-Reversal Permutation Program

- PROCEEDINGS OF THE SYMPOSIUM ON DISCRETE, 2000. Cited by 47 (3 self).
We present a model that enables us to analyze the running time of an algorithm on a computer with a memory hierarchy with limited associativity, in terms of various cache parameters. Our cache model, an extension of Aggarwal and Vitter's I/O model, enables us to establish useful relationships between the cache complexity and the I/O complexity of computations. As a corollary, we obtain cache-efficient algorithms in the single-level cache model for fundamental problems like sorting, FFT, and an important subclass of permutations. We also analyze the average-case cache behavior of mergesort, show that ignoring associativity concerns could lead to inferior performance, and present supporting experimental evidence. We further extend our model to multiple levels of cache with limited associativity and present optimal algorithms for matrix transpose and sorting. Our techniques may be used for systematic ...

- Cited by 23 (1 self).
We investigate the memory system performance of several algorithms for transposing an N x N matrix in-place, where N is large. Specifically, we investigate the relative contributions of the data cache, the translation lookaside buffer, register tiling, and the array layout function to the overall running time of the algorithms. We use various memory models to capture and analyze the effect of various facets of cache memory architecture that guide the choice of a particular algorithm, and attempt to experimentally validate the predictions of the model. Our major conclusions are as follows: limited associativity in the mapping from main memory addresses to cache sets can significantly degrade running time; the limited number of TLB entries can easily lead to thrashing; the fanciest optimal algorithms are not competitive on real machines even at fairly large problem sizes unless cache miss penalties are quite high; low-level performance tuning "hacks", such as register tiling and array alignment, can significantly distort the effects of improved algorithms; and hierarchical nonlinear layouts are inherently superior to the standard canonical layouts (such as row- or column-major) for this problem.

- In Proc. 40th Annual Symposium on Foundations of Computer Science, 1999. Cited by 12 (1 self).
This paper presents asymptotically optimal algorithms for rectangular matrix transpose, FFT, and sorting on computers with multiple levels of caching. Unlike previous optimal algorithms, these algorithms are cache oblivious: no variables dependent on hardware parameters, such as cache size and cache-line length, need to be tuned to achieve optimality. Nevertheless, these algorithms use an optimal amount of work and move data optimally among multiple levels of cache. For a cache with size Z and cache-line length L, where Z = Ω(L^2), the number of cache misses for an m x n matrix transpose is Θ(1 + mn/L). The number of cache misses for either an n-point FFT or the sorting of n numbers is Θ(1 + (n/L)(1 + log_Z n)). We also give a Θ(mnp)-work algorithm to multiply an m x n matrix by an n x p matrix that incurs Θ(1 + (mn + np + mp)/L + mnp/(L√Z)) cache faults. We introduce an "ideal-cache" model to analyze our algorithms. We prove that an optimal cache-oblivious algorithm designed for two levels of memory is also optimal for multiple levels and that the assumption of optimal replacement in the ideal-cache model can be simulated efficiently by LRU replacement. We also provide preliminary empirical results on the effectiveness of cache-oblivious algorithms in practice.

- In Proceedings of HPCA 5, 1999. Cited by 8 (1 self).
This paper explores the interplay between algorithm design and a computer's memory hierarchy. Matrix transpose and the bit-reversal reordering are important scientific subroutines which often exhibit severe performance degradation due to cache and TLB associativity problems. We give lower bounds that show for typical memory hierarchy designs, extra data movement is unavoidable. We also prescribe characteristics of various levels of the memory hierarchy needed to perform efficient bit-reversals. Insight gained from our analysis leads to the design of a near optimal bit-reversal algorithm. This Cache Optimal Bit Reverse Algorithm (COBRA) is implemented on the Digital Alpha 21164, Sun Ultrasparc 2, and IBM Power2. We show that COBRA is near optimal with respect to execution time on these machines and performs much better than previous best known algorithms. Copyright 1998 IEEE. Published in the Proceedings of HPCA 5, 9-13 January 1999 in Orlando, FL.

- , 2000. Cited by 5 (0 self).
Contents: 1 Introduction: 1. Divide-and-Conquer and the Memory Hierarchy; 2. Overview of Architecture-Cognizant Divide-and-Conquer; 3. Overview of Napoleon; 4. What You Can Expect; 5. Contributions (1. Divide-and-Conquer Algorithms for Performance Programming; 2. The Importance of Architecture-Cognizance; 3. Complexity of Determining Variant Policy; 4. A Framework and System for Divide-and-Conquer Implementations; 5. The Fastest Portable FFT Algorithm); 6. Outline of Thesis.

- In: Proc. Parallel Architectures and Compilation Techniques (PACT), 2006. Cited by 5 (1 self).
Matrix transposition is an important kernel used in many applications. Even though its optimization has been the subject of many studies, an optimization procedure that targets the characteristics of current processor architectures has not been developed. In this paper, we develop an integrated optimization framework that addresses a number of issues, including tiling for the memory hierarchy, effective handling of memory misalignment, utilizing memory subsystem characteristics, and the exploitation of the parallelism provided by the vector instruction sets in current processors. A judicious combination of analytical and empirical approaches is used to determine the most appropriate optimizations. The absence of problem information until execution time is handled by generating multiple versions of the code; the best version is chosen at runtime, with assistance from minimal-overhead inspectors. The approach highlights aspects of empirical optimization that are important for similar computations with little temporal reuse. Experimental results on PowerPC G5 and Intel Pentium 4 demonstrate the effectiveness of the developed framework. Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors - code generation; compilers; optimization.

- , 2000.
We describe a model that enables us to analyze the running time of an algorithm in a computer with a memory hierarchy with limited associativity, in terms of various cache parameters. Our model, an extension of Aggarwal and Vitter's I/O model, enables us to establish useful relationships between the cache complexity and the I/O complexity of computations. As a corollary, we obtain cache-optimal algorithms for some fundamental problems like sorting, FFT, and an important subclass of permutations in the single-level cache model. We also show that ignoring associativity concerns could lead to inferior performance, by analyzing the average-case cache behavior of mergesort. We further extend our model to multiple levels of cache with limited associativity and present optimal algorithms for matrix transpose and sorting. Our techniques may be used for systematic exploitation of the memory hierarchy starting from the algorithm design stage, and dealing with the hitherto unresolved problem of limited associativity.

Complex tensor contraction expressions arise in accurate electronic structure models in quantum chemistry, such as the coupled cluster method. This paper addresses two complementary aspects of performance optimization of such tensor contraction expressions. Transformations using algebraic properties of commutativity and associativity can be used to significantly decrease the number of arithmetic operations required for evaluation of these expressions. The identification of common subexpressions among a set of tensor contraction expressions can result in a reduction of the total number of operations required to evaluate the tensor contractions. The first part of the paper describes an effective algorithm for operation minimization ...

- , 2008. "... The dissertation is submitted ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.46.9319","timestamp":"2014-04-18T20:27:13Z","content_type":null,"content_length":"36307","record_id":"<urn:uuid:5b2beb42-404b-446e-95b1-1ef0b6013d22>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
completing the square

February 10th 2009, 11:35 AM   #1   (Jan 2009)
completing the square
By completing the square, the expression (x^2)+2x+164 equals ((x+A)^2)+B where A=______ and B=_______. I found A, which is 1, but I can't figure out B. Help please.

February 10th 2009, 12:22 PM   #2   Junior Member (Feb 2009)
You need to set the expression equal to zero and move 164 to the other side of the equation, so you will have x^2 + 2x = -164. Then take half of the coefficient of x (half of 2 is 1) and square it (1^2 = 1); this 1 must be added to both sides of the equation: x^2 + 2x + 1 = -164 + 1. At this point you have already completed the square, and you only need to simplify both sides: (x+1)^2 = -163, which is the same as (x+1)^2 + 163 = 0. In other words, x^2 + 2x + 164 = (x+1)^2 + 163. Therefore A = 1 and B = 163.
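For reference, the same computation can be done in one line by completing the square directly on the expression, without moving anything to the other side; the general pattern x^2 + bx + c = (x + b/2)^2 + c - (b/2)^2 is applied here with b = 2 and c = 164:

```latex
\[
  x^{2} + 2x + 164 \;=\; (x^{2} + 2x + 1) + 163 \;=\; (x+1)^{2} + 163,
  \qquad\text{so } A = 1 \text{ and } B = 163.
\]
```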
{"url":"http://mathhelpforum.com/pre-calculus/72880-completeing-square.html","timestamp":"2014-04-20T09:23:04Z","content_type":null,"content_length":"31752","record_id":"<urn:uuid:c3f0981f-0ea7-41fa-a12e-5be7a22e8dc0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00402-ip-10-147-4-33.ec2.internal.warc.gz"}