What Are Moving Averages? Part 1 Learn what moving averages are, how to calculate them, and find out how they can help your daily life. August 11, 2012 With the 2012 Olympics coming to a close, it’s time to start thinking about 2016. So today, we’re going to imagine that you’re a runner training for the 1500 meter race at the next Olympic games. At the end of each day, you run a practice 1500 meter race and record your time. Since we have the luxury of making this story as awesome as we please, let’s not just assume you’re training for the Olympics (which would be impressive enough), let’s assume that you’re one of the early favorites to win. Which means you need to get your time down to around 3:30 (meaning 3 minutes 30 seconds)…which is really, really fast! The big question for today is: What’s the best way to track your progress? In other words, how do you know if you’re improving enough? Should you just look at the day-to-day changes in your time? Or is there a better way? In truth, there’s no absolutely right or wrong answer here—but there are better and worse answers. And a better answer in this situation is to use something called a moving average to track your progress. Why? That’s exactly what we’re going to talk about today. Runner’s Notebook: Week 1 Getting back to your quest for 1500 meter Olympic glory, let’s start by taking a look at the practice times you’ve recorded over the past week. On Monday you ran 1500 meters in 3:45, on Tuesday you improved to 3:38, on Wednesday you were a little off and came in at 3:50, Thursday was better at 3:41, and Friday was even better at 3:36. As you can see, your times bounced all over the place. So how can you fight through this mess and figure out how much you really improved—or if you improved at all, for that matter? Well, since you went from 3:45 on Monday to 3:36 on Friday, we can just say that you improved by 9 seconds…right? Or is that too optimistic? 
Review: Average and Mean To answer these questions, we first need to figure out what a moving average is. And to understand what a moving average is, we need to understand what the word “average” means. As we’ve talked about before, the word “average” can actually signify many things, but it usually refers to what’s known as the mean. As you probably know, to find the mean of a group of numbers we just add them up and then divide by the size of the group. So, to find your mean 1500 meter time over the 5 practice runs from the past week, just add up the times and divide by 5 to get a mean of 3:42. Runner’s Notebook: Week 2 But what does the mean value we’ve found really mean? To make things a bit clearer, let’s put another week’s worth of practice run times into your runner’s notebook. Let’s assume that the following week includes times of 3:44, then down to 3:38, up to 3:45, down to 3:34, and finally finishing up on Friday with a time of 3:39. As we did with the first week’s times, we can find the mean time of your practice runs over the second week by adding them up and dividing by 5. The result is a mean of 3:40. Now, back to the question: What do these mean values really mean? Well, finding the mean value for a given week is really just a way to evenly “smooth out” those times over the entire week. And when we compare the smoothed out times for these two weeks, we learn that you improved from a mean of 3:42 seconds in the first week to a mean of 3:40 seconds in the second week. So these mean values mean that you’ve improved 2 seconds on average…not bad! Why Bother With Averages? But you might be wondering: Why are we bothering to find averages at all? Isn’t this a lot more work than we need to do? If we’re trying to judge progress, can’t we just look at the day-to-day changes in your 1500 meter time? Unfortunately, not really…at least not easily. 
Because, as we’ve seen, like a lot of other things in the world—the weather, stock prices, and your weight, to name a few—your 1500 meter practice times fluctuate a lot from day to day. And those fluctuations make it extremely hard to separate meaningful changes due to actual progress from meaningless here-today-gone-tomorrow noise. Sometimes this noise will slow down your time (perhaps you ate something that didn’t exactly agree with you that morning) and sometimes it will speed it up (perhaps you had a particularly favorable wind at your back on the homestretch). But the important point is that these up-and-down fluctuations mostly go away when you smooth out the times by finding an average value. What Is a Moving Average? Being able to track week-to-week improvements by finding weekly mean values as we’ve done so far is great, but what if you really want to keep an eye on your day-to-day changes? Is there a way to do that and still get rid of those noisy fluctuations? In other words, is there a way to clean up the data so that you can see the forest for the trees? As you may have guessed, that’s exactly what a moving average does. There are many kinds of moving averages, but today we’re going to focus on what’s called a simple moving average. Let’s say you want to keep track of your race times using a 3-day moving average. To find the average time for a day, just add that day’s time to the times from the previous 2 days and divide by 3. (To use a 4-day moving average instead, just add each day’s time to the times from the previous 3 days and divide by 4, and so on.) 
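The procedure just described is easy to sketch in a few lines of Python (a sketch, not the author's code), with the race times converted to seconds so that 3:45 becomes 225:

```python
# The two weeks of practice times from the runner's notebook, in seconds
# (3:45 -> 225, 3:38 -> 218, and so on).
times = [225, 218, 230, 221, 216, 224, 218, 225, 214, 219]

def simple_moving_average(values, window):
    """Average each value with the (window - 1) values before it."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

for avg in simple_moving_average(times, 3):
    minutes, seconds = divmod(avg, 60)
    print(f"{int(minutes)}:{seconds:05.2f}")
```

The first value printed is 3:44.33, the 3-day average for the first Wednesday, and the rest match the day-by-day values worked through in the notebook.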
If you do this over the two week period in your runner’s notebook, you’ll find a 3-day moving average time of 3:44.33 for the first Wednesday (which, if you think about it, is the first day you can calculate a 3-day moving average for), then down to 3:43.00, down again to 3:42.33, 3:40.33, and 3:39.33, then up to 3:42.33, down to 3:39.00, and finally finishing at 3:39.33 on the second Friday. As you can see, there are still day-to-day fluctuations, but they are much less prominent than they were before because the 3-day window smooths them out to reveal the overall trend—a trend indicating that you’re well on your way to 2016 Olympic glory! Wrap Up Okay, that’s all the math we have time for today. But that is by no means all that we have to say about moving averages. For example, how do you know how big a window your average should track? What happens if you change the size of that window? What are some of the other kinds of moving averages? And what are some of their other real-world applications? Stay tuned…we’ll answer all of these questions and more in an upcoming episode. Also, as luck would have it, you can find another example of just how useful moving averages are in this week’s Nutrition Diva episode about the best way to keep track of your weight. Be sure to check it out! Remember to become a fan of the Math Dude on Facebook, where you’ll find lots of great math posted throughout the week. If you’re on Twitter, please follow me there, too. Finally, please send your math questions my way via Facebook, Twitter, or email at mathdude@quickanddirtytips.com. Until next time, this is Jason Marshall with The Math Dude’s Quick and Dirty Tips to Make Math Easier. Thanks for reading, math fans!
{"url":"http://www.quickanddirtytips.com/education/math/what-are-moving-averages-part-1","timestamp":"2014-04-18T05:35:01Z","content_type":null,"content_length":"90684","record_id":"<urn:uuid:fdb87544-30f5-419d-9ba4-62e83aed5b2e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Would American companies do more good by refusing to cooperate - JustAnswer Would American companies do more good by refusing to cooperate with Chinese authorities (and risk not being able to do business in China) or by cooperating and working gradually to spread Internet
{"url":"http://www.justanswer.com/general/5appl-american-companies-good-refusing-cooperate.html","timestamp":"2014-04-20T06:12:48Z","content_type":null,"content_length":"84141","record_id":"<urn:uuid:55f1dc75-7fc6-4589-a7a1-ac5b9b7de807>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Pico Rivera Find a Pico Rivera Precalculus Tutor I have a BS in chemistry from Cal State Long Beach and am in the process of getting my single subject credential to teach high school chemistry. I am qualified to tutor in chemistry and most math classes. I am currently a substitute teacher in the Whittier area and am great at explaining topics in a variety of ways to help students succeed. 7 Subjects: including precalculus, chemistry, calculus, geometry ...I love working with students and have experience teaching a wide range of classes from pre-Algebra to Advanced Engineering Mathematics. I am currently at Fuller Seminary in Pasadena preparing for a career in community work as a pastor. Here is a testimonial from one of my students: “I love this teacher. 9 Subjects: including precalculus, calculus, geometry, algebra 1 ...I have been tutoring in math now for years, everything from algebra to calculus to standardized test prep math (SAT, ACT, AP Calculus, SAT Math II Subject test). My goal is to make students see the beauty and simplicity of math, demystifying the subject and making it relevant to the world. Alg... 60 Subjects: including precalculus, English, Spanish, reading Hello, my name is Yu Leo Lu, and I graduated from UC Berkeley with a Bachelor of Arts degree in Mathematics. My overall GPA is 3.75, and my major GPA is 3.80. Subjects that I am experienced in tutoring include Algebra, Geometry and Calculus. 11 Subjects: including precalculus, calculus, geometry, algebra 1 ...I have been tutoring for this website for almost one year and have had the pleasure of meeting all types of people. I've tutored subjects as low as third grade math, and as high as trigonometry. I love helping students out in math and forming a strong relationship with them to make them feel comfortable by creating a positive environment. 10 Subjects: including precalculus, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/pico_rivera_precalculus_tutors.php","timestamp":"2014-04-20T11:07:42Z","content_type":null,"content_length":"24243","record_id":"<urn:uuid:a1cab6c2-7457-4764-941f-ba19829605e9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Superconductivity and the BCS Theory Can someone explain to me how superconductivity works, exactly? I will type all I know about it, so can you guys correct any misconceptions I may have? Superconductivity is the phenomenon in which a conductor, when cooled sufficiently (below a critical temperature Tc), exhibits negligible resistance. This phenomenon can be explained by the BCS theory, which works well in some scenarios (conventional metallic superconductors) but fails miserably in others (ceramics). The BCS theory is based upon the formation of Cooper pairs of electrons. The BCS theory states that when a negatively charged electron travels past positively charged ions in the lattice, the lattice distorts inwards towards the electron. This causes a relative concentration of positive charge following behind the moving electron. This deformation of the lattice causes another electron, with opposite "spin", to move into the region of higher positive charge density. Of the parts in bold, which one is correct? The two electrons are then held together with a certain binding energy. If this binding energy is higher than the energy provided by ‘kicks’ from oscillating atoms in the conductor (which is true at low temperatures), then the electron pair will stick together and resist all ‘kicks’, thus not experiencing resistance. This electron pairing is favoured as it puts the electrons into a lower energy state. As long as T < Tc, the electrons remain paired due to reduced molecular motion. Electrons are fermions with spin +0.5 and -0.5, so when they combine they form a boson with spin 0, +1 or -1. Below Tc, the bosons form a Bose-Einstein condensate, a state of matter that doesn't interact with ordinary matter, so it passes through the metal lattice unimpeded. BCS theory was highly successful in explaining the microscopic and macroscopic properties of some superconductors, accounting for properties such as the Meissner effect and the behavior of the heat capacity. 
For this, Bardeen, Cooper and Schrieffer were awarded the Nobel Prize. However, BCS theory cannot explain high-temperature superconductivity in ceramics.
{"url":"http://www.physicsforums.com/showthread.php?t=627711","timestamp":"2014-04-19T09:41:05Z","content_type":null,"content_length":"34175","record_id":"<urn:uuid:7ab0a85e-bd6a-420b-8dd5-2cb0b8717765>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph Theory
boggom at comcast.net
Fri Oct 6 00:16:19 CEST 2006

diffuser78 at gmail.com wrote:
> Thanks for your quick reply. Since I have not read the documentation, I
> was wondering if you can generate random graphs and analyze some
> properties of them, like the clustering coefficient or graph density. I am a
> graph theory student and want to use python for development. Somebody
> told me that Python has so much built in already. Are there any
> visualization tools which would depict the random graph generated by the
> libraries.

networkx has several random graph generators, including:
and others (e.g. via configuration_model)

For drawing you can use pygraphviz (also available at
or the built-in drawing tools.

>>> from networkx import *
>>> no_nodes = 1000
>>> for p in [0.1, 0.2, 0.3]:
...     g = watts_strogatz_graph(no_nodes, 4, p)
...     print density(g), average_clustering(g)

be warned that drawing large random graphs is not overly insightful

check the examples

More information about the Python-list mailing list
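For readers without networkx installed, the two quantities the question asks about — graph density and the average clustering coefficient — can also be computed from scratch. The sketch below is not part of the original email and uses an Erdős–Rényi G(n, p) generator in place of the Watts–Strogatz one, so nothing outside the standard library is assumed:

```python
import random
from itertools import combinations

def gnp_random_graph(n, p, seed=None):
    """Erdos-Renyi G(n, p): each of the n*(n-1)/2 possible edges
    appears independently with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def density(adj):
    """Fraction of possible edges that are present."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return 2 * edges / (n * (n - 1))

def average_clustering(adj):
    """Mean over nodes of (edges among neighbours) / (possible such edges);
    nodes with fewer than two neighbours contribute zero."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

g = gnp_random_graph(200, 0.1, seed=42)
print(round(density(g), 3), round(average_clustering(g), 3))
```

For a G(n, p) graph both quantities come out close to p, which makes a handy sanity check.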
{"url":"https://mail.python.org/pipermail/python-list/2006-October/365543.html","timestamp":"2014-04-20T16:26:08Z","content_type":null,"content_length":"3664","record_id":"<urn:uuid:997c1667-f27f-41de-8273-e292285659ab>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Hourly Rates: Bill, Pay, % Profit Enter any two fields and the third will auto-calculate. For each calculation, tap calculator symbol at left to highlight variable field. Enter data in remaining 2 fields. Tap calculate or re-calculate. (Highlight variable field before each re-calculation.) Weekly Revenue: Weekly Pay Rate: Hourly Total Cost: Weekly Total Cost: Hourly Gross Profit: Weekly Gross Profit: Tax & Fee Burdens Enter an estimate of your tax and fee burdens, or switch to the detailed form. Employees & Hours Per Week Projected Total Hours Per Week: Projected Total Revenue Per Week: Projected Gross Profit Per Week: Gross Markup:
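The page doesn't spell out the formulas behind the form, so the sketch below is only one plausible convention — the function name, the burden default, and the profit definition are all assumptions, not the calculator's actual code. It takes the burdened cost to be the pay rate marked up by the tax-and-fee burden, and the profit percentage to be the gross margin on the bill rate; given any two of bill, pay, and profit, the third follows:

```python
# Hypothetical formulas -- the page doesn't show its own. Assumed here:
#   total_cost = pay * (1 + burden)           (burden = tax & fee fraction)
#   profit_pct = (bill - total_cost) / bill * 100
def solve_rates(bill=None, pay=None, profit_pct=None, burden=0.15):
    """Given any two of bill, pay, profit_pct, return all three."""
    if profit_pct is None:
        profit_pct = (bill - pay * (1 + burden)) / bill * 100
    elif bill is None:
        bill = pay * (1 + burden) / (1 - profit_pct / 100)
    elif pay is None:
        pay = bill * (1 - profit_pct / 100) / (1 + burden)
    return bill, pay, profit_pct

bill, pay, pct = solve_rates(bill=100.0, pay=60.0)
print(f"bill=${bill:.2f} pay=${pay:.2f} profit={pct:.1f}%")
```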
{"url":"http://laborprofitcalculator.com/","timestamp":"2014-04-19T20:17:19Z","content_type":null,"content_length":"8237","record_id":"<urn:uuid:505b2ad2-d74e-41b1-8240-3d2a17783e1e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Kelvin Helmholtz instability (3D) The computational domain is the unit cube. Periodic boundary conditions are used all over the boundary of the computational domain. The Reynolds number is 7'500 with respect to the side length of the simulation. The equations solved are the full 3D incompressible Navier-Stokes equations for the motion of the fluid. Initially, the computational domain is divided into three parts. The initial velocity of the middle part is (0.5, 0, 0) and of the other two parts is (-0.5, 0, 0) in a Cartesian reference frame. Random small-amplitude perturbations are added to the x and y components of the initial velocity field in the middle part of the domain, to trigger the development of the Kelvin-Helmholtz instability. With this setup, the coupled system is left to evolve. The grid used consists of 134'217'728 lattice nodes. Two approaches are chosen to visualize the flow: a convection-diffusion equation with low diffusion (left side of the animation), or a large number of Lagrangian particles (right side of the animation). In the convection-diffusion equation, the Prandtl number is equal to 1.0. It is seen that the discrete particles produce a much sharper picture than the convection-diffusion equation, because even when a low parameter of diffusion is chosen, this continuum model is plagued by numerical diffusivity. The quality of the blending pattern with Lagrangian particles is even further illustrated in the animation below, which shows a 2D cut through the simulation:
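The initial condition described above can be sketched in NumPy. This is an illustration only, not the Palabos setup code; the grid size and perturbation amplitude are arbitrary choices:

```python
import numpy as np

# A unit cube on an N^3 grid: the middle third of the domain moves at
# +0.5 in x, the outer thirds at -0.5, with small random perturbations
# on the x and y velocity in the middle part to seed the instability.
N = 64                       # illustrative; the article's grid is 512^3
rng = np.random.default_rng(0)

z = (np.arange(N) + 0.5) / N          # z-coordinate of each layer
middle = (z > 1 / 3) & (z < 2 / 3)    # the middle third of the cube

u = np.full((N, N, N), -0.5)          # x-velocity
u[:, :, middle] = 0.5
v = np.zeros((N, N, N))               # y-velocity
w = np.zeros((N, N, N))               # z-velocity

eps = 1e-3                            # perturbation amplitude (arbitrary)
n_mid = int(middle.sum())
u[:, :, middle] += eps * rng.standard_normal((N, N, n_mid))
v[:, :, middle] += eps * rng.standard_normal((N, N, n_mid))
```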
{"url":"http://www.palabos.org/gallery/incompressible-isothermal-flow/54-kelvin-helmholtz","timestamp":"2014-04-17T09:42:08Z","content_type":null,"content_length":"14981","record_id":"<urn:uuid:07c932b9-d35f-421e-82ba-00af2275ebd5>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
BerettaSpeed.com - Information - Tire Calculation April 18, 2014 Tire Calculation June 22, 2003 205/55 R16... what does this mean? Most people understand that the last number is the rim size in inches and the first number is the tread width in millimeters. But what about the middle number, 55? Well, it is the aspect ratio, which indicates how tall the sidewall is. To find your sidewall height, take the tread width and multiply it by this percentage and you get your sidewall height in mm. Percentages, metric and standard units of measurement, oh, the fun we are going to have here. Now that you understand a few things about tire sizing, what can be done? Well, oodles really, including how many revolutions it makes in a mile (revs/mile), which can be used to calculate speed (revs/mile/time). To figure revolutions per mile we need to calculate only one thing, the overall tire height, which can be found by using the given information on the tire about its size. What gives a tire its overall tire height? Rim size and sidewall, of course. With the tire listed above we have a 16 in rim and a sidewall that is 112.75 mm tall (205 * 55%), but since the sidewall is at both the top and bottom of the wheel, multiplying that number by two gives 225.5 mm of total sidewall height. To convert 225.5 mm to inches, divide by the conversion factor 25.4 mm/in, which gives 8.88 in of total sidewall height. Accordingly, 16" + 8.88" = 24.88" of overall tire height. Rim size in inches + ((Tread width * aspect ratio * 2) / 25.4 mm/in) = Tire Height in inches How are revolutions per mile calculated from overall tire height? Well, if you remember from geometry that the circumference of a circle is equal to the diameter (tire height) * pi, then 24.88" of overall tire height * pi = 78.16 in of circumference. Now, one circumference is one revolution, so divide one mile by the circumference... 
63,360 inches per mile (conversion factor) / 78.16 in = 810.6 revolutions per mile To find your revolutions per mile in one simple step, use the following combination of the above formulas: 63,360 (in per mile) / (((Tread Width (mm) * Aspect Ratio * 2 / 25.4) + Rim Size) * pi) = Revolutions per mile contributed by: Canada
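The whole calculation collapses into a short function. This sketch reproduces the 810.6 rev/mile figure above; the last digit differs slightly because the article rounds intermediate values:

```python
import math

def revs_per_mile(tread_mm, aspect_pct, rim_in):
    """Revolutions per mile for a metric tire size such as 205/55 R16."""
    sidewall_in = tread_mm * (aspect_pct / 100) * 2 / 25.4  # both sidewalls
    height_in = rim_in + sidewall_in                        # overall diameter
    circumference_in = height_in * math.pi
    return 63360 / circumference_in                         # 63,360 in/mile

print(round(revs_per_mile(205, 55, 16), 1))
```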
{"url":"http://www.berettaspeed.com/information/view_article.php?id=6","timestamp":"2014-04-18T10:34:10Z","content_type":null,"content_length":"11771","record_id":"<urn:uuid:b3974e67-a596-40a9-9f7b-aed17ac8db69>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Zeller's Algorithm Zeller's Algorithm can be used to determine the day of the week for any date in the past, present or future. Actually, the algorithm works only for dates starting with the use of the Gregorian Calendar in the year 1582. The Gregorian Calendar will be off one full day in the year 4902, so this algorithm works for any date between 1582 and 4902. To use this algorithm, input your date of birth, press "ok", and, ta-da, the day of the week on which you were born appears.
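The page's own calculator code isn't shown, so here is a sketch of a standard textbook formulation of Zeller's congruence for the Gregorian calendar rather than the site's implementation:

```python
def zeller(day, month, year):
    """Day of the week via Zeller's congruence (Gregorian calendar).
    January and February count as months 13 and 14 of the previous year."""
    if month < 3:
        month += 12
        year -= 1
    k, j = year % 100, year // 100          # year of century, century
    h = (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

# the first day of the Gregorian calendar
print(zeller(15, 10, 1582))
```

`zeller(15, 10, 1582)` gives Friday, matching the historical first day of the Gregorian calendar.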
{"url":"http://artwachter.com/links/zeller.htm","timestamp":"2014-04-18T15:39:16Z","content_type":null,"content_length":"7963","record_id":"<urn:uuid:1f24a9d0-71e1-4161-94f5-df03e62c16f3>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate the Quota Rent on Supply & Demand Original post by Edriaan Koening of Demand Media A government may impose a quota on a certain product for various reasons, for example to keep natural resources sustainable or to protect domestic producers. Because of the quota, the quantity of products being traded changes and the price changes as well. Quota rent refers to the economic benefit gained by the party that holds the right to sell under the quota at the higher domestic price. By illustrating the situation on a supply and demand graph, you can find the quota rent. Step 1 Draw a graph with the vertical axis representing price and the horizontal axis representing quantity. Label the price and quantity ranges of the product under quota along both axes. For example, label the price axis with the numbers $0 to $1,000 and the quantity axis with the numbers 0 to 500. Step 2 Find out the levels of supply and demand at various price points for the product under quota. For example, a coffee table may have a demand quantity of 700 and a supply quantity of 200 at the price of $200. The same product may have a demand quantity of 300 and a supply quantity of 600 at the price of $600. Step 3 Mark a point on the graph for each price point. Connect all the supply points and all the demand points to get a demand-and-supply graph. Each line will cut across the graph diagonally and the two lines will meet at one point. This point shows the price and quantity at which the product is traded if there were only domestic demand and domestic supply. Step 4 Mark the price level at which the product is being traded in the world market to take into account demand and supply levels from other countries. Draw a horizontal line from the price axis across the graph. For example, the supply and demand lines for coffee tables may intersect at the price of $500 and the quantity of 550. However, importation of coffee tables pushes the price down to $400. Draw a horizontal line along the price point of $400. 
Step 5 Draw a vertical line down from the point where the horizontal line from Step 4 meets the demand line and another vertical line where it meets the supply line. Bring these vertical lines down to meet the horizontal axis of the graph. The space between the two vertical lines shows the quantity of products being imported. For example, if the lines intersect the horizontal axis at 400 and 600, then there are 200 coffee tables being imported into the country at the price of $400. Step 6 Determine the quota quantity imposed by the government. For example, the government may limit the number of coffee tables being imported into the country to 100. Find the horizontal level below the intersection of the supply and demand lines where the distance between the supply and demand lines is 100. Draw a horizontal line along this level between the supply and demand lines. Step 7 Draw a vertical line down from each of the two ends of the horizontal line you drew in Step 6 until it meets the horizontal axis of the graph. Shade the rectangular area made by these lines and the horizontal line from Step 4. This shaded area represents the quota rent. Step 8 Multiply the length and the height of the shaded rectangle from the previous step to find the amount of the quota rent. About the Author Edriaan Koening started writing professionally in 2005 while studying toward her Bachelor of Arts in media and communications at the University of Melbourne. She has since written for several magazines and websites. Koening also holds a Master of Commerce in funds management and accounting from the University of New South Wales.
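The graphical recipe boils down to a small computation. The article's illustrative numbers shift between steps, so this sketch sticks to the linear curves implied by Step 2 (demand Q = 900 − P, supply Q = P) and picks a world price and a quota of its own; both are assumptions chosen for illustration:

```python
def domestic_price(quota, demand, supply, lo=0.0, hi=1000.0):
    """Bisect for the price at which excess demand equals the quota."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if demand(mid) - supply(mid) > quota:
            lo = mid        # excess demand too large: price must rise
        else:
            hi = mid
    return (lo + hi) / 2

demand = lambda p: 900 - p   # 700 tables at $200, 300 at $600 (Step 2)
supply = lambda p: p         # 200 tables at $200, 600 at $600 (Step 2)

world_price = 400            # free-trade imports: demand 500 - supply 400 = 100
quota = 50                   # assumed quota, tight enough to bind

p_quota = domestic_price(quota, demand, supply)
rent = quota * (p_quota - world_price)   # area of the shaded rectangle
print(round(p_quota), round(rent))
```

With these curves the quota lifts the domestic price to $425, so the quota rent is 50 × ($425 − $400) = $1,250.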
{"url":"http://wiki.fool.com/wiki/index.php?title=How_to_Calculate_the_Quota_Rent_on_Supply_%26_Demand&oldid=31944","timestamp":"2014-04-17T12:34:11Z","content_type":null,"content_length":"48663","record_id":"<urn:uuid:df36c081-5138-4887-aac4-ca3452cc6e4b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Isabelle/ZF sessions • ZF Author: Lawrence C Paulson, Cambridge University Computer Laboratory Copyright 1995 University of Cambridge Zermelo-Fraenkel Set Theory. This theory is the work of Martin Coen, Philippe Noel and Lawrence Paulson. Isabelle/ZF formalizes the greater part of elementary set theory, including relations, functions, injections, surjections, ordinals and cardinals. Results proved include Cantor's Theorem, the Recursion Theorem, the Schroeder-Bernstein Theorem, and (assuming AC) the Wellordering Theorem. Isabelle/ZF also provides theories of lists, trees, etc., for formalizing computational notions. It supports inductive definitions of infinite-branching trees for any cardinality of branching. Useful references for Isabelle/ZF: Lawrence C. Paulson, Set theory for verification: I. From foundations to functions. J. Automated Reasoning 11 (1993), 353-389. Lawrence C. Paulson, Set theory for verification: II. Induction and recursion. Report 312, Computer Lab (1993). Lawrence C. Paulson, A fixedpoint approach to implementing (co)inductive definitions. In: A. Bundy (editor), CADE-12: 12th International Conference on Automated Deduction, (Springer LNAI 814, 1994), 148-161. Useful references on ZF set theory: Paul R. Halmos, Naive Set Theory (Van Nostrand, 1960) Patrick Suppes, Axiomatic Set Theory (Dover, 1972) Keith J. Devlin, Fundamentals of Contemporary Set Theory (Springer, 1979) Kenneth Kunen, Set Theory: An Introduction to Independence Proofs, (North-Holland, 1980) • ZF-AC Author: Lawrence C Paulson, Cambridge University Computer Laboratory Copyright 1995 University of Cambridge Proofs of AC-equivalences, due to Krzysztof Grabczewski. See also the book "Equivalents of the Axiom of Choice, II" by H. Rubin and J.E. Rubin, 1985. The report "Mechanizing Set Theory", by Paulson and Grabczewski, describes both this development and ZF's theories of cardinals. 
• ZF-Coind Author: Jacob Frost, Cambridge University Computer Laboratory Copyright 1995 University of Cambridge Coind -- A Coinduction Example. It involves proving the consistency of the dynamic and static semantics for a small functional language. A codatatype definition specifies values and value environments in mutual recursion: non-well-founded values represent recursive functions; value environments are variant functions from variables into values. Based upon the article Robin Milner and Mads Tofte, Co-induction in Relational Semantics, Theoretical Computer Science 87 (1991), pages 209-220. Written up as Jacob Frost, A Case Study of Co-induction in Isabelle. Report, Computer Lab, University of Cambridge (1995). • ZF-Constructible Relative Consistency of the Axiom of Choice: Inner Models, Absoluteness and Consistency Proofs. Gödel's proof of the relative consistency of the axiom of choice is mechanized using Isabelle/ZF. The proof builds upon a previous mechanization of the reflection theorem (see http://www.cl.cam.ac.uk/users/lcp/papers/Sets/reflection.pdf). The heavy reliance on metatheory in the original proof makes the formalization unusually long, and not entirely satisfactory: two parts of the proof do not fit together. It seems impossible to solve these problems without formalizing the metatheory. However, the present development follows a standard textbook, Kunen's Set Theory, and could support the formalization of further material from that book. It also serves as an example of what to expect when deep mathematics is formalized. A paper describing this development is • ZF-IMP Author: Heiko Loetzbeyer & Robert Sandner, TUM Copyright 1994 TUM Formalization of the denotational and operational semantics of a simple while-language, including an equivalence proof. The whole development essentially formalizes/transcribes chapters 2 and 5 of Glynn Winskel. The Formal Semantics of Programming Languages. MIT Press, 1993. 
• ZF-Induct Author: Lawrence C Paulson, Cambridge University Computer Laboratory Copyright 2001 University of Cambridge Inductive definitions. • ZF-Resid Author: Lawrence C Paulson, Cambridge University Computer Laboratory Copyright 1995 University of Cambridge Residuals -- a proof of the Church-Rosser Theorem for the untyped lambda-calculus. By Ole Rasmussen, following the Coq proof given in Gerard Huet. Residual Theory in Lambda-Calculus: A Formal Development. J. Functional Programming 4(3) 1994, 371-394. See Rasmussen's report: The Church-Rosser Theorem in Isabelle: A Proof Porting Experiment. • ZF-UNITY Author: Lawrence C Paulson, Cambridge University Computer Laboratory Copyright 1998 University of Cambridge ZF/UNITY proofs. • ZF-ex Author: Lawrence C Paulson, Cambridge University Computer Laboratory Copyright 1993 University of Cambridge Miscellaneous examples for Zermelo-Fraenkel Set Theory. Includes a simple form of Ramsey's theorem. A report is available: Several (co)inductive and (co)datatype definitions are presented. The report http://www.cl.cam.ac.uk/Research/Reports/TR312-lcp-set-II.ps.gz describes the theoretical foundations of datatypes while describes the package that automates their declaration.
{"url":"http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/library/ZF/index.html","timestamp":"2014-04-19T09:31:32Z","content_type":null,"content_length":"7440","record_id":"<urn:uuid:012bf789-246d-4ad5-83f1-f2b0b021300a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Hoboken, NJ Prealgebra Tutor Find a Hoboken, NJ Prealgebra Tutor ...I have tutored for over six years, since finishing my undergraduate education. My real-world experiences allow me to help students relate to and understand the material on a personal level they can appreciate in their own lives. It is this deep internalization of the material that helps them to... 24 Subjects: including prealgebra, chemistry, calculus, physics ...I also go over how to write a strong essay, supported by clear and relevant examples and written in the format the test makers prefer. Many GMAT students have not seen quadratic equations, right isosceles triangles, sentence corrections, etc., since their days in school. I provide a comprehensi... 18 Subjects: including prealgebra, geometry, GRE, algebra 1 ...I am also comfortable doing online tutoring and could combine that with in person tutoring. I have been teaching High School Algebra for 10 years. I have also taught a College Algebra class at a local community college and taught pre algebra classes there as well. I have tutored at least 25 kids in algebra over the last 10 years. 20 Subjects: including prealgebra, geometry, algebra 1, GRE ...I have a passion for math and believe that all students are capable of improving their math skills with the proper assistance. I have several experiences working with young children and tutoring math to students who are struggling with the subject both inside and outside of the classroom. I hav... 9 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...In college I took several advanced chemistry courses including chemical thermodynamics, organic chemistry, and biochemistry. I also have recently tutored in chemistry. I have five years experience teaching high school geometry. 
8 Subjects: including prealgebra, chemistry, geometry, algebra 1
Jiuzhang suanshu (or Chu Chang Suan Shu, Nine Chapters on the Mathematical Art)

This book contains a total of 246 questions in nine chapters (hence the name Nine Chapters). For each question in the book, only an answer is given; the method of solving the question is omitted. A possible explanation is that this book was used as a textbook: the teachers probably did not want the students to know the methods. Also, if the methods were in the book, the teachers could not earn their living by teaching!

Here is the opening of Chapter 1 of the Nine Chapters on the Mathematical Art.

This page is created by Keith Wong. Any question about this page may also be addressed to Dr. Len Berggren at Simon Fraser University, Burnaby, BC, Canada.
Mathematical Expressions

March 24th 2013, 09:03 PM, #1 (Junior Member, joined Sep 2012)

When writing a math expression, any time there is an open bracket "(", it is eventually followed by a closed bracket ")". When we have a complicated expression, there may be several brackets nested amongst each other, such as in the expression (x+1)∗((x−2)+3(x−4)×(x^2+7×(3x+4))). If we removed all the symbols other than the brackets from the expression, we would be left with the arrangement ()(()()(())). For any arrangement of brackets, it could have come from a valid mathematical expression if and only if, for every place in the sequence, the number of open brackets before that place is at least as large as the number of closed brackets. If 34 open brackets and 34 closed brackets are randomly arranged, the probability that the resulting arrangement could have come from a valid mathematical expression can be expressed as a/b, where a and b are coprime positive integers. What is the value of a+b?

March 27th 2013, 03:21 AM, #2

Re: Mathematical Expressions
answer is 1/7 so a+b is 8 but please dont post brilliant problems here it is cheating...

March 27th 2013, 07:00 PM, #3 (Junior Member, Sep 2012)

Re: Mathematical Expressions
Man 8 is incorrect. I too had posted 8 only....
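For the record, the valid arrangements asked about here are exactly the balanced bracket sequences counted by the Catalan numbers, so the probability for n pairs is C_n / C(2n, n) = 1/(n+1) by the standard ballot argument. The short script below (not from the thread; the function names are mine) checks this by brute force for small n and then evaluates n = 34 exactly.

```python
from fractions import Fraction
from itertools import permutations
from math import comb

def is_valid(arr):
    """A prefix may never contain more ')' than '('."""
    depth = 0
    for c in arr:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

def prob_valid(n):
    """Exact probability that a random arrangement of n '(' and n ')' is valid.
    Valid arrangements are counted by the Catalan number C(2n, n)/(n+1)."""
    catalan = comb(2 * n, n) // (n + 1)
    return Fraction(catalan, comb(2 * n, n))  # simplifies to 1/(n+1)

def brute(n):
    """Brute-force check: enumerate every distinct arrangement."""
    arrs = set(permutations("(" * n + ")" * n))
    good = sum(is_valid(a) for a in arrs)
    return Fraction(good, len(arrs))

for n in (1, 2, 3, 4):
    assert brute(n) == prob_valid(n) == Fraction(1, n + 1)

p = prob_valid(34)
print(p, p.numerator + p.denominator)  # 1/35, so a + b = 36
```

So for 34 pairs the probability is 1/35 and a + b = 36, which suggests the "8" discussed in the replies above does not match this count.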
From Scholarpedia

This article describes a sequence of numbers, called cumulants, that are used to describe, and in some circumstances approximate, a univariate or multivariate distribution. Cumulants are not unique in this role; other sequences, such as moments and their generalizations, may also be used in both roles. Cumulants have multiple advantages over competitors, in that cumulants change in a very simple way when the underlying random variable is subject to an affine transformation, cumulants for sums of independent random variables have a very simple relationship to the cumulants of the addends, and cumulants may be used in a simple way to describe the difference between a distribution and its simplest Gaussian approximation.

Overview and Definitions

The moment of order \(r\) (or \(r\)th moment) of a real-valued random variable \(X\) is \[ \mu_r = E(X^r) \] for integer \(r=0,1,\ldots\ .\) The value is assumed to be finite. Provided that it has a Taylor expansion about the origin, the moment generating function (or Fourier--Laplace transform) \[\tag{1} M(\xi) = E(e^{\xi X}) = E(1 + \xi X +\cdots + \xi^r X^r/r!+\cdots) = \sum_{r=0}^\infty \mu_r \xi^r/r! \] is an easy way to combine all of the moments into a single expression. The \(r\)th moment is hence the \(r\)th derivative of \(M\) at the origin. When \(X\) has a distribution given by a density \(f\ ,\) then \[\tag{2} \mu_r = \int_{-\infty}^\infty x^r f(x)\,dx,\] \[\tag{3} M(\xi) = E(e^{\xi X}) =\int_{-\infty}^\infty\exp(\xi x) f(x)\, d x. \] The cumulants \(\kappa_r\) are the coefficients in the Taylor expansion of the cumulant generating function about the origin \[ K(\xi) = \log M(\xi) = \sum_{r} \kappa_r \xi^r/r!.
\] Evidently \(\mu_0 = 1\) implies \(\kappa_0 = 0\ .\) The relationship between the first few moments and cumulants, obtained by extracting coefficients from the expansion, is as follows \[\tag{4} \begin{array}{lcl} \kappa_1 &=& \mu_1 \\ \kappa_2 &=& \mu_2 - \mu_1^2\\ \kappa_3 &=& \mu_3 - 3\mu_2\mu_1 + 2\mu_1^3\\ \kappa_4 &=& \mu_4 - 4\mu_3\mu_1 - 3\mu_2^2 + 12\mu_2\mu_1^2 - 6\mu_1^4. \end{array}\] In the reverse direction \[\tag{5} \begin{array}{lcl} \mu_2 &=& \kappa_2 + \kappa_1^2\\ \mu_3 &=& \kappa_3 + 3\kappa_2\kappa_1 + \kappa_1^3\\ \mu_4 &=& \kappa_4 + 4\kappa_3\kappa_1 + 3\kappa_2^2 + 6\kappa_2\kappa_1^2 + \kappa_1^4. \end{array}\] In particular, \(\kappa_1 = \mu_1\) is the mean of \(X\ ,\) \(\kappa_2\) is the variance, and \(\kappa_3 = E((X - \mu_1)^3)\ .\) Higher-order cumulants are not the same as moments about the mean.

Hald (2000) credits Thiele (1889) with the first derivation of cumulants. Fisher (1929) called the quantities \(\kappa_j\) cumulative moment functions; Hotelling (1933) claims credit for the simpler term cumulants. Lauritzen (2002) presents an overview, translation, and reprinting of much of this early work.

As above, let \( {\mathcal R}\) denote the real numbers. Let \( {\mathcal R}^+\) represent the positive reals, and let \( {\mathcal N}=\{0,1,\ldots\}\) be the natural numbers. Some standard examples (distribution; density; CGF; cumulants):

Normal: density \( \frac{\exp(-x^2/2)}{\sqrt{2\pi}}\ ,\) \( x\in{\mathcal R}\ ;\) CGF \( \xi^2/2\ ;\) cumulants \( \kappa_1=0\ ,\) \( \kappa_2=1\ ,\) and \( \kappa_r=0\) for \( r>2\ .\)

Bernoulli: density \( \pi^x(1-\pi)^{1-x}\ ,\) \( x\in\{0,1\}\ ;\) CGF \( \log(1-\pi+\pi\exp(\xi))\ ;\) cumulants \( \kappa_1=\pi\ ,\) \( \kappa_2=\pi(1-\pi)\ ,\) \( \kappa_3=2\pi^3-3\pi^2+\pi\ .\)

Poisson: density \( \frac{\exp(-\lambda)\lambda^x}{x!}\ ,\) \( x\in{\mathcal N}\ ;\) CGF \( (e^{\xi}-1)\lambda\ ;\) cumulants \( \kappa_r=\lambda\) for all \( r\ .\)

Exponential: density \( \frac{\exp(-x/\lambda)}{\lambda}\ ,\) \( x\in{\mathcal R}^+\ ;\) CGF \( -\log(1-\lambda\xi)\ ;\) cumulants \( \kappa_r=\lambda^r(r-1)!\) for all \( r\ .\)

Geometric: density \( (1-\pi)\pi^x\ ,\) \( x\in{\mathcal N}\ ;\) CGF \( \log(1-\pi)-\log(1-\pi\exp(\xi))\ ;\) cumulants \( \kappa_1=\rho\ ,\) \( \kappa_2=\rho^2+\rho\ ,\) \( \kappa_3=2\rho^3+3\rho^2+\rho\ ,\) for \( \rho=\pi/(1-\pi)\ .\)

Definitions under less restrictive conditions

The Cauchy distribution with density \( \pi^{-1}/(1+x^2)\) has no moments because the integral (2) does not converge for any integer \( r\ge 1\ ;\) Student's \( t\) distribution on five degrees of freedom is symmetric with density \( (3\pi\surd5/8)/(1 + x^2/5)^3\ .\) The first four moments are \( 0, 5/3, 0, 25\ .\) Higher-order moments are not defined. The cumulants up to order four are defined by (4) even though the moment generating function (1) does not exist for any real \( \xi\neq 0\ .\) In both of these cases, the characteristic function \( M(i\xi)\) is well-defined for real \( \xi\ ,\) \( \exp(-|\xi|)\) for the Cauchy distribution, and \( \exp(-|\xi|\surd 5)(1 + |\xi|\surd5 + 5\xi^2/3)\) for \( t_5\ .\) In the latter case, both \( M(i\xi)\) and \( K(i\xi)\) have Taylor expansions up to order four only, so the moments and cumulants are defined only up to this order.

The infinite expansion (1) is justified when the radius of convergence is positive, in which case \( M(\xi)\) is finite on an open set containing zero, and all moments and cumulants are finite. However, finiteness of the moments does not imply that \( M(\xi)\) exists for any \( \xi\neq 0\ .\) The log normal distribution provides a counterexample. It has finite moments \( \mu_r = e^{r^2/2}\) of all orders, but (1) diverges for every \( \xi\neq 0\ .\)

The normal distribution \(N(\mu, \sigma^2)\) has cumulant generating function \(\xi\mu + \xi^2 \sigma^2/2\ ,\) a quadratic polynomial implying that all cumulants of order three and higher are zero. Marcinkiewicz (1939) showed that the normal distribution is the only distribution whose cumulant generating function is a polynomial, i.e.
the only distribution having a finite number of non-zero cumulants. The Poisson distribution with mean \(\mu\) has moment generating function \(\exp(\mu(e^\xi - 1))\) and cumulant generating function \(\mu(e^\xi -1)\ .\) Consequently all the cumulants are equal to the mean.

Two distinct distributions may have the same moments, and hence the same cumulants. This statement is fairly obvious for distributions whose moments are all infinite, or even for distributions having infinite higher-order moments. But it is much less obvious for distributions having finite moments of all orders. Heyde (1963) gave one such pair of distributions with densities \( f_1(x) = \exp(-(\log x)^2/2) / (x\sqrt{2\pi}) \) and \( f_2(x) = f_1(x) [1 + \sin(2\pi\log x)/2] \) for \(x > 0\ .\) The first of these is called the log normal distribution. To show that these distributions have the same moments it suffices to show that \[ \int_0^\infty x^k f_1(x) \sin(2\pi\log x)\, dx = 0 \] for integer \(k\ge 1\ ,\) which can be shown by making the substitution \(\log x = y+k\ .\) If the sequence of moments is such that (1) has a positive radius of convergence, the distribution is uniquely determined.

Cumulants of order \(r \ge 2\) are called semi-invariant on account of their behavior under affine transformation of variables (Thiele, 1903; Dressel, 1940). If \(\kappa_r\) is the \(r\)th cumulant of \(X\ ,\) the \(r\)th cumulant of the affine transformation \(a + b X\) is \(b^r \kappa_r\ ,\) independent of \(a\ .\) This behavior is considerably simpler than that of moments. However, moments about the mean are also semi-invariant, so this property alone does not explain why cumulants are useful for statistical purposes.

The term cumulant reflects their behavior under addition of random variables. Let \(S = X+Y\) be the sum of two independent random variables.
The moment generating function of the sum is the product \[ M_S(\xi) = M_X(\xi) M_Y(\xi), \] and the cumulant generating function is the sum \[ K_S(\xi) = K_X(\xi) + K_Y(\xi). \] Consequently, the \(r\)th cumulant of the sum is the sum of the \(r\)th cumulants. By extension, if \(X_1,\ldots, X_n\) are independent and identically distributed, the \(r\)th cumulant of the sum is \(n\kappa_r\ .\) Let \(\kappa_{n;r}\) be the cumulant of order \(r\) of the standardized sum \(n^{-1/2}(X_1+\cdots + X_n)\ ;\) then \[\tag{6} \kappa_{n;r}=n^{1-r/2} \kappa_r. \] Provided that the cumulants are finite, all cumulants of order \(r\ge 3\) of the standardized sum tend to zero, which is a simple demonstration of the central limit theorem.

Good (1977) obtained an expression for the \(r\)th cumulant of \(X\) as the \(r\)th moment of the discrete Fourier transform of an independent and identically distributed sequence as follows. Let \(X_1, X_2,\ldots\) be independent copies of \(X\) with \(r\)th cumulant \(\kappa_r\ ,\) and let \(\omega = e^{2\pi i/n}\) be a primitive \(n\)th root of unity. The discrete Fourier combination \[ Z = X_1 + \omega X_2 + \cdots + \omega^{n-1} X_n \] is a complex-valued random variable whose distribution is invariant under rotation \(Z\sim \omega Z\) through multiples of \(2\pi/n\ .\) The \(r\)th cumulant of the sum is \(\kappa_r \sum_{j=1}^n \omega^{r j}\ ,\) which is equal to \(n\kappa_r\) if \(r\) is a multiple of \(n\ ,\) and zero otherwise. Consequently \(E(Z^r) = 0\) for integer \(r < n\) and \(E(Z^n) = n\kappa_n\ .\)

Multivariate cumulants

Somewhat surprisingly, the relation between moments and cumulants is simpler and more transparent in the multivariate case than in the univariate case. Let \(X = (X^1,\ldots, X^k)\) be the components of a random vector.
In a departure from the univariate notation, we write \(\kappa^r = E(X^r)\) for the components of the mean vector, \(\kappa^{rs} = E(X^r X^s)\) for the components of the second moment matrix, \(\kappa^{r s t} = E(X^r X^s X^t)\) for the third moments, and so on. It is convenient notationally to adopt Einstein's summation convention, so \(\xi_r X^r\) denotes the linear combination \(\xi_1 X^1 + \cdots + \xi_k X^k\ ,\) the square of the linear combination is \((\xi_r X^r)^2 = \xi_r\xi_s X^r X^s\ ,\) a sum of \(k^2\) terms, and so on for higher powers. The Taylor expansion of the moment generating function \(M(\xi) = E(\exp(\xi_r X^r))\) is \[ M(\xi) = 1 + \xi_r \kappa^r + \textstyle{\frac1{2!}} \xi_r\xi_s \kappa^{rs} + \textstyle{\frac1{3!}} \xi_r\xi_s \xi_t \kappa^{r s t} +\cdots. \] The cumulants are defined as the coefficients \(\kappa^{r,s}, \kappa^{r,s,t},\ldots\) in the Taylor expansion \[ \log M(\xi) = \xi_r \kappa^r + \textstyle{\frac1{2!}} \xi_r \xi_s \kappa^{r,s} + \textstyle{\frac1{3!}} \xi_r\xi_s \xi_t \kappa^{r,s,t} +\cdots. \] This notation does not distinguish first-order moments from first-order cumulants, but commas separating the superscripts serve to distinguish higher-order cumulants from moments. Comparison of coefficients reveals that each moment \(\kappa^{rs}, \kappa^{r s t},\ldots\) is a sum over partitions of the superscripts, each term in the sum being a product of cumulants: \[\begin{array}{lcl} \kappa^{rs}&=&\kappa^{r,s} + \kappa^r\kappa^s\\ \kappa^{r s t}&=&\kappa^{r,s,t} + \kappa^{r,s}\kappa^t + \kappa^{r,t}\kappa^s + \kappa^{s,t}\kappa^r + \kappa^r\kappa^s\kappa^t\\ &=&\kappa^{r,s,t} + \kappa^{r,s}\kappa^t[3] + \kappa^r\kappa^s\kappa^t\\ \kappa^{r s t u}&=&\kappa^{r,s,t,u} + \kappa^{r,s,t}\kappa^u[4] + \kappa^{r,s}\kappa^{t,u}[3] + \kappa^{r,s}\kappa^t\kappa^u[6] + \kappa^r\kappa^s\kappa^t\kappa^u.
\end{array}\] Each parenthetical number indicates a sum over distinct partitions having the same block sizes, so the fourth-order moment is a sum of 15 distinct cumulant products. In the reverse direction, each cumulant is also a sum over partitions of the indices. Each term in the sum is a product of moments, but with coefficient \((-1)^{\nu-1} (\nu-1)!\) where \(\nu\) is the number of blocks: \[\begin{array}{lcl} \kappa^{r,s} &=& \kappa^{rs} - \kappa^r\kappa^s\\ \kappa^{r,s,t} &=& \kappa^{r s t} - \kappa^{rs}\kappa^t[3] + 2 \kappa^r\kappa^s\kappa^t\\ \kappa^{r,s,t,u} &=& \kappa^{r s t u} - \kappa^{r s t}\kappa^u[4] - \kappa^{rs}\kappa^{t u}[3] + 2 \kappa^{rs}\kappa^t\kappa^u[6] - 6 \kappa^r\kappa^s\kappa^t\kappa^u \end{array}\] These relationships are an instance of Möbius inversion on the partition lattice.

Partition notation serves one additional purpose. It establishes moments and cumulants as special cases of generalized cumulants, which include objects of the type \(\kappa^{r,st} = {\rm cov}(X^r, X^s X^t)\ ,\) \(\kappa^{rs,tu} = {\rm cov}(X^r X^s, X^t X^u)\ ,\) and \(\kappa^{rs,t,u}\) with incompletely partitioned indices. These objects arise very naturally in statistical work involving asymptotic approximation of distributions. They are intermediate between moments and cumulants, and have characteristics of both. Every generalized cumulant can be expressed as a sum of certain products of ordinary cumulants. Some examples are as follows: \[\begin{array}{lcl} \kappa^{rs,t} &=& \kappa^{r,s,t} + \kappa^r\kappa^{s,t} + \kappa^s \kappa^{r,t}\\ &=& \kappa^{r,s,t} + \kappa^r\kappa^{s,t}[2]\\ \kappa^{rs,tu} &=& \kappa^{r,s,t,u} + \kappa^{r,s,t}\kappa^u[4] + \kappa^{r,t}\kappa^{s,u}[2] + \kappa^{r,t}\kappa^s\kappa^u[4]\\ \kappa^{rs,t,u} &=& \kappa^{r,s,t,u} + \kappa^{r,t,u}\kappa^s[2] + \kappa^{r,t}\kappa^{s,u}[2] \end{array}\] Each generalized cumulant is associated with a partition \(\tau\) of the given set of indices.
For example, \(\kappa^{rs,t,u}\) is associated with the partition \(\tau=rs|t|u\) of four indices into three blocks. Each term on the right is a cumulant product associated with a partition \(\sigma\) of the same indices. The coefficient is one if the least upper bound \(\sigma\vee\tau\) has a single block, otherwise zero. Thus, with \(\tau=rs|t|u\ ,\) the product \(\kappa^{r,s}\kappa^{t,u}\) does not appear on the right because \(\sigma\vee\tau = rs|tu\) has two blocks.

As an example of the way these formulae may be used, let \(X\) be a scalar random variable with cumulants \(\kappa_1,\kappa_2,\kappa_3,\ldots\ .\) By translating the second formula in the preceding list, we find that the variance of the squared variable is \[ {\rm var}(X^2) = \kappa_4 + 4\kappa_3\kappa_1 + 2\kappa_2^2 + 4\kappa_2\kappa_1^2, \] reducing to \(\kappa_4 + 2\kappa_2^2\) if the mean is zero.

Exponential families

Let \( f\) be a probability distribution on an arbitrary measurable space \( ({\mathcal X},\nu)\ ,\) and let \( t\colon{\mathcal X}\to{\mathcal R}\) be a real-valued random variable with cumulant generating function \( K(\cdot)\ ,\) finite in a set \( \Theta\) containing zero in the interior. The family of distributions on \( {\mathcal X}\) with density \[ f_\theta(x) = e^{\theta t(x)} f(x) / M(\theta) = e^{\theta t(x) - K(\theta)} f(x) \] indexed by \( \theta\in\Theta\) is called the exponential family associated with \( f\) and the canonical statistic \( t\ .\) In statistical physics, the normalizing constant \(M(\theta)\) is called the partition function. Two examples suffice to illustrate the idea.
In the first example, \( {\mathcal X} = \{1,2,\ldots\}\) is the set of natural numbers, \( f(x) \propto 1/x^2\) and \( t(x) = -\log(x)\ .\) The associated exponential family is \( f_\theta(x) = x^{-\theta}/\zeta(\theta),\) where \( \zeta(\theta)\) is the Riemann zeta function with real argument \( \theta > 1\ .\)

In the second example, \( {\mathcal X}={\mathcal X}_n\) is the symmetric group or the set of permutations of \( n\) letters, \( x\in{\mathcal X}_n\) is a permutation, \( t(x)\) is the number of cycles, \( f(x) = 1/n!\) is the uniform distribution, and \( M_n(\xi) = \Gamma(n+e^\xi)/(n!\, \Gamma(e^\xi))\) for all real \( \xi\ .\) The exponential family of distributions on permutations of \( [n]\) is \[ f_{n,\theta}(x) = \frac{\Gamma(\lambda)\, \lambda^{t(x)}} {\Gamma(n+\lambda)}, \] the same as the distribution generated by the Chinese restaurant process with parameter \( \lambda = e^\theta\ .\) The associated marginal distribution on partitions, the Ewens distribution on partitions of \([n]\ ,\) is also of the exponential-family form with canonical statistic equal to the number of blocks or cycles. This distribution is also the same as the distribution generated by the Dirichlet process. The number of cycles \( t(x)\) is a random variable whose cumulants are the derivatives of \( \log M_n(\cdot)\) evaluated at the parameter \( \theta\ .\)

In the multi-parameter case, \( t\colon{\mathcal X}\to{\mathcal R}^p\) is a random vector, \( \xi\colon{\mathcal R}^p\to{\mathcal R}\) is a linear functional, and \( M(\xi) = E(e^{\xi(t)})\) is the joint moment generating function. It is sometimes convenient to employ Einstein's implicit summation convention in the form \( \theta(t) = \theta_i t^i\) where \( t^1,\ldots, t^p\) are the components of \( t(x)\ ,\) and \( \theta_1,\ldots, \theta_p\) are the coefficients of the linear functional. For simplicity of notation in what follows, \( {\mathcal X}={\mathcal R}^p\) and \( t(x) = x\) is the identity function.
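The statement that the cumulants of \( t(x)\) are derivatives of the log partition function can be checked numerically on the zeta-family example. The sketch below is mine, not from the article; the truncation point N, the parameter value, and the finite-difference step are arbitrary choices, and the check compares the directly computed mean and variance of \( t(x)=-\log x\) with finite-difference derivatives of \( \log\zeta(\theta)\ .\)

```python
import math

# Zeta-family check: f_theta(x) = x**(-theta) / zeta(theta) on x = 1, 2, ...,
# with canonical statistic t(x) = -log(x).  Its CGF is
# K_t(xi) = log zeta(theta + xi) - log zeta(theta), so the mean and variance
# of t should match the first two derivatives of log zeta at theta.
N = 100_000           # truncation point for the zeta sums (tail is O(1/N^2))
theta, h = 3.0, 1e-3  # parameter value and finite-difference step (arbitrary)

def zeta(s):
    return sum(x ** -s for x in range(1, N + 1))

def mean_var_of_t(s):
    """Direct mean and variance of t(x) = -log(x) under f_s."""
    z = zeta(s)
    m1 = sum(-math.log(x) * x ** -s for x in range(1, N + 1)) / z
    m2 = sum(math.log(x) ** 2 * x ** -s for x in range(1, N + 1)) / z
    return m1, m2 - m1 ** 2

mean, var = mean_var_of_t(theta)

# Cumulants as derivatives of log zeta, via central finite differences.
kp, k0, km = math.log(zeta(theta + h)), math.log(zeta(theta)), math.log(zeta(theta - h))
dK = (kp - km) / (2 * h)
d2K = (kp - 2 * k0 + km) / h ** 2

print(abs(mean - dK), abs(var - d2K))  # both differences are tiny
```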
An exponential-family distribution in \( {\mathcal R}^p\) has the form \[ f_\theta(x)=\exp(x^j\theta_j-g(x)-\varphi(\theta)) \] for given functions \( g\) and \( \varphi\ .\) Integration shows that the distribution \( f_\theta\) has cumulant generating function \( K_\theta(\xi)=\varphi(\theta+\xi)-\varphi(\theta)\ .\) The cumulants of \( X\sim f_\theta\) are equal to the derivatives of \( \varphi\) at the parameter \( \theta\ .\)

Calculus of cumulants

Consider descriptions of the sampling distribution of estimates of cumulants. Such calculations are notationally complicated, and may be simplified by a tool called the umbral calculus. The umbral calculus is a syntax or formal system consisting of certain operations on objects called umbrae, mimicking addition and multiplication of independent real-valued random variables. Rota and Taylor (1994) review this calculus. To each real-valued sequence \( 1, a_1, a_2,\ldots\) there corresponds an umbra \( \alpha\) such that \( E(\alpha^r) = a_r\ .\) This definition goes beyond the random variable context to allow for special umbrae, the singleton and Bell umbra, corresponding to no real-valued random variable. Using these special umbrae, one develops the notion of an \(\alpha\)-cumulant umbra \(\chi\cdot\alpha\) by formal product operations in the syntax. Properties of cumulants, \(k\)-statistics and other polynomial functions are then derived by purely combinatorial operations. Di Nardo et al. (2008) present details. Streitberg (1990) presents parallels between the calculus of cumulants and the calculus of certain decompositions of multivariate cumulative distribution functions into independent segments; these characterizations in terms of independent segments are called Lancaster interactions.

Moment and Cumulant Measures for Random Measures

Moments and cumulants extend quite naturally to random distributions.
Let \(\upsilon\) be a random measure on a space \(\Upsilon\ .\) Then the expectation of \(\upsilon\) is defined as that measure such that \(E(\upsilon)(A)=E(\upsilon(A))\ ,\) for \(A\) in a suitable sigma field. Higher-order moments then translate to expectations of product measures. Let \(\upsilon^{(k)}\) be the measure defined on \(\Upsilon^k\ ,\) such that \(\upsilon^{(k)}(A_1\times\cdots\times A_k)=\prod_{j=1}^k\upsilon(A_j)\ .\) Then the moment of order \(k\) of \(\upsilon\) is \(E(\upsilon^{(k)})\ .\) A moment generating functional can similarly be defined for \(\upsilon\ ;\) a heuristic definition may be constructed through analogy with (1): Let \[ \Phi(f)=\sum_{r=0}^\infty \int f(x_1)\ldots f(x_r)\,\upsilon^{(r)}(d x_1\cdots d x_r)/r!, \] for certain functions \(f\) on \(\Upsilon\ ,\) and moments can be recovered from \(\Phi(f)\) via Fréchet differentiation. Cumulants can then be defined as in (4), using the obvious analogy. These moments and cumulants have application to the theory of point processes. The above exposition, and applications to the theory of point processes, can be found in Daley and Vere-Jones (1988).

Approximation of distributions

Edgeworth approximation

Suppose that \(Y\) is a random variable that arises as the sum of \(n\) independent and identically-distributed summands, each of which has mean \(0\ ,\) unit variance, and cumulants \(\kappa_r\ ,\) and \(X=Y/\sqrt{n}\ .\) For ease of exposition, assume that cumulants of all orders exist. Then, using (6), the cumulant generating function of \(X\) is given by \(K(\xi)=\xi^2/2 +\kappa_3\xi^3/(6\sqrt{n}) +\kappa_4\xi^4/(24 n) +\cdots\ ,\) and the moment generating function of \(X\) is given by \[ M(\xi)=\exp(\xi^2/2)\exp(\kappa_3\xi^3/(6\sqrt{n})+\kappa_4\xi^4/(24 n)+\cdots). \] Expanding the second factor gives \[ M(\xi)=\exp(\xi^2/2)\left(1+{{\kappa_3\xi^3}\over{6\sqrt{n}}}+{{\kappa_4\xi^4}\over{24 n}}+\cdots+ {\textstyle{\frac12}} \left[ {{\kappa_3\xi^3}\over{6\sqrt{n}}}+{{\kappa_4\xi^4}\over{24 n}}+\cdots\right]^2+\cdots\right). \] Reordering terms in powers of sample size, \[\tag{7} M(\xi)=\exp(\xi^2/2)\left(1+{{\kappa_3\xi^3}\over{6\sqrt{n}}}+{{\kappa_4\xi^4}\over{24 n}}+ {{\kappa_3^2\xi^6}\over{72 n}}+\cdots\right). \] Repeated application of integration by parts to (3) shows that \[\tag{8} \xi^r M(\xi) =\int_{-\infty}^\infty\exp(\xi x)(-1)^r f^{(r)}(x)\, d x, \] where \(f^{(r)}\) denotes the derivative of \(f\) of order \(r\ .\) Relation (8) holds if \(f\) and its derivatives go to zero quickly as \(\vert x\vert\to\infty\ .\) Applying (8) to the normal density \(\phi(x)=\exp(-x^2/2)/\sqrt{2\pi}\ ,\) and applying the result to (7), gives \[ M(\xi)\approx\int_{-\infty}^\infty\exp(\xi x)\phi(x)\left[1+{{\kappa_3 h^3(x)}\over{6\sqrt{n}}}+{{\kappa_4 h^4(x)}\over{24 n}}+ {{\kappa_3^2 h^6(x)}\over{72 n}}\right] d x \] for \(h^r(x)=(-1)^r\phi^{(r)}(x)/\phi(x)\ .\) Since the relationship giving the moment generating function in terms of the density is invertible, and since the inversion process is properly smooth, Edgeworth (1907) approximates the density of \(X\) by \[\tag{9} e_4(x)=\phi(x)\left[1+{{\kappa_3 h^3(x)}\over{6\sqrt{n}}}+{{\kappa_4 h^4(x)}\over{24 n}}+ {{\kappa_3^2 h^6(x)}\over{72 n}}\right]. \] In fact, when the summands contributing to \(Y\) have a density and cumulants of order at least 5, the error in the approximation, multiplied by \(n^{3/2}\ ,\) remains bounded. The functions \(h^r\) defined above are the Hermite polynomials. The approximation (9) is known as the Edgeworth series. The subscript refers to the number of cumulants used in its definition. This series can be used to approximate either the cumulative distribution function or survival function through term-wise integration. The preceding discussion is intended to be heuristic; Kolassa (2006) presents a rigorous derivation, along with the natural extension to random vectors.
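A concrete numerical illustration of (9) (my own example, not from the article): for standardized Exponential(1) summands, \(\kappa_3=2\) and \(\kappa_4=6\ ,\) and the exact density of the standardized sum is a shifted, rescaled gamma density, so the Edgeworth error can be measured directly. The sample size and evaluation point below are arbitrary.

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Hermite polynomials h_r(x) = (-1)^r phi^(r)(x)/phi(x) for r = 3, 4, 6.
def h3(x): return x**3 - 3*x
def h4(x): return x**4 - 6*x**2 + 3
def h6(x): return x**6 - 15*x**4 + 45*x**2 - 15

def edgeworth4(x, n, k3, k4):
    """The four-cumulant Edgeworth density e_4(x) of equation (9)."""
    return phi(x) * (1 + k3 * h3(x) / (6 * math.sqrt(n))
                       + k4 * h4(x) / (24 * n)
                       + k3**2 * h6(x) / (72 * n))

# Summands: X_i = E_i - 1 with E_i ~ Exponential(1), so mean 0, variance 1,
# and kappa_r = (r-1)! gives k3 = 2, k4 = 6.  The standardized sum
# (sum E_i - n)/sqrt(n) has an exact shifted, scaled Gamma(n, 1) density.
def exact_density(x, n):
    y = n + x * math.sqrt(n)
    log_pdf = (n - 1) * math.log(y) - y - math.lgamma(n)
    return math.sqrt(n) * math.exp(log_pdf)

n, x = 40, 0.5
e4, f, nrm = edgeworth4(x, n, 2.0, 6.0), exact_density(x, n), phi(x)
print(abs(e4 - f), abs(nrm - f))  # the Edgeworth error is much smaller
```

The comparison with the plain normal density \(\phi(x)\) makes the gain from the correction terms visible at a single point.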
Saddlepoint approximation

The approximation (9) to the density \(f(x)\) has the property that \(|f(x)-e_r(x)|\leq C n^{-(r-1)/2}\ ,\) for some constant \(C\ ,\) when the cumulant of order \(r+1\) exists; \(C\) does not depend on \(x\ .\) A similar bound holds for the relative error \((f(x)-e_r(x))/f(x)\ ,\) only when \(x\) is restricted to a finite interval. Because of the polynomial factor multiplying the first omitted term in (9), the relative error can be expected to behave poorly. One might prefer an approximation that maintains good behavior for values of \(X\) in a range that increases as \(n\) increases; specifically, one might prefer an approximation that performs well for values of \(\bar Y=X/\sqrt{n}\) in a fixed interval.

Assume again that random variables \(Y_j\) are independent and identically distributed, each with a cumulant generating function \(K(\xi)\) finite for \(\xi\) in a neighborhood of \(0\ .\) As above, define the exponential family \[ f_{\bar Y}(\bar y;\theta)=\exp(\theta\bar y-K(\theta))f_{\bar Y}(\bar y). \] One can then choose a value of \(\theta\) depending on \(\bar y\) that makes \(f_{\bar Y}(\bar y;\theta)\) easy to approximate, and use the exponential family relationship to derive an approximation for \(f_{\bar Y}(\bar y)\ .\) Conventionally we choose \(\hat\theta\) to satisfy \[\tag{10} K'(\hat\theta)=\bar y; \] this makes the expectation of the distribution with density \(f_{\bar Y}(\cdot;\hat\theta)\) equal to the observed value. One then applies (9), with the scale of the ordinate changed to reflect the fact that we are approximating the distribution of \(X/\sqrt{n}\ ,\) to obtain \[ f_{\bar Y}(\bar y)\approx\exp(-\hat\theta\bar y+K(\hat\theta)) n\phi(0)\left[1+{{\kappa_3 h^3(0)}\over{6\sqrt{n}}}+ {{\kappa_4 h^4(0)}\over{24 n}}+ {{\kappa_3^2 h^6(0)}\over{72 n}}\right].
\] Using the fact that \(h^3(0)=0\ ,\) \(h^4(0)=3\ ,\) and \(h^6(0)=-15\ ,\) we obtain \[\tag{11} f_{\bar Y}(\bar y)\approx{{n}\over{\sqrt{2\pi}}} \exp(K(\hat\theta)-\hat\theta\bar y) \left[1+{{\hat\kappa_4}\over{8 n}}- {{5\hat\kappa_3^2}\over{24 n}}\right]. \] Here \(\hat\kappa_j\) are calculated from the derivatives of \(K\) in the preceding manner, but in this case evaluated at \(\hat\theta\ .\) This approximation may only be applied to values of \(\bar y\) for which (10) has solutions in an open neighborhood of 0. Expression (11) represents the saddlepoint approximation to the density of the mean \(\bar Y\ ;\) since \(f_{\bar Y}(\bar y;\theta)\) has a cumulant generating function defined on an open set containing \(0\ ,\) cumulants of all orders exist, the Edgeworth series including \(\kappa_6\) may be applied to \(f_{\bar Y}(\bar y;\theta)\ ,\) and so the error in the Edgeworth series is of order \(O(1/n^2)\ .\) Hence the error in (11) is of the same order, and in this case, is relative and uniform for values of \(\bar y\) in a bounded subset of an open subset on which (10) has a solution. This approximation was introduced to the statistics literature by Daniels (1954).

The Edgeworth series for the density was trivially integrated to obtain an approximation to tail probabilities. Integration of the saddlepoint approximation is more delicate. Two main approaches have been investigated. Daniels (1987) expresses \(f_{\bar Y}(\bar y)\) exactly as a complex integral involving \(K(\xi)\ ,\) integrates with respect to \(\bar y\) to obtain another complex integral, and reviews techniques for approximating the resulting integrals. Robinson (1982) and Lugannani and Rice (1980) derive tail probability approximations based on approximately integrating (11) with respect to \(\bar y\) directly. These saddlepoint and Edgeworth approximations have multivariate and conditional extensions.
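The saddlepoint approximation can be exercised on a case where everything is explicit: for the mean of n Exponential(1) variables, the saddlepoint equation (10) solves in closed form as \(\hat\theta=1-1/\bar y\ .\) The sketch below is mine; it uses the common equivalent form of the leading term, \(\sqrt{n/(2\pi K''(\hat\theta))}\exp(n(K(\hat\theta)-\hat\theta\bar y))\) with \(K\) the per-summand cumulant generating function, and the values of n and \(\bar y\) are arbitrary. For the gamma family this leading term is well known to be exact up to a normalizing constant, which the ratio check confirms.

```python
import math

# Leading-order saddlepoint density for the mean of n i.i.d. Exponential(1)
# variables:  fhat(ybar) = sqrt(n / (2*pi*K''(th))) * exp(n*(K(th) - th*ybar)),
# where K(t) = -log(1 - t) and th solves K'(th) = 1/(1 - th) = ybar.

def saddlepoint_density(ybar, n):
    th = 1 - 1 / ybar                 # closed-form solution of the saddlepoint equation
    K = -math.log(1 - th)             # K(th) = log(ybar)
    Kpp = ybar ** 2                   # K''(th) = 1/(1 - th)^2
    return math.sqrt(n / (2 * math.pi * Kpp)) * math.exp(n * (K - th * ybar))

def exact_mean_density(ybar, n):
    # The mean of n Exponential(1) variables is Gamma(n, rate n):
    # n^n * ybar^(n-1) * exp(-n*ybar) / Gamma(n), computed on the log scale.
    return math.exp(n * math.log(n) + (n - 1) * math.log(ybar)
                    - n * ybar - math.lgamma(n))

n = 10
r1 = exact_mean_density(0.8, n) / saddlepoint_density(0.8, n)
r2 = exact_mean_density(1.3, n) / saddlepoint_density(1.3, n)
print(r1, r2)  # the ratio does not depend on ybar: exact up to renormalization
```

The constant ratio (close to 1, and removable by renormalizing) is the sense in which the saddlepoint approximation has small relative error over a fixed interval of \(\bar y\ .\)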
Davison (1988) exploits the conditional saddlepoint tail probability approximation to perform inference in canonical exponential families.

Samples and sub-samples

A function \(f\colon{\mathcal R}^n\to{\mathcal R}\) is symmetric if \(f(x_1,\ldots, x_n) = f(x_{\pi(1)},\ldots, x_{\pi(n)})\) for each permutation \(\pi\) of the arguments. For example, the total \(T_n = x_1 + \cdots + x_n\ ,\) the average \(T_n/n\ ,\) the min, max and median are symmetric functions, as are the sum of squares \(S_n = \sum x_i^2\ ,\) the sample variance \(s_n^2 = (S_n - T_n^2/n)/(n-1)\) and the mean absolute deviation \(\sum |x_i - x_j|/(n(n-1))\ .\)

A vector \(x\) in \({\mathcal R}^n\) is an ordered list of \(n\) real numbers \((x_1,\ldots, x_n)\) or a function \(x\colon[n]\to{\mathcal R}\) where \([n]=\{1,\ldots, n\}\ .\) For \(m \le n\ ,\) a 1--1 function \(\varphi\colon[m]\to[n]\) is a sample of size \(m\ ,\) the sampled values being \(x\varphi = (x_{\varphi(1)},\ldots, x_{\varphi(m)})\ .\) All told, there are \(n(n-1)\cdots(n-m+1)\) distinct samples of size \(m\) that can be taken from a list of length \(n\ .\) A sequence of functions \(f_n\colon{\mathcal R}^n\to{\mathcal R}\) is consistent under sub-sampling if, for each \(f_m, f_n\ ,\) \[ f_n(x) = {\rm ave}_\varphi f_m(x\varphi), \] where \({\rm ave}_\varphi\) denotes the average over samples of size \(m\ .\) For \(m=n\ ,\) this condition implies only that \(f_n\) is a symmetric function. Although the total and the median are both symmetric functions, neither is consistent under sub-sampling. For example, the median of the numbers \((0,1,3)\) is one, but the average of the medians of samples of size two is 4/3. However, the average \(\bar x_n = T_n/n\) is sampling consistent. Likewise the sample variance \(s_n^2 = \sum(x_i - \bar x_n)^2/(n-1)\) with divisor \(n-1\) is sampling consistent, but the mean squared deviation \(\sum(x_i - \bar x_n)^2/n\) with divisor \(n\) is not.
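These consistency claims are easy to verify exactly with rational arithmetic. The sketch below is mine (the data list is a made-up example); since the functions involved are symmetric, averaging over unordered subsamples gives the same result as averaging over all ordered samples, and the last lines reproduce the (0, 1, 3) median computation from the text.

```python
from fractions import Fraction
from itertools import combinations
from statistics import median

def sample_variance(xs):
    """s^2 with divisor n - 1, computed exactly."""
    n = len(xs)
    xbar = Fraction(sum(xs), n)
    return sum((x - xbar) ** 2 for x in xs) / (n - 1)

def subsample_average(f, xs, m):
    """Average of a symmetric function f over all size-m subsamples."""
    subs = list(combinations(xs, m))
    return sum(f(s) for s in subs) / len(subs)

data = (0, 1, 3, 7)  # arbitrary example data

# The sample variance with divisor n - 1 is consistent under sub-sampling ...
for m in (2, 3):
    assert subsample_average(sample_variance, data, m) == sample_variance(data)

# ... but the median is not: for (0, 1, 3) the median is 1, while the average
# of the medians of the size-2 subsamples is 4/3, as in the text.
meds = subsample_average(lambda s: Fraction(median(s)), (0, 1, 3), 2)
print(meds)  # 4/3
```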
Other sampling consistent functions include Fisher's \(k\)-statistics, the first few of which are \(k_{1,n} = \bar x_n\ ,\) \(k_{2,n} = s_n^2\) for \(n\ge 2\ ,\) and \(k_{3,n} = n\sum(x_i - \bar x_n)^3/((n-1)(n-2)),\) defined for \(n\ge 3\ .\) For a sequence of independent and identically distributed random variables, the \(k\)-statistic of order \(r\le n\) is the unique symmetric function such that \(E(k_{r,n}) = \kappa_r\ .\) Fisher (1929) derived the variances and covariances. The connection with finite-population sub-sampling was developed by Tukey (1950). The class of statistics called [[\(U\)-statistics|U-statistic]] is consistent under sub-sampling.

References

• D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer-Verlag, New York, 1988.
• H. E. Daniels. Saddlepoint approximations in statistics. The Annals of Mathematical Statistics, 25 (4): 631--650, 1954.
• H. E. Daniels. Tail probability approximations. Review of the International Statistical Institute, 55: 37--46, 1987.
• A. C. Davison. Approximate conditional inference in generalized linear models. Journal of the Royal Statistical Society, Series B, 50: 445--461, 1988.
• E. Di Nardo, G. Guarino, and D. Senato. A unifying framework for \(k\)-statistics, polykays and their multivariate generalizations. Bernoulli, 14: 440--468, 2008.
• P. L. Dressel. Statistical seminvariants and their estimates with particular emphasis on their relation to algebraic invariants. The Annals of Mathematical Statistics, 11 (1): 33--57, 1940.
• F. Y. Edgeworth. On the representation of statistical frequency by a series. Journal of the Royal Statistical Society, 70 (1): 102--106, 1907.
• R. A. Fisher. Moments and product moments of sampling distributions. Proceedings of the London Mathematical Society, Series 2, 30: 199--238, 1929.
• I. J. Good. A new formula for \(k\)-statistics. The Annals of Statistics, 5 (1): 224--228, 1977.
• A. Hald. The early history of cumulants and the Gram-Charlier series.
International Statistical Review, 68: 137--153, 2000.
• C. C. Heyde. On a property of the lognormal distribution. Journal of the Royal Statistical Society, Series B (Methodological), 25 (2): 392--393, 1963.
• H. Hotelling. Review: [untitled]. Journal of the American Statistical Association, 28 (183): 374--375, 1933. URL http://www.jstor.org/stable/2278451.
• J. E. Kolassa. Series Approximation Methods in Statistics. Springer-Verlag, New York, 2006.
• S. L. Lauritzen, editor. Thiele: Pioneer in Statistics. Oxford University Press, New York, 2002.
• R. Lugannani and S. Rice. Saddle point approximation for the distribution of the sum of independent random variables. Advances in Applied Probability, 12: 475--490, 1980.
• J. Marcinkiewicz. Sur une propriété de la loi de Gauss. Mathematische Zeitschrift, 44: 612--618, 1939.
• J. Robinson. Saddlepoint approximations for permutation tests and confidence intervals. Journal of the Royal Statistical Society, Series B (Methodological), 44 (1): 91--101, 1982.
• G.-C. Rota and B. D. Taylor. The classical umbral calculus. SIAM Journal on Mathematical Analysis, 25 (2): 694--711, 1994.
• B. Streitberg. Lancaster interactions revisited. The Annals of Statistics, 18 (4): 1878--1885, 1990.
• T. N. Thiele. Almindelig Iagttagelseslaere: Sandsynlighedsregning og mindste Kvadraters Methode. C. A. Reitzel, Copenhagen, 1889.
• T. N. Thiele. Theory of Observations. C. & E. Layton, London, 1903.
• J. W. Tukey. Some sampling simplified. Journal of the American Statistical Association, 45 (252): 501--519, 1950.
CRM Proceedings & Lecture Notes, Volume 37
2004; 347 pp; softcover
ISBN-10: 0-8218-3329-4
ISBN-13: 978-0-8218-3329-2
List Price: US$120
Member Price: US$96
Order Code: CRMP/37

Superintegrable systems are integrable systems (classical and quantum) that have more integrals of motion than degrees of freedom. Such systems have many interesting properties. This proceedings volume grew out of the Workshop on Superintegrability in Classical and Quantum Systems organized by the Centre de Recherches Mathématiques in Montréal (Quebec). The meeting brought together scientists working in the area of finite-dimensional integrable systems to discuss new developments in this active field of interest. Properties possessed by these systems are manifold. In classical mechanics, they have stable periodic orbits (all finite orbits are periodic). In quantum mechanics, all known superintegrable systems have been shown to be exactly solvable. Their energy spectrum is degenerate and can be calculated algebraically. The spectra of superintegrable systems may also have other interesting properties, for example, the saturation of eigenfunction norm bounds.

Articles in this volume cover several (overlapping) areas of research, including:
- Standard superintegrable systems in classical and quantum mechanics.
- Superintegrable systems with higher-order or nonpolynomial integrals.
- New types of superintegrable systems in classical mechanics.
- Superintegrability, exact and quasi-exact solvability in standard and PT-symmetric quantum mechanics.
- Quantum deformation, Nambu dynamics and algebraic perturbation theory of superintegrable systems.
- Computer-assisted classification of integrable equations.

The volume is suitable for graduate students and research mathematicians interested in integrable systems. Titles in this series are co-published with the Centre de Recherches Mathématiques.

• Á.
Ballesteros, F. J. Herranz, F. Musso, and O. Ragnisco -- Superintegrable deformations of the Smorodinsky-Winternitz Hamiltonian
• F. Calogero and J.-P. Françoise -- Isochronous motions galore: Nonlinearly coupled oscillators with lots of isochronous solutions
• T. L. Curtright and C. K. Zachos -- Nambu dynamics, deformation quantization, and superintegrability
• C. Gonera -- Maximally superintegrable systems of Winternitz type
• S. Gravel -- Cubic integrals of motion and quantum superintegrability
• J. Harnad and O. Yermolayeva -- Superintegrability, Lax matrices and separation of variables
• F. J. Herranz, Á. Ballesteros, M. Santander, and T. Sanz-Gil -- Maximally superintegrable Smorodinsky-Winternitz systems on the \(N\)-dimensional sphere and hyperbolic spaces
• A. Kokotov and D. Korotkin -- Invariant Wirtinger projective connection and Tau-functions on spaces of branched coverings
• L. G. Mardoyan -- Dyon-oscillator duality. Hidden symmetry of the Yang-Coulomb monopole
• P. Desrosiers, L. Lapointe, and P. Mathieu -- Supersymmetric Calogero-Moser-Sutherland models: Superintegrability structure and eigenfunctions
• W. Miller, Jr. -- Complete sets of invariants for classical systems
• A. G. Nikitin -- Higher-order symmetry operators for Schrödinger equation
• A. V. Penskoi -- Symmetries and Lagrangian time-discretizations of Euler equations
• L. G. Mardoyan, G. S. Pogosyan, and A. N. Sissakian -- Two exactly-solvable problems in one-dimensional quantum mechanics on circle
• M. F. Rañada and M. Santander -- Higher-order superintegrability of a rational oscillator with inversely quadratic nonlinearities: Euclidean and non-Euclidean cases
• F. Finkel, D. Gómez-Ullate, A. González-López, M. A. Rodríguez, and R. Zhdanov -- A survey of quasi-exactly solvable systems and spin Calogero-Sutherland models
• M. Sheftel -- On the classification of third-order integrals of motion in two-dimensional quantum mechanics
• R. G. McLenaghan, R. G. Smirnov, and D.
The -- Towards a classification of cubic integrals of motion
• K. Takasaki -- Integrable systems whose spectral curves are the graph of a function
• P. Tempesta -- On superintegrable systems in \(E_2\): Algebraic properties and symmetry preserving discretization
• A. V. Turbiner -- Perturbations of integrable systems and Dyson-Mehta integrals
• Y. Uwano -- Separability and the Birkhoff-Gustavson normalization of the perturbed harmonic oscillators with homogeneous polynomial potentials
• J. Bérubé and P. Winternitz -- Integrability and superintegrability without separability
• T. Wolf -- Applications of CRACK in the classification of integrable systems
• G. A. Grünbaum and M. Yakimov -- The prolate spheroidal phenomenon as a consequence of bispectrality
• O. Yermolayeva -- On a trigonometric analogue of Atiyah-Hitchin bracket
• A. Zhalij and R. Zhdanov -- Separation of variables in time-dependent Schrödinger equations
• M. Znojil -- New types of solvability in PT symmetric quantum theory
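To make the volume's central notion concrete: "more integrals of motion than degrees of freedom" is easy to exhibit for the standard textbook example, the classical two-dimensional isotropic oscillator, which has two degrees of freedom but three functionally independent constants of motion. This illustration is not drawn from the volume and assumes SymPy is available:

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y')

def poisson_bracket(f, g):
    # Canonical Poisson bracket over the pairs (x, px) and (y, py)
    return (sp.diff(f, x) * sp.diff(g, px) - sp.diff(f, px) * sp.diff(g, x)
          + sp.diff(f, y) * sp.diff(g, py) - sp.diff(f, py) * sp.diff(g, y))

H = (px**2 + py**2) / 2 + (x**2 + y**2) / 2   # isotropic oscillator, 2 degrees of freedom
integrals = [
    (px**2 + x**2) / 2,   # energy of the x mode
    x * py - y * px,      # angular momentum
    px * py + x * y,      # a component of the Fradkin tensor
]
# All three Poisson brackets with H vanish identically
brackets = [sp.simplify(poisson_bracket(H, I)) for I in integrals]
```

Three independent conserved quantities for two degrees of freedom is the maximal (superintegrable) case, and it is exactly this excess that forces all bounded orbits to close, as the book description notes.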
Normal maps induced by linear transformation

OPTIMIZATION METHODS AND SOFTWARE, 1995. The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a path-generation procedure which is used to construct a piecewise-linear path from the current point to the Newton point; a step length acceptance criterion and a non-monotone pathsearch are then used to choose the next iterate. The algorithm is shown to be globally convergent under assumptions which generalize those required to obtain similar results in the smooth case. Several implementation issues are discussed, and extensive computational results obtained from problems commonly found in the literature are given.

1995. In this paper we present a new algorithm for the solution of nonlinear complementarity problems. The algorithm is based on a semismooth equation reformulation of the complementarity problem.
We exploit the recent extension of Newton's method to semismooth systems of equations and the fact that the natural merit function associated to the equation reformulation is continuously differentiable to develop an algorithm whose global and quadratic convergence properties can be established under very mild assumptions. Other interesting features of the new algorithm are an extreme simplicity along with a low computational burden per iteration. We include numerical tests which show the viability of the approach.

Computational Optimization and Applications, 1998. Several new interfaces have recently been developed requiring PATH to solve a mixed complementarity problem. To overcome the necessity of maintaining a different version of PATH for each interface, the code was reorganized using object-oriented design techniques. At the same time, robustness issues were considered and enhancements made to the algorithm. In this paper, we document the external interfaces to the PATH code and describe some of the new utilities using PATH. We then discuss the enhancements made and compare the results obtained from PATH 2.9 to the new version. The PATH solver [12] for mixed complementarity problems (MCPs) was introduced in 1995 and has since become the standard against which new MCP solvers are compared. However, the main user group for PATH continues to be economists using the MPSGE preprocessor [36]. While developing the new PATH implementation, we had two goals: to make the solver accessible to a broad audience and to improve the

SIAM J. OPTIMIZATION, 1996.
Linear and nonlinear variational inequality problems over a polyhedral convex set are analyzed parametrically. Robinson's notion of strong regularity, as a criterion for the solution set to be a singleton depending Lipschitz continuously on the parameters, is characterized in terms of a new "critical face" condition and in other ways. The consequences for complementarity problems are worked out as a special case. Application is also made to standard nonlinear programming problems with parameters that include the canonical perturbations. In that framework a new characterization of strong regularity is obtained for the variational inequality associated with the Karush-Kuhn-Tucker conditions.

1995. Recent improvements in the capabilities of complementarity solvers have led to an increased interest in using the complementarity problem framework to address practical problems arising in mathematical programming, economics, engineering, and the sciences. As a result, increasingly more difficult problems are being proposed that exceed the capabilities of even the best algorithms currently available. There is, therefore, an immediate need to improve the capabilities of complementarity solvers. This thesis addresses this need in two significant ways.
First, the thesis proposes and develops a proximal perturbation strategy that enhances the robustness of Newton-based complementarity solvers. This strategy enables algorithms to reliably find solutions even for problems whose natural merit functions have strict local minima that are not solutions. Based upon this strategy, three new algorithms are proposed for solving nonlinear mixed complementarity problems that represent a significant improvement in robustness over previous algorithms. These algorithms have local Q-quadratic convergence behavior, yet depend only on a pseudo-monotonicity assumption to achieve global convergence from arbitrary starting points. Using the MCPLIB and GAMSLIB test libraries, we perform extensive computational tests that demonstrate the effectiveness of these algorithms on realistic problems. Second, the thesis extends some previously existing algorithms to solve more general problem classes. Specifically, the NE/SQP method of Pang & Gabriel (1993), the semismooth equations approach of De Luca, Facchinei & Kanz...

1997. Variational inequalities over sets defined by systems of equalities and inequalities are considered. A continuously differentiable merit function is proposed whose unconstrained minima coincide with the KKT-points of the variational inequality. A detailed study of its properties is carried out showing that under mild assumptions this reformulation possesses many desirable features. A simple algorithm is proposed for which it is possible to prove global convergence and a fast local convergence rate. Preliminary numerical results showing viability of the approach are reported.
MATH. OPER. RES, 1994. Global methods for nonlinear complementarity problems formulate the problem as a system of nonsmooth nonlinear equations, or use continuation to trace a path defined by a smooth system of nonlinear equations. We formulate the nonlinear complementarity problem as a bound-constrained nonlinear least squares problem. Algorithms based on this formulation are applicable to general nonlinear complementarity problems, can be started from any nonnegative starting point, and each iteration only requires the solution of systems of linear equations. Convergence to a solution of the nonlinear complementarity problem is guaranteed under reasonable regularity assumptions. The convergence rate is Q-linear, Q-superlinear, or Q-quadratic, depending on the tolerances used to solve the

Math. Oper. Res, 2002. Based on an inverse function theorem for a system of semismooth equations, this paper establishes several necessary and sufficient conditions for an isolated solution of a complementarity problem defined on the cone of symmetric positive semidefinite matrices to be strongly regular/stable.
We show further that for a parametric complementarity problem of this kind, if a solution corresponding to a base parameter is strongly stable, then a semismooth implicit solution function exists whose directional derivatives can be computed by solving certain affine problems on the critical cone at the base solution. Similar results are also derived for a complementarity problem defined on the Lorentz cone. The analysis relies on some new properties of the directional derivatives of the projector onto the semidefinite cone and the Lorentz cone.
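The semismooth-equation reformulation that recurs throughout these abstracts can be sketched on a tiny linear complementarity problem. The sketch below uses the Fischer-Burmeister function \(\varphi(a,b)=\sqrt{a^2+b^2}-a-b\) (one common choice; the abstracts above do not commit to a specific one) with Newton steps on an element of the generalized Jacobian. The matrix, right-hand side, and starting point are made up for the demonstration, and numpy is assumed:

```python
import numpy as np

def solve_lcp_fb(M, q, x0, tol=1e-10, max_iter=50):
    """Solve x >= 0, Mx + q >= 0, x'(Mx + q) = 0 by semismooth Newton on
    the Fischer-Burmeister system phi_i = sqrt(x_i^2 + F_i^2) - x_i - F_i = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        F = M @ x + q
        r = np.hypot(x, F)
        phi = r - x - F                   # phi = 0 iff complementarity holds
        if np.linalg.norm(phi) < tol:
            break
        r = np.where(r == 0.0, 1e-12, r)  # guard the kink of phi at (0, 0)
        # One element of the generalized Jacobian of phi
        J = np.diag(x / r - 1.0) + np.diag(F / r - 1.0) @ M
        x = x - np.linalg.solve(J, phi)
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])    # a symmetric positive definite example
q = np.array([-1.0, -1.0])
x_star = solve_lcp_fb(M, q, np.array([0.5, 0.2]))   # converges to (1/3, 1/3)
```

For this example the solution is interior (x = (1/3, 1/3) with F(x) = 0), so the iteration stays away from the kink of the Fischer-Burmeister function and converges Q-quadratically, matching the local behavior the abstracts describe.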
Probability Theory and Mathematical Statistics: Kyoto, Japan 1986, 1st edition, by S. Watanabe (ISBN 978-3-540-18814-8). Published by Springer.

These proceedings of the fifth joint meeting of Japanese and Soviet probabilists are a sequel to Lecture Notes in Mathematics Vols. 330, 550 and 1021. They comprise 61 original research papers on topics including limit theorems, stochastic analysis, control theory, statistics, probabilistic methods in number theory and mathematical physics.
Lynbrook Math Tutor

Find a Lynbrook Math Tutor

...I have taught this subject since I began my career as a teacher. I have studied it in detail at the teachers' college for two years and also when I was reading for my masters' degree. I have taught drawing for many years at the elementary level.
20 Subjects: including algebra 1, SAT math, English, reading

...In addition, I have tremendous experience teaching classes and tutoring individuals in English and in the Math and English SAT's. I have taught from elementary through high school, including study and organizational skills. Currently I am teaching middle school where I incorporate study and organ...
27 Subjects: including SAT math, ACT Math, Spanish, English

...While at Beloit College, I continued my study of sociology, taking a general survey, a class in social statistics, and a number of classes on race, gender, and culture. I applied and was admitted (I declined the offer) for Ph.D. level study in Sociology at the University of Virginia and Master's...
50 Subjects: including algebra 2, elementary (k-6th), music history, religion

...I have a strong background in Physics and Mathematics/Statistics and I enjoy teaching and tutoring. I would be looking for students seeking tutoring on a one on one basis in high school math, statistics, and physics including advanced placement. I would work with the students to improve their understanding of the subject by concept building and problem solving exercises.
11 Subjects: including calculus, discrete math, differential equations, linear algebra

I obtained my BSc in Applied Mathematics and BA in Economics dual-degree from the University of Rochester (NY) in 2013. I am a part-time tutor in New York City and want to help those students who need exam preparation support or language training. I used to work at the Department of Mathematics on campus as a Teaching Assistant for two years and I know how to help you improve your skills.
7 Subjects: including calculus, actuarial science, algebra 1, algebra 2
A User's Guide to WolframAlpha

June 24, 2009

Welcome to WolframAlpha™. No need to download and install any special software or plugins; WolframAlpha (sometimes spelled Wolfram|Alpha) is a web site – not unlike Google™. Use the following topics to learn Wolfram basics:

• What is WolframAlpha?
• Getting Started with WolframAlpha
• When to use WolframAlpha rather than Google
• How to Write Simple Queries
• Getting Financial Data
• Getting Medical Data
• Getting Scientific Data
• Writing Mathematical Queries
• Exporting Results
• Finding the Easter Eggs
• Programming WolframAlpha
• Problem Reports

What is WolframAlpha?

Let's ask WolframAlpha what it thinks it is. Go to the WolframAlpha site http://www.wolframalpha.com/ and type What is WolframAlpha? into the search box: This is known as a natural language query – it takes your English sentence and interprets it. WolframAlpha responds, "I am a computational knowledge engine." How does this differ from a search engine like Google? If you type a natural language query like this into Google's search box, you'll get something entirely different – a list of web pages that match what you typed. For example, all the sites that match "What are you?" including links to the rock band The Who, Pearl Gaskin's interviews, and images that contain the words "What" and "are." Google crawls the web for its answer. WolframAlpha searches its own huge database of factual data to find the correct information and computes the answer from structured data.

Getting Started with WolframAlpha

If you tried the example in the previous section, you've already started using WolframAlpha. WolframAlpha is the brainchild of Stephen Wolfram, a British physicist and former child prodigy who, in 1988, created the Mathematica computer program used by scientists, researchers and engineers to perform complex mathematics using large volumes of data.
Wolfram is also the author of A New Kind of Science, a paradigm-shattering work that has been compared to Newton's Principia Mathematica. Wolfram provides an instructional screencast video to help you get started. It's well worth your time if you're interested in using WolframAlpha. Watch it here. The WolframAlpha web site also provides numerous examples of how to enter queries as well as mathematical formulas into WolframAlpha's search box. Watch it here.

When to use WolframAlpha rather than Google

WolframAlpha is designed to provide clear, straightforward answers to specific questions using natural language queries and equations. Google gives you links to all the web pages (often millions or even billions) related to whatever you type into its search box. An article in the UK's Telegraph summarizes the differences: "a Google search, for, say, the nutritional values of a Big Mac returns 123,000 results which web users then have to sift through to find the facts they want. A similar search on Wolfram Alpha will, in theory, return a single answer, detailing the calories, fat content and salt levels in the McDonald's burger." Let's try it. Type Tell me about Big Mac nutrition? into WolframAlpha's search box: You get a complete nutritional breakdown of the Big Mac that goes considerably beyond the screenshot above. WolframAlpha gives you computed results such as average daily value ranking for protein, total fat, saturated fat, vitamin A, and vitamin C. It also gives you graphs showing the average highest nutrients compared to other foods, and tables showing averages for calories, carbohydrates, fats, protein, vitamins, minerals, and sterols. Here are some of the graphs:

How to Write Simple Queries

You've already seen several examples of simple natural language queries. You don't need to write complete English sentences; WolframAlpha (mostly) understands your shorthand. You'll need to experiment a little to get your queries just right. Here are some simple examples: 1.
Generate a map of Afghanistan: afghanistan map
2. Find U.S. life expectancies: us life expectancies (77.89 years, by the way)
3. Do basic arithmetic: 534 + 1267
4. Create a decimal approximation: pi to 1000 digits
5. Make a graph of x^3 – 6x^2 + 4x + 12: plot x^3 – 6x^2 + 4x + 12
6. Compare GDP per capita for the U.S. and China: GDP per capita US/China

These examples just scratch the surface. A complete list of topics currently supported by WolframAlpha includes: Mathematics, Life Sciences, Places & Geography, Education, Statistics & Data, Technological World, Socioeconomic Data, Organizations, Physics, Transportation, Weather, Sports & Games, Chemistry, Computational Sciences, Health & Medicine, Music, Materials, Web & Computer Systems, Food & Nutrition, Colors, Engineering, Units & Measures, Words & Linguistics, Astronomy, Money & Finance, Culture & Media, Earth Sciences, Dates & Times, and People & History.

Getting Financial Data

WolframAlpha lets you do a number of financial calculations and complex calculations. Some of the more useful ones include analyzing the stock market or calculating mortgage rates. For example, to compare Microsoft, Apple, and Google, type MSFT, AAPL, GOOG into WolframAlpha's search box: The complete output from this query includes several pages of data and graphs – recent returns, relative price history, performance comparisons including bonds and T-bills, projections, mean-variance optimal portfolio, expected annual return, and volatility. How about that new house? Can you afford it? Type mortgage $300,000, 6.5%, 30 years into the search box: In addition to the monthly payment and interest rate, you get the total interest paid and nice graphs showing payment balances over time.

Getting Medical Data

WolframAlpha provides a number of interesting medical statistics including body measurements, physical exercise, diseases, mortality, medical tests, and medical computations.
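Incidentally, the monthly payment WolframAlpha reports for the mortgage query above comes from the standard fixed-rate amortization formula, which is easy to check yourself (a quick sketch; the formula is standard, the numbers are the ones from the example):

```python
def monthly_payment(principal, annual_rate, years):
    # Standard amortization formula: payment = P * r / (1 - (1 + r)**-n)
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(300_000, 0.065, 30)   # the $300,000, 6.5%, 30-year example
total_interest = payment * 12 * 30 - 300_000    # interest paid over the life of the loan
```

The payment works out to roughly $1,896 per month, and the total interest over 30 years exceeds the original principal, which is exactly the kind of figure WolframAlpha's payment-balance graphs make vivid.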
For example, to determine the benefits of running 30 minutes for a 35-year-old female, type running 30min 35yo female into the search box: In addition, WolframAlpha gives you speed, pace, distance, time, and race predictions. Or, how about cholesterol information for a 45-year-old male? Type cholesterol 45yo man into the search box:

Getting Scientific Data

By now, you've probably got a good sense of how to get useful information from WolframAlpha. If you're a scientist or a student of science, you've got many, many more options. Table 1 summarizes some of the example areas you can query for physics, chemistry, astronomy, earth sciences, and life sciences.

Table 1. A sample of WolframAlpha scientific areas available to query

Physics: Mechanics; Electricity & Magnetism; Optics; Thermodynamics; Relativity; Nuclear Physics; Quantum Physics; Particle Physics
Chemistry: Chemical Elements; Chemical Compounds; Ions; Chemical Quantities; Chemical Solutions; Chemical Thermodynamics; Chemical Formulas; 3D Structure
Astronomy: Star Charts; Solar System; Stars; Exoplanets; Galaxies; Astrophysics; Observatories; Satellites
Earth Sciences: Geology; Geodesy; Earthquakes; Tides; Atmospheric Sciences
Life Sciences: Animals & Plants; Paleontology; Genomics & Molecular Biology

Writing Mathematical Queries

Given that WolframAlpha is derived from the Mathematica computer program, it's not surprising that its most sophisticated and broadest query capabilities are mathematical. You've already seen some simple mathematical queries. Here are some more complex examples:

1. Factor a polynomial: factor 2x^5 – 19x^4 + 58x^3 – 67x^2 + 56x – 48
2. Compute the properties of a polyhedron: dodecahedron
3. Calculate a derivative: derivative of x^4 sin x
4. Compute a truth table: P && (Q || R)
5.
Compute an integral: integrate sin x dx from x=0 to pi

This visual representation of an integral should be a tremendous asset to calculus students. In addition to calculus, WolframAlpha supports algebra, geometry, number theory, discrete mathematics, applied mathematics, logic and set theory, mathematical functions, and advanced mathematics.

Exporting Results

You can export WolframAlpha's results as a Mathematica file or as a PDF by clicking links in the lower right of the search results. Text results are rendered as images, so you can't simply copy cell data and paste it into a Microsoft Office or OpenOffice spreadsheet or database. You can temporarily turn the images back into text – but if you do, they will lose their cell formatting. Here's how to get WolframAlpha data into a spreadsheet:

1. Paste the text data into Microsoft Word or a similar word processor. Use Find and Replace All to convert the | characters into Tabs. Enter | in the find field and enter ^t in the replace field.
2. Copy the edited text and paste it into the spreadsheet. It will automatically distribute itself into a series of cells. If you're using a spreadsheet or database that doesn't handle copy/paste well, import the tabbed text file.

According to PC World, the yet-to-be-released Wolfram Alpha Professional will have a direct Excel export feature for a small license fee. Another feature of the Professional edition will be the option to cross-reference your private data with knowledge engine results.

Finding the Easter Eggs

(Ah, yes…) Stephen Wolfram and his team built WolframAlpha with a sly chuckle. There are some whimsical "Easter eggs" (hidden responses in the computational engine's database) that will put a grin on your face – if you catch the cultural references. Here's a fun one. Type What is the meaning of the universe? into the search box: Any self-respecting geek will recognize this reference to Douglas Adams' SF novel, Hitchhiker's Guide to the Galaxy.
The supercomputer Deep Thought, specially built for the purpose of answering this Ultimate Question, computes and checks its answer for seven and a half million years. And yes, it turns out to be 42. Unfortunately, the Ultimate Question itself is unknown.

Ben Parr at Mashable has collected 20 of the most popular WolframAlpha Easter eggs. For a good laugh (and, in some cases, a test of your cultural prowess), try these out:

What is your name?
What's the speed of an unladen swallow?
How many roads must a man walk down before you can call him a man?
How to cook a Welshman?
Where am I?
Why did the chicken cross the road?
What's the answer to life?
P = NP
I can't let you do that, Dave (This is WolframAlpha's response when there's too much server traffic.)
How long is a piece of string?
How much wood could a woodchuck chuck?
What do men/women want?
Where's Wally?
Where did I put my keys?
Where the hell is Matt?
Are you a PC (or Mac)?
If a tree falls in a forest and no one is around to hear it, does it make a sound?
When is Judgment Day?
Where in the world is Carmen Sandiego?

Programming WolframAlpha

Web developers and programmers have several options, ranging from simple JavaScript that renders the WolframAlpha search box on a web site, to downloadable toolbars and gadgets, to a full Application Programming Interface (API). The API lets you build a mashup, interface with AJAX or Flash, or add application code in almost any programming or scripting language using a web service. To use the API, you must first request a WolframAlpha API application ID at http://www.wolframalpha.com/apiapplication.html.

Problem Reports

Like any new piece of software, not all users are satisfied with the features and usability of the WolframAlpha product. CNET did a poll of its readers shortly after WolframAlpha was released in May 2009.
Asked to judge how happy they were with the outcome of their searches, readers gave Wolfram Alpha an average score of 3.55, with 1 being "most satisfied" and 5 being least. Based on a small sample of 1,459 responses, the CNET report concludes that "readers were dissatisfied with Wolfram Alpha's ability to produce results for anything outside of a relatively narrow set of queries related to math, science, or statistics. Forty percent said they would not recommend Wolfram Alpha to friends, while 28 percent thought it was only appropriate for 'serious data nerds.'"

It's likely that a similar poll, if conducted using a sample of scientists, mathematicians, and engineers, would yield significantly different (and much more favorable) results. This isn't surprising given that parent company Wolfram Research is the developer of Mathematica, which is heavily used by universities and scientific research facilities. The company has many experts in sophisticated math and science topics, so you'd expect results for those types of queries to be far more useful.

WolframAlpha is something entirely new: it doesn't replace Google, but complements it. With subsequent releases and a growing database, it is likely to become an integral web component in computational research and analysis.

3 Comments

Submitted by Alan R. Light on June 30, 2009 at 1:04 am.
I can see that the folks at Wolfram Alpha have been busy improving their engine. I asked how much wood could a woodchuck chuck if a woodchuck could chuck wood perhaps a month or two ago, and the response was quite unsatisfactory. I am glad to see that this question has now been definitively answered, and that the developers have been paying attention to their customers' need for woodchuck-related knowledge.

Submitted by Anonymous on July 1, 2009 at 7:38 am.
WOW...
this is genius!
Instructor Class Description

Time Schedule: Peter W Johnson, IND E 315, Seattle Campus

Probability and Statistics for Engineers: Application of probability theory and statistics to engineering problems, distribution theory and discussion of particular distributions of interest in engineering, statistical estimation and data analysis. Illustrative statistical applications may include quality control, linear regression, and analysis of engineering data sets. Prerequisite: either MATH 136, MATH 307, or AMATH 351.

Class description: In this course students will learn the basic fundamentals of probability and statistics. This course is designed to develop students' probabilistic and statistical intuition.

Student learning goals:
Identify various probability distributions
Calculate basic statistical measures
Conduct statistical analyses using either Excel or Minitab
Apply statistical methods to a real data set

General method of instruction: Lectures covering general concepts and problem examples.

Recommended preparation: Math 136, Math 307 or AMATH 131

Class assignments and grading: 6 short homework assignments will be completed; in addition, students will complete a small project.

Homework 20%
Midterm Exam 20%
Project 15%
Final 30%
Participation 5%

The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without notice. In most cases, the official course syllabus will be distributed on the first day of class.

Last Update by Peter W Johnson, Date: 09/24/2012
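The "basic statistical measures" listed among the learning goals — sample mean and sample standard deviation, for instance — can be computed directly with Python's standard library. A minimal illustration; the data values here are made up for the example:

```python
import statistics

# Hypothetical engineering measurements (not from any real data set).
sample = [3.2, 4.1, 3.8, 5.0, 4.4]

mean = statistics.mean(sample)    # arithmetic mean
stdev = statistics.stdev(sample)  # sample standard deviation (n - 1 divisor)

print(f"mean = {mean:.3f}, stdev = {stdev:.3f}")
```

Note that `statistics.stdev` uses the n − 1 (sample) divisor, which is the convention usually taught in an introductory engineering statistics course; `statistics.pstdev` gives the population version.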
Physics of the Solid State Physics of Wave Phenomena Physics Procedia Physics Reports Physics Research International Physics Today Physics World Physik in unserer Zeit Physik Journal Plasma Physics and Controlled Fusion Plasma Physics Reports Proceedings of the National Academy of Sciences, India Section A Progress in Low Temperature Physics Progress in Materials Science Progress in Nuclear Magnetic Resonance Spectroscopy Progress in Optics Progress in Particle and Nuclear Physics Progress in Planning Progress of Theoretical and Experimental Physics Quantum Electronics Quantum Measurements and Quantum Metrology Quarterly Journal of Mechanics and Applied Mathematics Radiation Effects and Defects in Solids Radiation Measurements Radiation Physics and Chemistry Radiation Protection Dosimetry Radiation Research Radio Science Radiological Physics and Technology Reflets de la physique Reports on Mathematical Physics Reports on Progress in Physics Research & Reviews : Journal of Physics Research in Drama Education Research Journal of Physics Results in Physics Reviews in Mathematical Physics Reviews of Accelerator Science and Technology Reviews of Geophysics Reviews of Modern Physics Revista Colombiana de Física Revista Mexicana de Astronomía y Astrofísica Revista Mexicana de Física Revista mexicana de física E Rheologica Acta Russian Journal of Mathematical Physics Russian Journal of Nondestructive Testing Russian Physics Journal Samuel Beckett Today/Aujourd'hui Science and Technology of Nuclear Installations Science China Physics, Mechanics and Astronomy Science Foundation in China Scientific Journal of Physical Science Scientific Reports Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of Sensor Letters Sensors and Actuators A: Physical Services Computing, IEEE Transactions on Shock and Vibration Shock Waves Software Engineering, IEEE Transactions on Solid State Physics Solid-State Circuits Magazine, IEEE South African Journal for Research 
in Sport, Physical Education and Recreation Space Research Journal Space Weather Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy Spectrochimica Acta Part B: Atomic Spectroscopy Spectroscopy Letters: An International Journal for Rapid Communication Sri Lankan Journal of Physics Strength of Materials Strength, Fracture and Complexity Structural Dynamics Studies In Applied Mathematics Superconductor Science and Technology Surface Engineering Surface Review and Letters Surface Science Reports Surface Science Spectra Surface Topography: Metrology and Properties Synchrotron Radiation News Synthetic Metals Technical Physics Technical Physics Letters The Astronomy and Astrophysics Review The Chemical Physics of Solid Surfaces The European Physical Journal H The European Physical Journal Plus The International Journal of Multiphysics The Physics of Metals and Metallography The Physics Teacher Theoretical and Applied Fracture Mechanics

Russian Journal of Mathematical Physics
Hybrid journal (it can contain Open Access articles). ISSN (Print) 1531-8621; ISSN (Online) 1061-9208. Published by Springer-Verlag. [SJR: 0.924] [H-I: 21]

• Asymptotics of the S-matrix for perturbed Hill operators
Abstract: We consider the Schrödinger operator with a periodic potential p plus a compactly supported potential q on the real line. We assume that both p and q have m ⩾ 0 derivatives. For generic p, the essential spectrum of the operator has an infinite sequence of open gaps. We determine the asymptotics of the S-matrix at high energy.
PubDate: 2014-03-01

• Perturbation determinants for singular perturbations
Abstract: Let A be a densely defined symmetric operator and let {Ã′, Ã} be an ordered pair of proper extensions of A such that their resolvent difference is of trace class.
We study the perturbation determinant ΔÃ′/Ã(·) of the singular pair {Ã′, Ã} by using the boundary triplet approach. We show that, under additional mild assumptions on {Ã′, Ã}, the perturbation determinant ΔÃ′/Ã(·) is the ratio of two ordinary determinants involving the Weyl function and boundary operators. In particular, if the deficiency indices of A are finite, then we obtain ΔÃ′/Ã(z) = det(B′ - M(z))/det(B - M(z)), z ∈ ρ(Ã), where M(·) stands for the Weyl function and B′ and B for the boundary operators corresponding to Ã′ and Ã with respect to a chosen boundary triplet Π. The results are applied to ordinary differential operators and to second-order elliptic operators.
PubDate: 2014-03-01

• On an isomonodromy deformation equation without the Painlevé property
Abstract: We show that the fourth-order nonlinear ODE which controls the pole dynamics in the general solution of equation P_I^2 compatible with the KdV equation exhibits two remarkable properties: (1) it governs the isomonodromy deformations of a 2 × 2 matrix linear ODE with polynomial coefficients, and (2) it does not possess the Painlevé property. We also study the properties of the Riemann-Hilbert problem associated to this ODE and find its large-t asymptotic solution for physically interesting initial data.
PubDate: 2014-03-01

• The relationship between the Van-der-Waals model and the undistinguishing statistics
Abstract: In the article, the relationship between the mesoscopic picture and the microscopic and macroscopic pictures is considered in the light of the work of V. E. Panin and his school. The coincidence of isochores and isotherms obtained by applying the undistinguishing statistics of objectively distinguishable objects with those of the Van-der-Waals model allows us to specify the heat capacity C_V corresponding to real gases. The difference between Gentile statistics and parastatistical models in the mesoscopic case is indicated.
PubDate: 2014-03-01

• Some solutions of the 3D Laplace equation in a layer with oscillating boundary describing an array of nanotubes with applications to cold field emission. II. Irregular arrays
Abstract: As in the first part (J. Brüning, S. Yu. Dobrokhotov, D. S. Minenkov, 2011), we construct a family of special solutions of the Dirichlet problem for the Laplace equation in a domain with fast changing boundary. Using these solutions, we construct an analytic model of cold field electron emission from surfaces simulating arrays of vertically aligned nanotubes. Explicit analytic formulas lead to fast computations and also allow us to study the case of random arrays of tubes with stochastic distribution of parameters. We present these results and compare them with numerical approximations given in [1].
PubDate: 2014-03-01

• On solutions of a boundary value problem for a polyharmonic equation in unbounded domains
Abstract: In the paper, we study uniqueness problems for solutions of a boundary value problem for a polyharmonic equation in the exterior of a compact set and in a half-space under the assumption that the generalized solution of the problem in question admits a finite Dirichlet integral with a weight of the form x^a. In dependence on the values of the parameter a, we prove uniqueness theorems and also present precise formulas to evaluate the dimension of the space of solutions of this problem in the exterior of a compact set and in a half-space.
PubDate: 2014-03-01

• Identities involving Laguerre polynomials derived from umbral calculus
Abstract: In this paper, we investigate some identities for Laguerre polynomials involving Bernoulli and Euler polynomials derived from umbral calculus.
PubDate: 2014-03-01

• Group structures on some families of 6-manifolds
Abstract: We consider abelian groups formed by simply connected closed oriented smooth 6-manifolds with given 2-dimensional homology and given 2-dimensional Stiefel-Whitney class. In particular, an effective presentation of these groups is given.
PubDate: 2014-03-01

• Embedding theorem for weighted Sobolev classes with weights that are functions of the distance to some h-set
Abstract: The paper continues the first part (Russ. J. Math. Phys. 20 (3), 360–373). Let Ω be a John domain, let Γ ⊂ ∂Ω be an h-set, and let g and v be weights on Ω that are distance functions to the set Γ of special form. In the paper, sufficient conditions are obtained under which the weighted Sobolev class W^r_{p,g}(Ω) is continuously embedded in the space L_{q,v}(Ω). Moreover, bounds are found for the approximation of functions in W^r_{p,g}(Ω) by polynomials of degree not exceeding r − 1 in L_{q,v}(Ω̃), where Ω̃ is a subdomain generated by a subtree of the tree T defining the structure of Ω.
PubDate: 2014-03-01

• Corrected automatic continuity conditions for finite-dimensional representations of connected Lie groups
Abstract: We continue the study of automatic continuity conditions for finite-dimensional representations of connected Lie groups. In particular, we claim that every locally bounded finite-dimensional representation of a connected Lie group is continuous on the commutator subgroup in the intrinsic Lie topology of the subgroup and continuous on the intersection of the commutator subgroup with the radical of the group in the original topology of the Lie group, thus correcting one of our previous results.
PubDate: 2014-03-01

• In memoriam

• Critical nonlinearities in partial differential equations
Abstract: The paper is devoted to the role of critical nonlinearities in the framework of the theory of global solvability of nonlinear partial differential equations. In particular, a new approach to blow-up problems for solutions of nonlinear partial differential equations is suggested.
PubDate: 2013-10-01

• Generic eigenvalue multiplicities for two-parameter families of operators
Abstract: A theorem about two-parameter families of Schrödinger operators is proved; the potential is parameter dependent.
PubDate: 2013-10-01

• Gauge equivalence and the inverse spectral problem for the magnetic Schrödinger operator on the torus
Abstract: We study the inverse spectral problem for the Schrödinger operator H on the two-dimensional torus with even magnetic field B(x) and even electric potential V(x). Guillemin [11] proved that the spectrum of H determines B(x) and V(x). A simple proof of Guillemin's results was given by the authors in [3]. In the present paper, we consider gauge equivalence classes of magnetic potentials and give conditions which imply that the gauge equivalence class and the spectrum of H determine the magnetic field and the electric potential. We also show that, generically, the spectrum and the magnetic field determine the "extended" gauge equivalence class of the magnetic potential. The proof is a modification of that in [3] with some corrections and clarifications.
PubDate: 2013-10-01

• Estimate for a solution to the water wave problem in the presence of a submerged body
Abstract: We study the two-dimensional problem of propagation of linear water waves in deep water in the presence of a submerged body. Under some geometrical requirements, we derive an explicit bound for the solution depending on the domain and the functions on the right-hand side.
PubDate: 2013-10-01

• Some coercive problems for the system of Poisson equations
Abstract: In the paper, two boundary value problems for the system of Poisson equations in three-dimensional domains are studied.
PubDate: 2013-10-01

• Parastatistics and phase transition from a cluster as a fluctuation to a cluster as a distinguishable object
Abstract: A phase transition of the first kind is a jump of a function, a phase transition of the second kind is a jump of its first derivative, and a phase transition of the third kind is a jump of the second derivative. A phase transition from one statistic to another is very gradual, but finally it is as considerable as a phase transition of the first kind. However, we cannot introduce a clearly defined parameter to which this transition corresponds. This is due to the fact that the fluctuations near the critical point are huge, and this violates, in the vicinity of that point, the main law of equilibrium thermodynamics, which asserts that fluctuations are relatively small. The paper describes the transition in the supercritical fluid region of equilibrium thermodynamics from parastatistics to mixed statistics, in which the Boltzmann statistics is realized for long-living clusters. In economics this corresponds to a negative nominal credit rate. Examples of this non-standard situation are presented.
PubDate: 2013-10-01

• On smoothness of solutions of some elliptic functional-differential equations with degeneration
Abstract: We consider second-order differential-difference equations in bounded domains in the case where several degenerate difference operators enter the equation. The degeneration leads to the fact that the multiplicity of the zero eigenvalue for the corresponding differential-difference operator becomes infinite. Regularity of generalized solutions for such equations is known to fail in the interior of the domain.
However, we prove that the projections of solutions onto the orthogonal complement to the kernel of the "leading" difference operator remain regular in certain subdomains which form a decomposition of the original domain.
PubDate: 2013-10-01

• Analytic study of a potential model of tsunami with a simple source of piston type. 2. Asymptotic formula for the height of tsunami in the far field
Abstract: A far field asymptotic formula is derived for the single integral obtained in the first part of the research and giving the height of tsunami in the framework of the hydrodynamical potential model with a special axially-symmetric source of piston type. Conditions are indicated for the variables (of the model under consideration) that can vary in wide intervals and for which the asymptotic formula is of high precision for the calculation of the wave profile and for the time history of the height of the tsunami. Using results of many-variant computer-aided processing, we found domains of variables of the model in which the asymptotic formula has high precision for the computations of the wave profile and of the time history of the height of not only the leading waves of the tsunami but also of trailing waves.
PubDate: 2013-10-01

• Lyapunov exponent of the random Schrödinger operator with short-range correlated noise potential
Abstract: We study the influence of disorder on propagation of waves in one-dimensional structures. Transmission properties of the process governed by the Schrödinger equation with the white noise potential can be expressed through the Lyapunov exponent γ, which we determine explicitly as a function of the noise intensity σ and the frequency ω. We find uniform two-parameter asymptotic expressions for γ which allow us to evaluate γ for different relations between σ and ω. The value of the Lyapunov exponent is also obtained in the case of a short-range correlated noise, which is shown to be less than its white noise counterpart.
PubDate: 2013-10-01
Sudoku Puzzles - Definition

A sudoku puzzle is a logic puzzle solved by reasoning rather than guessing. Though the number of squares in the grid may vary, a standard Sudoku puzzle is a 9x9 grid divided into nine 3x3 blocks, in which some of the numbers are given. The object of the puzzle is to fill in the rest of the grid using only the numbers 1 through 9 so that no number repeats in any block, row, or column. The difficulty is determined by how many and which numbers are given and where they are placed in the grid, and can range from very easy to extremely challenging. Though sudoku puzzles generally use numbers as symbols, no mathematical skills are required to solve the puzzle.

Alternate Spellings: Su Doku
Common Misspellings: suduko, soduku
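The rule stated above — no digit may repeat in any row, column, or 3x3 block — can be expressed directly in code. A minimal sketch of a validity check for a completed standard grid (the function name and the shifted-row example grid are ours, for illustration only):

```python
def is_valid_solution(grid):
    """Return True if a completed 9x9 grid satisfies the Sudoku rule:
    each row, each column, and each 3x3 block contains 1-9 exactly once."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(grid[r][c] for r in range(9)) for c in range(9)]
    blocks = [
        {grid[br + r][bc + c] for r in range(3) for c in range(3)}
        for br in (0, 3, 6) for bc in (0, 3, 6)
    ]
    # Every one of the 27 groups must be exactly the set {1, ..., 9}.
    return all(group == digits for group in rows + cols + blocks)

# A known valid filling: row i is the sequence 1..9 cyclically shifted
# by 3*i + i//3, a standard construction that satisfies all three rules.
solved = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
print(is_valid_solution(solved))  # True
```

Duplicating any given digit within a row, column, or block makes the corresponding set smaller than nine elements, so the check fails, which is exactly the constraint the definition describes.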
A Finite Difference Scheme for Compressible Miscible Displacement Flow in Porous Media on Grids with Local Refinement in Time

Abstract and Applied Analysis, Volume 2013 (2013), Article ID 521835, 8 pages. Research Article.

School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics, Jinan 250014, China

Received 19 October 2012; Accepted 19 December 2012
Academic Editor: Xinan Hao

Copyright © 2013 Wei Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Considering two-dimensional compressible miscible displacement flow in porous media, finite difference schemes on grids with local refinement in time are constructed and studied. The construction utilizes a modified upwind approximation and linear interpolation at the slave nodes. Error analysis is presented in the maximum norm and numerical examples illustrating the theory are given.

1. Introduction

Numerical models of percolation flow are almost always built on the basis of the finite difference method to solve the system of partial differential equations. The finer the grid, the smaller the truncation error and the higher the computational accuracy, so the number of grid cells cannot be too small if a given accuracy is to be reached. On the other hand, as the number of grid cells grows, the computational cost increases greatly, and the resulting algebraic system may become unsolvable even with the largest of today's supercomputers. In practice, we only need to refine the grid around wells, cracks, obstacles, domain boundaries, and so forth, where the pressure changes rapidly.
But because a finite difference grid is composed of straight lines and the grid density cannot vary in space, it limits the scale and accuracy of the simulation. With the local grid refinement technique, we still make use of the finite difference grid system but divide those cells which need to be refined into fine cells. In this way, we can resolve features such as small well spacing, faults, and boundaries, and we can improve the simulating accuracy and extend the simulating scale [1].

Ewing et al. construct finite difference approximations on grids with local refinement in space for the elliptic equation and obtain error estimates in the -norm [2]. Cai et al. analyze stationary local grid refinement for the diffusion equation [3, 4]. Ewing et al. derive implicit schemes on the basis of a finite volume approach by approximation of the balance equation; this approach leads to schemes that are locally conservative and absolutely stable [5]. Ewing et al. construct and study finite difference schemes for transient convection-diffusion problems on grids with local refinement in time and space; the proposed schemes are unconditionally stable and use linear interpolation along the interface [6]. For incompressible miscible displacement flow in porous media and for the semiconductor device problem, respectively, discrete schemes, error estimates, and numerical examples on composite triangular grids are discussed in [7, 8].

In this paper, we study a finite difference scheme on grids with local refinement in time for two-dimensional compressible miscible displacement flow in porous media. The pressure equation is approximated by a five-point difference scheme, and the saturation equation is discretized by a modified upwind scheme. At the slave nodes, the construction utilizes linear interpolation. Finally, error analysis in the maximum norm is derived and numerical examples are given to support the numerical method and its convergence.
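The five-point difference scheme mentioned above is, on a uniform grid away from the refined region, the standard stencil (u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j})/h^2. A generic sketch of that stencil follows; it is not the paper's exact pressure operator, which also carries the mobility coefficients, but it shows the structure the scheme is built on:

```python
def five_point_laplacian(u, i, j, h):
    """Standard five-point approximation of the Laplacian at the
    interior node (i, j) of a uniform grid with spacing h."""
    return (u[i + 1][j] + u[i - 1][j]
            + u[i][j + 1] + u[i][j - 1]
            - 4.0 * u[i][j]) / h ** 2

# Sanity check on u(x, y) = x^2 + y^2, whose Laplacian is exactly 4.
# The stencil is exact for quadratics, so the discrete value matches
# up to floating-point rounding.
h = 0.1
n = 11
u = [[(i * h) ** 2 + (j * h) ** 2 for j in range(n)] for i in range(n)]
print(five_point_laplacian(u, 5, 5, h))  # ~4.0
```

The scheme's second-order accuracy in space comes directly from this stencil being exact for polynomials up to degree three in each direction.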
The paper is organized as follows. In Sections 2 and 3, we formulate the problem and introduce the necessary notations. In Section 4 the construction of the finite difference scheme is presented. The error analysis is addressed in Section 5. Finally, in Section 6 we present numerical experiments that confirm our theoretical results.

2. Problem Formulation

We will consider a system of three nonlinear partial differential equations in a bounded domain , which forms a basic model of compressible miscible displacement flow in porous media [9–11]: where is the saturation of the th component in mixed liquid, is the th component of compression constant factor, is the porosity of the rock, is the permeability of the rock, is the viscosity of the fluid, which is the diffusion matrix, is the diffusion coefficient, and is the unit matrix. The unknowns are the pressure function and the saturation function . In addition, we have boundary conditions and initial conditions where is a plane bounded domain and is the boundary of . This problem is, as a rule, positive definite. Suppose the coefficients of (1) satisfy where , , , , , , are positive constants and , , and are Lipschitz continuous in the neighborhood of the solution. We suppose that the exact solutions of (1) are sufficiently smooth; and satisfy Throughout this paper, the notations are used to denote generic constants.

3. Grids, Grid Functions, and Associated Notations

First, is discretized using a regular grid with a parameter . The spatial nodes of the grid on are then defined by , where , . Next, we introduce closed domains , which are subsets of with boundaries aligned with the spatial discretization already defined. Further, it is required that , and we set . In order to avoid unnecessary complications, for , , we assume that , where is an integer. With each subdomain , we associate corresponding sets of nodal points: is defined to be the set of all nodes of the discretization of that are in . We require , for , .
We also assume that there is no spatial refinement. In each , , we define a subset of boundary nodes as the nodes which have at least one neighbor not in . Then set . A discrete time-step is associated with each such that, for integers , Consequently, discrete time levels for are defined by , . Finally, we define the grid points by setting We continue by specifying the nodes in between time levels and as Correspondingly, the boundary nodes of are defined by The grid function is a function defined at the grid points of . We denote the nodal values of a grid function between time levels and as for , , . For we define , and , are the divided forward and backward difference operators, respectively, in the and directions. Also define the divided backward time difference by

4. Construction of the Finite Difference Schemes

Let , , and be the numerical approximations to the pressure , the velocity , and the saturation , respectively. The approximations for the pressure and the concentration are done on composite grids in time. First, for the pressure equation, we let Similarly we define , then let For regular coarse grids, the five-point difference scheme is where the difference operator . The Darcy velocity is computed as follows: corresponds to another direction, and the computational formula is similar to . Next we consider the saturation equation (1)(c). The positive and negative parts of the function are defined as and . For regular coarse grids, the upwind difference scheme of the saturation equation is In the region that is refined in time, we can construct finite difference schemes similar to (16)–(18). It is obvious that at time , , when the difference operators defined above are applied to the points of , not all space-time positions required correspond to actual nodes in . For such cases, we define In Figure 1, the slave nodes represent the missing space-time positions in the stencil of nodes in . The values there are computed by the interpolation formula (20).
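The interpolation at the slave nodes assigns each intermediate fine-time-level value on the interface by linearly weighting the two enclosing coarse time levels. Formula (20) itself is not reproduced in the text above, so the sketch below is the standard linear-in-time interpolant, not necessarily the paper's exact weights:

```python
def interpolate_in_time(v_old, v_new, k, m):
    """Value at the k-th of m fine substeps between the coarse levels
    t_n (value v_old) and t_{n+1} (value v_new), by linear interpolation:
    (1 - theta) * v_old + theta * v_new, with theta = k / m."""
    theta = k / m  # fraction of the coarse step elapsed, 0 <= theta <= 1
    return (1.0 - theta) * v_old + theta * v_new

# Halfway between coarse-level values 2.0 and 4.0 (k = 1 of m = 2):
print(interpolate_in_time(2.0, 4.0, 1, 2))  # 3.0
```

Linear interpolation is enough here because it is first-order accurate in time, matching the backward-difference time discretization used on the coarse grid.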
The discretization schemes of (1)–(4) on composite grids are given by , correspond to another direction, and computational formulae are similar to (22).

5. Error Analysis

The discrete inner product and -norm of grid functions are defined, respectively, by We also use the standard notation for the discrete -norm of a grid function in the Sobolev space : Define the error of the above scheme by First, consider the pressure equation. Using (1)(a) and (21), we get the error equation: Using (27)(b), we can get Then using the maximum principle With the method of recursion and noticing that , we obtain the error estimate: Similarly using (27)(c) and (27)(d), Then we consider the saturation equation. Suppose that where is a positive constant. At the end of the error analysis, we will prove the supposition (32). Under the supposition (32), we can prove the discretizations (23) satisfy the following property.

Theorem 1. The finite difference schemes (23) comply with the requirements of the maximum principle and the difference operator is coercive in , that is, such that

Considering and using (1)(c), (23), we can get the error equation of the saturation equation We need an induction hypothesis. Let , we assume that for , . When , we can obtain (36) by using (4) and (23). At the end of the error analysis, we will prove (36) for , . From error estimates of the pressure equation (30) and (31) and the induction hypothesis (36), we have In order to get an estimate for the error , we need two types of auxiliary functions and . The grid functions and , respectively, satisfy where is the characteristic function of , is the characteristic function of , and is the characteristic function of . On condition that is coercive in , Ewing et al. have proved and satisfy the following lemma [6].

Lemma 2. and exist and are nonnegative, and the following estimates hold:

Theorem 3.
Let the exact solutions of (1) satisfy condition (6). Then the discretization scheme (23) is stable and the following estimate for the error holds: Proof. Define where , . Using induction over , it is easy to observe that Moreover, Using the maximum principle, it follows that Similarly, Therefore, Then in view of (39), we conclude (40). It remains to check (32) and the induction hypothesis (36) for . From and the error estimate (40), it is easy to obtain (36). Then using (30) and (31), (34), and (40), we can obtain supposition (32). The proof of Theorem 3 is complete. 6. Numerical Results We consider a system of coupled partial differential equations: where , . The following functions are used as exact solutions of (47): When and , the exact solutions of (47) are shown in Figures 2 and 3. Specific forms of , , , , and are derived from the exact solutions. From Figures 2 and 3, we can see that the exact solutions of (47) possess highly localized properties in . First, is discretized using a regular grid. Let the space-step and the time-step . Then choose the subregion , which is refined in time. Let the discretization parameters in the refined region , where is a positive integer. We denote by the numerical approximation to obtained by (23), and we measure the error in the maximum norm . Example 4. Let . Choosing , computational results obtained by (23) are shown in Figures 4 and 5 and Table 1. Example 5. Let . Choosing , computational results obtained by (23) are shown in Table 2. From Figures 4 and 5 and Tables 1 and 2, we can see that numerical results produced by using the local refinement technique are more accurate than those produced without refinement. From Tables 1 and 2, we observe a monotonic improvement of the accuracy in maximum norm when using different refinement factors .
These results are of great importance for research on the numerical simulation of fluid flow problems, and they also indicate that the method proposed in this paper can be widely applied in fields such as energy numerical simulation and environmental science. The author thanks Professor Yirang Yuan for his valuable constructive suggestions, which led to a significant improvement of this paper. This work is supported in part by the National Natural Science Foundation of China (Grant no. 71071088) and the Natural Science Foundation of Shandong Province of China (Grant no. ZR2011AQ021). 1. Y. R. Yuan, “Some new progress in the fields of computational petroleum geology and others,” Chinese Journal of Computational Physics, vol. 20, no. 4, pp. 283–290, 2003. 2. R. E. Ewing, R. D. Lazarov, and P. S. Vassilevski, “Local refinement techniques for elliptic problems on cell-centered grids. I. Error analysis,” Mathematics of Computation, vol. 56, no. 194, pp. 437–461, 1991. 3. Z. Q. Cai, J. Mandel, and S. F. McCormick, “The finite volume element method for diffusion equations on general triangulations,” SIAM Journal on Numerical Analysis, vol. 28, no. 2, pp. 392–402, 1991. 4. Z. Q. Cai and S. F. McCormick, “On the accuracy of the finite volume element method for diffusion equations on composite grids,” SIAM Journal on Numerical Analysis, vol. 27, no. 3, pp. 636–655, 1990. 5. R. E. Ewing, R. D. Lazarov, and P. S. Vassilevski, “Finite difference schemes on grids with local refinement in time and space for parabolic problems. I. Derivation, stability, and error analysis,” Computing, vol. 45, no. 3, pp. 193–215, 1990.
6. R. E. Ewing, R. D. Lazarov, and A. T. Vassilev, “Finite difference scheme for parabolic problems on composite grids with refinement in time and space,” SIAM Journal on Numerical Analysis, vol. 31, no. 6, pp. 1605–1622, 1994. 7. W. Liu and Y. R. Yuan, “Finite difference schemes for two-dimensional miscible displacement flow in porous media on composite triangular grids,” Computers & Mathematics with Applications, vol. 55, no. 3, pp. 470–484, 2008. 8. W. Liu and Y. R. Yuan, “A finite difference scheme for two-dimensional semiconductor device of heat conduction on composite triangular grids,” Applied Mathematics and Computation, vol. 218, no. 11, pp. 6458–6468, 2012. 9. J. Douglas, Jr. and J. E. Roberts, “Numerical methods for a model for compressible miscible displacement in porous media,” Mathematics of Computation, vol. 41, no. 164, pp. 441–459, 1983. 10. Y. R. Yuan, “Time stepping along characteristics for the finite element approximation of compressible miscible displacement in porous media,” Mathematica Numerica Sinica, vol. 14, no. 4, pp. 385–406, 1992. 11. J. M. Yang and Y. P. Chen, “A priori error analysis of a discontinuous Galerkin approximation for a kind of compressible miscible displacement problems,” Science China Mathematics, vol. 53, no. 10, pp. 2679–2696, 2010.
What size is Shannon Tweed?
Shannon Tweed is 5 feet and 10 inches tall.
Results 1 - 10 of 14

, 1994. Cited by 480 (6 self). A heuristic method has been developed for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision, which is required for environment modeling (e.g., building a Digital Elevation Map). Objects are represented by a set of 3-D points, which are considered as the samples of a surface. No constraint is imposed on the form of the objects. The proposed algorithm is based on iteratively matching points in one set to the closest points in the other. A statistical method based on the distance distribution is used to deal with outliers, occlusion, appearance and disappearance, which allows us to do subset-subset matching. A least-squares technique is used to estimate 3-D motion from the point correspondences, which reduces the average distance between points in the two sets. Both synthetic and real data have been used to test the algorithm, and the results show that it is efficient and robust, and yields an accurate motion estimate.

, 1992. Cited by 274 (6 self). Introduction This work describes an approach to finding objects in images based on deformable shape models. Boundary finding in two and three dimensional images is enhanced both by considering the bounding contour or surface as a whole and by using model-based shape information. Boundary finding using only local information has often been frustrated by poor-contrast boundary regions due to occluding and occluded objects, adverse viewing conditions and noise. Imperfect image data can be augmented with the extrinsic information that a geometric shape model provides. In order to exploit model-based information to the fullest extent, it should be incorporated explicitly, specifically, and early in the analysis. In addition, the bounding curve or surface can be profitably considered as a whole, rather than as curve or surface segments, because it tends to result in a more consistent solution overall. These models are best suited for objects whose diversity and irregularity of shape make ...

- IEEE Trans. Med. Imag., 1996. Cited by 87 (11 self). The problem of obtaining a mathematical representation of the cortex of the human brain is examined. A parametrization of the outer cortex is first obtained using a deformable surface algorithm which, motivated by the structure of the cortex, is constructed to find the central layer of thick surfaces. Based on this parametrization, a hierarchical representation of the cortical structure is proposed through its depth map and its curvature maps at various scales. Various experiments on magnetic resonance data are presented. I. Introduction The problem of finding and parametrizing boundaries in two- and three-dimensional images is often an important step toward shape visualization and analysis, and has been extensively studied in the image analysis and computer vision literature. Several methods have been proposed, based both on bottom-up and top-bottom procedures. One very promising model which combines robustness to noise and the flexibility to represent a broad class of shapes is base...

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997. "... In this paper we present a new technique for partial surface and volume matching of images in three dimensions. In this ..."

- In Proceedings: BRL-CAD Symposium, Army Research Labs, 1995. Cited by 10 (10 self). We are currently developing an Automatic Target Recognition (ATR) algorithm to locate an object using multisensor data. The ATR algorithm will determine corresponding points between a range (LADAR) image, a color (CCD) image, a thermal (FLIR) image and a BRL/CAD model of the object being located. The success of this process depends in part on which features can be automatically extracted from the model database. The BRL/CAD models we have for this process contain more detail than can be productively used by our ATR algorithm and must be reduced to a more appropriate form. This paper presents algorithms we are developing in order to reduce the BRL/CAD models to a level of detail appropriate for the ATR algorithm. A secondary feature of these algorithms is to also maintain a parallel version with details sufficient to appear realistic when rendered. This rendering enables the ATR system to animate its search procedure for monitoring and debugging. Model reduction begins by converting the Co...

, 1995. Cited by 8 (5 self). OF THESIS OBTAINING 3D SILHOUETTES AND SAMPLED SURFACES FROM SOLID MODELS FOR USE IN COMPUTER VISION. Model-based object recognition algorithms identify modeled objects in a scene by relating stored geometric models to features extracted from sensor data. This process can be combinatorially explosive as the amount of information presented to the recognition algorithm increases. This thesis presents a method for extracting only relevant features from a stored three dimensional (3D) model in an attempt to reduce the difficulty of the recognition process. The development of the methods presented here was driven by the needs of the Automatic Target Recognition (ATR) algorithm being developed concurrently at Colorado State University (CSU). The ATR algorithm locates an object using multi-sensor data by determining the correspondence between a range (LADAR) image, a color image, a thermal (FLIR) image, and a Computer Aided Design (CAD) geometric model. The success of this process depends in ...

- In Proc. 12th Annual Symp. on Computational Geometry, 1996. Cited by 8 (0 self). In this paper we present a new technique for partial surface and volume matching of images in three dimensions. In this problem, we are given two objects in 3-space, each represented as a set of points, scattered uniformly along its boundary or inside its volume. The goal is to find a rigid motion of one object which makes a sufficiently large portion of its boundary lie sufficiently close to a corresponding portion of the boundary of the second object. This is an important problem in pattern recognition and in computer vision, with many industrial, medical, and chemical applications. Our algorithm is based on assigning a directed footprint to every point of the two sets, and locating all the pairs of points (one of each set) whose undirected components of the footprints are sufficiently similar. The algorithm then computes for each such pair of points all the rigid transformations that map the first point to the second, while making the respective direction components of ...

Cited by 4 (0 self). Abstract—Isometric surfaces share the same geometric structure, also known as the "first fundamental form." For example, all possible bendings of a given surface, which include all length preserving deformations without tearing or stretching the surface, are considered to be isometric. We present a method to construct a bending invariant signature for such surfaces. This invariant representation is an embedding of the geometric structure of the surface in a small dimensional Euclidean space in which geodesic distances are approximated by Euclidean ones. The bending invariant representation is constructed by first measuring the intergeodesic distances between uniformly distributed points on the surface. Next, a multidimensional scaling (MDS) technique is applied to extract coordinates in a finite dimensional Euclidean space in which geodesic distances are replaced by Euclidean ones. Applying this transform to various surfaces with similar geodesic structures (first fundamental form) maps them into similar signature surfaces. We thereby translate the problem of matching nonrigid objects in various postures into a simpler problem of matching rigid objects. As an example, we show a simple surface classification method that uses our bending invariant signatures. Index Terms—MDS (Multi-Dimensional Scaling), FMTD (Fast Marching Method on Triangulate Domains), isometric signature, classification, geodesic distance.

- Applied Intelligence, 1994. Cited by 3 (2 self). We present and compare two new techniques for Learning Relational Structures (RS) as they occur in 2D pattern and 3D object recognition. These techniques, Evidence-Based Networks (EBS-NNet) and Rulegraphs (RG), combine techniques from Computer Vision with those from Machine Learning and Graph Matching. The EBS-NNet has the ability to generalize pattern rules from training instances in terms of bounds on both unary (single part) and binary (part relation) numerical features. It also learns the compatibilities between unary and binary feature states in defining different pattern classes. Rulegraphs check this compatibility between unary and binary rules by combining Evidence Theory with Graph Theory. The two systems are tested and compared using a number of different pattern and object recognition problems. 1 Introduction In the context of Computer Vision, Relational Structures (RS) refer to the representation of patterns or shapes in terms of attributes of parts and part relations (s...

, 1996. Cited by 1 (1 self). Segmentation and labeling of X-ray medical images are nontrivial tasks. These images are complex with low contrast and low signal to noise ratio (S/N). Hence, edge detectors produce oversegmented elements. Grouping and organization of these elements is not straightforward, especially for deformable objects. Elastic models have been successfully used for matching slightly deformable objects. Their performance and precision can be heavily compromised when the disparity between the model and the image is large and when the image contrast is weak and variable. In this paper, we present a new approach to matching free form deformable models with X-ray images. Unlike elastic models, free forms (or wire-frame models) do not bias the boundary detection toward the initial state. We assume that objects can be assembled using a set of elementary 3-D free-forms (subobjects): planar, conic and cylindric. Multiple deformable curves (simple closed curves) are used to model the 2-D semitransparent pr...
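The registration loop in the first result above (iteratively match each point to the closest point in the other set, then solve a least-squares rigid motion) is the classic iterative-closest-point scheme. A minimal sketch of that loop, not the paper's implementation: brute-force nearest neighbors plus an SVD (Kabsch) least-squares step, with the paper's statistical outlier handling omitted.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t minimizing sum |R p_i + t - q_i|^2 (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def icp(P, Q, iters=20):
    """Move point set P onto Q by repeated closest-point matching."""
    for _ in range(iters):
        # Brute-force nearest neighbor in Q for every point of P.
        dists = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        matched = Q[dists.argmin(axis=1)]
        R, t = best_rigid_transform(P, matched)
        P = P @ R.T + t                          # apply the update (row vectors)
    return P
```

Each iteration's transform minimizes the distance to the current matches, and re-matching can only shrink the closest-point distances, so the mean registration error is non-increasing, which matches the "reduces the average distance between points in the two sets" behavior the abstract describes.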
Two spaceships in opposite directions at near c I really appreciate your effort drawing diagrams. Good job. Unfortunately I am not too happy with your text. The feeling is mutual. The way you explain it is as if the time dilation occurs due to the wider spacing of clock-tick dots on the clock worldline. Time dilation occurs due to the speed of a clock in a given Inertial Reference Frame (IRF). Just as speeds of objects are different in different IRF's, so are time dilations. The time dilations for each of the space ships/stations in the three IRF's I drew are different. I illustrate the time dilation by marking off equal ticks of each observer's clock with dots along the path of each observer. I think it's a great way to explain it and I think it's very easy to see and comprehend for a newbie, which is what I was trying to do. Until some newbies comment about this, we'll never know if I'm right or wrong. There is also a danger of interpreting your diagrams as if time dilation happens because of the taking into account of the lightbeams. Are you talking about Relativistic Doppler? Let me put it this way: You have drawn three different diagrams, but to see how time dilation works it is better to look at one diagram only with more information on it. I started out drawing just one diagram, not to show anything about time dilation but rather to show how light travels at c, not 2c like jartsa was promoting in post #27 as a way to explain SR to newbies. You need to read my first response to him in post #29 to understand the context of my first diagram in post #30. I later drew two more diagrams in response to K^2's request in post #31. In each of these, the signal going from the red spaceship at his 1-minute mark is received by the black spaceship at his 9-minute mark; the light signal takes a varying amount of time in each IRF but still travels at c in each of them, not at c relative to the spaceships.
This was the whole point of these diagrams in support of my comment to jartsa in post #29: In an Inertial Reference Frame (IRF) in which two spaceships are traveling in opposite directions at nearly the speed of light and one of them sends a light signal to the other one, the light signal travels at c relative to the IRF, not relative to either spaceship. (For simplicity I omitted what red says): Blue notices black (and red) time dilation because when his blue clock ticks 15 (not 13 as you wrote;)), in his BLUE frame (=3D spaceworld) the black and red clocks show 9. So far so good. First off, I did not make a mistake with regard to 13 versus 15. I was not talking about time dilation. I was talking about how long it took for the signal to get from the red spaceship to the black spaceship at the speed of light and I said "it took over 13 minutes according to the IRF". In the other two IRF's it took 4.5 minutes and 40 minutes. Light is not time dilated. I was not talking about time dilation. Please read my comments carefully before responding to them. And now to my main point: The blue spacestation cannot notice the black (or red) spaceship's time dilation. Time dilation is not observable by anyone anywhere anytime. It is a calculation related to the speed of an object in a given IRF. If it were ever observable, then the observer would know which arbitrary IRF we were using. Or, more significantly, if it were observable, then we could identify an absolute ether rest state and all of Special Relativity would be out the window. Now what black says: Black notices time dilation of blue time because "when his black clock ticks 9 seconds, IN HIS BLACK FRAME (=3D spaceworld) the blue clock is only at 5,399 sec. (9 / 1,6667). And still IN HIS BLACK 3D world the blue 'time dilation' means: red clock is only at 1,975 sec (9 / 4,5556).
Here you see clearly that the 'slowing down' of the blue and red clocks has NOTHING to do with the further spacing of blue or red dots relative to the black dots (red dots are equally spaced as black, blue dots are less spaced relative to the black ones!). Again, the black spaceship cannot notice the time dilation of the blue spacestation for the reasons I stated before. Just because I drew three IRF's in which one of the observers was at rest, you should not extrapolate that observer's observations to what is assigned by the IRF, such as the time dilation related to the speed of the other objects. I could just as easily have drawn another diagram in which none of the observers was at rest, for example one in which the black spaceship and the blue spacestation are traveling at the same speed in opposite directions. Then how would you explain time dilation? On a Loedel diagram (I'm now too lazy to make one) you would see that the time dilation has nothing to do with the further spacing of clock ticks on the worldline of a clock. Time dilation is about relativity of simultaneity. (And that's only possible if you consider all events out there as observer-independent entities, time indications included: a block universe.) Just in case you wonder what the reciprocal time dilation is for blue at 9 (see sketch below): in blue 3D space black clock is at 5,389. In blue 3D space when blue clock is at 5,399 the black clock is at 3,238 (= 5,399 : 1,667 ). 3D cuts through 4D block spacetime! I wasn't wondering and I have no idea what your diagram is attempting to convey. Maybe a newbie can explain it to me. Look, the reciprocal time dilation is very easily illustrated by looking at each of the IRF's for each observer and my text succinctly states what it is.
For example, in the first IRF, blue's rest frame (post #30), I state that gamma for red and black is 1.6667 and I show their dots spaced by that amount with respect to the coordinates which also happens to be with respect to blue since blue is stationary in this IRF. Then if you go to the next IRF, black's rest frame (the first small IRF in post #32), I state in the text that the time dilation for the spacestation (incorrectly identified as the spaceship) is 1.6667 and you can see the exact same spacing of the blue dots in this IRF as you do for the black dots in the first IRF. You can do the same thing for each of the other pairs of space ships/station.
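The gamma factors quoted throughout this thread are consistent with two ships each moving at 0.8c in opposite directions in the station frame; that speed is my inference from γ = 1.6667, not something stated in the posts. A short check of the arithmetic:

```python
import math

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.8                           # each ship's speed in the station frame (assumed)
g_station = gamma(beta)              # 1.6667: each ship's dilation in the station IRF

# Relativistic velocity addition gives each ship's speed in the other's frame;
# it is NOT 0.8c + 0.8c = 1.6c.
beta_rel = 2 * beta / (1 + beta**2)  # 0.9756c
g_rel = gamma(beta_rel)              # 4.5556: mutual dilation between the two ships

print(round(g_station, 4), round(beta_rel, 4), round(g_rel, 4))  # 1.6667 0.9756 4.5556
print(round(9 / g_station, 3), round(9 / g_rel, 3))              # 5.4 1.976
```

The last line reproduces the thread's "9 / 1,6667 = 5,399" and "9 / 4,5556 = 1,975" computations (to rounding), which supports reading the quoted factors as γ(0.8c) and γ of the relativistically added speed.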
Andy's Math/CS page Math for Little People I'm 22. There's a part of me that's asking: "Why haven't you proved any great theorems yet?" But I can't beat myself up too much over this--with Republicans in the White House, and with so much HBO magic happening lately, the odds are stacked against me. But there's a different kind of math anxiety creeping up: one of these days, I might have kids. Those kids are going to be curious about the world around them, and they're going to look to me for answers--at least when their mother isn't around. For the most part, I'll be happy to fend them off with a mix of idle speculation and propaganda. But... what if they ask why honeycombs have six sides? What if they want to know why scooting towards the middle of the see-saw makes you go up? What if they ask why the rubber chain-links on the swing don't ever come apart, if they're not 'connected'? Could I in good conscience give sloppy answers to questions I know involve beautiful math? I know the Feds wouldn't come and take them away; I know the kids won't be too disappointed; I know their appetite for rigorous proof will come slowly, if at all. But could I live with myself? No, I've got to get a better grasp of the 'basic facts' of life in our spatial world before I help bring someone new along, if only for my own sake. But what are these? It's up for grabs, but here's the category I've had in mind mostly: qualitative features of structures, arrangements, and movement, observable by 'medium-sized' agents like children, and generated by/consistent with a naive 'block-world' understanding of matter and physical law (not referring to the AI environment of that name, but in a similar spirit). This is the conceptual space in which I've lived most of my life (and thought about discrete math and computer science), and if it's good enough for me it's good enough for my kids. Molecules and lightning bolts I won't sweat so much, but Block World I want to get right. So then... 
how to pick out, think about, and explain its most important facts? Some of them are, I think, topological, as I've alluded to with the swing example. But kids won't swallow homotopy theory any more than they'll eat lima beans, even if I can still remember it by then. Anyways, there's a potential save here: Block World is basically discrete, so the space of 'topological' deformations of objects is much smaller and more well-behaved (I've already posted on discretization in topology). A good place to start would be the Jordan Curve Theorem, famed for its difficulty in spite of its surface obviousness: what could be clearer than the fact that a non-self-intersecting loop in the plane divides the plane into two components, 'inside' and 'outside'? As one discrete formulation, consider the loop as a path on the grid, where each step in any direction is of even length. Then it appears that the inside is path-connected, also by grid paths (without the even-length restriction), and separated from the outside. For example: I think this problem is excellent fodder for thought, and I would encourage anyone who's been scared away by the Theorem in the past to work on it in this friendlier setting. No hints, though--would your kids respect you? (Clarification for concerned parties: there are no children on the horizon; I do believe in responsible parenting; and I don't get HBO, although I do swear by Curb Your Enthusiasm) Labels: general math, puzzles 5 Comments: • I'm 22. There's a part of me that's asking: "Why haven't you proved any great theorems yet?" I'm 25. I enjoy math, but I only minored in it. Now I'm thinking it would be nice to go back to school and become a mathematician. Part of me says this will work out well, whether or not I ever do any "important" work. Another part of me is telling me not to waste my time because I'm too old. Would I be making a horrible mistake? I don't know anyone who can advise me on this. Obviously it's a pretty major decision.
By , at 1:44 PM • Hi anon, going back to school is a major decision. But it's not necessarily a binary all-or-nothing decision, nor does going back to school require such a decision. You could potentially take a year of additional school as -an investigation into the field, your interest in it, and your aptitude; -an investment in your overall intellectual growth/capital; -a hedonistic act. By any of these lights it would seem that trying grad school is unlikely to be a 'horrible mistake'. I have no scientific data on age & achievement in math, but I seriously doubt being 25 is a fundamental barrier to entry, especially if you enjoy math and minored in it. It's hard to give you more advice--I would need to know more about you. But the most basic advice I have is, expose yourself to math and give it a chance to grow in your life. If it grows, follow up with school etc. In my view, the most fertile and authentic mathematical experiences are active ones. Regularly solving problems (even very modest ones) empowers, while reading lay treatments of the most advanced theories and profiles of famous mathematicians can engender feelings of envy and inadequacy. That's not to say one shouldn't try to get a sense of the larger field and its successes; but personal involvement is in my view key both to success and to insight into one's own abilities and tastes. This is why I post puzzles on my page. Hope this helps, and email me if you'd like more specific advice. By Andy D, at 4:09 PM • Hi, thanks for your comments. I will follow your advice. I was relieved that you didn't say 25 is too old, which I was half-expecting you to say. I think you're quite right about lay treatments being discouraging (aside from "Men of Mathematics", which is how I got turned on to math in the first place). Maybe they are the reason I'd come to believe that I was too old. Also, when I posted my comment, I'd just read a very discouraging essay about mathematicians by alfred adler. 
Choice quotes:

"The mathematical life of a mathematician is short. Work rarely improves after the age of twenty-five or thirty. If little has been accomplished by then, little will ever be accomplished."

"Each generation has its few great mathematicians, and mathematics would not even notice the absence of the others. They are useful as teachers, and their research harms no one, but it is of no importance at all. A mathematician is great or he is nothing."

Discouraging stuff! I will make an effort to ignore it.

By , at 2:23 AM

• By the way, I've also read Adler's essay and can report that he has no idea what he's talking about (in my experience as a professional mathematician). He's done an unusually large amount of damage to the mathematical community by spreading discouraging rumors. He's right that few mathematicians do such incredibly innovative work that if they hadn't lived, nobody else would have thought of it for a long time, but so what? Just because somebody else would have made the same discovery sooner or later, it doesn't mean your work isn't valuable. There are thousands of mathematicians making fundamental, important research contributions, even though only a handful stand out as geniuses.

It's also true that if you spend ten years trying to do research but end up with only minor accomplishments, then you are not likely to have great accomplishments in the future (although there are exceptions). So if you work on research from ages 25 to 35 without much success, it's probably not your calling. On the other hand, if you reach 35 without having had an opportunity to do much research, this doesn't really limit your future success. Eventually old age will catch up with you - few 80-year-olds can compete with their former 20-year-old selves intellectually - but 35 is far from old. 25 isn't old at all, so you shouldn't consider age a barrier. The only serious disadvantage to getting an older start is the tenure system.
Let's say you start graduate school at 26, and spend six years there (the average is probably around five, but let's add a year since you may need to fill in some background). At age 32 you will either get a tenure-track job or a two to three-year postdoc (the latter if you are aiming for a job at a top research university, the former otherwise). Your tenure case will be decided about six years after you start a tenure-track job. So the net effect is that you may well not have a job until age 32 or a permanent job until age 38-41. In the meantime, you may be getting married, starting a family, etc. For some people, this is no big deal; for others, it's a real issue. My impression is that this is the main thing to keep in mind regarding the effect of age on starting a Ph.D. program.

By the way, one temptation may be to start a masters program and then transfer to a Ph.D. program if it goes well. At some schools this is a common option, but it is often better to start a Ph.D. program and then leave with a masters degree if you don't like it (nobody will ever know from your CV that your plans changed). Math faculty often treat Ph.D. students better, and it is easier for a Ph.D. student to get a fellowship to help pay for graduate school.

By , at 9:10 PM
Here's the question you clicked on:

Jaden has designed a pair of slides, BA and BD, for a theme park. He plans to add another slide BC so that angle ABD is equal to angle DBC, as shown below.

Part 1: Using complete sentences, describe the steps used to construct the third slide.

Part 2: Compare and contrast the method of constructing the slide versus sketching the slide.
Fundamentals of Programming using C++ by Baldwin

Floating-point types

Floating-point types are a little more complicated than whole-number types. I found the following definition of floating-point in the Free On-Line Dictionary of Computing at this URL:

"A number representation consisting of a mantissa, M, an exponent, E, and an (assumed) radix (or "base"). The number represented is M*R^E where R is the radix - usually ten but sometimes 2."
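To make the quoted definition concrete, here is a minimal sketch (in Python rather than the chapter's C++, using the standard-library `math.frexp` and `math.ldexp` functions) that decomposes numbers into a mantissa M and exponent E with radix R = 2:

```python
import math

# Decompose floating-point values into mantissa M and exponent E
# such that value = M * 2**E (radix R = 2, as in the definition above).
for value in [6.0, 0.15625, 1.0]:
    m, e = math.frexp(value)          # m lies in [0.5, 1.0), e is an integer
    assert value == math.ldexp(m, e)  # ldexp reassembles M * 2**E exactly
    print(f"{value} = {m} * 2**{e}")
# 6.0 = 0.75 * 2**3
# 0.15625 = 0.625 * 2**-2
# 1.0 = 0.5 * 2**1
```

The same decomposition underlies C++'s `std::frexp`/`std::ldexp` in `<cmath>`.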
Compound interest

November 7th 2008, 12:12 AM #1

Lily invests $2500 at simple interest for 9 months. The amount that she will receive is $2650. If she invests $2500 at the same rate for 9 months but compounded monthly, how much more interest would she receive?

The interest $I$ that Lily gained by the simple interest is: $I = 2650-2500 = 150$

Recall that simple interest is $I = Prt$. Given principal P, time t, and interest I, you can solve for rate r. Note that the rate here is per month, so you may have to multiply by 12 to switch it to annual for convenience.

Recall that the compounded amount formula is: $A = P\left(1+\frac{r}{m}\right)^{mt}$

Simply find the compounded amount and subtract the principal from it to find the interest. Remember to switch the time to years. Then find out how much more interest she would receive.
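Carrying out the suggested steps numerically (a sketch; keeping the rate per month so the compounding formula uses 9 monthly periods directly):

```python
P = 2500.0                        # principal
I_simple = 2650.0 - P             # simple interest earned: 150
r_month = I_simple / (P * 9)      # I = P*r*t with t = 9 months -> monthly rate
A = P * (1 + r_month) ** 9        # amount compounded monthly for 9 months
extra = (A - P) - I_simple        # additional interest from compounding
print(round(A, 2), round(extra, 2))   # 2654.06 4.06
```

So compounding monthly at the same rate yields about $4.06 more interest than the simple-interest investment.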
Online Calculators & Conversion Tables Look here for information about measurement systems, units, and related topics, as well as online calculators and software for unit conversions. You'll also find calculators for pH, temperature, molecular weight, and other chemistry figures. Metric Unit Prefixes Metric units of measurement are all based on units of ten. Here is a list of the most common metric unit prefixes. Temperature Converter Here's a handy online converter for temperatures. Simply enter your known temperature (Kelvin, Celsius, Fahrenheit) and the other values will appear. It's much easier than trying to remember the formulas or do the math! Basic Online Calculator Sometimes you just need to use a quickie calculator! Here's one for addition, subtraction, multiplication, and division. Unit Conversion Worksheets Use these printable worksheets to practice unit conversions. These pdf questions and answers will test your understanding of metric-metric, metric-English, and temperature conversions. Balancing Chemical Equations Here is a step-by-step tutorial for balancing chemical equations, along with a worked example. This is a must-read for students of general and introductory chemistry! Calculating Concentration Do the units for solution concentration confuse you? Get definitions and examples for calculating percent composition by mass, mole fraction, molarity, molality, and normality. I've also included a bit of information on dilutions. Chemistry Problems - Worked Examples This is an ever-growing collection of worked chemistry problems. The examples are grouped according to subject matter. Constants, Prefixes, Conversion Factors This is a set of tables with some useful physical constants, conversion factors, and unit prefixes. They are used in many calculations in chemistry, as well as in physics and other sciences. Area, Perimeter & Volume Formulas Perimeter, surface area and volume formulas are used for many chemistry calculations. 
You may need to find surface area and volume to determine density and concentration, for example. While it's a good idea to memorize these formulas, here is a list of formulas to use as a handy reference. Mean or Average - Calculate the Mean It's very important to know how to calculate the mean or average of a set of numbers. Among other things, this will allow you to calculate your grade point average. However, you'll need to calculate the mean for several other situations, too. Measurements & Conversions Quiz Take a multiple choice quiz to test your comprehension of units, conversions, and significant figures. Metric to Metric Conversions - Unit Cancelling Method Here is a step by step example of a conversion between metric units. This illustrates how to cancel units to convert measurement units. Metric Prefixes Quiz How well do you know your metric prefixes? Quiz yourself with this ten question multiple choice self-test. Perimeter and Surface Area Formulas Perimeter and surface area formulas are part of the math used in common science calculations. While it's a good idea to memorize these formulas, here is a list of perimeter, circumference and surface area formulas to use as a handy reference. Scientific Notation Scientific notation uses exponents to express numerical figures. Here's an explanation of what scientific notation is, plus examples of how to write numbers and perform addition, subtraction, multiplication and division problems using scientific notation. Significant Figures in Measurements and Calculations This article discusses the use of significant figures in taking measurements and performing calculations. Learn about significant figures, uncertainty, accuracy, precision, rounding, and truncating. Losing significant figures and effects of exact numbers are also described. Surface Area and Volume Formulas Surface area and volume formulas are part of the math used in common science calculations.
You may need to calculate surface area and volume to determine density, pressure and concentration, for example. While it's a good idea to memorize these formulas, here is a list of surface area and volume formulas to use as a handy reference. Table of Physical Constants Need a value for a fundamental physical constant? This handy reference table contains commonly used physical constants used in chemistry. Thermometer with Celsius and Fahrenheit Degrees This thermometer is labelled with both Fahrenheit and Celsius degrees. Use it to compare the Fahrenheit and Celsius temperature scales or to look up conversions between them. Why Is lb the Symbol for Pounds? Have you ever wondered why lb is used as the symbol for the pounds unit? Here's the answer to the question. 1976 Standard Atmosphere Calculator These are online conversions for altitude, temperature, pressure, density, and speed of sound. Many different units are available. ASCII Character Codes The Indiana University Knowledge Base provides this table of decimal and hexadecimal equivalents for ASCII characters. ChemCalc - Calculate a Molecular Formula Enter a chemical formula and ChemCalc will calculate its molecular mass, exact mass, and elemental analysis, and will plot an isotopic distribution graph. In addition to the calculator, the site offers FAQs, contact information, mailing lists, a table of elements and their atomic mass, and a section on known atom groups. Chemistry Calculators There are online grams-to-moles, moles-to-grams, temperature, and ideal gas converters. Conversion and Calculation Center ConvertIt.com's online calculators can perform unit, time zone, and currency conversions. There is also a reference section for geographical locations. Conversion Formulas The US Geological Survey provides this table of common metric to English conversions. Conversion of Units The Institute of Chemistry, Berlin, provides this easy to use online calculator for a host of different units.
You simply input the starting value and unit and select the desired result. Conversion Table for Cooking Botanical.com provides these English-to-metric and metric-to-English conversion tables for weights and measures. On-screen conversions transform one international unit to another – including time zones. Decimal - Hexadecimal - Binary Conversion Table From decimal 0 to decimal 255. Distillation, Vapor Pressure, and Equilibria Tables and calculations involving vapor-liquid equilibria. From Shuzo Ohe, Science University of Japan. eFunda: Units and Constants A lot of information about units and unit conversion; also includes constants, information about SI, and unit prefixes. Fundamental Physical Constants From NIST, latest values of the constants and background information. Grit and Microgrit Grading Conversion Chart Convert from grit to mesh to microns to inches. From Reade Advanced Materials. How Many? From Russ Rowlett, University of North Carolina, a comprehensive dictionary of units of measurement covering the English customary systems, the metric system, and the International System of Weights and Measures (SI). International System of Units From NIST, essentials of the SI, background, and bibliography. Internet French Property's Conversion Table Converts metric, imperial, and US measurements. MegaConverter 2 This is an online converter for angles, area, astronomical distance, density, energy, financial interest, force, fractions-to-decimal, heat index, kitchen measures, length, mass, prefixes, power, pressure, resistance, speed, temperature, time, viscosity, volume, weight, wind chill... you name it! Metric/Imperial Converter Includes length, mass, and volume conversions. Molecular Structure Calculations This site will calculate molecular properties (bond lengths, angles, atomic charges, dipole moment, bond orders, molecular orbital energies) and the best Lewis structure that fits the molecular orbitals.
The Lewis structure is given with formal electron pair localized bonds and the hybridization of the atomic orbitals used to form these bonds. The calculations take 1-2 hours to perform. Nephron Information Center Easily convert mg/dL to mEq/L for a variety of ions and important biomolecules. Notes on Acids and Bases Short descriptions with examples and a Java applet that calculates [H+] and pH. From Gwen Sibert, Roanoke Valley Governor's School. Online Metric Converter Science Made Simple provides this program, which performs English to metric conversions for area, length, pressure, stress, speed, temperature, time, volume, weight, and fruit (I had to look that one up... it's apples and oranges). This site does most conversions, with over 5,000 different units and 30,000 conversions. More unusual choices include clothing size conversions, light units, and number bases. pH Calculator Converts from hydrogen ion concentration to pH and vice versa. From Henry Bungay, Rensselaer Polytechnic Institute. Physical Reference Data SI units, atomic weights, ground states, atomic spectra, nuclear physics data. From National Institute of Standards and Technology. Pressure Conversion Calculator Interconvert Pa, torr, mm Hg, atm, lb/in2 and more. From the American Vacuum Society. SI Units Conversion Table Supports unit conversions from non-SI to SI. From TechExpo. A freeware units conversion program specifically for engineers, runs under Windows 9x/NT, from Katmar Software. Uncertainty of Measurement Results From NIST, essentials of expressing measurement uncertainty. Universal Currency Converter It may not be universal, but it is certainly worldwide. US Metric Association Promoters of US transition to metric system. VassarStats: Statistical Computation Website This is the site for you if you are calculating probabilities, distributions, frequencies, proportions, correlations, regressions, t-tests, ANOVA, ANCOVA, or are performing other statistical tests. 
Web Pages that Perform Statistical Calculations For computing probabilities, regression analysis, analysis of variance, and other statistical tests. From John C. Pezzullo. World Wide Metric Co. Inc. On-screen conversion calculators for length, weight, pressure, and volume. WWW Unit Converter Applets deliver on-screen conversions. From Jan Derk Stegeman.
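As a quick illustration of the kinds of conversions the tools above perform, here is a minimal sketch using the standard textbook formulas (temperature scales and pH from hydrogen-ion concentration):

```python
import math

def c_to_f(celsius):
    """Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

def k_to_c(kelvin):
    """Kelvin to Celsius: C = K - 273.15."""
    return kelvin - 273.15

def ph(h_concentration):
    """pH from hydrogen-ion concentration [H+] in mol/L: pH = -log10[H+]."""
    return -math.log10(h_concentration)

print(c_to_f(100))      # 212.0 (water boils)
print(k_to_c(273.15))   # 0.0
print(ph(1e-7))         # 7.0 (neutral)
```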
[Getdp] User defined source

Kubicek Bernhard Bernhard.Kubicek at arsenal.ac.at
Tue Oct 17 10:42:52 CEST 2006

Hi Amit, maybe I can help you there in some way. We have the problem that we need to import a three-dimensional scalar field into GetDP, to do further calculations with it in a formulation (e.g. some for_v). Considering that one already has the temperature nodal values in the sorting that the mesh in GetDP uses, one can do the following: Using the additions at the end of this mail to the normal formulation "for_v", one will find a hell of a lot of values 12345678 in the .pre file after an initial sole pre-processing step. One moves the file to a new filename, .ori.pre. Now, using a self-programmed tool, one can replace the "funny" 12345678 values with the corresponding temperature values, as they have the same sorting as the mesh nodes, and recreates a .pre file. Then one invokes the calculation and dances happily around his computer. I understand that this can be considered as ugly as ugly can be, however it works.

Otherwise, there exists an undocumented function ReadSolution (to be found in the source code), where one would need to supply a self-created solution file. In your case, I would propose you create a mesh file without Gmsh, directly from your hexahedral mesh (being very careful about the volume orientation and the nodal numbering) using self-programmed software. Although these methods require a lot of hand-work, and although GetDP has a quite steep learning curve, I still recommend using it. We are very happy about it, and have experienced calculation times about 70 times shorter than a similar calculation using FEMLab (transient 2D simulation of electron convection/dissipation within electric fields)...
Nice greetings from Vienna,

Constraint {
  { Name Constr_T ; Case {
      { Region Vol; Value 12345678;}

FunctionSpace {
  { Name fsT ; Type Form0 ;
    BasisFunction {
      { Name sn ; NameOfCoef vn ; Function BF_Node ; Support Vol ; Entity NodesOf[ All ] ; }
    Constraint {
      { NameOfCoef vn; EntityType NodesOf ; NameOfConstraint Constr_T; }

Formulation {
  { Name dummyT; Type FemEquation ;
    Quantity {
      { Name T ; Type Local ; NameOfSpace fsT ; }
    Equation {
      Galerkin { [ Dof{ T} , {T} ] ; In Vol ; Jacobian Vol ; Integration Int ; }

//Never Executed
  { Name for_v; Type FemEquation ;
    Quantity {
      { Name T ; Type Local ; NameOfSpace fsT ; }
    Equation {
      ... //use {T}

{ Name all;
  System {
    { Name T ; NameOfFormulation dummyT ; DestinationSystem A1;}
    { Name A1 ; NameOfFormulation for_v ; }
  Generate[A1] ; Solve[A1] ; SaveSolution[A1] ;

-----Original Message-----
From: getdp-bounces at geuz.org [mailto:getdp-bounces at geuz.org] On behalf of Amit.Itagi at seagate.com
Sent: Tuesday, October 17, 2006 00:02
To: getdp at geuz.org
Subject: [Getdp] User defined source

I am trying to solve the heat equation in a 3D geometry. My thermal source is a result of optical power dissipation in a lossy material. This source term is calculated on a Cartesian grid using a different program. How can I import this source into my getdp problem?

getdp mailing list
getdp at geuz.org
More information about the getdp mailing list
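The placeholder-substitution step described in the email can be sketched as a small script (a sketch, not part of GetDP itself; the file names are assumptions, while the marker value 12345678 is the one the email uses):

```python
import re

PLACEHOLDER = "12345678"

def fill_pre_file(pre_text, nodal_values):
    """Replace the i-th occurrence of the placeholder in a GetDP .pre
    file with the i-th nodal value (same ordering as the mesh nodes)."""
    values = iter(nodal_values)
    return re.sub(re.escape(PLACEHOLDER), lambda m: repr(next(values)), pre_text)

# Usage sketch (file names are hypothetical):
# text = open("problem.ori.pre").read()
# open("problem.pre", "w").write(fill_pre_file(text, temperatures))
```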
Which Moments to Match?

A. Ronald Gallant and George Tauchen. Econometric Theory, Vol. 12, No. 4 (Oct. 1996), pp. 657-681.

We describe an intuitive, simple, and systematic approach to generating moment conditions for GMM estimation of the parameters of a structural model. The idea is to use the score of a density that has an analytic expression to define the GMM criterion. The auxiliary model that generates the score should closely approximate the distribution of the observed data but is not required to nest it. If the auxiliary model nests the structural model then the estimator is as efficient as maximum likelihood. The estimator is advantageous when expectations under a structural model can be computed by simulation, by quadrature, or by analytic expressions but the likelihood cannot be computed easily.
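A toy illustration of the idea (a sketch, not the authors' implementation; the trivial Gaussian setup and all names are mine): fit an auxiliary model N(mu, 1), whose score in mu at a point x is (x - mu), to the observed data, then pick the structural parameter so that the mean auxiliary score over data simulated from the structural model is as close to zero as possible.

```python
import random
import statistics

random.seed(0)
# "Observed" data drawn from a structural model N(theta*, 1) with theta* = 2.0.
observed = [random.gauss(2.0, 1.0) for _ in range(2000)]

# Auxiliary model N(mu, 1); its MLE on the observed data is the sample mean.
mu_hat = statistics.fmean(observed)

def criterion(theta, n_sim=5000, seed=1):
    """Squared mean auxiliary score (x - mu_hat), evaluated over data
    simulated from the structural model at a candidate theta."""
    rng = random.Random(seed)              # common seed -> smooth criterion
    sim = [rng.gauss(theta, 1.0) for _ in range(n_sim)]
    score = statistics.fmean(x - mu_hat for x in sim)
    return score * score

# Crude grid search; the minimizer should sit near theta* = 2.0.
theta_hat = min((criterion(t / 100), t / 100) for t in range(100, 301))[1]
print(theta_hat)
```

In this trivial case matching the auxiliary score reduces to matching the mean; the appeal of the method is that the same recipe works when the structural likelihood is intractable but simulation is cheap.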
'Impossible' Form of Matter Takes Spotlight In Study of Solids By MALCOLM W. BROWNE Published: September 5, 1989 QUASICRYSTALS, a puzzling form of solid matter regarded as impossible until five years ago, have now moved to center stage in a worldwide investigation into the nature of solid matter. Theorists and experimenters meeting at an international quasicrystal conference in Greece this week will seek to interpret some remarkable recent discoveries, including a new family of quasicrystalline metal alloys that are the most perfect quasicrystals yet developed. Scientists believe some of these new materials will have peculiar properties likely to find uses in electronics and other technologies. Quasicrystals, for example, might permit special-purpose computer components to respond to magnetic fields in ways not possible with conventional semiconductors. ''This is a great intellectual adventure for physicists and mathematicians,'' said David R. Nelson, a Harvard University theorist. ''Quasicrystals are a delightful new toy for us, and part of the fascination stems from the fact that quasicrystals can evidently assume an infinite number of types.'' Quasicrystalline matter is a category intermediate between the two types of solids traditionally recognized by physicists: the crystals and the glasses. Quasicrystal is a shortened form of the more technical term quasi-periodic crystals. According to classical theory, which until 1984 had remained unchallenged for nearly two centuries, all solids were believed to consist either of crystals or glass. Crystals are three-dimensional frameworks of atoms bound together by electrons in such a way that the same patterns of atoms are identically repeated throughout an entire crystal. Typical crystals are those of table salt, in which sodium and chlorine atoms alternate in a perfectly regular cubic lattice, a kind of three-dimensional chessboard, with one atom at each corner of every square. 
In the solids known as glasses, which include special forms of metal and other minerals as well as common window glass, there is no ordered structure; atoms are jumbled together in chaotic disorder. By contrast with true crystals and glasses, quasicrystals contain atoms in ordered arrays, but the patterns they assume are subtle and do not recur at precisely regular intervals. Crystallographers were astonished to discover that quasicrystals exhibit a quality called ''fivefold symmetry.'' This means that if a quasicrystal is rotated in an X-ray beam, symmetrical X-ray scattering patterns recur five times with each complete rotation. This had been considered impossible. To create a solid exhibiting fivefold symmetry is equivalent to using five-sided tiles - regular pentagons - to cover a floor. Unlike rectangles, triangles and hexagons, regular pentagons cannot be fitted together to cover a floor without leaving gaps. By analogy, it was believed, a perfectly filled crystal could never be made using icosahedral (20-sided) clusters of atoms exhibiting fivefold symmetry. But in 1984, theorists and experimenters, working independently, exploded this assumption. At the National Bureau of Standards, now the National Institute of Standards and Technology, in Gaithersburg, Md., Dr. Dany Schechtman, a visiting Israeli scientist, stunned colleagues when he discovered that an alloy of aluminum and manganese exhibited the supposedly impossible fivefold symmetry. At almost the same time, Paul J. Steinhardt, a theorist at the University of Pennsylvania, and his collaborators, discovered a scheme by which just such a crystal might be assembled. The plan was based on the mathematics of ''tiling,'' the fitting together of regular geometric forms to cover a surface. Since then, both theoretical and experimental research have put quasicrystals on a solid scientific footing. The first quasicrystalline alloy discovered by Dr.
Schechtman, which was named ''schechtmanite'' in his honor, proved to be only the first such alloy in a long series. Mixtures of aluminum with copper, iron, lithium and ruthenium have produced quasicrystalline alloys with even more interesting properties than schechtmanite. A gallium-based group of quasicrystalline alloys containing magnesium and zinc that exhibit particularly striking quasicrystalline characteristics is under study at Harvard. According to David P. DiVincenzo, a physicist at I.B.M., an aluminum-copper-iron alloy with the formula Al65Cu20Fe15 recently discovered at I.B.M. by Peter A. Bancel appears to be a ''perfect'' quasicrystal - that is, its atomic irregularities, if any, cannot be detected by standard X-ray techniques. The first quasicrystalline alloy was created by chilling a molten mixture of aluminum and manganese very rapidly. But it has since been found that much better quasicrystals can be made by cooling molten mixtures extremely slowly, thereby giving their constituent atoms time to find appropriate positions in the lattice structure. Theory on Superconductors Theorists speculate that because of the patterns of electron bonds holding them together, some quasicrystals may become superconductors at very low temperatures. Their lattice structures, expected to be more rigid than those of ordinary crystals, make it probable that many quasicrystals will prove to be harder than steel, and potentially useful for making super-hard tools. But for the present, scientists are mainly concerned with understanding the electronic characteristics that may result from quasi-periodic arrays of atoms. The mathematical tiling theory underlying the latest research in quasicrystals developed rapidly in the 1970's because of the work of Roger Penrose, a renowned mathematician at Oxford University in England. Dr.
Penrose showed that by laying two types of rhombus-shaped tiles according to certain rules, a floor could be completely covered, leaving no gaps or overlapping tiles, and creating patterns that never exactly repeat themselves. Such patterns are called quasi-periodic. Physicists at Harvard University, the I.B.M. Thomas J. Watson Research Center at Yorktown Heights, N.Y., the University of Pennsylvania and other institutions have discovered various theoretical patterns by which nature may mimic Penrose tiling schemes in real crystals. The ''tiles'' (or geometric units) discovered by Dr. Penrose are of two types, ''skinny'' and ''fat'' rhombuses, which are used in combination to form patterns. All four sides of both types have identical lengths, but the corners form different angles; the corner angles within a fat rhombus must be 72 degrees and 108 degrees, while those of a skinny rhombus are 36 and 144 degrees. The sides of the two types of rhombus may be joined only by certain rules. Link to 'Golden Mean' Penrose tiling has another characteristic that fascinates mathematicians and architects: it exhibits a feature known to the ancient Greeks as the ''golden mean,'' a ratio that has been used in paintings, sculpture and architecture through the ages. The golden mean governs the proportions of the Parthenon and many other classical buildings. The ratio, as applied to artistic shapes and structures, is roughly equal to the ratio of lengths of the human body as divided at the navel, and is regarded as particularly pleasing to the eye. Scientists recently discovered that the golden mean also describes important characteristics of quasicrystals. The golden mean can be approximated by dividing a straight line into two parts, one larger than the other. 
The ratio of the shorter part to the longer part must exactly equal the ratio of the longer part to the entire line, and in both cases, this ratio is approximately 1 to 1.618034..., an ''irrational number'' whose decimals extend to infinite length without repeating. One property of a mathematical Penrose tiling scheme is that it incorporates fat and skinny rhombuses in exactly the ratio expressed by the golden mean. Scientists have discovered that this mathematical relationship probably has profound effects on the properties of real quasicrystals. Julian D. Maynard of Pennsylvania State University and his graduate student, Shanjin He, recently succeeded in simulating the electronic properties of quasicrystals using an array of 150 musical tuning forks. Tuning-Fork Experiment The scientists, whose achievement was recently reported in the journal Physical Review Letters, first built a base, made of aluminum, and inscribed on it a typical Penrose tiling pattern of ''fat'' and ''skinny'' rhombuses. At the center of each rhombus they mounted a tuning fork with a frequency (440 hertz) corresponding to the note A above Middle C. Steel wire was then welded to the tuning forks in such a way that each tine was linked to two tines of neighboring tuning forks. This acoustically linked all the tuning forks in the system. The investigators then placed an electromagnet next to one tine to set the tine vibrating at a succession of different frequencies. Electric guitar pickups were positioned randomly next to four other tines in the array, to sense the intensity and pitch of the sounds the tines emitted. The apparatus was thus able to measure the acoustical resonances and interactions of the entire tuning fork system, in much the same way that electronic sensors would measure the electronic resonances of a quasicrystal. Dr. 
Maynard found that certain frequencies created by the electromagnet resulted in collective resonances in all the tuning forks, while other frequencies produced no resonances. By plotting a graph of these resonances over a wide range of frequencies, he found that the width of gaps between resonating frequencies occurred in ratios exactly corresponding to the golden mean. ''The effect we saw was acoustical,'' he said, ''but it is analogous to the spectral structures you would see for electrons in quasicrystals. Our technique may have potential in predicting electronic characteristics of yet-to-be-created quasicrystals that might be useful as electronic components.'' Some theorists believe such properties may exhibit ''fractal'' behavior, in which basic patterns are infinitely repeated at all scales, from the infinitely small to the infinitely large. For example, Dr. DiVincenzo of I.B.M. said in an interview, the resistance of a quasicrystal to an electrical current might change in fractal steps when the crystal is exposed to a magnetic field of changing intensity. ''The truth is,'' Dr. DiVincenzo said, ''we're not sure what we'll find as we go along, and the prospect of encountering surprises is what makes quasicrystals so attractive these days.''

Using Acoustics to Measure Electronic Properties

To penetrate the mysteries of quasicrystals, scientists inscribed an aluminum plate with a pattern of rhombuses analogous to the arrays of atoms in a quasicrystal. They mounted a tuning fork in the center of each rhombus and welded steel wires connecting each tine of each fork to two neighboring tines, linking the whole network. An oscillating electromagnet forced one of the tuning-fork tines to vibrate, causing resonance throughout the system. The resonance changed as the frequency of the electromagnet changed. The resulting spectrum of sound and silence might prove useful in predicting the electronic characteristics of real quasicrystals.
The Building Blocks of Normal Crystals and Quasicrystals

In conventional crystals, atoms are arranged in regular, repetitive patterns analogous to those formed by tiles covering a floor. Rectangular or hexagonal tiles, like those at the right, can easily form regular patterns leaving no gaps. In quasicrystals, however, groups of atoms form patterns that never exactly repeat themselves. The mathematician Roger Penrose discovered two rhombuses, shown above, that can be laid out by special rules to "tile" any surface completely, with no repeating patterns. Penrose tiles must be laid with matching edges, marked above with single or double arrows. Real quasicrystal edges may not match perfectly. A typical Penrose tiling pattern, above, made up of two basic rhombuses. Although similarities are visible, patterns never exactly repeat. Quasicrystals are believed to embody similar arrays of atoms. Diagrams (pg. C1 & C9)
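As an aside (not part of the Times article), the defining property of the golden mean quoted above, that the short-to-long ratio equals the long-to-whole ratio, can be checked in a few lines of Python:

```python
import math

# Golden mean: the positive solution of x^2 = x + 1.
phi = (1 + math.sqrt(5)) / 2

# Split a unit line into a long part 1/phi and a short part 1 - 1/phi.
long_part = 1 / phi
short_part = 1 - long_part

ratio_long_short = long_part / short_part  # longer : shorter
ratio_whole_long = 1 / long_part           # whole : longer

print(phi)               # 1.618033988749895, the article's 1.618034...
print(ratio_long_short)  # the same value, as the self-similarity requires
```

Both ratios come out equal to phi itself, which is exactly the self-similar property the article describes.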
Math Tools Discussion: All in Calculus for Fundamental Thrm. of Calculus

Topic: Hard to figure out...
Related Item: http://mathforum.org/mathtools/tool/1004/

Subject: Hard to figure
Author: Chuck
Date: Feb 25 2004

I agree. This applet begins as a proof of one part of the FTC, exactly the way I do it in my classes. But then leaves the student hanging. I suspect this applet is associated with some text.
Return the Kaiser window.

The Kaiser window is a taper formed by using a Bessel function.

Parameters
    M : int
        Number of points in the output window. If zero or less, an empty array is returned.
    beta : float
        Shape parameter for window.

Returns
    out : array
        The window, normalized to one (the value one appears only if the number of samples is odd).

The Kaiser window is defined as

    w(n) = I0(beta * sqrt(1 - (2*n/(M-1) - 1)^2)) / I0(beta),   0 <= n <= M-1,

where I0 is the modified zeroth-order Bessel function.

The Kaiser window was named for Jim Kaiser, who discovered a simple approximation to the DPSS window based on Bessel functions. The Kaiser window is a very good approximation to the Digital Prolate Spheroidal Sequence, or Slepian window, which is the window that maximizes the energy in the main lobe of the window relative to total energy.

The Kaiser window can approximate many other windows by varying the beta parameter.

    beta   Window shape
    0      Rectangular
    5      Similar to a Hamming
    6      Similar to a Hanning
    8.6    Similar to a Blackman

A beta value of 14 is probably a good starting point. Note that as beta gets large, the window narrows, and so the number of samples needs to be large enough to sample the increasingly narrow spike, otherwise NaNs will get returned.

Most references to the Kaiser window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means "removing the foot", i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function.
>>> from numpy import kaiser
>>> kaiser(12, 14)
array([  7.72686684e-06,   3.46009194e-03,   4.65200189e-02,
         2.29737120e-01,   5.99885316e-01,   9.45674898e-01,
         9.45674898e-01,   5.99885316e-01,   2.29737120e-01,
         4.65200189e-02,   3.46009194e-03,   7.72686684e-06])

Plot the window and the frequency response:

>>> from numpy import clip, log10, array, kaiser, linspace
>>> from scipy.fftpack import fft, fftshift
>>> import matplotlib.pyplot as plt

>>> window = kaiser(51, 14)
>>> plt.plot(window)
>>> plt.title("Kaiser window")
>>> plt.ylabel("Amplitude")
>>> plt.xlabel("Sample")
>>> plt.show()

>>> A = fft(window, 2048) / 25.5
>>> mag = abs(fftshift(A))
>>> freq = linspace(-0.5, 0.5, len(A))
>>> response = 20*log10(mag)
>>> response = clip(response, -100, 100)
>>> plt.plot(freq, response)
>>> plt.title("Frequency response of Kaiser window")
>>> plt.ylabel("Magnitude [dB]")
>>> plt.xlabel("Normalized frequency [cycles per sample]")
>>> plt.axis('tight'); plt.show()
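The "beta = 5 is similar to a Hamming window" row of the table above can be checked numerically. This short comparison is not part of the original documentation; it assumes only NumPy:

```python
import numpy as np

M = 51
kaiser5 = np.kaiser(M, 5.0)   # Kaiser window with shape parameter beta = 5
hamming = np.hamming(M)       # classic Hamming window of the same length

# The two tapers agree closely across the whole window; the largest
# discrepancy is at the endpoints, where the Hamming window sits at 0.08.
max_diff = np.max(np.abs(kaiser5 - hamming))
print(max_diff)  # a few hundredths
```

The same experiment with beta = 6 against `np.hanning` and beta = 8.6 against `np.blackman` reproduces the other rows of the table.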
MathGroup Archive: June 2010

Re: whats wrong with this code ?!
• To: mathgroup at smc.vnet.net
• Subject: [mg110531] Re: whats wrong with this code ?!
• From: "becko" <becko565 at hotmail.com>
• Date: Wed, 23 Jun 2010 01:54:06 -0400 (EDT)

I see the error of my ways now. I did try x[[{i, j}]] = x[[{j, i}]] at first, but I couldn't get past the "Set::setps: ... in the part assignment is not a symbol" message. The HoldFirst is the key! And thanks for the PbS explanation!

From: Leonid Shifrin
Sent: Sunday, June 20, 2010 5:29 AM
To: becko ; mathgroup at smc.vnet.net
Subject: [mg110531] Re: [mg110456] whats wrong with this code ?!

Here is the correct code for your method:

qksort[x_List, left_Integer, right_Integer] :=
  If[right - left >= 1,
   Module[{i, z},
    {i, z} = split[x, left, right];
    {qksort[z[[left ;; i - 1]], 1, i - left], z[[i]],
     qksort[z[[i + 1 ;; right]], 1, right - i]} // Flatten],
   x]

I omitted the previous functions since no changes are needed for them. Your code contains one non-obvious inefficiency though, and that is in the way you deal with lists, particularly the swapping function. Using ReplacePart and the idiom z = swap[z,...] means that you copy the entire list (actually twice - once internally via ReplacePart and once explicitly) to swap only two elements. Therefore, a single swap operation has a linear rather than the constant time complexity in the size of the list whose elements are being swapped. This is hidden for small lists by the fact that other operations such as list indexing and breaking the list into pieces are costly and shadow this effect. Also, most operations in qsort are with small lists, for which this effect is not visible. You will start seeing it for lists of ~50000 elements or so, where OTOH the use of a home-made sort is only of academic interest anyway, given the highly efficient built-in sorting function.
Anyway, below is a similar implementation based on pass-by-reference semantics:

SetAttributes[swapPbR, HoldFirst];
swapPbR[x_, i_Integer, j_Integer] := x[[{i, j}]] = x[[{j, i}]];

SetAttributes[splitPbR, HoldFirst];
splitPbR[x_, left_Integer, right_Integer] :=
  Module[{l = RandomInteger[{left, right}], T, i = left},
   T = x[[l]];
   swapPbR[x, left, l];
   Do[If[x[[j]] < T, swapPbR[x, ++i, j]], {j, left + 1, right}];
   swapPbR[x, left, i];
   i]

qksortPbR[x_List, left_Integer, right_Integer] :=
  Module[{i, qsort, xl = x},
   qsort[l_Integer, r_Integer] :=
    If[r - l >= 1,
     i = splitPbR[xl, l, r];
     qsort[l, i - 1]; qsort[i + 1, r]];
   qsort[left, right];
   xl]

This implementation is based on pass-by-reference semantics and in-place list modification for all main functions. I use a local copy of the original list <xl>, and a recursive local <qsort> function defined in the Module scope, which allows me to embed <xl> into it directly without passing it as a parameter. The <swapPbR> function works on the original list passed to it, rather than creating a copy, and is constant-time. The function <splitPbR> also modifies the original list. Note that I omitted the head-testing patterns _List, since they will slow the function down and are not strictly necessary for dependent functions, and OTOH the x_List pattern may not match if this argument is held. I find that this is a good example of a (rare) case where pass-by-reference can indeed have some benefits in Mathematica. You can do some benchmarks and see that the PbR version is about twice as fast for smaller lists and starts to win big for larger ones. Of course, it is still much slower than the built-in Sort. Hope this helps. On Sat, Jun 19, 2010 at 4:47 AM, becko <becko565 at hotmail.com> wrote: ok. I give up. I've been struggling with this the entire night. I have three functions: swap[..], split[..] and qksort[..]. The objective is to implement a recursive sort algorithm. I have tried to execute it on list={2,5,4,7,9,1};.
But I keep getting the "Cannot take positions .. through .. in .." message. You may need to execute it a few times to see the error (because it depends on the RandomInteger). Here are the three functions. Thanks in advance.

If [ z[[j]]<T,z=swap[z,++i,j] ],
Measuring the 'Fiscal-Fitness' of a company: The Altman Z-Score

In 1968, Edward Altman, using multiple discriminant analysis, combined a set of 5 financial ratios to come up with the Altman Z-Score. This score uses statistical techniques to predict a company's probability of failure using 8 variables from a company's financial statements. The original page listed the variables in green (those from the income statement) and red (those from the balance sheet); the list itself did not survive extraction. Use the following Z-Score Insolvency Prediction Calculator to assess a company.
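Since the page's own variable list is lost, here is a sketch of the score using the widely published weights from Altman's 1968 paper. The function signature and variable names are mine, not taken from the page's calculator, and the sales weight is sometimes quoted as 0.999 rather than 1.0:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Classic (1968) Altman Z-Score for publicly traded manufacturers."""
    x1 = working_capital / total_assets           # liquidity
    x2 = retained_earnings / total_assets         # cumulative profitability
    x3 = ebit / total_assets                      # operating efficiency
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                     # asset turnover
    return 1.2*x1 + 1.4*x2 + 3.3*x3 + 0.6*x4 + 1.0*x5

# Conventional reading: Z > 2.99 "safe zone", Z < 1.81 "distress zone".
z = altman_z(working_capital=50, retained_earnings=100, ebit=60,
             market_value_equity=400, sales=500,
             total_assets=1000, total_liabilities=300)
print(round(z, 2))  # 1.7, i.e. in the gray area near distress
```

Note how five ratios are built from eight raw financial-statement figures, matching the "5 ratios, 8 variables" description above.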
Fitch :( April 20th 2013, 07:13 AM Fitch :( Please help :) How to prove: 1.p=>q - permission goal ~q=>~p 2. ~p=>q - permission and q=>r goal (~p=>~r)=>p 3. ~pI~q goal ~(p^q) April 20th 2013, 07:54 AM Re: Fitch :( Since you did not edit your post to add a note that it has been solved, was the [SOLVED] tag added intentionally? April 20th 2013, 11:26 AM Re: Fitch :( Thanks. I found a solution :) Only the last item leaves me wrong, for example: ~ pI goal ~ q ~ (p ^ q) - I can do up to 17, then later I got wrongI do not know why? Maybe I do not understand something. From the pictures that I have noted: 1, 2-9, 10 -17 and or eliminations. - Only 18 leaves me wrong: ( April 20th 2013, 11:46 AM Re: Fitch :( Does "I" mean disjunction, i.e., "or"? Usually it is denoted by ∨. In ASCII one can also write \/ or the letter "v". By "last", do you mean number 3 in the quote above? This is hard to understand. I am not sure what "pI" is and why there is no connective between "~q" and "~(p ^ q)". This is also hard to understand without seeing the first 17 steps. April 21st 2013, 05:03 AM Re: Fitch :( do u mean premise ~p|~q to prove ~(p&q)??? I'm also looking for the proof of this. April 21st 2013, 06:54 AM Re: Fitch :( 1.~p | ~q Premise 2.~p Assumption 3.p & q Assumption 4.p And Elimination: 3 5.p & q => p Implication Introduction: 4 6.~p Reiteration: 2 7.p & q Assumption 8.~p Reiteration: 6 9.p & q => ~p Implication Introduction: 8 10.~(p & q) Negation Introduction: 5, 9 11.~p => ~(p & q) Implication Introduction: 10 12.~q Assumption 13.p & q Assumption 14.q And Elimination: 13 15.p & q => q Implication Introduction: 14 16.p & q Assumption 17.~q Reiteration: 12 18.p & q => ~q Implication Introduction: 17 19.~(p & q) Negation Introduction: 15, 18 20.~q => ~(p & q) Implication Introduction: 19 21.~(p & q) Or Elimination: 1, 11, 20 This is the solution I got. But it's too long. :( Still it works :) April 21st 2013, 10:53 AM Re: Fitch :( I believe this is the correct derivation. 
I don't see how it can be shortened except by removing step 6 and making step 8 a reiteration of 2. A more standard variant of natural deduction (of which Fitch calculus is a particular notation) has a symbol ⊥ for contradiction. Then negation introduction is shorter: you assume ~p and p & q, derive p, derive ⊥ from ~p and p, then close the assumption p & q and derive ~(p & q) in one step. There is no need to derive p & q => p and p & q => ~p. Similarly, or elimination does not require implications ~p => ~(p & q) and ~q => ~(p & q): you just derive ~(p & q) two times from assumptions ~p and ~q, respectively, and or elimination derives ~(p & q) and closes the ~p and ~q in one step. April 21st 2013, 11:12 AM Re: Fitch :( The program I used doesn,t have contradiction. BTW can you help me to prove this the other way around? I mean starting from ~(p & q) as the premise to prove ~p | ~q. April 21st 2013, 11:42 AM Re: Fitch :( Ah, this is more complicated. This requires the rule of double-negation elimination or the law of excluded middle. Using the latter is easier. Which one do you have and what does it look like? April 21st 2013, 12:01 PM Re: Fitch :( April 21st 2013, 12:28 PM Re: Fitch :( I see that you don't have the law of excluded middle, but I meant to ask what the negation elimination rule looks like. What formulas does it take and produce? I would guess it takes ~~A and produce A. Is this correct? April 21st 2013, 12:39 PM Re: Fitch :( Yeah... :) April 21st 2013, 12:58 PM Re: Fitch :( We have a premise ~(p & q). The proof of ~p | ~q is by contradiction, i.e., we assume ~(~p | ~q) and prove p & q (described below), which contradicts the premise. This gives ~~(~p | ~q) by negation introduction and then ~p | ~q by negation elimination. So, assume ~(~p | ~q). We need to prove p and q. These two subderivations are similar. To prove p, we assume ~p and derive ~p | ~q, which contradicts the assumption. Therefore, ~~p and we conclude p by negation elimination. 
The second part, q, is proved similarly. April 22nd 2013, 07:16 AM Re: Fitch :( Thank you so much!!! :) October 23rd 2013, 06:44 PM Re: Fitch :( hi emakarov :) I'm struggling with this one still ... this is what I have so far: 1. ~(p & q) Premise 2. ~(~p | ~q) Assumption 3. ~p Assumption 4. ~p | ~q Or Introduction: 3 5. ~p => ~p | ~q Implication Introduction: 4 6. ~p Assumption Now I'm stuck as to how to get to the conclusion of ~p|~q. Can you please help? Also ... please bear in mind I don't have the necessary background in discrete mathematics .. so is there a specific textbook or site you can recommend also so I can understand better and learn? Thank you so much in advance :)
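Independent of any particular Fitch derivation, the two directions discussed in this thread are the halves of De Morgan's law, and the underlying equivalence can be checked semantically. A small truth-table sweep in Python (my own sanity check, not a formal proof in the Fitch system) confirms that ~p | ~q and ~(p & q) agree on every valuation:

```python
from itertools import product

def equivalent(f, g):
    """True when f and g agree on every Boolean valuation of (p, q)."""
    return all(f(p, q) == g(p, q)
               for p, q in product([False, True], repeat=2))

lhs = lambda p, q: (not p) or (not q)   # ~p | ~q
rhs = lambda p, q: not (p and q)        # ~(p & q)

print(equivalent(lhs, rhs))  # True: De Morgan's law
```

This only shows the sequents are valid; the proof-by-contradiction structure sketched above is still needed because the direction from ~(p & q) to ~p | ~q is not intuitionistically derivable without negation elimination or excluded middle.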
Analytic curve on Riemann surface

Suppose there is a closed analytic curve $C$ on a Riemann surface $S$, that is, the image of a map $\gamma$ from the equator $E$ of the Riemann sphere to the surface $S$ which is a restriction of a one-to-one complex analytic map $\Gamma$ of an annulus $A(\supset E)$ into the surface $S$. Suppose that I change the complex structure on $S$: under which conditions does the curve $C$ remain analytic?

Answer: On a neighborhood of $C$, the two complex structures are related by a diffeomorphism $h$, $J'=h^*J$. If $C$ is analytic with respect to $J$, it is analytic with respect to $J'$ (in the sense you described at the beginning) if and only if the restriction of $h$ to $C$ is real-analytic with respect to the analytic structure induced by $J$. One direction is clear; the extension of the map to the annulus for the opposite direction is provided by (3.4) in math/1301.1074.

Thank you. Just I do not understand one thing: a neighbourhood of $C$ has the topology of an annulus. I thought that you may equip an annulus with two complex structures which are not related by a diffeomorphism. Could you comment, please? – Zoltan Lengyel Jan 29 '13 at 18:12

@Zoltan: You should probably clarify what exactly you mean by complex structures here: just a collection of charts? The structure given by a Beltrami differential? Etc. More crucially: what does "$C$ remains analytic" mean? That the map $\gamma$ is analytic, or that the curve has some analytic parametrization? In the former case, the answer is essentially trivial, as noted by Aleksey: the identity map should restrict to be analytic on the curve $C$. In the latter case, I doubt there's a good general answer you can expect - e.g. any homeomorphism preserving $C$ will give such a structure.
– Lasse Rempe-Gillen Jan 31 '13 at 13:05

An (almost) complex structure is a smooth field of linear operators on tangent spaces squaring to minus the identity. If you trace a curve on a surface and ask whether it is analytic, the question does not make sense unless you fix a complex structure. Suppose you fix one and you trace an analytic curve with respect to it. Is this curve analytic with respect to all other complex structures? Probably not. And this is my question: to know what kind of deformations of the complex structure preserve the property of analyticity. I still do not understand Aleksey's argument: why does the diffeomorphism h exist? – Zoltan Lengyel Jan 31 '13 at 14:38

@Zoltan: Since your complex structure is smooth, it would seem that the identity is automatically a diffeomorphism between your surface S with the original structure, and with the new complex structure? Also, you have not answered my question: what do you mean by the curve being analytic? Let me make it more precise. The unit circle is an analytic curve. Take a diffeomorphism of the sphere that maps the unit circle to itself, but not in a real-analytic manner. Pull back the usual complex structure using this diffeomorphism. Does this satisfy your condition or not? – Lasse Rempe-Gillen Feb 7 '13 at 12:56
What did the Romans contribute to math?

Date: 10/18/96 at 9:40:2
From: Howard and Lynnann Lovejoy
Subject: Roman Contribution to Math

Dear Dr. Math,

I am the Head Librarian at Bahrain Bayan International School in the Persian Gulf. This past week, the seventh grade students have been in the library searching for information on the history of number systems -- a cross-curricular project assigned by their English/Math teachers. The classes are divided up into groups, each locating information on a different ancient civilization. We have found lots of information for the Maya, Chinese, and Babylonian/Egyptian groups -- but, little to nothing for the Greek or Roman groups. The students have decided that the Romans contributed very little to the history of math other than the numbers look pretty on clocks and outlines. Would you agree with that statement? We know that the Romans built elaborate roads and buildings -- they had to have mathematical engineers -- but did they really use the Roman Numerals to do their calculations? The Golden Mean supposedly is heavily used in Greek and Roman architecture ... was that by their design or discovered after-the-fact? We have searched encyclopedias and the internet to try and find information on these ancient number systems. Next week the students finish their research and start writing their papers. Any information or tips you can send us via the e-mail will be put to good use.

Lynnann Lovejoy, Bayan School Librarian

Date: 10/19/96 at 22:11:51
From: Doctor Mason
Subject: Re: Roman Contribution to Math

The Romans were indeed very good mathematicians. However, they tended to study only what is now called "applied math." They wanted to be able to use their knowledge to build the wonderful roads and bridges you mentioned. The Roman calculations were done on a board with grooves and holes. I've tried to draw the board below, and indicate the place values each groove and hole had.
[board sketch, reduced here to its recoverable column labels:]

    M    D    C    L    X    V    I

On this board, they would place pebbles, "calculi" in Latin. This brings us to one of the Roman contributions to math: the words "calculate" and "calculus" both come from this origin. They would place one pebble in the appropriate place for each letter in their numeral. Other pebbles could be added to them representing another addend. The pebbles would have to be rearranged so no more than 4 were in a groove, or no more than 1 in a hole. They manipulated the pebbles to do all their calculations. Maybe this isn't what you were looking for, but the vocabulary of mathematics is often something we take for granted, and it DOES come from someplace. I think these things are interesting.

-Doctor Mason, The Math Forum
Check out our web site! http://mathforum.org/dr.math/

Date: 10/23/96 at 0:52:15
From: Howard and Lynnann Lovejoy
Subject: Re: Roman Contribution to Math

Dear Dr. Mason,

The students got your response in time to include it in their notes and are now busily searching further information on applied maths. They were thrilled to get your response and requested that I thank you for helping them with their math research project. Special thanks from the Roman group: Fatima, Mariam, Maya, Yousif, and Talal.
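The pebble rules Dr. Mason describes, at most four calculi in a groove and at most one in a hole, amount to a carrying scheme. Here is a toy Python model of adding on the board (my own sketch; the representation and function names are assumptions for illustration, the Romans of course did this with pebbles, not code):

```python
# Place values of the board, one column per Roman symbol.
PLACES = [("M", 1000), ("D", 500), ("C", 100), ("L", 50),
          ("X", 10), ("V", 5), ("I", 1)]

def to_pebbles(n):
    """Greedily lay out n as pebble counts, largest places first."""
    counts = {}
    for sym, val in PLACES:
        counts[sym], n = divmod(n, val)
    return counts

def add_on_board(a, b):
    """Combine two pebble layouts, then 'carry': five pebbles in a groove
    become one in the hole above (I->V, X->L, C->D); two pebbles in a hole
    become one in the next groove (V->X, L->C, D->M)."""
    counts = {sym: a.get(sym, 0) + b.get(sym, 0) for sym, _ in PLACES}
    carries = [("I", "V", 5), ("V", "X", 2), ("X", "L", 5),
               ("L", "C", 2), ("C", "D", 5), ("D", "M", 2)]
    for low, high, k in carries:
        counts[high] += counts[low] // k
        counts[low] %= k
    return counts

def value(counts):
    return sum(counts[sym] * val for sym, val in PLACES)

total = add_on_board(to_pebbles(1776), to_pebbles(49))
print(value(total))  # 1825
```

After normalization each groove holds at most four pebbles and each hole at most one, exactly the constraint quoted in the letter.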
KEYWORDS: Bernoulli numbers, asymptotic inclusion, asymptotic approximation, OEIS A000367, OEIS A027641.

An approximation of the Bernoulli numbers.

There is a standard asymptotic formula for the Bernoulli numbers. For example at Mathworld or at the Digital Library of Mathematical Functions (NIST) (§24.11) one can find the formula

    |B(2n)| ~ 4 * sqrt(pi*n) * (n / (pi*e))^(2n).

The present author found two inclusions for the Bernoulli numbers which appear to be new. They are reported on this page. The bounds given comprise a simple but amazingly efficient approximation to the Bernoulli numbers which I will call the 'cute approximation'.

A sharp inclusion of the Bernoulli numbers.

Let B[n] denote the Bernoulli numbers. If n is even and n ≥ 38 then |B(n)| lies strictly between two closed-form bounds in n alone (the displayed formula for the bounds was an image in the original and is not reproduced here). For example the inclusion predicts

    0.5318704469415522033..*10^1770 < |B(1000)| < 0.5318704469415522039..*10^1770.

And indeed |B(1000)| = 0.5318704469415522036..*10^1770. Note that the factorial function is not referenced in these formulas.

A cute approximation of the Bernoulli numbers.

The lower bound of the above inclusion can also be used as a convenient approximation for the Bernoulli numbers (its displayed forms were likewise images in the original). For example the standard approximation gives B(1000) ~ 0.53182..*10^1770, which amounts to 4 valid decimal digits, whereas the last approximation gives for B(1000) an approximation with almost 18.2 valid decimal digits.

Asymptotic formulas

More generally let us look at three asymptotic formulas for the logarithm of the Bernoulli numbers.

1. LogB1(n) = (1/2+n)*ln(n) + (1/2-n)*ln(pi) + (3/2-n)*ln(2) - n
2. LogB2(n) = (1/2+n)*ln(n) + (1/2-n)*ln(pi) + (3/2-n)*ln(2) - n*(1 - 1/(12*n^2))
3. LogB3(n) = (1/2+n)*ln(n) + (1/2-n)*ln(pi) + (3/2-n)*ln(2) - n*(1 - 1/(12*n^2)*(1 - 1/(30*n^2)))

We see that formula 2 as well as formula 3 are refinements of formula 1: they replace the -n term by successively longer pieces of the Stirling series -n + 1/(12n) - 1/(360n^3) + ... We are now focusing on formula 3 in its exponential form exp(LogB3(n)).
What makes this asymptotic approximation especially useful -- besides being a much better approximation than those given by the Digital Library of Mathematical Functions (see §24.11) -- is that the error of the approximation can be easily estimated. In fact a convenient way to reason about the validity of an approximation formula is to give a lower bound for the number of exact decimal digits, i.e. to indicate the number of decimal digits which are guaranteed by the formula at least. In the case of exp(LogB3(n)) we have a good and simple way to express this bound: EddE(n) = floor(3*log(3*n)) (valid for n ≥ 50). For example for Bernoulli(31622776) this bound says that 55 decimal digits of exp(LogB3(n)) are guaranteed to be correct (the true value is 55.7). To sum up: to compute an approximation to the Bernoulli numbers B(n) with n ≥ 50 and n even, compute exp(LogB3(n)) and retain floor(3*log(3*n)) decimal digits of the result.

def BernoulliAsympt(n):
    if n < 50:
        print "Value error, n has to be >= 50"
        return
    if is_odd(n): return 0
    R = RealField(300)  # to be increased for large values n
    nn = n*n
    LogB = R((1/2+n)*ln(n) + (1/2-n)*ln(pi) + (3/2-n)*ln(2)
             - n*(1 - 1/(12*nn)*(1 - 1/(30*nn))))
    B = (-1)^(1+n//2)*exp(LogB)
    Edd = floor(3*ln(3*n))
    print SciFormat(B, Edd - 1)  # see SciFormat
    return B

for n in (49..52): print n, BernoulliAsympt(n)

49 → Value error, n has to be >= 50
50 → 7.50086674607696e24
51 → 0
52 → -5.0387781014810e26

Note that the function displays only valid digits as we used this formatting function: SciFormat.
    Edd1(n) = -log10(abs(1 - exp(LogB1(n))/abs(bernoulli(n))))
    Edd2(n) = -log10(abs(1 - exp(LogB2(n))/abs(bernoulli(n))))
    Edd3(n) = -log10(abs(1 - exp(LogB3(n))/abs(bernoulli(n))))
    EddE(n) = 3*ln(3*n)

    ANF = 50; END = 1000; STEP = 20
    plot2 = list_plot([[n, Edd2(n)] for n in range(ANF,END,STEP)], color='red')
    plot3 = list_plot([[n, Edd3(n)] for n in range(ANF,END,STEP)], color='blue')
    plotE = list_plot([[n, EddE(n)] for n in range(ANF,END,STEP)], color='magenta')
    show(plot2 + plot3 + plotE)

The plot below compares the number of exact decimal digits of the approximation (formula 3, blue curve) with the number of exact decimal digits guaranteed by this formula, 3*ln(3*n) (magenta curve). For comparison the red curve shows the exact decimal digits of formula 2. Formula 1 (given by DLMF/NIST) gives a poor approximation not worth showing.

Restating exp(LogB3(n)) gives an asymptotic expansion of the Bernoulli number B(n) for n > 0 even. [The displayed expansion was an image in the original.]

Note: The first inclusion was given on Jan. 18 2007 by the present author. Here is the announcement in the newsgroup de.sci.mathematik. Thanks to Charles R Greathouse IV who drew my attention to an error in a previous version.

See also: Asymptotic inclusions and approximations for the Euler numbers. Asymptotic expansions of the factorial function.
Does this equation have an integer solution?

October 11th 2012, 03:07 AM

Hi everyone! :) I have the following problem: Which of these equations have an integer solution?

    3731s + 6149t = 26
    1001s + 2261t = 1
    495s + 1704t = 1
    385s + 17081t = 1

I thought the first one would not have an integer solution because 26 is not the gcd of 3731 and 6149, but 13 is. So I thought that it does not have an integer solution because if it did the equation would have to be 3731s + 6149t = 13, but according to the answer key it does have an integer solution. Could anyone explain why? Thank you everyone! :)

October 11th 2012, 03:12 AM
a tutor
Re: Does this equation have an integer solution?

If 3731s + 6149t = 13 then 3731(2s) + 6149(2t) = 26.

October 11th 2012, 03:28 AM
Re: Does this equation have an integer solution?

Ah of course :D thank you so much!
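In general a*s + b*t = c has an integer solution exactly when gcd(a, b) divides c, and the extended Euclidean algorithm produces a concrete solution. A short sketch (the function name is mine):

```python
def ext_gcd(a, b):
    """Return (g, s, t) with a*s + b*t == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

g, s, t = ext_gcd(3731, 6149)        # g == 13, which divides 26
k = 26 // g                          # scale the solution up, as the tutor suggests
print(3731*(k*s) + 6149*(k*t))       # -> 26
```

Running the same divisibility test on the other three equations shows that only 385s + 17081t = 1 is solvable (gcd(385, 17081) = 1, while the other two gcds are 7 and 3).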
Euler's Faith and Folly

Date: 03/25/2011 at 12:43:07
From: sankar
Subject: Eulers infinite sums

I am reading a book titled "Journey through Genius," by William Dunham. There is one chapter in this book about how Euler arrived at the sum of the reciprocals of the squares of the whole numbers, which is pi^2/6. The way he proves this is by using the Taylor expansion of sin(x)/x and equating the expansion with a product of factors of sin(x)/x. Then he matches the coefficient of x^2 on the right side with the x^2 coefficient on the left side.

But at the end of the chapter, Dunham says this: "Today, we recognize that Euler was not so precise in his use of the infinite as he should have been. His belief that finitely generated patterns and formulas automatically extend to the infinite case was more a matter of faith than of science, and subsequent mathematicians would provide scores of examples showing the folly of such hasty ..." [the quotation is truncated in this copy]

I don't understand what is wrong with Euler's analysis.

Date: 03/25/2011 at 19:24:58
From: Doctor Jordan
Subject: Re: Eulers infinite sums

Hi Sankar,

Let's say we have the polynomial

    f(x) = x^3 + x

This can be written as

    f(x) = x(x^2 + 1)

Before the use of complex numbers, we would have said that the only root of this polynomial is 0. But then it would be false that f(x) is a product of linear factors x - a, where a is a root of f(x). If we use complex numbers, then the roots of f(x) = x^3 + x are 0, i and -i; and indeed,

    f(x) = x(x - i)(x + i)

So to be able to factor a polynomial, we need to know all of its complex roots. If we want to write sin(x) as a product of linear factors x - a, where a is a root of sin(x), then, in analogy to factoring a polynomial, we want to know all of the complex roots of sin(x). A criticism Euler received (from, I believe, Nicolaus Bernoulli) was that determining the integer multiples of pi to be the *real* roots of sin(x) does not mean that these are the function's *only* roots.
In the same way that f(x) = x^3 + x has the real root 0, but also the complex roots i and -i, perhaps there are other complex roots. One could also complain that even if we know that the only roots of sin(x) are the integer multiples of pi, how do we know that sin(x) is equal to the infinite product of factors (x - a), where a runs over the roots of sin(x)?

Using his formula e^(ix) = cos(x) + i sin(x), Euler later did show that the only roots of sin(x) are the integer multiples of pi. And the equivalence of sin(x) and this infinite product can be shown in several different ways. The argument that uses the least prior knowledge that I know of is given on p. 18 of Reinhold Remmert's "Classical Topics in Complex Function Theory," a page available on Google Books.

- Doctor Jordan, The Math Forum

Date: 03/27/2011 at 01:13:03
From: sankar
Subject: Thank you (Eulers infinite sums)

Thank you so much. This really enhanced my understanding of the subject. I really want to appreciate the work you guys are doing.
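As a footnote to the thread: whatever the gaps in Euler's original argument, his value is easy to check numerically. The partial sums of the reciprocal squares approach pi^2/6, with the gap shrinking like 1/N. A quick sketch:

```python
import math

N = 100000
partial = sum(1.0 / n**2 for n in range(1, N + 1))   # sum of 1/n^2 up to N
gap = math.pi**2 / 6 - partial
print(partial, gap)   # the gap is roughly 1/N, i.e. about 1e-5 here
```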
Here's the question you clicked on:

You roll a standard number cube. Find P(3 or 4).

Best Response:
By a standard number cube, do you mean a die? If so, then the probability of rolling a number x is as follows:

P(1) = 1/6, as there is one face on a die that shows a 1, and six faces it can land on.
P(2) = 1/6
P(3) = 1/6
etc.

So P(3) = 1/6 and P(4) = 1/6. Since the two outcomes cannot happen on the same roll, P(3 or 4) = P(3) + P(4) = ?

Best Response:
P(3 or 4) = 1/3
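The same count-the-faces argument can be done mechanically with exact fractions. A sketch:

```python
from fractions import Fraction

faces = [1, 2, 3, 4, 5, 6]                    # the six equally likely outcomes
favorable = [f for f in faces if f in (3, 4)]
p = Fraction(len(favorable), len(faces))      # favorable over total
print(p)   # -> 1/3
```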
FILLET, in Architecture, any little square member or ornament used in crowning a larger moulding.

FINÆUS (Orontius), in French Finé, professor of mathematics in the Royal-college of Paris, was the son of a physician, and was born at Briançon in Dauphiné in 1494. He went young to Paris, where his friends procured him a place in the college of Navarre. He applied himself there to philosophy and polite literature; but more especially to mathematics, in which, having a natural propensity, he made a considerable proficiency. Particularly he made a good progress in mechanics; in which, having both a genius to invent instruments, and a skilful hand to make them, he gained much reputation by the specimens he gave of his ingenuity. Finæus first made himself publicly known by correcting and publishing Siliceus's Arithmetic, and the Margarita Philosophica. He afterwards read private lectures in mathematics, and then taught that science publicly in the college of Gervais: from the reputation of which, he was recommended to Francis the 1st, as the properest person to teach mathematics in the new college which that prince had founded at Paris. And here, though he spared no pains to improve his pupils, he yet found time to publish a great many books upon most parts of the mathematics. But neither his genius, his labours, his inventions, and the esteem which numberless persons shewed him, could secure him from that fate which so often befalls men of letters. He was obliged to struggle all his life time with poverty; and when he died, left a numerous family deeply in debt. However, as merit must always be esteemed in secret, though it seldom has the luck to be rewarded openly; so Finæus's children found Mecænases, who for their father's sake assisted his family.—He died in 1555, at 61 years of age.
Like all other mathematicians and astronomers of those times, he was greatly addicted to astrology; and had the misfortune to be a long time imprisoned for having predicted some things, that were not acceptable to the court of France. He was also one of those, who vainly boasted of having found out the quadrature of the circle. An edition of his works, translated into the Italian language, was published in 4to, at Venice, 1587; consisting of Arithmetic, Practical Geometry, Cosmography, Astronomy, and Dialling.
2009 Research Summary

DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification

Simon Lacoste-Julien, Fei Sha^1 and Michael Jordan
Google and Microsoft

Probabilistic topic models (and their extensions) have become popular as models of latent structure in collections of text documents or images. These models are usually treated as generative models and trained using maximum likelihood estimation, an approach which may be suboptimal in the context of an overall classification problem. In this project, we present DiscLDA [1], a discriminative learning framework for such models as Latent Dirichlet Allocation (LDA) in the setting of dimensionality reduction with supervised side information. In DiscLDA, a class-dependent linear transformation is introduced on the topic mixture proportions (see Figure 1). This parameter is estimated by maximizing the conditional likelihood using Monte Carlo EM. By using the transformed topic mixture proportions as a new representation of documents, we obtain a supervised dimensionality reduction algorithm that uncovers the latent structure in a document collection while preserving predictive power for the task of classification. We compare the predictive power of the latent structure of DiscLDA with unsupervised LDA on the 20 Newsgroups document classification task and show that it uncovers an interesting latent structure which preserves good classification accuracy.

Figure 1: Graphical model for DiscLDA

[1] S. Lacoste-Julien, F. Sha, and M. Jordan, "DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification," Advances in Neural Information Processing Systems (NIPS) 21, 2009.

^1 University of Southern California
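The central device, a class-dependent linear transformation of the topic mixture proportions, can be illustrated in miniature. Below, a matrix chosen per class re-maps a document's topic proportions; the matrices and numbers are invented for illustration and are not from the paper:

```python
# Illustrative sketch of DiscLDA's core ingredient: a class-dependent linear
# transformation applied to a document's topic mixture proportions.

def transform(theta, T):
    """Apply the matrix T (rows index transformed topics) to proportions theta."""
    return [sum(T[i][j] * theta[j] for j in range(len(theta)))
            for i in range(len(T))]

theta = [0.7, 0.2, 0.1]                        # a document's topic proportions
T_class0 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # class 0: identity, theta unchanged
T_class1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # class 1: swaps the first two topics

print(transform(theta, T_class0))   # -> [0.7, 0.2, 0.1]
print(transform(theta, T_class1))   # -> [0.2, 0.7, 0.1]
```

In DiscLDA the transformation is not fixed by hand like this: it is a parameter estimated by maximizing conditional likelihood, and the transformed proportions then serve as the reduced document representation used for classification.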
What is 40 percent of 60?

Horoscopic astrology is a form of astrology that uses a horoscope, a visual representation of the heavens for a specific moment in time, in order to interpret the inherent meaning underlying the alignment of the planets at that moment. The idea is that the placement of the planets at any given moment in time reflects the nature of that moment and especially anything that is born then, and this can be analyzed using the chart and a variety of rules for interpreting the 'language' or symbols therein. One of the defining characteristics of this form of astrology that makes it distinct from other traditions is the computation of the degree of the Eastern horizon rising against the backdrop of the ecliptic at the specific moment under examination, known as the ascendant. As a general rule, any system of astrology that does not utilize the ascendant does not fall under the category of horoscopic astrology, although there are some exceptions.

[Image: Horoscope pattern]

In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings about interest, sympathy or motivation in the reader or viewer. Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during wartime, an interview with a survivor of a natural disaster, a random act of kindness or a profile of someone known for a career achievement.

A social issue (also called a social problem or a social situation) is an issue that relates to society's perception of a person's personal life. Different cultures have different perceptions, and what may be "normal" behavior in one society may be a significant social issue in another society. Social issues are distinguished from economic issues.
Some issues have both social and economic aspects, such as immigration. There are also issues that don't fall into either category, such as wars. Thomas Paine, in Rights of Man and Common Sense, addresses man's duty to "allow the same rights to others as we allow ourselves". The failure to do so causes the birth of a social issue.
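As for the title question itself: "percent" means per hundred, so 40 percent of 60 is (40/100) × 60 = 24. A minimal sketch:

```python
def percent_of(p, x):
    """p percent of x, i.e. (p/100)*x."""
    return p * x / 100

print(percent_of(40, 60))   # -> 24.0
```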
Optical Boomerangs Optical beams that bend themselves in curves may, one day, guide intense plasmas along a desired path or help manipulate tiny particles. So far, researchers have studied these “self-accelerating” beams under somewhat limited conditions (see 28 November, 2007 Focus and 6 September, 2011 Viewpoint). In particular, they have been boxed in by the paraxial approximation: in order to make things work, the beams must bend only gradually, not straying far from the axis of propagation, otherwise the beams break up. In Physical Review Letters, two groups are now reporting generalized versions of these potentially useful curving beams that need not obey the paraxial limit. It is known that nonparaxial beams can be exact solutions to the Helmholtz wave equation if they have a particular intensity profile, but these beams only travel in circular paths (see 16 April, 2012 Viewpoint). Peng Zhang at the University of California, Berkeley, and colleagues showed with theoretical calculations and experimental demonstrations that two other kinds of exact solutions also exist, called accelerating Mathieu beams (which propagate along elliptical paths and include circular nonparaxial beams as a special case) and accelerating Weber beams (which propagate along parabolic paths). As these beams bend themselves around these paths, they resist breakup by diffraction. To create the self-bending beams, Zhang et al. modified the intensity profile of a laser beam with a computer-controlled modulator. They then demonstrated the self-healing nature of these beams by partially blocking them with obstacles and watching the beams regroup and continue on their way. In an independent report, Parinaz Aleahmad at the University of Central Florida, Orlando, and colleagues present calculations in which the Helmholtz equation is recast in terms of electric and magnetic vector potentials. 
With this fully vectorial approach, they demonstrate theoretically and experimentally the existence of two-dimensional beams that follow elliptical paths and are diffraction-free. In addition, Aleahmad et al. show that three-dimensional solutions exist that allow propagation of spherical wave fronts (among others), which also bend under certain conditions. – David Voss
Quiet at the End

Published February 9, 2005

They say in space no one can hear you scream, but if you are falling into a black hole, you can't even hear yourself. According to the so-called asymptotic silence hypothesis, spacetime becomes so contorted that signals can't travel any significant distance, not even from your mouth to your ears. In the 11 February PRL, a team of physicists and mathematicians says it has new evidence for the hypothesis, which also applies to other "singularities" such as the starting point of the big bang. A better understanding of singularities could eventually point to a correct solution to the conflict between relativity and quantum mechanics.

Any critical concentration of mass, such as in the collapse of a massive star, will result in a singularity, essentially a gap in the universe that an observer cannot cross without ceasing to exist. The detailed properties of singularities depend on the full set of equations of general relativity, which are usually too complex to solve without some drastic simplifying assumptions. In the 1970s, however, a Russian team conjectured that essentially all singularities exhibit asymptotic silence: as you approach the singularity, signals cannot travel indefinitely but face an increasingly limited "sphere of influence," thanks to the hugely distorted "fabric" of spacetime. The team also expected spacetime to act chaotically as you approach the singularity; it should contort at rates that keep changing abruptly, rather than smoothly [1]. Asymptotic silence also showed up in simple big bang models of the Universe emerging from a singularity and had an important influence on modern cosmology. Researchers have demonstrated asymptotic silence with models of several simplified, highly symmetric versions of singularities, but the most general case has remained beyond reach because of the difficulty of solving Einstein's equations.
Lars Andersson of the University of Miami and his colleagues now report computer simulations of a two-dimensional spacetime emerging from a big-bang singularity, and they believe it captures the behavior of most real singularities, including black holes. To make the singularity realistic, the team allowed different regions of space to expand at different rates. The expanding spacetime is essentially the time-reversed movie of the contraction an observer would see when falling into a black hole, but the results don’t depend on the direction of time. Andersson and his colleagues found asymptotic silence and the chaotic behavior conjectured by the Russians but also something new. Among the many ways an observer could approach the singularity, there were certain special trajectories along which spacetime distorts at rates quite different from neighboring trajectories. The effect was separate from the chaotic phenomena associated with a single trajectory. The existence of such anomalies, called spikes, could be hard to prove rigorously, says co-author Claes Uggla of Karlstad University in Sweden, but it could be related to the breakdown of relativity theory at the short distances where quantum mechanics prevails. Several proposed quantum theories of gravity suggest that spacetime at this tiny scale is not a smooth continuum but rather a discrete lattice, where particles jump from one point to the next. The team also speculates that spikes in the big bang could have seeded gravitational waves that might someday be detectable. George Ellis of the University of Cape Town says the result is a significant step toward understanding singularities. “In a sense, it’s confirming something the Russians have been saying, but by a much more solid method.” 1. V. A. Belinski et al., “Oscillatory Approach to a Singular Point in the Relativistic Cosmology,” Adv. Phys. 19, 525 (1970); and “A General Solution of the Einstein Equations with a Time Singularity,” Adv. Phys. 31, 639 (1982).
Re: [SI-LIST] : Embedded microstrip calculations, Ultracad Calculator

Roy Leventhal (royl@cadence.com)
Wed, 10 Dec 1997 13:07:37 -0500

Hello to all,

I've enjoyed the excellent correspondence about the effects of solder mask, manufacturing tolerances, etc., on Zo. I would only add two things to the discussion:

1. Understand what your possible measurement inaccuracies might be.

2. Most closed-form algorithms are based on work from the 1950s to nearly the present day that relied heavily on curve fitting of measured results. Computer-driven field-solution methods didn't become practical until today's affordable computing power became available.

I have found Zo calculated by closed-form algorithms to be from a percent or two lower to ten percent or so higher (depending on which algorithms I used) than Zo predicted by computer-driven field-solver methods. The more your particular design diverges from the measured, approximated test setup, the further out on a limb you are. Verification measurements have been done on the field-solver predictions that show them to be accurate to within a couple of percent.

In summary, I am not surprised that measured results vs. simulated results for Zo for a 90 ohm system might be 5 to 9 ohms lower using the curve-fitted algorithm approach. Unless the software supplier has taken special, usually proprietary, pains to improve what is available in the literature, I wouldn't expect much better.

Roy Leventhal
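A typical example of the curve-fitted, closed-form formulas under discussion is the classic IPC-style surface-microstrip approximation (this is not Ultracad's implementation, and the geometry below is invented):

```python
import math

def microstrip_z0(h, w, t, er):
    """Classic curve-fitted formula for surface-microstrip impedance (ohms).
    h: dielectric height, w: trace width, t: trace thickness (same units),
    er: relative dielectric constant. Accurate only over a limited w/h range."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

# Hypothetical FR-4 geometry (mils):
print(microstrip_z0(h=10, w=12, t=1.4, er=4.2))   # about 62 ohms with these numbers
```

Formulas of this family are exactly the kind of fit-to-measurement expressions that can drift several percent from field-solver results once the geometry leaves the range over which the fit was made.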
A hideous Linear Regression/confidence set question

Take the linear model Y = X*beta + e, where e ~ N_n(0, sigma^2 * I), and let beta.hat be the MLE of beta.

First, find the distribution of (beta.hat - beta)' * X'*X * (beta.hat - beta), where t' denotes the transpose of t. I think I've done this: it should be a sigma^2 times chi-squared(p) distribution, where p is the dimension of beta.

Next: hence find a (1-a)-level confidence set for beta based on a root with an F distribution.

I can't do this to save my life. I'm aware that an F distribution is the ratio of two (scaled) chi-squareds, but where I'm going to get another chi-squared from I have no idea. Also, we're dealing in -vectors- and I don't know what any confidence set is going to look like, and I've no idea how to even try to get one. Any help would be appreciated. Thanks
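The second chi-squared comes from the residual sum of squares: RSS/sigma^2 ~ chi-squared(n-p), independently of beta.hat, so the root F = (beta.hat - beta)' X'X (beta.hat - beta) / (p*s^2), with s^2 = RSS/(n-p), has an F(p, n-p) distribution, and {beta : F <= F_{p,n-p}(1-a)} is an ellipsoidal (1-a) confidence set centred at beta.hat. A simulation sketch with p = 1, standard library only (variable names are mine):

```python
import random

random.seed(1)
n, beta, sigma = 30, 2.0, 1.5
x = [i / n for i in range(1, n + 1)]      # fixed one-column design, so p = 1
Sxx = sum(xi * xi for xi in x)

def one_f_stat():
    y = [beta * xi + random.gauss(0, sigma) for xi in x]
    bhat = sum(xi * yi for xi, yi in zip(x, y)) / Sxx   # least squares = MLE here
    rss = sum((yi - bhat * xi) ** 2 for xi, yi in zip(x, y))
    s2 = rss / (n - 1)                     # s^2 = RSS/(n - p) with p = 1
    return (bhat - beta) ** 2 * Sxx / s2   # the pivot; should be ~ F(1, n-1)

fs = [one_f_stat() for _ in range(3000)]
mean = sum(fs) / len(fs)
print(mean, (n - 1) / (n - 3))             # sample mean vs E[F(1, 29)] = 29/27
```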
Here's the question you clicked on:

Simplify -2(5x - 3(2x + (x + y)) + 3).

Best Response:
Make clear your question: is it to solve or to simplify?

Best Response:
-2(3 + 5x - 3(3x + y))

Best Response:
-2(5x - 3(2x + (x + y)) + 3) = -2(5x - 3(2x + x + y) + 3)
= -2(5x - 3(3x + y) + 3)
= -2(5x - 9x - 3y + 3)
= -2(-4x - 3y + 3)
= 8x + 6y - 6
= 2(4x + 3y - 3)
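Expanding gives 8x + 6y - 6 = 2(4x + 3y - 3), which is easy to spot-check by evaluating both forms at a few points. A sketch:

```python
def original(x, y):
    return -2 * (5*x - 3*(2*x + (x + y)) + 3)

def simplified(x, y):
    return 8*x + 6*y - 6

# Two polynomials that agree at enough points are identical; a few checks suffice here.
for x, y in [(0, 0), (1, 2), (-3, 5), (7, -4)]:
    assert original(x, y) == simplified(x, y)
print("simplification checked")
```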
Find Out How Inductor in AC and DC Circuit Works? | Transtutors

Inductor in an AC Circuit:

Consider a pure inductor of self-inductance L and zero resistance connected to an alternating source. Again we assume that an instantaneous current i = i_0 sin ωt flows through the inductor. Although there is no resistance, there is a potential difference V_L between the inductor terminals a and b, because the current varies with time, giving rise to a self-induced emf:

V_L = V_ab = -(induced emf) = -(-L di/dt)

or

V_L = L di/dt = L i_0 ω cos ωt

or

V_L = V_0 sin(ωt + π/2)    ... (i)

Here

V_0 = i_0 (ωL)    ... (ii)

or i_0 = V_0/(ωL), so that

i = (V_0/ωL) sin ωt    ... (iii)

Equation (iii) shows that the effective ac resistance, i.e. the inductive reactance of the inductor, is

X_L = ωL

and the maximum current is i_0 = V_0/X_L. The unit of X_L is also the ohm. From equations (i) and (iii) we see that the voltage across the inductor leads the current passing through it by 90°. The figure shows V_L and i as functions of time.

Solved example 1: A 100 Ω resistance is connected in series with a 4 H inductor. The voltage across the resistor is V_R = (2.0 V) sin (10^3 rad/s)t.
(a) Find the expression for the circuit current.
(b) Find the inductive reactance.
(c) Derive an expression for the voltage across the inductor.

(a) i = V_R/R = (2.0 V) sin (10^3 rad/s)t / 100 Ω = (2.0 x 10^-2 A) sin (10^3 rad/s)t
(b) X_L = ωL = (10^3 rad/s)(4 H) = 4.0 x 10^3 ohm
(c) The amplitude of the voltage across the inductor is V_0 = i_0 X_L = (2.0 x 10^-2 A)(4.0 x 10^3 ohm) = 80 volt. In an ac circuit the voltage across the inductor leads the current by 90° or π/2 rad. Hence,

V_L = V_0 sin(ωt + π/2) = (80 volt) sin {(10^3 rad/s)t + π/2 rad}
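The numbers in the solved example can be reproduced in a few lines (a sketch; the variable names are mine):

```python
omega = 1e3      # source angular frequency, rad/s
L = 4.0          # inductance, H
R = 100.0        # resistance, ohm
V_R0 = 2.0       # amplitude of the resistor voltage, V

i0 = V_R0 / R    # current amplitude; the same current flows through R and L in series
X_L = omega * L  # inductive reactance X_L = omega*L, in ohm
V_L0 = i0 * X_L  # amplitude of the inductor voltage

print(i0, X_L, V_L0)   # -> 0.02 4000.0 80.0
# v_L(t) = V_L0*sin(omega*t + pi/2): the inductor voltage leads the current by 90 degrees
```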
LHCb pins down the Bs mixing phase - CERN Courier
The LHCb collaboration's presentation at Lepton Photon 2011 included one of the most eagerly awaited measurements in flavour physics: the CP violation phase in B[s]–B̄[s] mixing. This is the counterpart of sin 2β in the B^0 system, which was measured by the B-factory experiments BaBar and Belle using the channel B^0 → J/Ψ K[s]. They provided the first measurement of CP violation in B^0 mixing, which is both large and now well measured, with sin 2β = 0.69 ± 0.02. In contrast, the Standard Model prediction for φ[s], the corresponding phase for the B[s] meson, is extremely small and precise: φ[s] = 0.036 ± 0.002 rad (Charles et al. 2005). It is therefore an interesting place to search for new physics beyond the Standard Model, which may enhance the value. Time-dependent analyses of B[s] mesons were not accessible at the B-factories, so this remained a key measurement for hadronic machines, first at the Tevatron and now at the LHC. The golden mode for this study is B[s] → J/Ψ φ, where the J/Ψ decays to μ^+μ^– and the φ decays to K^+K^–. The measurement is very challenging: the final state is not a pure CP eigenstate, so an angular analysis has to be made to separate the CP-even and CP-odd components. In addition, the fast B[s]–B̄[s] oscillation necessitates precise vertex reconstruction, and tagging of the production state (whether it was a B[s] or a B̄[s]) is also important. The result for φ[s] is correlated to another quantity in the fit, ΔΓ[s], the difference in width of the two B[s] mass eigenstates. (It is the mass difference of these two states that determines the oscillation frequency.) ΔΓ[s] can be positive or negative, but in the Standard Model is predicted to be 0.087 ± 0.021 ps^–1 (Lenz and Nierste 2011). The uncertainties on φ[s] and ΔΓ[s] are correlated, and furthermore the fit turns out to be insensitive to the replacement φ[s] → π – φ[s] when ΔΓ[s] → – ΔΓ[s], so there are two ambiguous solutions.
As a result, the measurements are usually plotted as contours in the φ[s] vs ΔΓ[s] plane. The CDF and DØ experiments at the Tevatron made the first measurements. Their early results agreed with each other and appeared, when combined, to indicate a large value for φ[s], about 3σ away from the Standard Model expectation. More recent updates have moved their preferred values somewhat closer to the Standard Model, but a hint of a possible discrepancy remained, as shown by the red and green contours in figure 1 (Burdin and DØ 2011, CDF 2010). LHCb has now accumulated the largest sample of B[s] → J/Ψ φ decays in the world, over 8000 signal candidates with very high purity (figure 2). The resulting constraint is shown as the blue contour in figure 1 (LHCb 2011a). It is much more precise than the preceding measurements, with one of the two solutions being in good agreement with the Standard Model expectation – the hint of a discrepancy is not confirmed. This result also gives the first significant direct measurement of ΔΓ[s], 0.123 ± 0.029 ± 0.008 ps^–1, where the first uncertainty is statistical and the second systematic. Another related analysis presented by LHCb uses a different decay mode, B[s] → J/Ψ f[0], which should measure the same phase. Although the statistics are lower, the final state is CP-odd in this case, so the analysis is simpler (LHCb 2011b). It gives a result consistent with B[s] → J/Ψ φ, and the preliminary combined result from LHCb is φ[s] = 0.03 ± 0.16 ± 0.07 rad (LHCb 2011c). This result is statistically limited, but as data continue to pour in from the LHC there are good prospects for substantial further improvement. So, although LHCb has now ruled out a gross effect from new physics, the experiment should be able to measure the true value even if it is as small as predicted in the Standard Model – and test any subtle effects from new physics.
Mathematical Induction with an inequality
March 16th 2010, 10:51 PM #1 Mar 2010
Hello, I had this on my exam today, I was positive that I'd be able to deal with inequalities but for some reason this one threw me off. I had to prove $\frac{1}{n!} \leq \frac{1}{2^{n-1}}$ for all positive integers, i.e. n=0,1,2,3.. anyone mind throwing some hints?
0! is defined as 1. $\frac{1}{1}<\frac{1}{2^{-1}}$ as $\frac{1}{2^{-1}}=2$. By inspection, the factorial side always increases faster, since we are multiplying by values that grow ever larger than 2.
Zero is not a positive integer.
It's straightforward. I will show you the inductive step. We assume $\frac{1}{n!} \leq \frac{1}{2^n-1}$, and wish to prove $\frac{1}{(n+1)!} \leq \frac{1}{2^{n+1}-1}$. By implication it is expressed as $\frac{1}{n!} \leq \frac{1}{2^n-1} \Rightarrow \frac{1}{(n+1)!} \leq \frac{1}{2^{n+1}-1}$. So if we can start from $\frac{1}{n!} \leq \frac{1}{2^n-1}$ and end our proof with $\frac{1}{(n+1)!} \leq \frac{1}{2^{n+1}-1}$, we are home free. Let's begin with $\frac{1}{n!} \leq \frac{1}{2^n-1}$. Multiplying through with $\frac{1}{n+1}$, we obtain $\frac{1}{(n+1)n!} \leq \frac{1}{(2^n-1)(n+1)}$. Observe $\frac{1}{(n+1)!} \leq \frac{1}{n2^n+2^n-n-1}$, so $\frac{1}{(n+1)!} \leq \frac{1}{2^n(n+1)-n-1}= \frac{1}{2\cdot 2^n-n-1}\leq \frac{1}{2^{n+1}-2}< \frac{1}{2^{n+1}-1}$. So we succeeded.
via induction.. $\frac{1}{n!}\le \frac{1}{2^{n-1}}$ ? Does the above cause $\frac{1}{(n+1)!}\le \frac{1}{2^n}$ ?
$\frac{1}{(n+1)!}\le \frac{1}{2^n}\ \Rightarrow\ \left(\frac{1}{n+1}\right)\frac{1}{n!}\le \left(\frac{1}{2}\right)\frac{1}{2^{n-1}}$. When $n\ge 1$, $n+1\ge 2$. Hence the inequality is true, as $\left(\frac{1}{n+1}\right)\frac{1}{n!}\le \left(\frac{1}{2}\right)\frac{1}{2^{n-1}}$ if $\frac{1}{n!}\le \frac{1}{2^{n-1}}$, and the truth of the inequality is verified for the first n.
Sorry, I wasn't paying attention. Since it's $\frac{1}{2^{n-1}}$, it should be easier. Let's see:
$\begin{aligned}\frac{1}{(n+1)!}&=\frac{1}{n!}\cdot\frac{1}{n+1}\leq \frac{1}{2^{n-1}}\cdot \frac{1}{n+1}\\&\leq \frac{1}{2^{n-1}}\cdot \frac{1}{2}\\&= \frac{1}{2^n}\end{aligned}$
thanks guys, I actually tried the problem after the test before anyone was able to help me out and actually got it pretty much just as all of you did, don't you hate it when that happens?
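As a quick numerical sanity check (my own addition, not part of the thread, and not a substitute for the induction), the inequality 1/n! ≤ 1/2^(n-1) is equivalent to n! ≥ 2^(n-1), which can be tested with exact integer arithmetic:

```python
import math

# Check 1/n! <= 1/2^(n-1), i.e. n! >= 2^(n-1), for n = 1..60.
# Comparing integers keeps the test exact (no floating-point issues).
def holds(n):
    return math.factorial(n) >= 2 ** (n - 1)

print(all(holds(n) for n in range(1, 61)))  # -> True
```

Equality holds at n = 1 and n = 2, after which the factorial side pulls ahead, matching the "multiplying by values larger than 2" observation above.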
Number Puzzle Based on Multiplication Ideas

Date: 08/07/2005 at 11:29:58
From: Kate
Subject: Find the smallest natural number n......

Find the smallest natural number n which has the following properties:

* Its decimal representation has 6 as the last digit
* If the last digit 6 is erased and placed in front of the remaining digits, the resulting number is 4 times as large as the original number n.

I know that the natural number n has ...46 as last 2 digits. When the digit 6 is shifted to front, then it will be 6...4. Also, before shifting the digit 6, the most front digit must be less than 6. That's what I have realized and tested, but I still can't work it out.

Date: 08/08/2005 at 00:15:29
From: Doctor Greenie
Subject: Re: Find the smallest natural number n......

Hi, Kate --

You are off to a great start when you noted that the last 2 digits of the number must be "46". How did you determine that? Presumably, you thought something like

      .....6
    x      4
    --------
    6......

Then, according to the statement of the problem, the next-to-last digit in the number must be "4". So now you had

      ....46
    x      4
    --------
    6.....4

But we can continue using this same reasoning over and over until we find a solution to the problem. Because we know the last two digits of the original number are "46", we can perform the multiplication by 4 of these two final digits to determine that the second digit from the right in the product is "8":

      ....46
    x      4
    --------
    6....84

Now we know the last two digits of the product; and according to the rules of the problem, the "8" is the third digit from the right in the original number.
Then, by performing the multiplication by 4 of the digits we now know of the original number, the third digit from the right in the product is uniquely determined:

      ...846
    x      4
    --------
    6...384

And we just continue this process--multiplying the last digit we found in the original number by 4 to find the next digit to the left in the product, and copying that digit as the next digit to the left in the original number:

      ..3846
    x      4
    --------
    6..5384

      .53846
    x      4
    --------
    6.15384

      153846
    x      4
    --------
      615384

We now have a "6" as the first digit of the product; this means we are done. The original number is 153846; multiplied by 4 the product is 615384.

We can also work the problem in the opposite direction by a similar process. You noted in your original message that the first digit of the original number must be "less than 6". In fact, if the original number multiplied by 4 is to have "6" as the first digit, then the first digit of the original number must be "1". So we have

      1.....
    x      4
    --------
    6......

But now, according to the rules for the problem, the "1" which is the first digit of the original number must be the second digit of the product:

      1.....
    x      4
    --------
    61.....

Now, by dividing the portion of the product we know by 4, we can see 61/4 = 15 (plus a remainder), so the second digit of the original number must be "5"--and that means the third digit of the product is "5":

      15....
    x      4
    --------
    615....

Again, we can repeat this process until we find the solution to the problem:

615/4 = 153 (plus a remainder) so next digit is "3":

      153...
    x      4
    --------
    6153..

6153/4 = 1538 (plus a remainder) so next digit is "8":

      1538..
    x      4
    --------
    61538.

61538/4 = 15384 (plus a remainder) so next digit is "4":

      15384.
    x      4
    --------
    615384

615384/4 = 153846 (with no remainder)

Since there is no remainder, we are done. Again we have (of course) found the same answer: the original number is 153846, and the product when multiplied by 4 is 615384.

Please write back if you have questions on any of this.

- Doctor Greenie, The Math Forum

Date: 08/08/2005 at 06:05:07
From: Kate
Subject: Thank you (Find the smallest natural number n......)

Doctor Greenie, Thanks for your help in solving this question. I appreciate it, really!
This is really a good place to learn math! I love it!
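A brute-force search (my own addition, not part of the original exchange) confirms that 153846 really is the smallest number with this property:

```python
# Find the smallest natural number ending in 6 such that moving the last
# digit 6 to the front yields exactly 4 times the number.
def rotate_last_to_front(n):
    # e.g. 153846 -> 615384
    s = str(n)
    return int(s[-1] + s[:-1])

def smallest_solution():
    n = 6
    while rotate_last_to_front(n) != 4 * n:
        n += 10  # only numbers ending in 6 can qualify
    return n

print(smallest_solution())  # -> 153846
```

Stepping by 10 keeps the last digit fixed at 6, so the loop tests exactly the candidates the puzzle allows.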
Bisimulation can't be traced
Results 11 - 20 of 150

- Science of Computer Programming, 1995. Cited by 41 (2 self).
"A transformational methodology is described for simultaneously designing algorithms and developing programs. The methodology makes use of three transformational tools - dominated convergence, finite differencing, and real-time simulation of a set machine on a RAM. We illustrate the methodology to design a new O(mn + n^2)-time algorithm for deciding when n-state, m-transition processes are ready similar, which is a substantial improvement on the Θ(mn^6) algorithm presented in [6]. The methodology is also used to derive a program whose performance, we believe, is competitive with the most efficient hand-crafted implementation of our algorithm. Ready simulation is the finest fully abstract notion of process equivalence in the CCS setting."

- In Proceedings 6th Annual Symposium on Logic in Computer Science, 1991. Cited by 39 (1 self).
Frits W. Vaandrager, MIT Laboratory for Computer Science, Cambridge, MA 02139, USA. "The relation between process algebra and I/O automata models is investigated in a general setting of structured operational semantics (SOS). For a series of (approximations of) key properties of I/O automata, syntactic constraints on inference rules are proposed which guarantee these properties. A first result is that, in a setting without assumptions about actions, the well-known trace and failure preorders are substitutive for any set of rules in a format due to De Simone. Next additional constraints are imposed which capture the notion of internal actions and guarantee substitutivity of the testing preorders of De Nicola and Hennessy, and also of a preorder related to the failure semantics with fair abstraction of unstable divergence of Bergstra, Klop and Olderog. Subsequent constraints guarantee that input actions are always enabled and output actions cannot be bl..."

- The Journal of Logic and Algebraic Programming, 2004.

- 1997. Cited by 36 (0 self).
"In this paper we describe the promoted tyft/tyxt rule format for defining higher-order languages. The rule format is a generalization of Groote and Vaandrager's tyft/tyxt format in which terms are allowed as labels on transitions in rules. We prove that bisimulation is a congruence for any language defined in promoted tyft/tyxt format and demonstrate the usefulness of the rule format by presenting promoted tyft/tyxt definitions for the lazy λ-calculus, CHOCS and the π-calculus."

- Theoretical Computer Science, 1994. Cited by 36 (4 self).
"We prove a general conservative extension theorem for transition system based process theories with easy-to-check and reasonable conditions. The core of this result is another general theorem which gives sufficient conditions for a system of operational rules and an extension of it in order to ensure conservativity, that is, provable transitions from an original term in the extension are the same as in the original system. As a simple corollary of the conservative extension theorem we prove a completeness theorem. We also prove a general theorem giving sufficient conditions to reduce the question of ground confluence modulo some equations for a large term rewriting system associated with an equational process theory to a small term rewriting system under the condition that the large system is a conservative extension of the small one. We provide many applications to show that our results are useful. The applications include (but are not limited to) various real and discrete time settings in ACP, ATP, and CCS and the notions ..."

- 1998. Cited by 33 (25 self).
"In a similar way as 2-categories can be regarded as a special case of double categories, rewriting logic (in the unconditional case) can be embedded into the more general tile logic, where also side-effects and rewriting synchronization are considered. Since rewriting logic is the semantic basis of several language implementation efforts, it is useful to map tile logic back into rewriting logic in a conservative way, to obtain executable specifications of tile systems. We extend the results of earlier work by two of the authors, focusing on some interesting cases where the mathematical structures representing configurations (i.e., states) and effects (i.e., observable actions) are very similar, in the sense that they have in common some auxiliary structure (e.g., for tupling, projecting, etc.). In particular, we give in full detail the descriptions of two such cases where (net) process-like and usual term structures are employed. Corresponding to these two cases, we introduce two ca..."

- Information and Computation, 1998. Cited by 32 (5 self).
"We set up a formal framework to describe transition system specifications in the style of Plotkin. This framework has the power to express many-sortedness, general binding mechanisms and substitutions, among other notions such as negative hypotheses and unary predicates on terms. The framework is used to present a conservativity format in operational semantics, which states sufficient criteria to ensure that the extension of a transition system specification with new transition rules does not affect the semantics of the original terms."

- Theoretical Computer Science, 2001. Cited by 30 (11 self).
"The computational engine of the verification tool UPPAAL consists of a collection of efficient algorithms for the analysis of reachability properties of systems. Model-checking of properties other than plain reachability ones may currently be carried out in such a tool as follows. Given a property to model-check, the user must provide a test automaton T for it. This test automaton must be such that the original system S has the property expressed by precisely when none of the distinguished reject states of T can be reached in the parallel composition of S with T. This raises the question of which properties may be analyzed by UPPAAL in such a way. This paper gives an answer to this question by providing a complete characterization of the class of properties for which model-checking can be reduced to reachability testing in the sense outlined above. This result is obtained as a corollary of a stronger statement pertaining to the compositionality of the property language considered in this study. In particular, it is shown that our language is the least expressive compositional language that can express a simple safety property stating that no reject state can ever be reached. Finally, the property language characterizing the power of reachability testing is used to provide a definition of characteristic properties with respect to a timed version of the ready simulation preorder, for nodes of -free, deterministic timed automata."

- 2001. Cited by 29 (8 self).
"In this paper we present history-dependent automata (HD-automata in brief). They are an extension of ordinary automata that overcomes their limitations in dealing with history-dependent formalisms. In a history-dependent formalism the actions that a system can perform carry information generated in the past history of the system. The most interesting example is the π-calculus: channel names can be created by some actions and they can then be referenced by successive actions. Other examples are CCS with localities and the history-preserving semantics of Petri nets. ..."

- Information and Computation, 2004. Cited by 26 (3 self).
"We present Persistent Turing Machines (PTMs), a new way of interpreting Turing-machine computation, one that is both interactive and persistent. A PTM repeatedly receives an input token from the environment, computes for a while, and then outputs the result. Moreover, it can "remember" its previous state (work-tape contents) upon commencing a new computation. We show that the class of PTMs is isomorphic to a very general class of effective transition systems, thereby allowing one to view PTMs as transition systems "in disguise." The persistent stream language (PSL) of a PTM is a coinductively defined set of interaction streams: infinite sequences of pairs of the form (w_i, w_o), recording, for each interaction with the environment, the input token received by the PTM and the corresponding output token. We define an infinite hierarchy of successively finer equivalences for PTMs over finite interaction-stream prefixes and show that the limit of this hierarchy does not coincide with PSL-equivalence. The presence of this "gap" can be attributed to the fact that the transition systems corresponding to PTM computations naturally exhibit unbounded nondeterminism. We also consider amnesic PTMs, where each new computation begins with a blank work tape, and a corresponding notion of equivalence based on amnesic stream languages (ASLs). We show that the class of ASLs is strictly contained in the class of PSLs. Amnesic stream languages are representative of the classical view of Turing-machine computation. One may consequently conclude that, in a stream-based setting, the extension of the Turing-machine model with persistence is a nontrivial one, and provides a formal foundation for reasoning about programming concepts such as objects with static fields. ..."
How to integrate cos(x)e^(-x^2)?
January 30th 2010, 03:10 AM #1 Dec 2009
I'd love to know how to integrate $\int \cos(x)e^{-x^2} \,dx$ and evaluate $\int_{-\infty}^{+\infty}\cos(x)e^{-x^2} \,dx$. I hope the latex comes out OK. Thanks for any ideas. I tried attacking the integral by doing it by parts, hoping for a bit of algebra in the style of $\sin(x)e^x$, but after two by parts, we end up with F = F if you know what I mean.
Last edited by Jester; January 30th 2010 at 06:49 AM. Reason: fixed latex
Here is what Wolfram thinks about this antiderivative: Wolfram Mathematica Online Integrator
Last edited by running-gag; January 30th 2010 at 05:03 AM. Reason: Complement added
Thanks very much for correcting the latex! I would like to know how to go about solving this integral, not just the 'final' answer. So if anyone could point me to a different technique than by parts, that would be great. I tried the u = x^2 substitution but it made things worse!!
As the Wolfram answer showed, it involves the error function, and so it cannot be expressed in terms of elementary functions. You can approximate it using one of the various rules like the Midpoint Rule or Simpson's Rule though...
Can we get anywhere with the definite integral between -inf and inf? I'd settle for that! :-) Is there some substitution involving imaginary numbers that would help?
You should write the whole question. Did the problem ask you to evaluate the value of the improper integral, or just to determine its convergence/divergence?
The questions are the two integrals (corrected for me) above. So far we know that the indefinite integral is restated as another integral via the error function. What about the definite integral? As we know the integral of e^-x^2 from -inf to inf, I'm hoping to transform my question into something similar via a substitution. It's going to be a complex substitution. But I'm not good enough to get further ... Any ideas for a substitution? PS The actual thing I'm trying to solve is another integral, but this will do for starters as it's simpler, and probably involves the same technique.
Thanks shawsend, that's excellent. e^-x^2 integrated is sqrt(pi). But how are you able to go from e^{-[(z-i/2)^2]} to that value in one step above? I thought it might be something like: e^-(x-a)^2 is just e^-x^2 shifted to the right, so it has the same integral with these limits. But there's an i in there which I'm sort of worrying about, because I don't know anything about complex analysis. If you could just help fill in this last query, that would make my weekend!
Well if it was just real numbers that would make sense, right? Like $\int_{-\infty}^{\infty} e^{-(x+1000)^2}dx$: I just let $u=x+1000$, then the limits are still at $\pm \infty$, so I get the same answer. Probably could use that same argument for any constant $x+k$ even if k is complex. But also if it's complex, like $\int_{-\infty}^{\infty} e^{-(z+ai)^2}dz$, then I can consider a "contour integral" over a rectangular contour which goes along the real axis, up $ai$, across, then down $ai$; the integrals over the vertical legs are zero, so the integral over $z+ai$ is the same as the integral over $z$ by the Residue Theorem.
Last edited by shawsend; January 30th 2010 at 02:45 PM.
Probably could use that same argument for any constant $x+k$ even if k is complex but also if it's complex like: $\int_{-\infty}^{\infty} e^{-(z+ai)^2}dz$ then I can consider a "contour integral" over a square contour which goes along the real axis, up $ai$ across then down $ai$ and the integral over the horizontal legs are zero so the integral over $z+ai$ is the same as the integral over $z$ by the Residue Theorem. Sorry for delay in reply. Had to sleep :-) I have had a brief look at the contour integration and the residue theorem. Thanks very much for the introduction! And I do understand your explanation. I will come back to this to fill in my knowledge for sure. Now I've got the true problem to do which is the integral of cos(2x)*normal density from -inf to inf. Should fall out in a similar way hopefully. The whole point of this is I want to verify a Monte Carlo approximation and have had a nice excursion along the way. January 30th 2010, 04:59 AM #2 MHF Contributor Nov 2008 January 30th 2010, 06:15 AM #3 Dec 2009 January 30th 2010, 06:29 AM #4 Senior Member Jan 2009 January 30th 2010, 06:56 AM #5 Dec 2009 January 30th 2010, 08:50 AM #6 January 30th 2010, 09:18 AM #7 Dec 2009 January 30th 2010, 12:10 PM #8 Super Member Aug 2008 January 30th 2010, 12:32 PM #9 Dec 2009 January 30th 2010, 12:50 PM #10 Super Member Aug 2008 January 31st 2010, 12:20 AM #11 Dec 2009
Wolfram Demonstrations Project

Bouncing a Superball

A highly elastic superball can show some surprising behavior. When thrown down between two vertical planes, it will, in many circumstances, bounce back to near its initial location after three bounces. This Demonstration lets you control gravity (on or off), the initial velocity (by moving the arrow), the initial spin, the coefficient of normal restitution (elasticity; 1 denotes perfect elasticity), and the friction coefficient of the walls (1 means that there is no slippage when the ball strikes a wall). The color of the text inside the ball indicates the direction of spin (yellow denotes zero spin). You can experiment with either zero gravity or full gravity (980 cm/s²).

Snapshot 1: the ball bounces up and out of the container when the settings are realistic: gravity is Earth's gravity, elasticity is the superball elasticity of 0.85, and there is no slippage along the wall

Snapshot 2: with zero gravity, a ball with only 0.65 elasticity can still end up higher than its starting point

Snapshot 3: uses the same settings as Snapshot 1, but with an initial spin, which affects the motion

The way that the parameters affect what happens at each bounce is described in detail in [1]. The main idea is that the laws of conservation of angular momentum and energy give equations that yield, at each bounce, new values of spin and velocity from the old values, with suitable modifications when either elasticity or friction is less than its ideal value. Between bounces the ball continues on its path, with a vertical acceleration if gravity is present. At each bounce, the state of the ball is described by its angular velocity together with the velocity of its center; the bounce law also involves the elasticity, the tangential restitution (which is determined by the friction coefficient of the walls), and the ball's moment of inertia. Comparing the states just before and just after the bounce, the central transformation between them is elegantly given in matrix form.

[1] B. T. Hefner, "The Kinematics of a Superball Bouncing between Two Vertical Surfaces," American Journal of Physics, 72 (7), 2004 pp. 875–883.

(Macalester College)
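The effect of the coefficient of normal restitution alone can be illustrated with a much-simplified sketch of our own: a ball bouncing vertically, with no spin and no walls, using the standard rule that each impact rescales the speed by the restitution coefficient (so each rebound height is rescaled by its square). This toy model is not the Demonstration's full model.

```python
def rebound_heights(h0, e, n):
    """First n rebound heights of a ball dropped from height h0 when each
    impact rescales the speed by e, so each height is rescaled by e**2."""
    heights = []
    h = h0
    for _ in range(n):
        h *= e * e  # v' = -e*v at impact; height is proportional to v**2
        heights.append(h)
    return heights

# With the superball value e = 0.85, each bounce keeps about 72% of the height.
print(rebound_heights(100.0, 0.85, 3))
```

With slippage and spin included, as in the Demonstration, energy also moves between translation and rotation, which is what makes the full behavior much richer than this sketch.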
How to normalize a probability distribution

November 2nd 2012, 06:27 PM #1 (Member since Dec 2010)

This is just a straightforward question that I can't find an answer to anywhere on the internet. I keep seeing examples of "normalizing a vector" but I don't think that's what I want. I have this probability distribution:

{1/8, 1/16, 1/32}

And I need to normalize it, which I guess means making all of the members of the distribution sum to one. I don't know why, but it's apparently important. This isn't a homework question or anything, but I do need to know how to normalize. From just moving numbers around, I can see that adding them all together and then dividing each member by the total sum appears to do the trick. Is this correct? Is this what "normalizing" is?

{1/8, 1/16, 1/32} --> {(1/8)/(1/8+1/16+1/32), (1/16)/(1/8+1/16+1/32), (1/32)/(1/8+1/16+1/32)} --> {4/7, 2/7, 1/7}

4/7 + 2/7 + 1/7 = 1. Any help is appreciated, thanks.

November 2nd 2012, 09:22 PM #2 (Junior Member since Oct 2012)

Re: How to normalize a probability distribution

Yes, that is correct. That is the general method of normalizing a probability distribution: you add all the values (or integrate over the relevant interval) and then divide each value (or the distribution function) by that sum (or integral).
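The recipe in the reply — add everything up, then divide each member by the total — takes only a few lines; a sketch in Python (the function name `normalize` is ours):

```python
from fractions import Fraction

def normalize(weights):
    """Scale a list of nonnegative weights so that they sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

dist = normalize([Fraction(1, 8), Fraction(1, 16), Fraction(1, 32)])
print(dist)       # [Fraction(4, 7), Fraction(2, 7), Fraction(1, 7)]
print(sum(dist))  # 1
```

Using exact `Fraction` arithmetic reproduces the 4/7, 2/7, 1/7 from the post; with floats the result would only sum to 1 up to rounding error.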
A nonreligious statement

Through my logs, I came across a forum where people have pointed to a post on this blog. They then veer off into saying things about religion. I suspect this may be due to the title of this blog. I just want to state that "God Plays Dice" has nothing to do with the Judeo-Christian-Islamic-etc. deity. It is a reference to the following quote of Einstein, in a letter to Max Born: "Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory produces a good deal but hardly brings us closer to the secrets of the Old One. I am at any rate convinced that He does not play dice." (I'm copying this out of Gino Segre's Faust in Copenhagen; it's originally from Einstein's letter to Born, December 4, 1926, which is reprinted in The Born-Einstein Letters.) The "Old One" to whom Einstein is referring here was, as far as we know, not what is usually meant by "God"; I suspect that this is why the translator (Irene Born) chose this translation, although I don't know what Einstein said in the original German. To be totally honest, I don't know if the original was even in German. The purpose of the title is that I feel that probability is an important tool for understanding the world, which Einstein may have been a bit skeptical about, at least in the case of quantum mechanics. And there's something of a tradition in the titling of math blogs of taking sayings of well-known mathematicians and "replying" to them. (By "tradition" I mean that The Unapologetic Mathematician also does it, in response to Hardy's A Mathematician's Apology.) Also, for some reason I had thought it was Bohr, not Born, that he wrote this to. I suspect this is because I've heard more things about Bohr than Born, and they sound similar. I suspect the people at the forum in question won't read this, though. But making this post makes me feel like I've replied to them.
edited, 5:56 pm: I was wondering if there were any blogs whose titles riff on the quote that "A mathematician is a device for turning coffee into theorems" (usually attributed to Erdos, but supposedly actually due to Renyi). I found Tales from an English Coffee Drinker. The quote from Goethe, "Mathematicians are like Frenchmen: whatever you say to them they translate into their own language and forthwith it is something entirely different", also would be good as a source for a blog title.

19 comments:

This comment has been removed by the author.

I take it that the title of a math blog that almost but didn't quite come into being, Coffee and Mathematics (which Charles Siegel at Rigorous Trivialities brought to our attention), was intended to be just such a riff on Erdos's quip.

"It is a reference to the following quote of Einstein, in a letter to Max Born" — I was mildly surprised to read the above statement in your post, for I had always assumed that the idea for your blog title was derived from Stephen Hawking's famous quote: "God not only plays dice but... He also sometimes throws the dice where they cannot be seen." My belief in that assumption was also strengthened by the fact that Hawking's mother's name was Isobel! I guess I was wrong. I don't know for sure, but I can only assume that Hawking's quote follows from Einstein's.

Oh, that's definitely true! Hawking's statement was certainly made in response to that famous quote of Einstein.

All this of course goes on to show that you are in the same league as Hawking! :)

Ok, annoyingly long question earlier. Simple version: algebraic manipulation, necessary skill or vestige of a bygone era before cheap computational power? Sorry for all my absurd thoughts, but since I'm new to all this and my friends didn't study such things, I have no real concept of what it means to be a math student... what skills I should focus on acquiring and what can be safely "ignored".
If I may answer your question in brief (to the best of my ability), I will just say that traditional skills associated with algebraic manipulation are these days certainly getting obsolete, so to speak, especially with the proliferation of powerful math software. However, the neglect in development of some of those basic algebraic skills may lead to a situation wherein a student's mathematical ability may be somewhat "diminished". To take a simpler example, it may seem almost useless to learn to add/subtract numbers, given that calculators can deal with such mundane computation almost instantly, but not knowing how to add/subtract numbers can definitely deprive a student of the opportunity to learn some of the fundamental mathematical ideas behind such operations. Coming back to your question, a lot of times the ability to "finish" a mathematical proof requires one to have a thorough knowledge of some of the basic techniques in "algebraic manipulation". What is more, some of those basic algebraic "skills" can be improved upon or generalized to yield even more useful techniques. This is something we do in mathematics all the time. I am sure there is a lot that others on this blog can add to what I have already said.

i don't know about what 'the old one thinks' but i know in the true religion the Mets lead the Phillies by a 1/2 game.

While we're on the subject, my favorite quote on mathematics and religion is by Bertrand Russell: "If a religion is defined to be a system of ideas that contains unprovable statements, then Gödel taught us that mathematics is not only a religion, it is the only religion that can prove itself to be one."

I've always been rather fond of the story about G. H. Hardy and Ramanujan and the number 1729, and so I considered naming my blog "Uninteresting Numbers".

I believe Einstein's letter was in German: http://de.wikipedia.org/wiki/Gott_w%C3%BCrfelt_nicht

It's not a blog, but Brainfreeze Puzzles has the tagline "we turn coffee into puzzles".
By the by, if you think deeply about the relationships that are implied by a lot of algebraic manipulation [at least of the multivariable kind], you can glean a lot of geometric insight. That we spend our formative years grinding away at small problems of algebra is so that we can not worry about our execution when we tackle bigger problems. Some people, though, never stop grinding the small problems, and they grind the glass of the lens to a fine, wispy dust.

"By the by, if you think deeply about the relationships that are implied by a lot of algebraic manipulation ... they grind the glass of the lens to a fine, wispy dust." Doesn't that imply that a computer could be used to check for violations of algebraic rules, freeing one of such pedestrian concerns? Really, is this any different than spell check? Or grammar check? A little red line appears under the violation. If you wish to ignore it for your purposes, so be it.

"... God does play dice with the universe. All the evidence points to him being an inveterate gambler, who throws the dice on every possible occasion." (-- Stephen Hawking)

"Einstein was very unhappy about this apparent randomness in nature. His views were summed up in his famous phrase, 'God does not play dice'. He seemed to have felt that the uncertainty was only provisional: but that there was an underlying reality, in which particles would have well defined positions and speeds, and would evolve according to deterministic laws, in the spirit of Laplace. This reality might be known to God, but the quantum nature of light would prevent us seeing it, except through a glass darkly. Einstein's view was what would now be called a hidden variable theory.
Hidden variable theories might seem to be the most obvious way to incorporate the Uncertainty Principle into physics. They form the basis of the mental picture of the universe held by many scientists, and almost all philosophers of science. But these hidden variable theories are wrong." (-- Stephen Hawking)

The exchange obviously took place in German - how else would two native German speakers, at that time both professors in German universities, communicate, after all... (most exchanges about quantum mechanics were conducted in German at that time, for sure) Actually, perhaps Einstein intended a pun: a piece in the game of backgammon is called "Stein" in German; using the indefinite article, "ein Stein". But Einstein refused to believe he's being moved according to a dice roll :)

I'm glad you've given the background to the title in this post, because I've often wondered about its genesis and meaning. Though any regular reader can see that you're not ever writing about religion, I think it would be cool to put that info somewhere in your "about" section, where I had originally looked to find an explanation.

Hi, just came across this posting when playing with the backlinks feature on blogger. "Tales from an English Coffee Drinker" is my blog. The source for the blog title was actually the novel "Confessions of an English Opium Eater" by Thomas De Quincey. I went with tales instead of confessions as I didn't really feel like I was confessing anything!

I've just started my own blog and I'm really happy to find articles like this.
Stirling's approximation

$\Gamma(x)=\sqrt{2\pi}\,x^{x-\frac{1}{2}}e^{-x+\mu(x)}$ (1)

where

$\mu(x)=\sum_{n=0}^{\infty}\left(x+n+\frac{1}{2}\right)\ln\left(1+\frac{1}{x+n}\right)-1=\frac{\theta}{12x}$

with $0<\theta<1$. Taking $x=n$ and multiplying by $n$, we have

$n!=\sqrt{2\pi}\,n^{n+\frac{1}{2}}e^{-n+\frac{\theta}{12n}}$ (2)

Taking the approximation for large $n$ gives us Stirling's formula. There is also a big-O notation version of Stirling's approximation:

$n!=\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\left(1+\mathcal{O}\left(\frac{1}{n}\right)\right)$ (3)

We can prove this equality starting from (2). It is clear that the big-O portion of (3) must come from the factor $e^{\frac{\theta}{12n}}$, that is, from $e$ raised to a vanishing exponent. Note that if the exponent varies as $\frac{1}{n}$, we have $e^{1/n}=1+\mathcal{O}\left(\frac{1}{n}\right)$ as $n\longrightarrow\infty$. We can then (almost) directly plug this in to (2) to get (3) (note that the factor of $12$ gets absorbed by the big-O notation).

Synonyms: Stirling's formula, Stirling's approximation formula

Added: 2001-11-17 - 18:07
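Since (2) says $n!$ exceeds the leading-order approximation by a factor $e^{\theta/12n}$ with $0<\theta<1$, the relative error of (3) should stay below $\frac{1}{12n}$ — easy to check numerically (a quick sketch):

```python
import math

def stirling(n):
    """Leading-order Stirling approximation sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 50):
    rel_err = 1 - stirling(n) / math.factorial(n)
    print(n, rel_err, "bound:", 1 / (12 * n))  # error stays under the bound
```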
Optimal stopping in Levy models, for non-monotone discontinuous payoffs

Boyarchenko, Svetlana and Levendorskii, Sergei (2010): Optimal stopping in Levy models, for non-monotone discontinuous payoffs.

We give short proofs of general theorems about optimal entry and exit problems in Levy models, when payoff streams may have discontinuities and be non-monotone. As applications, we consider exit and entry problems in the theory of real options, and an entry problem with an embedded option to exit.

Item Type: MPRA Paper
Original Title: Optimal stopping in Levy models, for non-monotone discontinuous payoffs
Language: English
Keywords: optimal stopping, Levy processes, non-monotone discontinuous payoffs
Subjects: D - Microeconomics > D8 - Information, Knowledge, and Uncertainty > D81 - Criteria for Decision-Making under Risk and Uncertainty; C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C61 - Optimization Techniques; Programming Models; Dynamic Analysis
Item ID: 27999
Depositing User: Svetlana Boyarchenko
Date Deposited: 14. Jan 2011 01:40
Last Modified: 12. Feb 2013 21:32
URI: http://mpra.ub.uni-muenchen.de/id/eprint/27999
An example of a beautiful proof that would be accessible at the high school level?

The background of my question comes from an observation that what we teach in schools does not always reflect what we practice. Beauty is part of what drives mathematicians, but we rarely talk about beauty in the teaching of school mathematics. I'm trying to collect examples of good, accessible proofs that could be used in middle school or high school. Here are two that I have come across thus far:

(1) Pick's Theorem: The area, A, of a lattice polygon, with boundary points B and interior points I, is A = I + B/2 - 1. I'm actually not so interested in verifying the theorem (sometimes given as a middle school task) but in actually proving it. There are a few nice proofs floating around, like one given in "Proofs from the Book" which uses a clever application of Euler's formula. A very different, but also clever, proof, which Bjorn Poonen was kind enough to show to me, uses a double counting of angle measures, around each vertex and also around the boundary. Both of these proofs involve math that doesn't go much beyond the high school level, and they feel like real mathematics.

(2) Menelaus Theorem: If a line meets the sides BC, CA, and AB of a triangle in the points D, E, and F then (AE/EC)(CD/DB)(BF/FA) = 1. (converse also true) See: http://www.cut-the-knot.org/Generalization/Menelaus.shtml, also for the related Ceva's Theorem. Again, I'm not interested in the proof for verification purposes, but for a beautiful, enlightening proof. I came across such a proof by Grunbaum and Shepard in Mathematics Magazine. They use what they call the Area Principle, which compares the areas of triangles that share the same base (I would like to insert a figure here, but I do not know how. -- given triangles ABC and DBC and the point P that lies at the intersection of AD and BC, AP/PD = Area(ABC)/Area(DBC).)
This principle is great-- with it, you can knock out Menelaus, Ceva's, and a similar theorem involving pentagons. And it is not hard-- I think that an average high school student could follow it; and a clever student might be able to discover this principle themselves. Anyway, I'd be grateful for any more examples like these. I'd also be interested in people's judgements about what makes these proofs beautiful (if indeed they are-- is there a difference between a beautiful proof and a clever one?) but I don't know if that kind of discussion is appropriate for this forum.

Edit: I just want to be clear that in my question I'm really asking about proofs you'd consider to be beautiful, not just ones that are neat or accessible at the high school level. (not that the distinction is always so easy to make...)

Surely you know the books ''Math! Encounters with Undergraduates'', ''Math Talks for Undergraduates'' and ''The beauty of doing Mathematics'' by Serge Lang. They were written with your same intention. – Giuseppe Tortorella Sep 8 '11 at 11:15

I remember Lang giving a talk at Caltech supposedly aimed at undergraduates, but far more advanced. Lang called on a postdoc in the front row, apparently thinking he was a student, and the postdoc couldn't answer. At one point, he wrote out a complicated formula, and challenged Prof. Ramakrishnan, "Do you teach your students this?" "No." "So you see, Caltech is not better than anywhere [sic] else." After a bit more, he went back to the formula and corrected a sign. Ramakrishnan called out, "That, we teach." – Douglas Zare Sep 14 '11 at 0:38

Even the notion of proof may not be accessible at high school level – Fernando Muro Sep 15 '11 at 20:33

In the US, most high school mathematics classes are not based on proofs. Theorems are introduced without reference to proof. Only in a few limited situations are students asked to prove anything, such as in some geometry classes and some calculus classes.
Instead, high school math classes concentrate on introducing objects, their properties, and how to manipulate those objects. Finding neat accessible proofs to show them is reasonable, but this is very different from finding neat accessible math to show them. E.g., I think the Chaos Game is accessible, but few students can handle the proofs. – Douglas Zare Sep 17 '11 at 0:39

As far as I remember, the thing I liked the most in high school maths (age of 14) was the so-called Ruffini's rule: $(x-a)$ divides a polynomial $P(x)$ if and only if $P(a)=0$. It looked to me so incredibly easy and so full of consequences. I hope they still learn it with a proof. – Pietro Majer Sep 18 '11 at 20:12

44 Answers

Extending on Ralph's answer, there is a similar very neat proof for the formula for $Q_n:=1^2+2^2+\dots+n^2$. Write down numbers in an equilateral triangle as follows (row $k$ of the triangle consists of the number $k$ written $k$ times): Now, clearly the sum of the numbers in the triangle is $Q_n$. On the other hand, if you superimpose three such triangles rotated by $120^\circ$ each, then the sum of the numbers in each position equals $2n+1$. Therefore, you can double-count $3Q_n=\frac{n(n+1)}{2}(2n+1)$. $\square$ (I first heard this proof from János Pataki).

How to prove formally that all positions sum to $2n+1$? Easy induction ("moving down-left or down-right from the topmost number does not alter the sum, since one of the three summands increases and one decreases"). This is a discrete analogue of the Euclidean geometry theorem "given a point $P$ in an equilateral triangle $ABC$, the sum of its three distances from the sides is constant" (proof: sum the areas of $APB,BPC,CPA$), which you can mention as well.

How to generalize to sum of cubes? Same trick on a tetrahedron. (EDIT: there's some way to generalize it to higher dimensions, but unfortunately it's more complicated than this. See the comments below.)
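The double-counting identity at the end of this answer is also easy to confirm by direct computation — a quick sanity check of ours:

```python
def q(n):
    """Q_n = 1^2 + 2^2 + ... + n^2, summed directly."""
    return sum(k * k for k in range(1, n + 1))

# Three rotated triangles put the constant 2n+1 in each of the
# n(n+1)/2 positions, so 3*Q_n should equal (n(n+1)/2)*(2n+1).
for n in range(1, 30):
    assert 3 * q(n) == n * (n + 1) // 2 * (2 * n + 1)
print("identity verified for n = 1..29")
```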
If you wish to tell them something about "what is the fourth dimension (for a mathematician)", this is an excellent start.

@Federico: Careful. The sum of cubes does not correspond to a tetrahedron, but rather a pyramid (which does not have this nice symmetry). However, the slightly modified sum $$Q_N'=\sum_{n=1}^N n\cdot\frac{n(n+1)}{2}=\frac{1}{2}\sum_{n=1}^N n^3+\frac{1}{2}\sum_{n=1}^N n^2$$ will correspond to a tetrahedron. By using this symmetry we find $$Q_N' = \frac{1}{4}(3N+1)\left(\frac{N(N+1)(N+2)}{6}\right)$$ since each entry will be $3N+1$ and the tetrahedral numbers are $\binom{N+2}{3}=\frac{N(N+1)(N+2)}{6}$. From here we can deduce that $$\sum_{n=1}^N n^3=\frac{(N^2+N)^2}{4}.$$ – Eric Naslund Sep 16 '11 at 23:31

In short, you won't have a nice generalization of the solution for $n=2$ to higher dimensions. Such a generalization is not expected either since Faulhaber's Formula is not so simple. – Eric Naslund Sep 16 '11 at 23:39

@Eric Oh, you're right. :( What a pity, it would've been an even neater generalization. – Federico Poloni Sep 17 '11 at 9:57

Cool. Both the example and the follow up comments. – Manya Sep 19 '11 at 8:30

The theorem of "friends and strangers": the Ramsey number $R(3,3)=6$. Not only can the proof be understood by high-school students, a proof can be discovered by students at that level via something akin to the Socratic method. First students can establish the bound $R(3,3) > 5$ by 2-coloring the edges of $K_5$ (color the five sides of a pentagon red and the five diagonals blue): Then they can reason through that any 2-coloring of the edges of $K_6$ must contain a monochromatic triangle, and so $R(3,3)=6$: in every group of six, three must be friends or three must be strangers.

After this exercise, an inductive proof of the 2-color version of Ramsey's theorem is in reach. An added bonus here is that one quickly reaches the frontiers of mathematics: $R(5,5)$ is unknown! It can be a revelation to students that there is a frontier of mathematics.
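Both claims — some 2-coloring of $K_5$ avoids a monochromatic triangle, while no 2-coloring of $K_6$ does — are small enough to confirm by exhaustive search; a sketch (the function name is ours):

```python
from itertools import combinations, product

def forces_mono_triangle(n):
    """True iff every red/blue coloring of K_n's edges contains a
    monochromatic triangle (brute force over all 2**(n choose 2) colorings)."""
    edges = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(edges)):
        color = dict(zip(edges, bits))
        if not any(color[(a, b)] == color[(a, c)] == color[(b, c)]
                   for a, b, c in combinations(range(n), 3)):
            return False  # found a coloring with no monochromatic triangle
    return True

print(forces_mono_triangle(5))  # False: the pentagon/pentagram coloring escapes
print(forces_mono_triangle(6))  # True, confirming R(3,3) <= 6
```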
And then one can tell the Erdős story about $R(6,6)$, as recounted here. :-)

R(3,3)=6 sticks in my memory as the one time I managed to explain mathematics to a non-mathematical friend in the pub – Yemon Choi Sep 8 '11 at 21:57

This might be apocryphal, but I've read a story about a sociologist who was surprised to discover such patterns of friendship or non-friendship among his subjects, and pondered their deep psychological origins. If true, a wonderful argument for the need for math literacy. – Thierry Zell Sep 9 '11 at 1:11

From Jacob Fox's lecture notes: "In the 1950’s, a Hungarian sociologist S. Szalai studied friendship relationships between children. He observed that in any group of around 20 children, he was able to find four children who were mutual friends, or four children such that no two of them were friends. Before drawing any sociological conclusions, Szalai consulted three eminent mathematicians in Hungary at that time: Erdős, Turán and Sós. A brief discussion revealed that indeed this is a mathematical phenomenon rather than a sociological one." – Joseph O'Rourke Sep 9 '11 at 12:07

Re the Szalai story: $R(4,4) = 18$. – Joseph O'Rourke Sep 9 '11 at 13:36

I finally tracked down where I must have read about Szalai's story for the first time: N. Alon and M. Krivelevich's article on extremal combinatorics in the Princeton Companion. Also available at: cs.tau.ac.il/~krivelev/papers.html – Thierry Zell Sep 29 '11 at 17:30

Euler's Bridges of Konigsberg problem. You can give it to students for five minutes to play with, watch them get annoyed, and then offer them the classical simple and beautiful impossibility proof. I think a lot of high school students, and even bright middle school students, would be totally convinced.
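The impossibility argument itself is just a degree count: a walk crossing every bridge exactly once can tolerate at most two land masses touched by an odd number of bridges, while Königsberg has four. A sketch of that count (the vertex labels are ours):

```python
from collections import Counter

# The seven bridges: A is the island, B and C the two river banks,
# D the eastern land mass.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = sorted(v for v in degree if degree[v] % 2 == 1)
print(odd)  # ['A', 'B', 'C', 'D'] -- four odd vertices, so no walk exists
```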
The proof, by counting inversions, that you can't interchange the 14 and 15 in the 15 puzzle, just by sliding, is accessible to high-school students, introduces important ideas, and might be found beautiful.

Going by the parameters of the question, I don't see why the proof would necessarily need to be of a sophisticated theorem. I think Euclid's proof of the infinitude of primes is beautiful and definitely accessible to a high school audience. Having given the proof, one might reflect on some of its features that generalize to many other contexts, like proof by contradiction or the ability to use a clever construction to avoid infinite enumeration.

No! Do not use this proof as an occasion to talk about proof by contradiction. Euclid's proof of this proposition was not by contradiction. The conventional practice of rearranging it into a proof by contradiction not only adds an extra complication that serves no purpose, but also leads to confusions such as the belief that if you multiply the first n primes and add 1, the result is always prime. See the paper by me and Catherine Woodgold on this: "Prime Simplicity", Mathematical Intelligencer, autumn 2009, pages 44--52. – Michael Hardy Sep 15 '11 at 20:03

It IS by contradiction, just not the kind of contradiction we have in mind. – Mohamed Alaa El Behairy Sep 28 '11 at 1:35

To complete Michael Hardy's comment, Euclid's original proof proves the following statement: given any finite list of primes, we can extend the list by finding a prime not in the list. (Proof: multiply the primes in the list and add one; any prime factor of this new number is a prime not in the original list.) So it's a constructive way of taking a list of primes and producing another prime; we don't have to assume that the original list was "all the primes" (as in the contradiction proof) or that they were the first n primes.
– shreevatsa Dec 23 '11 at 4:42

The trefoil knot is non-trivial. Proof: It has a tricoloring:

+1 because it is so obviously technically incomprehensible to a high school student (at my high school) yet plausibly beautiful, and beautifully plausible. – roy smith Sep 16 '11

@Roy: I disagree with you. I don't think that this is incomprehensible to high school students. The notion of knot is pretty intuitive, and it's very easy to explain what a tricoloring is. Also, the Reidemeister moves are pretty easy things to explain (ok -- I'm not talking about the actual proof that they generate everything). The most difficult aspect of the argument is probably to convince a high school student that there is need for proof. Namely, isn't the trefoil obviously non-trivial!? For that, it might be good to show them some monster unknots first... – André Henriques Sep 16 '11 at 5:17

When I was shown the Reidemeister moves in school, several of my classmates and I made the objection, in essence, that it wasn't clear that they generated everything. Worse, since we didn't have topology to work with, we didn't really have a "real" definition to compare it with, so it felt to us that the real issues were being swept under the rug. – Daniel McLaury Jun 15 '12 at 18:49

Two of my favorites: (1) The parameterization of all primitive Pythagorean triples; (2) The formula for the $n$th Fibonacci number in terms of the golden ratio $\phi = \frac{1+\sqrt{5}}{2}$, with the corollary that $\displaystyle \lim_{n \longrightarrow \infty} F_n/F_{n-1} = \phi$.

Seeing the struggle of many students with standard trigonometry, I especially like the rational parametrization of $x^2+y^2=1$ (which is equivalent to listing all Pythagorean triples) by starting from $\sin^2\phi+\cos^2\phi=1$ and then using $$ \sin\phi=\frac{2t}{1+t^2}, \quad \cos\phi=\frac{1-t^2}{1+t^2}, \qquad t=\tan\frac{\phi}2.
$$
Note that the formulas are usually used in the context of integration of rational expressions in sine and cosine.

At the same time, a more general "geometric" argument (applicable to general quadratics), due to Bachet (1620), is still at school level. Namely, fix a single rational point on the curve, $(x_0,y_0)$ say, and consider the intersection points of the curve and straight lines $y-y_0=t(x-x_0)$ with rational slope $t$ passing through the point. A beauty here is the variety of different geometric and analytic methods for solving a classical arithmetic problem.

Since coordinate geometry was invented after 1620 I wonder how Bachet could have done that. – Franz Lemmermeyer Sep 8 '11 at 13:23

Franz, I will be definitely happier if you are a little bit more constructive in your criticism: you seem to be the right person to explain why the method is usually attributed to Bachet! – Wadim Zudilin Sep 10 '11 at 6:12

Cantor's diagonal argument. The warm-up could be an equally beautiful proof, namely that the rationals are countable.

I am a bit torn on this one. At least I believe before one would have to also prove, say, that the cardinality of the rationals is equal to that of the naturals (and/or related things). Because if not, the 'insight' that there are more reals than naturals might seem too 'obvious' to make a proof enlightening. – quid Sep 8 '11 at 10:51

But the enumeration proving that the rationals are countable is also almost as beautiful as the diagonal argument! Perhaps they could be combined. – J.C. Ottem Sep 8 '11 at 11:11

Yes, I agree; combined it could be very interesting. I should have made my agreement clearer in my first comment; sorry about that. – quid Sep 8 '11 at 11:19

The counting-rationals argument loses some appeal when you have to deal properly with skipping the non-reduced fractions.
The visual proof of course establishes that $\mathbb{N} \times \mathbb{N} \cong \mathbb{N}$ as sets, and the countability of $\mathbb{Q}$ follows by applying an unrelated lemma on cardinality of surjective images. Glossing over a lemma like that seems exactly like what a beautiful proof (especially an exemplary proof) should avoid. – Ryan Reich Sep 9 '11 at 21:09

There is a simple bijection between the positive rationals and positive integers shown by the fusc function. Let $f(n)$ be the number of ways of representing $n$ as a sum of powers of $2$ with at most $2$ copies of each power. $f(4)=3$ since $4 = 4,\; 2+2,\; 2+1+1$. The sequence $\{f(n)\}_{n=0} = \{1,1,2,1,3,2,3,1,4,...\}$. Over natural numbers $n$, $g(n) = f(n-1)/f(n)$ hits each positive rational precisely once. See also the Calkin-Wilf tree, and how Euclid's algorithm reduces relatively prime ordered pairs. The sequences of numerators and denominators in order are offset copies of $\{f(n)\}$. – Douglas Zare Sep 10 '11 at 21:33

The proof that $\sqrt{2}$ is irrational is a nice example of proof by contradiction.

@Manya My conception of beauty has changed since I first saw that proof. At that time, I did indeed consider it quite beautiful. Now, however, it's so embedded in me that it's hard to appreciate its beauty. – Quinn Culver Sep 8 '11 at 13:58

The Gale-Shapley stable marriage theorem, http://en.wikipedia.org/wiki/Stable_marriage_problem . The algorithm and its proof are very much accessible to school students. Despite its innocuous look, the algorithm is not easy at all to invent.

On a similar note, Hall's theorem: http://en.wikipedia.org/wiki/Hall%27s_marriage_theorem#Graph_theory . This looks like a recreational puzzle but actually is closer to university mathematics than everything done in high school.
Here is another combinatorial exercise which, properly presented, does not even look like mathematics: http://www.artofproblemsolving.com/Forum/viewtopic.php?p=279550#p279550 . The thing I don't like about it is that the standard "gotcha" proof (explained in the usual, informal way) requires a bit too much concentration to understand - some students might fail at it and take it as an example that mathematical proofs are something one either believes or not, rather than something one can check. Of course, one can formalize the proof, but this requires quite an amount of time in a high school class.

I'll point out that the "combinatorial exercise" is actually useful. Assume that we are given a sequence $s_1,\dots,s_n$ of symbols of arities $a_1,\dots,a_n\ge0$, respectively. It is easy to see that if we can arrange them into a well-formed term, then $\sum_i a_i=n-1$. The exercise says that, conversely, if this identity holds, then there exists a (unique) cyclic permutation of the string $s_1\dots s_n$ which is a valid term in Polish (prefix) notation. Among other things, this allows you to count the well-formed terms consisting of $n$ symbols. – Emil Jeřábek Sep 8 '11 at 12:29

The Halting Problem. The first time I saw this was my senior year in high school and it completely blew me away. All you need is a notion of what an algorithm is and very basic logic (enough to recognize that assuming $A$ and deriving $\neg A$ is a contradiction).

In a similar vein, Russell's Paradox. The problem here is that you need some basic set theory, so this is more for advanced high school students.

The first beautiful proof I saw in high school (it was beautiful at the time, but now seems too trivial) was the fact that for a geometric series $a, ax, ax^2, \dots$ the sum of the first $n$ terms is $a \cdot \frac{1-x^n}{1-x}$. I thought this was cool because of all the cancellation that seems to come out of nowhere.
The treatment here is nice.

I recommend Kelly's proof of the Sylvester-Gallai theorem (the original proof of Gallai was about 30 pages long, this one takes a few lines). The theorem and the proof can be read here.

@Emil: There are at least 3 points on $l$, at least two on one side of the perpendicular. Call the closer of these two $B$ and the farther of these two $C$. Now let $m$ denote the line $PC$; then the pair $(B,m)$ has a smaller distance than $(P,l)$. Contradiction. – GH from MO Sep 8 '11 at 12:08

Thanks, now I got it. I misread it first. – Emil Jeřábek Sep 8 '11 at 12:36

One of the keys to making a proof accessible to high school students (or just non-mathematicians) is to make the answer relevant. This gives a dual responsibility: to ensure that the theorem is motivated and that the proof is accessible. The proof of the infinitude of the primes has been mentioned already and is a fantastic example. You can lead students into it using the (almost trivial) proof that there is no largest integer.

Another example is the classification of the regular polyhedra. With good students and models you can even lead them to the proof that there are at most 6 regular polytopes in 4d (actually showing they all exist is a little harder).

Keeping with polyhedra, the Euler characteristic is also powerful. Start with balloons and get the students to draw lines freely so you get a tiling. Then get them to count faces, vertices and edges. David Eppstein collected 19 proofs to choose from, several of which would be perfect for non-mathematicians: http://www.ics.uci.edu/~eppstein/junkyard/euler/

As a final example (and to show that it does not have to be deep mathematics to motivate) you can consider the question of blocking a square on a chess board and filling the remainder with trominoes.
It starts with a puzzle you can get people to play with, and leads to a lovely induction proof: http://www.cut-the-knot.org/Curriculum/Games/TriggTromino.shtml Actually polyominoes are a fantastic source of many other fun, non-trivial but accessible proofs.

1) Many elementary binomial identities or identities with Fibonacci numbers have beautiful proofs. Let me only mention the matrix representation of Fibonacci numbers whose determinant gives Cassini's identity.

2) Another elementary problem is the following: Is it possible to cover a checkerboard from which one white and one black square have been removed with dominoes? To show that this is possible, run through the board in a cyclical way, and observe that on this path between a white and a black square there are an even number of squares. Since I don't know how to make figures, I indicate such a path for a 4x4 board: ((1,1),(1,2),(1,3),(1,4),(2,4),(2,3),(2,2),(3,2),(3,3),(3,4),(4,4),(4,3),(4,2),(4,1),(3,1),(2,1)).

I would also add the (checkerboard) proof that if you remove opposite corners from the 8x8 board, the resulting figure can't be tiled with dominoes; if presented without the board coloration it's not immediately obvious, and its proof provides a wonderful a-ha moment that should be easily accessible for high school students (if not earlier). These (along with the various graph-theory problems) also have the advantage of showing students that mathematics is about more than just numbers. – Steven Stadnicki Sep 8 '11 at 21:31

The formula $1 + 2 + ... + n = n(n+1)/2$ can be proved at middle school level: Assume first $n$ is even. Then there are $n/2$ pairs $(1,n), (2,n-1), \dots, (n/2, n/2+1)$ whose sum is always $n+1$. Thus the overall sum is $(n/2)(n+1)$. The case when $n$ is odd can be treated in the same manner.

Actually this proof would be more appropriate for primary school (age 10 or so). Smarter kids (like Gauss) figure it out themselves.
– GH from MO Sep 8 '11 at 12:04

I prefer the "visual proof" because you don't have to reason, you just "see" it: put 1 little square in the first row, two in the second, ..., $n$ in the last row, and you've obtained a figure whose area you can easily compute as (AreaOfBigSquare - AreaOfSquaresOnTheDiagonal)/2 + AreaOfSquaresOnTheDiagonal $= (n^2-n)/2 + n = n(n+1)/2$. Of course, to pass from $(n^2-n)/2+n$ to $n(n+1)/2$ the kid must have learned some "algebra". – Qfwfq Sep 8 '11 at 13:48

No, the best proof is the visual one where you draw a triangle with 1, 2, ..., n circles in each row, and another row below the last with $n+1$ circles. Anything in the smaller triangle can be specified by two "coordinates" in the last row, obtained by dropping down parallel to the sides. Thus, $\binom{n+1}{2}$ using the definition of "$n+1$ choose 2"; to get a formula for that number you could use the argument in Dan's comment. This one is nice because it is both visual and a bijective proof, rather than computational. – Ryan Reich Sep 8 '11 at 15:31

How about this: write the numbers from $1$ to $n$ in a row, then write them again in a row below, but backwards. Add up the columns to get $n+1$ each time, so $n(n+1)$ in total, because there are $n$ columns. Divide by $2$. – euklid345 Sep 9 '11 at 4:22

I suggest Euler's polyhedron formula with application to the Platonic solids. One first observes that instead of polyhedra one can consider graphs in the plane, counting the unbounded region as a face. One observes next that one can just as well allow the edges to be pieces of curves. Then one observes that the formula $F - E + V = 2$ is preserved if one edge or one vertex is added; thus the formula is proved by induction.
Then use the formula to prove that there are no regular polyhedra except the five Platonic solids as follows: faces can only be triangles, squares or pentagons, edges are always common to exactly two faces, and for the number of faces that meet in a vertex there are a number of possibilities that one checks. For each case, $E$ and $V$ can be expressed in terms of $F$, and the formula gives the possible values of $F$.

I've been collecting simple, often one-step, proofs. Some I judge beautiful - these are listed separately.

I suggest the proof Archimedes wanted on his tombstone and its relatives. I.e., since two solids with the same horizontal slice area at every height have the same volume, by Pythagoras the volume of a cylinder equals the sum of the volumes of an inscribed cone and an inscribed solid hemisphere. To generalize this, the volume generated by revolving a solid hemisphere around a planar axis in 4-space equals that generated by revolving a cylinder minus that generated by revolving a cone. Using the fact that the center of gravity of a cone is 1/4 the way up from the base, one obtains the volume of a 4-sphere as $\pi^2 R^4/2$.

A generalization of the first computation is that of the volume of a bicylinder (intersection of two perpendicular cylinders), since it is the difference of the volumes of a cube containing the bicylinder and a square-based double pyramid also inscribed in the cube. I find these beautiful, but of course that is subjective. I also like Euclid's argument for Pythagoras, and for constructing a regular pentagon, but they are hard to reproduce here briefly.

For someone in high school, I think it's good to prove that the sum of the interior angles of a triangle is $\pi$ if they don't know why.
Personally, I was never shown why this fact is true, and I feel that it's generally a bad idea to not know why something in math is true, especially when the answer is pretty. My favorite proof is to think about how the normal vector changes as you walk around the triangle -- it's nice because it generalizes to other shapes (which may not even be polygons).

This topological proof of the fundamental theorem of algebra is accessible to high school students, particularly those at the precalculus level. There are two major problems with this. First, while the winding number is intuitive, it takes effort to define it rigorously. Second, you also want to establish the basic property that the winding number doesn't change as you deform a curve without going over the origin, which again is difficult to establish rigorously without topology. Without these details, you might call this a hand-waving argument instead of a proof. It's good to give references to where these results will be established rigorously, and to give arguments for other results which are more complete.

Nevertheless, I like presenting this proof for several reasons. I think it's beautiful. Geometrically, what $x\mapsto x^n$ does to the complex plane is easy to understand, but many students have little intuition about what this map does, only what polynomials look like on the real line. So, this argument doesn't just say that the statement is true, it is illuminating. The fundamental theorem of algebra is also a result students encountered in algebra, but they usually don't know why it's called a theorem. This is also an opportunity to talk a little about what is studied in more advanced areas of mathematics. It can lead into discussions of topology or the difficulty of solving polynomials by radicals.

You should certainly look at the two books by Ross Honsberger, "Mathematical Gems I" and "Mathematical Gems II".
A favourite example of mine is the proof due to Conway that there are configurations of checkers below the half plane on an infinite board that allow you to move a checker 4 rows into the upper half plane, but not five rows. The negative result is an ingenious argument using nothing more than the quadratic formula, but provides a great example of how to apply mathematics in unexpected contexts.

Minkowski's Theorem (every convex region in the plane of area greater than 4 that's symmetric about the origin contains a lattice point other than (0,0)) is not at all obvious (are you sure you can't squeeze a sufficiently large "blob of irrational slope" in there?) but has a beautiful, simple, and surprising geometric proof.

Do you want to give a hint about the proof? – Manya Nov 19 '11 at 12:22

Draw the region $R$ in the plane. Cut the plane into 2x2 squares by cutting along the lines $x=2i$ and $y=2j$ for all integers $i,j$; each square contains some part of $R$ (possibly none). Stack the squares on top of each other. Since $R$ has area greater than 4, there exist two squares whose parts of $R$ overlap. Write down what this means algebraically, apply symmetry and convexity, and construct the nontrivial lattice point. – user6542 Nov 20 '11 at 11:14

Those are pretty nice, and can be done at a fairly low level (say, from 12 years old onwards):

• The proof of the formula "half base times height" for the area of a triangle, by first considering a right triangle and completing a rectangle, then considering an arbitrary triangle and breaking it in two along a height (two cases: inside (+) or outside (-)): it is a nice example of how mathematicians treat the general case by reduction to particular cases;

• Euclid's proof of the Pythagorean theorem using the previous formula, as in this animation.

I like the lovely theorem in 19th century Euclidean geometry as follows.
Let ABC be a triangle. Let D, E, F be points on BC, CA, AB respectively. Then the circumcircles of AFE, BDF, CDE meet at a point. I like this because the proof uses the property of the angles of cyclic quadrilaterals, and its converse. Also, if one wants to convince students of the necessity of proof, then one should start with a result which is surprising.

It is a good thing that this situation can be worked on for more implications. Let P, Q, R be the centres of the three circles just given. Then the triangle PQR is similar to the triangle ABC.

For all these reasons I think it is a pity that some of Euclidean geometry is not in university courses, or often school courses, in order to acquaint students with something important in our mathematical heritage. Should a student get a degree in maths without knowing why the angle in a semicircle is a right angle?

I'm in high school and I loved the proof of the Fermat-Torricelli point of a triangle.

There is a very elegant proof that there exists no continuous injection from the plane into the real line. The proof can basically be given by drawing a picture on the blackboard. Suppose there is such an injection $f$. Let $x$ and $y$ be distinct points in the plane and let $g_1$ and $g_2$ be paths from $x$ to $y$ such that $g_1(r_1)\neq g_2(r_2)$ for $r_1,r_2\in(0,1)$. Now this implies that $f\circ g_1((0,1))\cap f\circ g_2((0,1))=\emptyset$, contradicting the intermediate value theorem.

But what is the point of proving a theorem as intuitively obvious as this in the eyes of a high-schooler? – darij grinberg Sep 8 '11 at 10:18

Also, continuous functions in more than 1 dimension are usually not defined in high school... – darij grinberg Sep 8 '11 at 10:19

Morley's Theorem. Wikipedia's proof is completely elementary and only involves trigonometric identities and Euclidean plane geometry.
There is also a proof by Alain Connes, based on affine geometry techniques. Of course it is a bit more technical, but again it involves math that doesn't go much beyond the high school level, and could be appreciated by the most gifted students.

Thanks. Actually this example was given by Rota in a famous paper about the phenomenology of mathematical beauty as an example of a theorem that is surprising but not beautiful (in response to Hardy's claim that beauty arises from a feeling of surprise). Of course, it is open to debate... Thanks for the Connes reference, I didn't know about that. – Manya Sep 8 '11 at 11:40

See also Conway's proof (linked at the Wikipedia article). – Todd Trimble♦ Sep 8 '11 at 12:05

@Manya: Well, I guess it's a matter of taste. Personally, I do find this theorem beautiful. Thank you for the remark. – Francesco Polizzi Sep 8 '11 at 12:40

But the question was asking for a beautiful proof, not a beautiful theorem. – euklid345 Sep 9 '11 at 4:51

P.S. (@euklid) Good point. Rota actually claimed that neither the theorem nor any of the proofs of it (thus far?) are beautiful. – Manya Sep 19 '11 at 8:16

In the general game "Poset Chomp" the first player always has a winning strategy. The proof is a one-line strategy-stealing argument, hence nonconstructive. In fact, a winning strategy is unknown in most cases, which makes the result interesting and mysterious. For a good quick account see here.

@GH: I'm aware of the history of chomp and of simple posets and the strategy-stealing argument. Have you worked with middle school students who were not selected to be competitors in a math competition? What percentage do you think will understand and be impressed by a nonconstructive existence result about an abstract game they have not seen before?
When I tell members of the general public about mathematics, I try to relate it to concrete objects and situations I think they know beforehand. – Douglas Zare Sep 9 '11 at 0:08

Why not some elementary theorems of Euclidean geometry? As I recall, the more general and fundamental theorems were just taken as given in my schooling, but I think many of them can be given accessible and beautiful proofs. Here are some good ones:

1) The Pythagorean theorem. (Many lovely proofs.)

2) Parallelograms having congruent bases and heights have the same area. (Euclid's proof is pretty.)

3) Use 2 to derive that similar triangles have corresponding sides in common proportion.

4) Two distinct circles have at most 2 points of intersection.

5) Prove the formula for the volume of a pyramid without using calculus.

What I absolutely dislike about 4) is how it cements the common misconception that mathematics is about giving painstakingly difficult proofs to intuitively obvious statements. Part 3) is only slightly better in this aspect. The rest are pretty good. – darij grinberg Sep 9 '11 at 11:50

The arguments are not so painful. #4 merely involves some observations relating the center of a circle, isosceles triangles, and the fact that two distinct lines intersect at most once. In any case I disagree with the sentiment. Part of the mathematical way of thinking is resisting the urge to accept things just because they seem obvious at first, and always demanding that your knowledge be put on a firmer footing. I believe this is the essential "life lesson" students should take from mathematics. Sadly it is not being imparted in today's secondary schools much. – Monroe Eskew Sep 9 '11 at 19:00

I disagree.
In school, mathematical proofs are like castles built on sand - not only do most students never realize what they are for, but they often tend to be sloppy right up to flawed (not "flawed" in the sense of "informal", but flawed in the sense of arguments that wouldn't be accepted as a correct proof even in a published paper), and the idea that proofs can be interesting is totally missing (at best they are considered a necessary evil by students and teachers alike). Adding to this a "revelation" that mathematicians prove trivial things in complicated (for students, at least) ... – darij grinberg Sep 9 '11 at 21:04

... ways doesn't help. In reality, mathematics is maybe 1% about proving things that are intuitively obvious (even topology), and 99% about proving things that are either surprising or seem to be useful in proving surprising things. Skepticism is a good life lesson, but it is better taught by providing examples of false intuitively obvious assertions with counterexamples than by providing examples of correct intuitively obvious assertions with their seemingly redundant proofs. – darij grinberg Sep 9 '11 at 21:07
This remarkable book is a celebration of the state of mathematics at the end of the millennium. Produced under the auspices of the International Mathematical Union (IMU), the volume was born as part of the activities observing the World Mathematical Year 2000.

The volume consists of 30 articles written by some of the most influential mathematicians of our time. Authors of 15 contributions were recognized in various years by the IMU as recipients of the Fields Medal, from K. F. Roth (Fields Medalist, 1958) to W. T. Gowers (Fields Medalist, 1998). The articles offer valuable reflections about the amazing mathematical progress we have witnessed in this century and insightful speculations about the possible development of mathematics over the next century. Some articles formulate important problems, challenging future mathematicians. Others pay explicit homage to the famous set of Hilbert Problems posed one hundred years ago, giving enlightening commentary. Yet other papers offer a deeply personal perspective, allowing singular insight into the minds and hearts of people doing mathematics today.

Mathematics: Frontiers and Perspectives is a unique volume that pertains to a broad mathematical audience of various backgrounds and levels of interest. It offers readers true and unequaled insight into the wonderful world of mathematics at this important juncture: the turn of the millennium. The work is one of those rare volumes that can be browsed, and if you do simply browse through it, you get a wonderful sense of mathematics today. Yet it also can be intensely studied on a detailed technical level for gaining insight into some of the great problems on which mathematicians are currently working.

Editors Michael Atiyah and Peter Lax were winners of the famous Abel Prize awarded by The Norwegian Academy of Science and Letters for outstanding work in mathematics.
Individual members of mathematical societies of the IMU member countries can purchase this volume at the AMS member price when buying directly from the AMS.

Graduate students and research mathematicians; general mathematical audience, including historians.

"This book should be in the library of every working mathematician." -- European Mathematical Society Newsletter

"Many papers are ... broad in that they address the history of their subject and also make predictions about future developments. Many readers will be drawn to the general interest material ... Readers in search of controversy will find plenty ... especially intrigued by Arnold's elaborate schema in which the triple tetrahedron-octahedron-icosahedron corresponds to the triple reals-complexes-quaternions ... an excellent book." -- MAA Online

"This collection demonstrates well that mathematics is alive and vital." -- American Scientist

"You can read this book as if listening to a succession of high-powered old school friends who are passing through ... most tell you about the mathematics that animates them ... you get a good sense of what they do, what's difficult about it, and why it matters ... provocative remarks ... vigorous account of much of the Russian school of mathematics ... thought-provoking reflections about mathematical life and language ... The most pleasing feature of this handsome book is the emphasis on the unity of mathematics ... The connections many people here want to make between mathematics and physics, von Neumann algebras and knot theory, number theory and analysis, are not only fresh and vivid, but oddly coherent. They give a sense not only of mathematics undergoing one of its characteristic contractions around a few organising principles, but how productive this reorganisation can be."
-- The LMS Newsletter

"One hundred years ago, David Hilbert's famous list of 23 mathematical problems began to catalyze the collective efforts of the world's mathematicians toward a century's worth of new research and achievement. The International Mathematical Union commissioned the current volume to do the same for the century just beginning. Fully half the contributors here own a Fields Medal, mathematics' highest honor (and that does not even count Andrew Wiles). Obviously, simply by dint of the prestige and caliber of the authors, this volume deserves reader attention and a place on every library's shelves. The essays themselves vary from the entirely technical to the purely personal. The sort of reading encounter they offer can set the direction of a whole career, so the undergraduate who picks up this volume now may expect to return here many times in the years to come." -- CHOICE

"The mere names of the editors ensure that [the book] is fascinating and exciting ... Several of the 29 authors touch on the nature and/or future of mathematics ... the most interesting essays are those whose authors get out on a limb and dogmatically announce, as saving truth, propositions radically different from common opinion ... Among the authors there are many famous pure mathematicians whose contributions constitute a smorgasbord of delicacies sufficient to satisfy every taste. Do sample them!" -- CMS Notes

"This collection of essays will reward and stimulate anyone who dips into it. It belongs on every mathematician's bookshelf and in the mathematics collection of every library." -- MAA Monthly

• A. Baker and G. Wüstholz -- Number theory, transcendence and Diophantine geometry in the next millennium
• J. Bourgain -- Harmonic analysis and combinatorics: How much may they contribute to each other?
• S.-S. Chern -- Back to Riemann
• A. Connes -- Noncommutative geometry and the Riemann zeta function
• S. K. Donaldson -- Polynomials, vanishing cycles and Floer homology
• W. T. Gowers -- The two cultures of mathematics
• V. F. R. Jones -- Ten problems
• D. Kazhdan -- An algebraic integration
• F. Kirwan -- Mathematics: The right choice?
• P.-L. Lions -- On some challenging problems in nonlinear partial differential equations
• A. J. Majda -- Real world turbulence and modern applied mathematics
• Yu. I. Manin -- Mathematics as profession and vocation
• G. Margulis -- Problems and conjectures in rigidity theory
• D. McDuff -- A glimpse into symplectic geometry
• S. Mori -- Rational curves on algebraic varieties
• D. Mumford -- The dawning of the age of stochasticity
• R. Penrose -- Mathematical physics of the 20\(^{\text{th}}\) and 21\(^{\text{st}}\) centuries
• K. F. Roth -- Limitations to regularity
• D. Ruelle -- Conversations on mathematics with a visitor from outer space
• P. Sarnak -- Some problems in number theory, analysis and mathematical physics
• S. Smale -- Mathematical problems for the next century
• R. P. Stanley -- Positivity problems and conjectures in algebraic combinatorics
• C. Vafa -- On the future of mathematics/physics interaction
• A. Wiles -- Twenty years of number theory
• E. Witten -- Magic, mystery, and matrix
• S.-T. Yau -- Review of geometry and analysis
• V. I. Arnold -- Polymathematics: Is mathematics a single science or a set of arts?
• P. D. Lax -- Mathematics and computing
• B. Mazur -- The theme of \(p\)-adic variation
Conditional expectation where I need to find E[X^2|Y=0.7] December 13th 2011, 12:18 PM #1 Junior Member Nov 2011 The function is x+y where both x and y are between 0 and 1. Am I correct in saying, if E[X|Y] = f(x,y)/h(y) where h(y) is the marginal probability density of Y, that: E[X^2|Y] = [f(x,y)]^2/h(y)? Sorry if this is a stupid question, but I don't want to try and solve it in different ways and there still be some ambiguity as to what the correct way to tackle the question is. Re: Conditional expectation where I need to find E[X^2|Y=0.7] No, I don't think so. As you wrote it, your E(X|Y) would almost certainly be a function of X, so you know that it is incorrect. E(X|Y) should be a function of Y only (or a constant). Let's start from the definition of expectation for any random variable X: $E(X) = \int x f_x(x) dx$ Similarly for the conditional variable "X|Y": $E(X|Y) = \int x f_{x|y}(x) dx$ You can make this substitution $f_{x|y}(x) = \frac{f_{xy}(x,y)}{f_y(y)}$ to get: $E(X|Y) = \int x \frac{f_{xy}(x,y)}{f_y(y)} dx = \frac{1}{f_y(y)}\int x f_{xy}(x,y) dx$ Is this what you meant? $E(X^2)$ is similar, except the starting definition is $E(X^2) = \int x^2 f(x) dx$. You try and finish. If you need further help, be sure to post the density functions you are supposed to be manipulating. Re: Conditional expectation where I need to find E[X^2|Y=0.7] Thanks for the reply springfan. If it was E[X|Y] I would find the marginal density of Y and then take the original function and put it over the marginal density of Y, right? I'm still a little perplexed as to how to do it for E[X^2|Y]. So for f(x,y) = x+y, if I needed to get the conditional expectation E[X|Y], it would be (x+y)/(xy)+(y^2/2)? So how does that change when it becomes E[X^2|Y]?
Re: Conditional expectation where I need to find E[X^2|Y=0.7] Your posts are hard to follow because you: 1) haven't told me the densities of X and Y, or their joint density 2) don't put any integral signs in what you write. E[X|Y] = (x+y)/(xy)+(y^2/2) — I don't see where this could have come from; I don't think it's right. If this is supposed to be the final answer, it should not be a function of X. If you think otherwise, you are fundamentally misunderstanding what an expectation is and you may want to review your notes. The integral you need to do is: $E(X^2|Y) = \int x^2 f_{x|y}(x,y) dx$ This can be re-written as (same method as post #2): $E(X^2|Y) = \frac{1}{f_y(y)}\int x^2 f_{x,y}(x,y) dx$ If you need further help, then post the density functions of X and Y, or the joint density function. Last edited by SpringFan25; December 13th 2011 at 01:28 PM. Re: Conditional expectation where I need to find E[X^2|Y=0.7] The function is x+y where both x and y are between 0 and 1. Just noticed this in post #1. If "the function" means "the joint density function" then: step 1: find the marginal density of y step 2: find the conditional density x|y step 3: evaluate the integral: $E(X^2|Y) = \int_0^1 x^2 f_{x|y} dx$ Re: Conditional expectation where I need to find E[X^2|Y=0.7] Apologies for being so vague; here is the question I need to solve, perhaps this would make it clearer. EDIT: I have re-read my notes and I think I understand how to do this now, thank you for your help and the solution. Re: Conditional expectation where I need to find E[X^2|Y=0.7] Sorry, I saw the density you put in post #1 while you were typing that. Try and follow the instructions in post #5. Re: Conditional expectation where I need to find E[X^2|Y=0.7] I think I have solved this question correctly, thank you for your help. Whilst this thread is new, could you give me some advice on something else please?
I know I need to find the expectations of X1, X2 and X3 to find the answers first off. To do this, do I simply integrate x*f(x1,x2,x3) w.r.t. x1, then x2, then x3?
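Following the three steps in post #5 for the stated joint density f(x,y) = x + y on the unit square, the answer can be checked numerically. This sketch is an editorial addition rather than part of the thread; the function name is illustrative:

```python
def conditional_e_x2(y, n=200_000):
    """E[X^2 | Y = y] for the joint density f(x,y) = x + y on the unit
    square, following the three steps above, with the x-integral done
    by the midpoint rule."""
    f_y = y + 0.5                 # step 1: marginal density of Y
    h = 1.0 / n
    acc = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        acc += x * x * ((x + y) / f_y) * h   # steps 2-3: integrate x^2 * f_{x|y}(x)
    return acc

print(conditional_e_x2(0.7))     # analytic value: (1/4 + y/3)/(y + 1/2) = 29/72
```

At y = 0.7 this agrees with the closed form 29/72 ≈ 0.4028, which is what working post #5's steps by hand produces.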
How to code a swinging movement - Game Programming Hi guys! I have to build a small prototype of a concept my team came up with. I'm having some issues with letting the spider swing on its web. So how can I code a swinging movement of a spider with a web attached to a fixed point? It has to swing a few times before losing its velocity and then hanging still. This may sound a bit confusing, but this "drawing" should make it clear. Many thanks in advance! (This would have been better in the Math & Physics forum...) Edited by MattProductions, 08 May 2012 - 05:26 PM.
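For reference, the motion described (a swing that dies down and then hangs still) is a damped pendulum, θ'' = -(g/L)·sin θ - c·θ'. Below is a minimal, engine-agnostic sketch in Python using semi-implicit Euler integration; the function name and constants are illustrative, not from any particular engine:

```python
import math

def swing(theta0, length=2.0, damping=0.4, g=9.81, dt=0.01, steps=2000):
    """Damped pendulum: theta'' = -(g/length)*sin(theta) - damping*theta'.
    Returns the trace of angles; damping drains energy every step, so the
    amplitude decays until the pendulum hangs (nearly) still."""
    theta, omega = theta0, 0.0          # angle from vertical, angular velocity
    trace = []
    for _ in range(steps):
        alpha = -(g / length) * math.sin(theta) - damping * omega
        omega += alpha * dt             # semi-implicit Euler: velocity first,
        theta += omega * dt             # then position, for stability
        trace.append(theta)
    return trace

trace = swing(math.radians(60))         # a few swings, then nearly at rest
```

Each frame, the spider's position relative to the fixed anchor point is (x, y) = (L·sin θ, -L·cos θ).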
Interpreting interaction effects This web page contains various Excel worksheets which help interpret two-way and three-way interaction effects. They use procedures by Aiken and West (1991), Dawson (2014) and Dawson and Richter (2006) to plot the interaction effects, and in the case of three-way interactions test for significant differences between the slopes. You can either use the Excel worksheets directly from this page, or download them to your computer by right-clicking on the relevant links. A note about standardisation of variables. Standardised variables are those that are both centred around zero and are scaled so that they have a standard deviation of 1. Personally, I prefer to use these when testing interactions because the interpretation of coefficients can be slightly simpler. Some authors, such as Aiken and West (1991), recommend that variables are centred (but not standardised). The results obtained should be identical whichever method you use. If you prefer to analyse centred (but not standardised) variables, you can use the "unstandardised" versions of the Excel worksheets, and enter the mean of the variables as zero. Two-way interactions To test for two-way interactions (often thought of as a relationship between an independent variable (IV) and dependent variable (DV), moderated by a third variable), first run a regression analysis, including both independent variables (referred to henceforth as the IV and moderator) and their interaction (product) term. It is recommended that the independent variable and moderator are standardised before calculation of the product term, although this is not essential. The product term should be significant in the regression equation in order for the interaction to be interpretable.
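In essence, what these plotting worksheets compute are the predicted values of the dependent variable at low and high (mean ± 1 SD) levels of the IV and moderator, from the fitted equation Y = b0 + b1·X + b2·Z + b3·X·Z. A rough equivalent in Python (an illustrative sketch; the function and variable names are mine, not from the worksheets):

```python
def interaction_points(b0, b_x, b_z, b_xz, mean_x, sd_x, mean_z, sd_z):
    """Predicted Y at the four combinations of low/high IV (x) and low/high
    moderator (z) -- the points joined up in the usual two-way plot."""
    points = {}
    for z_label, z in (("low_mod", mean_z - sd_z), ("high_mod", mean_z + sd_z)):
        for x_label, x in (("low_iv", mean_x - sd_x), ("high_iv", mean_x + sd_x)):
            points[(x_label, z_label)] = b0 + b_x * x + b_z * z + b_xz * x * z
    return points

# With standardised variables the means are 0 and the SDs are 1:
pts = interaction_points(1.0, 0.5, 0.3, 0.2, 0.0, 1.0, 0.0, 1.0)
```

Plotting the two "low_mod" points as one line and the two "high_mod" points as another gives the familiar crossing-lines interaction chart.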
If you have two unstandardised variables, you can plot your interaction effect by entering the unstandardised regression coefficients (including intercept/constant) and means & standard deviations of the IV and moderator in the following worksheet. If you have control variables in your regression, the values of the dependent variable displayed on the plot will be inaccurate unless you standardise (or centre) all control variables first (although the pattern, and therefore the interpretation, will be correct). 2-way_unstandardised.xls If you have two standardised variables, you can plot your interaction effect by entering just the unstandardised regression coefficients (including intercept/constant) in the following worksheet. If you have control variables in your regression, the values of the dependent variable displayed on the plot will be inaccurate unless you also standardise (or centre) all control variables first (although the pattern, and therefore the interpretation, will be correct). Note that the interaction term should not be standardised after calculation, but should be based on the standardised values of the IV & moderator. 2-way_standardised.xls If you have a binary moderator, you can plot your interaction more usefully by entering the unstandardised regression coefficients (including intercept/constant) and mean & standard deviation of your IV in the following worksheet. Again, if you have control variables in your regression, the values of the dependent variable displayed on the plot will be inaccurate unless you also standardise (or centre) all control variables first (although the pattern, and therefore the interpretation, will be correct). The binary variable should have possible values of 0 and 1, and should not be standardised. 2-way_with_binary_moderator.xls If you want to test simple slopes, you can use the following worksheet. Again, control variables should be centred or standardised before the analysis.
However, note that simple slope tests are only useful for testing significance at specific values of the moderator. Where possible, meaningful values should be chosen, rather than just one standard deviation above and below the mean. You will also need to request the coefficient covariance matrix as part of the regression output. If you are using SPSS, this can be done by selecting "Covariance matrix" in the "Regression Coefficients" section of the "Statistics" dialog box. Note that the variance of a coefficient is the covariance of that coefficient with itself - i.e. can be found on the diagonal of the coefficient covariance matrix. 2-way_unstandardised_with_simple_slopes.xls Other forms of two-way interaction plots that may be helpful for experienced users: Three-way interactions To test for three-way interactions (often thought of as a relationship between a variable X and dependent variable Y, moderated by variables Z and W), run a regression analysis, including all three independent variables, all three pairs of two-way interaction terms, and the three-way interaction term. It is recommended that all the independent variables are standardised before calculation of the product terms, although this is not essential. As with two-way interactions, the interaction terms themselves should not be standardised after calculation. The three-way interaction term should be significant in the regression equation in order for the interaction to be interpretable. If you wish to use the Dawson & Richter (2006) test for differences between slopes, you should request the coefficient covariance matrix as part of the regression output. If you are using SPSS, this can be done by selecting "Covariance matrix" in the "Regression Coefficients" section of the "Statistics" dialog box. Note that the variance of a coefficient is the covariance of that coefficient with itself - i.e. can be found on the diagonal of the coefficient covariance matrix.
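For the two-way case, the simple slope test described earlier boils down to a short formula (Aiken & West, 1991): the slope of X at a chosen moderator value z is b1 + b3·z, with a standard error built from the coefficient variances and covariance taken from the matrix just described. A hedged sketch (the numeric values below are made up purely for illustration):

```python
import math

def simple_slope(b1, b3, z, var_b1, var_b3, cov_b1b3):
    """Simple slope of X on Y at moderator value z, and its standard error:
    slope = b1 + b3*z;  se = sqrt(var_b1 + z^2*var_b3 + 2*z*cov_b1b3).
    Compare slope/se against a t distribution with n - k - 1 df."""
    slope = b1 + b3 * z
    se = math.sqrt(var_b1 + z * z * var_b3 + 2.0 * z * cov_b1b3)
    return slope, se

# Illustrative values: test the slope one SD above the moderator's mean (z = 1)
slope, se = simple_slope(0.30, 0.20, 1.0, 0.010, 0.010, 0.002)
t_stat = slope / se
```

The variances and covariance come straight off the coefficient covariance matrix; the degrees of freedom use the total number of cases minus the number of predictors minus one.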
If you have used unstandardised variables, you can plot your interaction effect by entering the unstandardised regression coefficients (including intercept/constant) and means & standard deviations of the three independent variables (X, Z and W) in the following worksheet. If you have control variables in your regression, the values of the dependent variable displayed on the plot will be inaccurate unless you standardise all control variables first (although the pattern, and therefore the interpretation, will be correct). To use the test of slope differences, you should also enter the covariances of the XZ, XW and XZW coefficients from the coefficient covariance matrix, and the total number of cases and number of control variables in your regression. 3-way_unstandardised.xls If you have used standardised variables, you can plot your interaction effect by entering just the unstandardised regression coefficients (including intercept/constant) in the following worksheet. If you have control variables in your regression, the values of the dependent variable displayed on the plot will be inaccurate unless you also standardise all control variables first (although the pattern, and therefore the interpretation, will be correct). To use the test of slope differences, you should also enter the covariances of the XZ, XW and XZW coefficients from the coefficient covariance matrix, and the total number of cases and number of control variables in your regression.
3-way_standardised.xls Other forms of three-way interaction plots that may be helpful for experienced users: • Quadratic_three-way_interactions.xls - for plotting curvilinear interactions between a quadratic main effect and two moderators (see below) • 3-way_logistic_interactions.xls - for plotting three-way interactions from binary logistic regression • 3-way_with_all_options.xls - a generalised version of the main worksheets, allowing any combination of continuous/binary IV and moderators, including a simple slope test (see earlier warning about this) as well as the slope difference tests. Also allows slopes to be plotted at specific values of the moderators chosen by the user. Please note: a previous version of the "3 way with all options" sheet included an error in the slope difference test: apologies for any inconvenience caused. This has now been corrected. Quadratic Effects If you wish to plot a quadratic (curvilinear) effect, you can use one of the following Excel worksheets. In each case, you test the quadratic effect by including the main effect (the IV) along with its squared term (i.e. the IV*IV) in the regression. In the case of a simple (unmoderated) relationship, the significance of the squared term determines whether there is a quadratic effect. If you are testing a moderated quadratic relationship, it is the significance of the interaction between the squared term and the moderator(s) that determines whether there is a moderated effect. Note that despite this, all lower order terms need to be included in the regression: so, if you have an independent variable A and moderators B and C, then to test whether there is a three-way interaction you would need to enter all the following terms: A, A*A, B, C, A*B, A*C, A*A*B, A*A*C, B*C, A*B*C, A*A*B*C. It is only the last, however, that determines the significance of the three-way quadratic effect. There are a number of common problems encountered when trying to plot these effects.
If you are having problems, consider the following: • If the graph does not appear, it may be because it is off the scale. You can change the scale of the dependent variable by right-clicking on the axis and choosing "Format Axis" • Make sure you enter the unstandardised regression coefficients, whether or not you are using standardised variables • If you use standardised variables, ensure that you calculate the interaction (product) terms from the standardised variables, but do not standardise the interaction terms themselves • When performing simple slopes or slope difference tests, it is easy to enter the wrong figures for variances & covariances of coefficients! SPSS is prone to printing the covariances in a different order from the regression coefficients themselves, which can be confusing. Also, SPSS automatically prints a correlation matrix of the coefficients above the variance-covariance matrix: ensure that you do not enter these in error. Note that the variances of the coefficients are along the diagonal of this matrix: e.g. the variance of the Var1*Var2 coefficient is the covariance of this coefficient with itself. If you think there are any errors in these sheets, please contact me, Jeremy Dawson. Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park, London: Sage. Dawson, J. F. (2014). Moderation in management research: What, why, when and how. Journal of Business and Psychology, 29, 1-19. (This article includes information about most of the tests included on this page, as well as much more!) Dawson, J. F., & Richter, A. W. (2006). Probing three-way interactions in moderated multiple regression: Development and application of a slope difference test. Journal of Applied Psychology, 91, Other online resources Kristopher Preacher's web site contains templates for testing simple slopes, and finding regions of significance, for both 2-way and 3-way interactions.
It also includes options for hierarchical linear modelling (HLM) and latent curve analysis. Yung-jui Yang's web site contains SAS macros to plot interaction effects and run the slope difference tests for three-way interactions.
A wave moves with speed 300 m/s on a wire which is under a tension of 500 N. Find how much the tension must be changed to increase the speed to 312 m/s. Best Response: v is proportional to the square root of tension. Do you need further help? Best Response: since \(v=\sqrt{\frac{T}{m}}\), where m is the mass per length, a constant, rearranging gives \(\frac{v_1}{\sqrt{T_1}}=\frac{v_2}{\sqrt{T_2}}\); sub it all in.
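Since v = √(T/m) with m fixed, the tension scales with the square of the speed. A quick check of the arithmetic (an editorial sketch, not from the thread):

```python
def tension_change(v1, v2, T1):
    """Wave speed on a string: v = sqrt(T/m), so T2/T1 = (v2/v1)**2."""
    return T1 * (v2 / v1) ** 2 - T1

dT = tension_change(300.0, 312.0, 500.0)
print(round(dT, 1))   # the tension must increase by about 40.8 N (to ~540.8 N)
```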
Independent Exponential Random Variables March 7th 2013, 11:39 AM #1 Mar 2013 Let X and Y be independent exponential random variables, each with mean 1/μ. Further, let Z = min(X,Y), and W = max(X,Y). Find E(Z), E(W), Var(Z) and Var(W). Re: Independent Exponential Random Variables Hey Oceanamie. Hint: Calculate the distribution for the order statistic.
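Working the hint through: Z = min(X,Y) is exponential with rate 2μ, so E(Z) = 1/(2μ) and Var(Z) = 1/(4μ²); by memorylessness, W = max(X,Y) equals Z plus an independent Exp(μ) gap, giving E(W) = 3/(2μ) and Var(W) = 5/(4μ²). A quick Monte Carlo sanity check of the two means (an editorial sketch, not part of the thread):

```python
import random

def minmax_means(mu, n=200_000, seed=1):
    """Estimate E[min(X,Y)] and E[max(X,Y)] for X, Y i.i.d. Exp(mu)."""
    rng = random.Random(seed)
    sum_z = sum_w = 0.0
    for _ in range(n):
        x, y = rng.expovariate(mu), rng.expovariate(mu)
        sum_z += min(x, y)
        sum_w += max(x, y)
    return sum_z / n, sum_w / n

ez, ew = minmax_means(2.0)   # theory: E[Z] = 1/(2*mu) = 0.25, E[W] = 3/(2*mu) = 0.75
```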
Overview of Energy by Ron Kurtus (revised 9 July 2013) An object has kinetic energy if it is moving. If there are some constrained or pent-up forces preventing the object from moving, the object is said to have potential energy. There are various subsets or forms of both kinetic and potential energy. Questions you may have include: • What is kinetic energy? • What is potential energy? • What are subsets of kinetic and potential energy? This lesson will answer those questions. Kinetic energy The standard textbook definition of energy is the "ability to do work." Unfortunately, this definition does not really give a good picture of what energy is all about. We normally think of an object having energy as one that is moving. The energy of a moving object is called kinetic energy and is abbreviated as KE. The properties of kinetic energy are that the greater the mass of a moving object, the greater its energy will be. Also, the faster it goes, the greater its energy. That energy is proportional to the square of the velocity. The formula for calculating the kinetic energy of an object is KE = ½ mv² • m is the mass of the object • v is its speed or velocity, and v² is the velocity squared or v times v • ½ mv² is one-half times m times v² Note that the velocity of the object must be much less than the speed of light. When the speed of an object—such as an atomic particle—approaches the speed of light (c), its kinetic energy approaches E = mc², according to the Theory of Relativity. Potential Energy There are situations when an object has the potential to start moving and gain kinetic energy. Often there are forces acting on the object, but the forces aren't yet sufficient to move the object.
Potential due to gravity If you hold an object a distance from the floor, it has the potential to start moving, once you let it go. The force of gravity is pulling on the object, giving it potential energy. The equation is PE = mgh • PE is the potential energy • m is its mass • g is the acceleration of gravity (32 ft/s² or 9.8 m/s²) • mg is the weight of the object (m times g) • h is the height of the object from the floor or ground PE becomes KE If you drop the object, its potential energy will become the kinetic energy of motion (PE = KE). Since PE = mgh and KE = ½ mv², then: mgh = ½ mv² You can determine the speed it will be traveling after falling a height h by solving the equation for v: v² = 2gh Take the square root of both sides of the equation: v = SQRT(2gh) or v = √(2gh) Note that the mass m cancels out of the equation, meaning that all objects fall at the same rate. Thus, if h = 1 ft, and since g = 32 ft/s², then v² = 2*32*1 = 64 and v = √64 = 8 ft/s. Other types of PE Other examples of potential energy that could cause motion include explosive chemical compounds and a coiled spring, ready to be released. A stretched rubber band also has potential energy. With chemical explosives, it is difficult to calculate the potential energy without experimenting to see how much kinetic energy is released in an explosion. With a compressed spring, there are calculations that can determine its strength and potential energy. Other forms or subsets of energy Often, you will hear about other forms of energy, such as heat and electrical energy. In reality, they are also forms of potential and kinetic energy. Heat energy Heat is the movement of molecules. It is the sum of the kinetic energy of an object's molecules. Many physics textbooks treat heat as some sort of substance and heat energy as something independent of kinetic energy. In our lessons, it is just one subset of kinetic energy. Electrical energy Electrical energy is the movement of electrons.
That is kinetic energy. The voltage in an electrical circuit is the potential energy that can start electrons moving. Electrical forces cause the movement to occur. Chemical energy Chemical energy is potential energy until the chemical reaction puts atoms and molecules in motion. Heat energy (KE) is often the result of a chemical reaction. Light energy Light is the movement of waves and/or light particles (photons). It is usually formed when atoms gain so much kinetic energy from being heated that they give off radiation. This is often from electrons jumping orbits and emitting moving photons. Nuclear energy Certain elements have potential nuclear energy, such that there are internal forces pent up in their nucleus. When that potential energy is released, the result is kinetic energy in the form of rapidly moving particles, heat and radiation. Summary Energy is sometimes defined as the capacity to do work. An object has kinetic energy if it is moving. If there are some constrained or pent-up forces preventing the object from moving, the object is said to have potential energy. There are various subsets or forms of both kinetic and potential energy, such as heat, chemical, electrical, light and nuclear energy. Be energetic in your pursuit of excellence
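The PE-to-KE bookkeeping from earlier in the lesson is easy to check in code. This sketch reproduces the 1-ft drop example, using the lesson's English units (g = 32 ft/s²); the function names are illustrative:

```python
import math

def potential_energy(m, h, g=32.0):
    """PE = m*g*h."""
    return m * g * h

def impact_speed(h, g=32.0):
    """Setting mgh = (1/2)mv^2 and solving gives v = sqrt(2*g*h);
    the mass cancels, so all objects fall at the same rate."""
    return math.sqrt(2.0 * g * h)

v = impact_speed(1.0)   # the lesson's example: sqrt(2*32*1) = 8.0 ft/s
```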
The ARORA guessing game November 9, 2010 By Pat The game ARORA (A random or real array) is a website that gives you two time series at a time. Your job is to guess which series is real market data and which is permuted data. It’s fun — try it. With some practice you will probably be able to guess which is which well above chance. I have a hypothesis or two about why. But before you read my hypotheses, you should try it out yourself without my contamination. Some hypotheses The original paper describing the experiment is Is it real or is it randomized?: A financial Turing test by Jasmina Hasanhodzic, Andrew Lo and Emanuele Viola. My hypotheses are explained in the working paper Some hypotheses about ARORA, the financial Turing test. Do you think I’m right? Do you have other hypotheses? Do it yourself in R Generating a random series in (presumably) the way that ARORA does it is easy in R. Here is some code that imitates a static version of a single ARORA test: > par(mfcol=c(2,1)) > plot(priceSeries, type="l", axes=FALSE, xlab='', + ylab=''); box() > plot(exp(c(0, cumsum(sample(diff(log(priceSeries)))))), + type="l", axes=FALSE, xlab='', ylab=''); box() The par command sets up the graphics page to have two plots on it, one above the other. (In this case it doesn’t matter if you use mfcol or mfrow.) The first plot command plots the price series as a line with the usual labeling of the axes removed. The box command draws a box around the plot — this is usually done for such plots but not when the axes are not drawn. The second plot command contains all of the computation of the random series. We can explain it by starting on the inside and working outwards. diff(log(priceSeries)) computes log returns from the series (see A tail of two returns). Then the sample command does a random permutation of those numbers. The cumsum function performs a cumulative sum of the permuted returns. 
We add a zero onto the front of that vector of numbers, and finally use exp to go from log returns back to prices (starting at 1). A more polished version would add a mar argument to the call to par in order to not waste so much space in the resulting graphic. The real question Does it matter if you can tell a Panthera onca from a Panthera pardus? Probably not (though knowing what continent you're on might be useful). What matters is if you can outrun her if she decides to eat you. (Probably not.) The ability to distinguish the real data series has been used to give credence to chartists. While the opposite result would tend to rule out the efficacy of chart-reading, I'm not convinced that this is especially supportive of chartism. The real task is to tell where a price series is going. A test of that is the Technical Analysis Challenge on the Burns Statistics website. This is another multiple choice game that you can play yourself. However, it isn't as nicely presented as ARORA. You are given a price series and four possible extensions of that series. Only one of the four, of course, is the correct extension. Of the few people who officially entered the challenge, there was no indication of skill at guessing the extensions. (Except for a certain someone who industriously cheated.) Thanks to Lisa Goldberg for pointing out ARORA. Other blogs that have spoken about this include Mind Your Decisions and Technology Review. Photo from stock.xchng.
R-MAT: A recursive model for graph mining Results 1 - 10 of 124 , 2005 "... How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include hea ..." Cited by 301 (39 self) Add to MetaCart How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n))). Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a “forest fire” spreading process, that has a simple, intuitive justification, requires very few parameters (like the “flammability” of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study. - ACM TKDD , 2007 "... How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks?
Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include hea ..." Cited by 120 (13 self) Add to MetaCart How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing super-linearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n))). Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a “forest fire” spreading process, that has a simple, intuitive justification, requires very few parameters (like the “flammability” of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study. We also notice that the “forest fire” model exhibits a sharp transition between sparse graphs and graphs that are densifying. Graphs with decreasing distance between the nodes are generated around this transition point. Last, we analyze the connection between the temporal evolution of the degree distribution and densification of a graph.
We find that the two are fundamentally related. We also observe that real networks exhibit this type of r "... A large body of work has been devoted to identifying community structure in networks. A community is often though of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structur ..." Cited by 120 (10 self) Add to MetaCart A large body of work has been devoted to identifying community structure in networks. A community is often though of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structural properties of such sets of nodes. We define the network community profile plot, which characterizes the “best ” possible community—according to the conductance measure—over a wide range of size scales, and we study over 70 large sparse real-world networks taken from a wide range of application domains. Our results suggest a significantly more refined picture of community structure in large real-world networks than has been appreciated previously. Our most striking finding is that in nearly every network dataset we examined, we observe tight but almost trivial communities at very small scales, and at larger size scales, the best possible communities gradually “blend in ” with the rest of the network and thus become less “community-like.” This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models. Moreover, this behavior is exactly the opposite of what one would expect based on experience with and intuition from expander graphs, from graphs that are well-embeddable in a low-dimensional structure, and from small social networks that have served as testbeds of community detection algorithms. 
We have found, however, that a generative model, in which new edges are added via an iterative “forest fire” burning process, is able to produce graphs exhibiting a network community structure similar to our observations. , 2008 "... A large body of work has been devoted to defining and identifying clusters or communities in social and information networks, i.e., in graphs in which the nodes represent underlying social entities and the edges represent some sort of interaction between pairs of nodes. Most such research begins wit ..." Cited by 79 (6 self) Add to MetaCart A large body of work has been devoted to defining and identifying clusters or communities in social and information networks, i.e., in graphs in which the nodes represent underlying social entities and the edges represent some sort of interaction between pairs of nodes. Most such research begins with the premise that a community or a cluster should be thought of as a set of nodes that has more and/or better connections between its members than to the remainder of the network. In this paper, we explore from a novel perspective several questions related to identifying meaningful communities in large social and information networks, and we come to several striking conclusions. Rather than defining a procedure to extract sets of nodes from a graph and then attempt to interpret these sets as a “real ” communities, we employ approximation algorithms for the graph partitioning problem to characterize as a function of size the statistical and structural properties of partitions of graphs that could plausibly be interpreted as communities. In particular, we define the network community profile plot, which characterizes the “best ” possible community—according to the conductance measure—over a wide range of size scales. We study over 100 large real-world networks, ranging from traditional and on-line social networks, to technological and information networks and - In PKDD , 2005 "... Abstract. 
How can we generate realistic graphs? In addition, how can we do so with a mathematically tractable model that makes it feasible to analyze their properties rigorously? Real graphs obey a long list of surprising properties: Heavy tails for the in- and out-degree distribution; heavy tails f ..." Cited by 76 (21 self) Add to MetaCart Abstract. How can we generate realistic graphs? In addition, how can we do so with a mathematically tractable model that makes it feasible to analyze their properties rigorously? Real graphs obey a long list of surprising properties: Heavy tails for the in- and out-degree distribution; heavy tails for the eigenvalues and eigenvectors; small diameters; and the recently discovered “Densification Power Law ” (DPL). All published graph generators either fail to match several of the above properties, are very complicated to analyze mathematically, or both. Here we propose a graph generator that is mathematically tractable and matches this collection of properties. The main idea is to use a non-standard matrix operation, the Kronecker product, to generate graphs that we refer to as “Kronecker graphs”. We show that Kronecker graphs naturally obey all the above properties; in fact, we can rigorously prove that they do so. We also provide empirical evidence showing that they can mimic very well several real graphs. 1 - ACM COMPUTING SURVEYS , 2006 "... How does the Web look? How could we tell an abnormal social network from a normal one? These and similar questions are important in many fields where the data can intuitively be cast as a graph; examples range from computer networks to sociology to biology and many more. Indeed, any M : N relation i ..." Cited by 70 (6 self) Add to MetaCart How does the Web look? How could we tell an abnormal social network from a normal one? 
These and similar questions are important in many fields where the data can intuitively be cast as a graph; examples range from computer networks to sociology to biology and many more. Indeed, any M : N relation in database terminology can be represented as a graph. A lot of these questions boil down to the following: "How can we generate synthetic but realistic graphs?" To answer this, we must first understand what patterns are common in real-world graphs and can thus be considered a mark of normality/realism. This survey give an overview of the incredible variety of work that has been done on these problems. One of our main contributions is the integration of points of view from physics, mathematics, sociology, and computer science. Further, we briefly describe recent advances on some related and interesting graph problems. - IN 24TH ICML , 2007 "... Given a large, real graph, how can we generate a synthetic graph that matches its properties, i.e., it has similar degree distribution, similar (small) diameter, similar spectrum, etc? We propose to use “Kronecker graphs”, which naturally obey all of the above properties, and we present KronFit, a f ..." Cited by 53 (8 self) Add to MetaCart Given a large, real graph, how can we generate a synthetic graph that matches its properties, i.e., it has similar degree distribution, similar (small) diameter, similar spectrum, etc? We propose to use “Kronecker graphs”, which naturally obey all of the above properties, and we present KronFit, a fast and scalable algorithm for fitting the Kronecker graph generation model to real networks. A naive approach to fitting would take super-exponential time. In contrast, Kron-Fit takes linear time, by exploiting the structure of Kronecker product and by using sampling. Experiments on large real and synthetic graphs show that KronFit indeed mimics very well the patterns found in the target graphs. 
Once fitted, the model parameters and the resulting synthetic graphs can be used for anonymization, extrapolations, and graph summarization. - JOURNAL OF MACHINE LEARNING RESEARCH 11 (2010) 985-1042 , 2010 "... How can we generate realistic networks? In addition, how can we do so with a mathematically tractable model that allows for rigorous analysis of network properties? Real networks exhibit a long list of surprising properties: Heavy tails for the in- and out-degree distribution, heavy tails for the ei ..." Cited by 48 (2 self) Add to MetaCart How can we generate realistic networks? In addition, how can we do so with a mathematically tractable model that allows for rigorous analysis of network properties? Real networks exhibit a long list of surprising properties: Heavy tails for the in- and out-degree distribution, heavy tails for the eigenvalues and eigenvectors, small diameters, and densification and shrinking diameters over time. Current network models and generators either fail to match several of the above properties, are complicated to analyze mathematically, or both. Here we propose a generative model for networks that is both mathematically tractable and can generate networks that have all the above mentioned structural properties. Our main idea here is to use a non-standard matrix operation, the Kronecker product, to generate graphs which we refer to as “Kronecker graphs”. First, we show that Kronecker graphs naturally obey common network properties. In fact, we rigorously prove that they do so. We also provide empirical evidence showing that Kronecker graphs can effectively model the structure of real networks. We then present KRONFIT, a fast and scalable algorithm for fitting the Kronecker graph generation model to large real networks. A naive approach to fitting would take super-exponential - In Privacy, Security, and Trust in KDD Workshop (PinKDD , 2008 "... 
The advent of social network sites in the last few years seems to be a trend that will likely continue in the years to come. Online social interaction has become very popular around the globe and most sociologists agree that this will not fade away. Such a development is possible due to the advancem ..." Cited by 38 (4 self) Add to MetaCart The advent of social network sites in the last few years seems to be a trend that will likely continue in the years to come. Online social interaction has become very popular around the globe and most sociologists agree that this will not fade away. Such a development is possible due to the advancements in computer power, technologies, and the spread of the World Wide Web. What many naïve technology users may not always realize is that the information they provide online is stored in massive data repositories and may be used for various purposes. Researchers have pointed out for some time the privacy implications of massive data gathering, and a lot of effort has been made to protect the data from unauthorized disclosure. However, most of the data privacy research has been focused on more traditional data models such as microdata (data stored as one relational table, where each row represents an individual entity). More recently, social network data has begun to be analyzed from a different, specific privacy perspective. Since the individual entities in social networks, besides the attribute values that characterize them, also have relationships with other entities, the possibility of privacy breaches increases. Our main contributions in this paper are the development of a greedy privacy algorithm for anonymizing a social network and the introduction of a structural information loss measure that quantifies the amount of information lost due to edge generalization in the anonymization process.
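Several of the abstracts above rest on one operation: repeatedly Kronecker-multiplying a small initiator adjacency matrix to grow a "Kronecker graph." As a quick illustration of just that operation (the 2x2 initiator below is an arbitrary example of mine, not taken from any of the papers), here is a minimal pure-Python sketch:

```python
def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    # Entry (i*p + r, j*q + s) of the result is a[i][j] * b[r][s].
    return [[x * y for x in row_a for y in row_b]
            for row_a in a for row_b in b]

def kronecker_power(initiator, k):
    """Adjacency matrix of the k-th Kronecker power of the initiator."""
    g = initiator
    for _ in range(k - 1):
        g = kron(g, initiator)
    return g

# A hypothetical 2x2 initiator; its k-th Kronecker power has 2**k nodes.
initiator = [[1, 1],
             [1, 0]]
g3 = kronecker_power(initiator, 3)
print(len(g3), len(g3[0]))  # 8 8
```

The self-similarity is the point: each edge of the initiator is replaced by a copy of the initiator at every level, which is what lets the papers prove degree-distribution and diameter properties recursively.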
PostgreSQL wiki: Fibonacci Numbers

Create a Fibonacci sequence up to the limiting number given:

CREATE OR REPLACE FUNCTION fib(f integer) RETURNS SETOF integer AS $$
  WITH RECURSIVE t(a, b) AS (
          VALUES (0, 1)   -- anchor row; missing from the extracted text, assumed to be (0, 1)
      UNION ALL
          SELECT greatest(a, b), a + b AS a
          FROM t
          WHERE b < $1
  )
  SELECT a FROM t;
$$ LANGUAGE SQL;
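Read as an algorithm, the recursive CTE carries the pair (a, b) forward one Fibonacci step per row and emits the first component each time. A rough Python equivalent for comparison (the function name and the exact treatment of the limit are my own choices, not taken from the wiki page):

```python
def fib_upto(limit):
    """Fibonacci numbers from 0 up to (and including) `limit`."""
    a, b = 0, 1
    out = []
    while a <= limit:
        out.append(a)
        a, b = b, a + b  # same pairing step as the recursive CTE
    return out

print(fib_upto(10))  # [0, 1, 1, 2, 3, 5, 8]
```

The SQL version may differ slightly at the boundary (its WHERE clause tests b rather than a), but the iteration scheme is the same.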
Volunteering – 04/09/08 Back at Cityview and we’re working on homework again. Most of the kids have a math worksheet to do, multiplication and division with word and picture problems. One student who I had noticed is usually alone – who doesn’t talk much but still isn’t that quiet because of his constant wandering and fiddling – seemed to be having trouble with the math worksheet. I offered to help but he wouldn’t even respond to me, so I let him be. I went around briefly answering questions from other students, then went back to the first student to see if he would accept any help this time. He seemed annoyed with me and basically told me to leave him alone, so I did. I began to help a student sitting close to him. Something must have made the first student change his mind because as I went through the worksheet with the other student he asked several times for my help. I went back and forth between the two students, and once the second student had all the help he needed I sat down with the first student and we finished the worksheet.
FanGraphs Baseball

1. What kind of filters did you use to select these pitchers?
Comment by novaether — June 28, 2012 @ 10:37 am

2. Adam Dunn.
Comment by Danny — June 28, 2012 @ 10:54 am

3. Doh. Qualified starters since 2002, or the pitch f/x era.
Comment by Eno Sarris — June 28, 2012 @ 11:49 am

4. Using just 2011 data for pitchers with at least 300 batters faced, I found an interesting quadratic relationship in first-pitch strike percentage, given zone% was in the model. The quadratic relationship suggested that BB% was minimized at an F-strike% of about 71.3%. The p-value for the (centered) quadratic term was 0.051. I’m thinking if this is true, then f-strike percentages that are quite high might indicate a pitcher that uses fastballs early in the count to get ahead, but can’t finish anyone off with his breaking stuff. It could also be a quirky p-value with only one season of data. Were you able to test for polynomial relationships using your much larger data set?
Comment by Matthias — June 28, 2012 @ 1:41 pm

5. Eno, who were the pitchers at the very top left and very bottom right of the regression line?
Comment by RMD — June 28, 2012 @ 2:53 pm
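Matthias's comment reads the minimizing F-strike% off a fitted quadratic. For any quadratic a*x**2 + b*x + c with a > 0, that minimum sits at the vertex, x = -b/(2a). A tiny sketch (the coefficients below are made up for illustration; they are not the actual regression output):

```python
def vertex_x(a, b):
    # x at which a*x**2 + b*x + c is extremized (a minimum when a > 0).
    return -b / (2 * a)

# Hypothetical fitted coefficients for BB% as a function of F-strike%.
a, b = 0.004, -0.5704
print(vertex_x(a, b))  # about 71.3
```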
No scheme, just math. If Ca is center of a, and Ta is tolerance of a, a is the interval

a = [Ca*(1 - 0.5*Ta), Ca*(1 + 0.5*Ta)]

and b is

b = [Cb*(1 - 0.5*Tb), Cb*(1 + 0.5*Tb)]

If the endpoints are positive, a*b has the endpoints (after simplifying):

a*b = [Ca*Cb*(1 - 0.5*(Ta + Tb) + 0.25*Ta*Tb), Ca*Cb*(1 + 0.5*(Ta + Tb) + 0.25*Ta*Tb)]

Ta*Tb will be a wee number, so it can be ignored. So, it appears that for small tolerances, the tolerance of the product will be approximately the sum of the component tolerances. A quick check:

(define (make-interval a b) (cons a b))

(define (upper-bound interval) (max (car interval) (cdr interval)))
(define (lower-bound interval) (min (car interval) (cdr interval)))

(define (center i)
  (/ (+ (upper-bound i) (lower-bound i)) 2))

;; Percent is between 0 and 100.0
(define (make-interval-center-percent c pct)
  (let ((width (* c (/ pct 100.0))))
    (make-interval (- c width) (+ c width))))

(define (percent-tolerance i)
  (let ((center (/ (+ (upper-bound i) (lower-bound i)) 2.0))
        (width (/ (- (upper-bound i) (lower-bound i)) 2.0)))
    (* (/ width center) 100)))

(define (mul-interval x y)
  (let ((p1 (* (lower-bound x) (lower-bound y)))
        (p2 (* (lower-bound x) (upper-bound y)))
        (p3 (* (upper-bound x) (lower-bound y)))
        (p4 (* (upper-bound x) (upper-bound y))))
    (make-interval (min p1 p2 p3 p4)
                   (max p1 p2 p3 p4))))

(define i (make-interval-center-percent 10 0.5))
(define j (make-interval-center-percent 10 0.4))

(percent-tolerance (mul-interval i j))
;; Gives 0.89998, pretty close to (0.5 + 0.4).
Modeling Complex, Multiphase Porous Media Systems

April 3, 2002

Researchers determined that it was the removal of water from the Venetian water table that caused the framework holding it---the sediments on which Venice rests---to contract, resulting in widespread subsidence, with many buildings sinking by several centimeters.

Naomi Lubick

In the 1970s, scientists figured out why (and how) Venice is sinking. They modeled the sediments on which the watery city rests, and the liquid that fills the pores of the loosely consolidated materials below. The researchers determined that the residents of the city had pumped out too much water, which exacerbated the effects of the rising sea level. What these researchers were describing was a permeable material that was changing because of changes in the liquid it held---a perfect example of deformable porous media. The term "deformable porous media" basically encompasses a framework---perhaps microns across, perhaps miles---made of a somewhat rigid material containing open spaces. Within the pore spaces is a more pliable material, which can be set in motion. The subsequent changes in pressure---whether brought about by the removal, addition, or motion of a liquid or a gas---can cause deformation in the less flexible lattice holding the fluid. In the case of Venice, the removal of water from the water table caused the framework holding it---the unconsolidated soils on which Venice is built---to contract; the result has been subsidence across the region and buildings that have dropped by several centimeters. Elsewhere, seismologists have used the concept of deformable porous media to predict where water-pumping activities might induce earthquakes. For example, the removal or addition of water (whether as steam or liquid) from the pore spaces and interstices of a geothermal field might result in subsidence across the entire steam reservoir as pressure on the cracks changes.
Such deformation of rocks can create very small earthquakes, as it did in Rangely, Colorado, during experiments with water forced into gas wells in the area. (Feeling earthquakes up to magnitudes 4 and 5 from the pumping, Denver residents got mighty nervous.) You can also go deeper into the earth and apply the concept to magma fluid flow, but mathematical modeling of deformable porous media does not stop with geophysical applications. Biophysicists apply the idea to the human body, modeling the behavior of cartilage or the deformation of the brain during neurosurgery. Pharmacologists use it to model drug delivery. Materials scientists apply it to injection molding processes. The applications of models of deformable porous media are numerous---but getting the models right can be extremely tricky.

Scaling Issues

Marc Spiegelman, an associate professor of geology and applied mathematics at Columbia University, says it all starts with the scaling. The scale at which he works is quite large, as Spiegelman models the earth's magma layer. The scale may not be that of a typical deformable porous media case study, but the modeling process he uses is the same. "The big picture is that you want to model a sponge," Spiegelman says, a metaphor he tends to use to represent whatever deformable porous media he wants to model. To start the model, he says, "you imagine yourself sitting in the pores of the sponge. . . . You can imagine yourself sitting at the level of the pores or you can imagine the whole sponge as a continuum. . . . Then you can model the liquid and solid, using a set of pipes and tubes," to take a first stab at the approach you want to use, either at the microscopic or the macroscopic level. The scale at which you model something will affect your parameters and your results. "We have to distinguish between the micro and the macro," says Lynn Bennethum, an assistant professor of mathematics at the University of Colorado at Denver.
"At the microscopic level, you can distinguish between phases, say on the order of microns for soil. At the macro level, it's all smeared out---you're looking at it from far enough away [to see the media as a single phase]. This might be on the order of centimeters for soil." Most modelers start small and then average up to larger scales; the first "upscaling" techniques were developed in the 1960s. Because you cannot model every individual nook and cranny, Bennethum says, you have to model at "a continuous scale," where your assumptions will hold across the board. "When you are modeling such complicated systems, you can't rely on empirical models. We don't know enough," Bennethum says. "There are just so many factors going on that you can't isolate one." But it is possible to construct mathematical models of deformable porous media because modeling capabilities have grown to handle more and more complicated systems. The assumptions for such models are an indication of how complicated they can be. A modeler has to have an idea of the stress and strain in a system, the viscosities of the materials involved, the changes in pressure, the variance over the system, and the system's basic geometric configuration. At smaller scales, a modeler needs to know the geometry of the elements involved. In some cases, such as the rock structure in an oil field, those geometries are impossible to guess and the model is forced to a larger scale. Some of the most important assumptions have to do with the interfaces between the substances in the system. Changes in the pressure on a higher-viscosity material that holds a lower-viscosity flowing substance have to be fit into the equations that describe the model. But the basics are in determining those relationships, says Marc Spiegelman: "This piece is solid, this piece is liquid-and you try to work out the physics at the interface level," the interface between the water and the sponge, so to speak. 
Modeling Approaches

One of the most useful weapons in the modeling arsenal is the continuum mechanics approach, which includes use of conservation equations and the averaging of properties over small volumes. Other tools for upscaling include asymptotic expansion, stochastic-convective approaches, and Martingale methods, says Bennethum. "Conservation equations are basically fancy ways of saying you can't get something for nothing," Spiegelman says, adding that "the tricky part is the constitutive relationships." What is the relationship between the energy of something and its temperature? What is the force balance of the liquid in this solid? The tough example Spiegelman gives is "some twisty tube," through which you want to push a liquid. "The geometry of the walls will affect the pressures on the liquid---about the hardest thing to do is model that," he says. The important conditions are permeability and porosity, he explains, the strength of the matrix and how it will respond to change. In magma models, for example, change in pressure will often bring changes in temperature and the immediate crystallization of certain elements. In a sedimentary rock matrix, change in pressure may bring subsidence, or even trigger a fault through the rocks. Bennethum uses an upscaling method called "hybrid mixture theory" to get to the big picture. "What I do is start with conservation equations at the microscale for each phase, and then I volume-average those equations up to the macroscale," she says. After volume-averaging, she points out, the material is no longer considered as a solid at one point, as a liquid at another. Rather, the system is "smeared out. . . . at every point you have density for both liquid and solid phase." Calculations of the average density of a representative elementary volume of that system give an average density of fluid and solid at every point that is consistent with the physical laws at the microscale.
Bennethum compares her method with homogenization, or averaging, which begins at the microscale with conservation and constitutive equations and then continues with a perturbation approach to get the governing equations at the macroscale. "Up to the macroscale, you can get the exact form of the constitutive equations, assuming you know the geometry," Bennethum says. "You have to know the geometry exactly, so it's useful if you are working with manmade materials, like foam---the way you manufacture it is very structured and periodic." The perturbation approach considers more and more terms and thus becomes increasingly complex. The advantage, Bennethum says, is that if you know the geometry and the governing equations, no more experiments are necessary. The disadvantage is that your materials must have a periodic geometry. She finds that homogenization complements the hybrid mix theory she uses. Averages are not necessarily the real thing, of course. "Because it's averages, you're never really sure about what's going on inside," says Spiegelman, whose own work is on a system---magma migration in the earth's mantle---that's hard to check. Permeability is the single hardest element to describe in these models: In accordance with Darcy's law, the flux of fluid will depend on the pressure gradient in the system, and that will depend on its microstructure. You could have pockets of porosity in a system, but if they are not connected to each other in some way, the permeability is zero. A system of pipes in a matrix might have the same porosity as a system of sheets with space between them. Knowing the pressure gradient and its relationship to fluid flux in a model, Spiegelman says, depends on geometry---and sometimes, you have to model without knowing the geometry.

Complex Models in the Oil Industry

Geometry is very hard to pin down in some of the more common applications of such models.
Rick Dean, a senior research fellow at the University of Texas at Austin, says that much of this work is done in the oil industry, for reservoir models, drilling plans, and industrial cleanup. Dean worked for 19 years at ARCO, developing and writing models for maximizing production from oil fields. "What we do on the models is we write the algorithms, and on forming some of the mathematical bases, we try to look at the consistencies in the theory," he says. "How will math models predict what will happen in going from two phases to three?" Dean and his colleagues look at laboratory experiments, and sometimes field experiments, and may spend years working out the kinks. "You never scrap one entirely," he says. Instead, each model is written in a modular way so that almost always, pieces of them prove useful in other models. For example, when Dean joined the University of Texas parallel computing group, he had to reformulate his entire code for parallel platforms; base codes had to be completely rewritten, but the modular components could still be used. Modeling these complex, multiphase, porous media systems requires incredible computer muscle. When computer power was costly, those needs to some extent controlled how detailed a model could be. With the advent of cheap personal computers, groups of PCs are hooked together with a fast connection and run as a parallel computer. "Some of these Beowulf-type clusters are becoming more common," says Dean, who has 64 processors hooked together to do his dirty work. "What that does is give you a lot of computer power for relatively little money," he says, and what used to be computationally expensive in terms of time and the size of the problem is much more within reach. "Now with these faster machines, people are trying to put more physics into the models." Today, modelers must take into account not only physical conditions, but also initial input parameters and the variability in initial conditions. 
A modeler has to deal with several types of mathematics, Dean says, from geostatistics to fluid flow calculations. "Deformable media for geomechanics is becoming a more popular area now, where it used to be just the fluid flow," he says. Current models need to incorporate where fluid flow calculations need to be applied and where they don't. Dean gives the example of Imperial Oil Resources, a Canadian company whose production work in an oil reservoir was affecting a nearby aquifer at a shallower depth. The Alberta Energy and Utilities Board, a Canadian regulatory agency, had seen the company's data and was concerned that its operations may have created a direct connection to the freshwater aquifer. Imperial called in Tony Settari, whose group, with the help of company geoscientists, examined the assumption that there was some kind of communication between the oil reservoir and the aquifer. Settari was able to do calculations showing the problem to be simply one of geomechanics. "What was happening was the earth was essentially moving and causing the pressure to change in the pores above," Dean explains, even though there was no exchange of fluids between reservoirs. "[Settari's findings] allowed them to keep the operation going, because they really weren't polluting the aquifer." In the past, Dean says, modelers tended to ignore the geomechanics in their calculations. "Now we have a better understanding of physics," he says, "and with that understanding, we're better able to optimize our efforts, whether it's cleanup or production. Most people want to take that a step further and apply what they've learned." For an offshore example, Dean points to subsidence in the North Sea at the Ekofisk field. After the oil company had built its platforms and started producing in the field, the platforms began to sink. "They subsided so much they had to go in and raise the platforms, costing them several billion dollars.
If they'd known ahead of time, they could have built their platforms taller," Dean says. Onshore, Dean's examples are subsidence in places like Venice, Italy, and Long Beach, California. In Long Beach, oil field production has caused the center of town to sink below sea level, and the casings for some of the oil wells in the area have poked up above the ground. Subsidence in Venice is a bit more catastrophic. "You can't afford to let it subside," Dean says. Now that researchers know that the subsidence is caused by the removal of water from their aquifer, Dean says that the models allow the city to conduct injection experiments to further model the aquifer for the exact location and source of the problem. What they hope to determine is where water should be injected, and in what amounts, to slow the sinking of the city. In conclusion, Dean looks beyond the geophysical applications. "Anywhere you have fluid flow in solids that are porous, there are a lot of applications that use similar equations," he says. "You probably could find examples in just about every field." Naomi Lubick is a freelance writer based in Folsom, California.
Conditional Dice Rolls

It happened to me again at the Wednesday Battletech game, one of those honest mistakes that can happen in a complicated situation. My Griffin fired two medium lasers at a Jenner needing 7's or better, and both hit. A short time later I realized my mistake: I had miscounted, and the roll should have been 8 or better. I could not remember what the original rolls had been.

There is a way to fix this, to repair my mistake, to determine if my shots really should have missed, and that is fair to both players. It's a good trick, and you don't even have to do any math to use it. But you might have to roll a lot of dice.

---> More after the fold --->

The trick is to use conditional probability, an application of Bayes' law, to correct the mistake. We are not going to do the calculations, though; the dice will take care of that for us. To make this work, you need to know two things: the first condition under which you were rolling the dice (the error), and what the correct, second condition should have been. It works like this: Recreate the first event (the wrong one) by rolling the dice until you get the same result (success or failure) as the first time. Next apply the second (correct) condition to that same roll, possibly changing the first outcome. The procedure changes the result of the original roll with exactly the probability needed to correct the original error.

Here is a really simple example using a success roll on a single six-sided die to show how this is done. You might spot some ways to make it even simpler.

Example 1: You roll 1d6 needing 4, 5, or 6 to succeed, and you do succeed. Later you realize that was a mistake, and it should have been just 5 or 6 to succeed. You do not remember what the actual roll was, only that it was successful (therefore a 4, 5, or 6). To correct the mistake roll the die again, re-rolling any results of 1, 2, or 3. You should now have a die with a 4, 5, or 6 showing.
If the die shows a 5 or 6, keep the original result; if it is a 4, change the result to a failure.

Now a little math: The first roll had a 50% probability of success (3 out of 6), but should have been ~33% (2/6 or 1/3). After re-rolling the die until you get a 4, 5, or 6, the probability of a 5 or 6 is ~67% (2/3), and of a 4 is ~33% (1/3). The probability that this second roll is a success (2/3), given that you know the first roll succeeded (50%), is 0.5 times 0.67, or ~33%, which gives you the correct probability of success (1/3) as if you had done it right in the first place.

That may have been a little overly complicated. If you followed the math you know this could have been done with only one additional die roll, which would be much simpler. I did it this way because the same procedure works no matter what sort of dice you are using. Try this on my original example.

Example 2 (my opening example): Dan rolled and hit with two medium lasers needing 7+ on 2d6. Later he realizes the base of the Jenner miniature he was shooting at obscured a light woods (a +1 modifier), and he should have rolled with 8+ to-hit. Dan then approaches Scott to correct his mistake. He rolls for the first laser, getting a 10 on the first try, and this remains a hit. He rolls for the second laser, re-rolling several results less than 7, and finally gets a 7. This result is changed to a miss, and Scott erases the appropriate damage from the record sheet.

Now the math: The probability of rolling 7+ on 2d6 is 21/36, for 8+ it is 15/36, and the probability of rolling exactly 7 is 6/36. Given that my first rolls were at least 7 (I know that, because they hit), the conditional probability distribution of my second roll becomes a 1/21 chance of a 12, 2/21 chance of 11, 3/21 chance of 10, 4/21 chance of 9, 5/21 chance of 8, and 6/21 chance of 7, and the conditional probability of 8+ is now (adding things up) 15/21.
The probability of my first roll (21/36) times the probability of the conditional roll (15/21) is (21/36) * (15/21) = 15/36 --> the probability I should have used in the first place (Hooray!).

Conditional probability is confusing to a lot of people, usually because they do not consider all the event-probabilities that lead to the final outcome. If you use the most obvious way to correct a mistake - a simple re-roll where you ignore the first result and roll again correctly - the player faces a sort of double jeopardy, and has to succeed on two different rolls. The chance of rolling both 7+ and 8+ is (21/36) times (15/36) = (315/1296) or ~24.3%, considerably less than the (15/36) = ~41.7% probability that results if you ignore the result of the first roll.

Conditional dice rolls also work to correct mistakes in failed rolls, and here the unfairness of the simple re-roll is more evident.

Example 3a: Dan rolls to hit at 8+ on 2d6 and misses. Later he realizes the roll should have been at 7+, and talks Scott into giving him a new roll, throwing out the first result (the miss) and replacing it with the result of the second roll. Dan now has a 21/36 chance of hitting with the second roll, but this ignores that Dan already had one chance of hitting which failed. Dan is essentially getting a free re-roll, which is unfair to Scott.

Now let's redo Example 3 with conditional probability, and this time just a little math.

Example 3b: Dan rolls to hit at 8+ on 2d6 and misses. Later he realizes the roll should have been at 7+, and Scott agrees to a conditional re-roll. Given that the first roll missed at 8+, it was one of the 21 (out of 36) outcomes totaling 7 or less, and 6 of those 21 are exactly 7 - the only result that hits under the correct 7+ condition. So Dan re-rolls the 2d6 until he gets a result of 7 or less; if that recreated roll is exactly 7 (a 6/21 chance), the miss is changed to a hit. Checking: the corrected probability of a hit is P(8+) plus P(miss at 8+) times 6/21, which is 15/36 + (21/36)(6/21) = 21/36, exactly the 7+ probability Dan should have had.

One assumption I am using for all these examples is that there is no memory at all of what the first roll might have been.
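The recreate-and-recondition procedure is easy to check by simulation. Here is a sketch in Python (the helper names are mine, not from the post) that estimates the corrected hit probability both for a hit re-judged under a stricter condition (Example 2) and for a miss re-judged under a looser one (Example 3):

```python
import random

def roll_2d6(rng):
    return rng.randint(1, 6) + rng.randint(1, 6)

def fix_hit(rng):
    """A shot hit at 7+, but should have needed 8+."""
    first = roll_2d6(rng)
    if first < 7:
        return False              # it was already a miss; nothing to fix
    # Recreate the forgotten hit: roll until we again get a 7+ ...
    r = roll_2d6(rng)
    while r < 7:
        r = roll_2d6(rng)
    return r >= 8                 # ... then apply the correct 8+ condition

def fix_miss(rng):
    """A shot missed at 8+, but should have needed only 7+."""
    first = roll_2d6(rng)
    if first >= 8:
        return True               # it was already a hit; nothing to fix
    # Recreate the forgotten miss: roll until we again get 7 or less ...
    r = roll_2d6(rng)
    while r >= 8:
        r = roll_2d6(rng)
    return r >= 7                 # ... only an exact 7 turns the miss into a hit

rng = random.Random(1)
n = 200_000
print(sum(fix_hit(rng) for _ in range(n)) / n)   # close to 15/36 = 0.417
print(sum(fix_miss(rng) for _ in range(n)) / n)  # close to 21/36 = 0.583
```

Both estimates land on the probabilities the player should have had in the first place, with no double jeopardy and no free re-roll.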
What if you don't recall exactly what the roll was, but maybe you know it was "more than 5", or "one die was a 6"? You can take this sort of partial information into account too, and probably should. I'll save that for part 2, where I contemplate Conditional Dice Rolls with Partial Information.

[EDIT] - As noted in the comments, there is an additional assumption that nothing else happens in between that cannot be undone. If players make decisions based on the outcome, it becomes especially difficult to go back and fix it.

2 comments:

Kemp said...
An interesting and useful post. I might have to consider applying this technique in the future. The one thing you can't account for, of course, is the game events in the meantime. Certainly you can retcon the damage you should have caused, but (except in very simple/recent cases) the rest of the game between the mistaken roll and the fix has to remain, even if inconsistent with the new result.

EastwoodDC said...
Kemp: You are correct - you can't correct other results that might be influenced by the error, except if (as you say) it is a very simple chain of events. I drew my example from our group's current company-on-company Battletech scenario, which makes for long fire phases. My mistake occurred at the beginning of the phase, and I caught it at the end of the same phase (maybe 10 minutes later). All damage is simultaneous, so there weren't any non-fixable events to worry about. Thanks for the comment. :-)
RFC 3874: A 224-bit One-way Hash Function: SHA-224

Network Working Group                                         R. Housley
Request for Comments: 3874                                Vigil Security
Category: Informational                                   September 2004

            A 224-bit One-way Hash Function: SHA-224

Status of this Memo

This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2004).

Abstract

This document specifies a 224-bit one-way hash function, called SHA-224. SHA-224 is based on SHA-256, but it uses a different initial value and the result is truncated to 224 bits.

1. Introduction

This document specifies a 224-bit one-way hash function, called SHA-224. The National Institute of Standards and Technology (NIST) announced the FIPS 180-2 Change Notice on February 28, 2004, which specifies the SHA-224 one-way hash function. One-way hash functions are also known as message digests. SHA-224 is based on SHA-256, the 256-bit one-way hash function already specified by NIST [SHA2]. Computation of a SHA-224 hash value is two steps. First, the SHA-256 hash value is computed, except that a different initial value is used. Second, the resulting 256-bit hash value is truncated to 224 bits.

NIST is developing guidance on cryptographic key management, and NIST recently published a draft for comment [NISTGUIDE]. Five security levels are discussed in the guidance: 80, 112, 128, 192, and 256 bits of security. One-way hash functions are available for all of these levels except one. SHA-224 fills this void. SHA-224 is a one-way hash function that provides 112 bits of security, which is the generally accepted strength of Triple-DES [3DES].

This document makes the SHA-224 one-way hash function specification available to the Internet community, and it publishes the object identifiers for use in ASN.1-based protocols.
Housley                       Informational                     [Page 1]
RFC 3874       A 224-bit One-way Hash Function: SHA-224   September 2004

1.1. Usage Considerations

Since SHA-224 is based on SHA-256, roughly the same amount of effort is consumed to compute a SHA-224 or a SHA-256 message digest value. Even though SHA-224 and SHA-256 have roughly equivalent computational complexity, SHA-224 is an appropriate choice for a one-way hash function that provides 112 bits of security. The use of a different initial value ensures that a truncated SHA-256 message digest value cannot be mistaken for a SHA-224 message digest value computed on the same data.

Some usage environments are sensitive to every octet that is transmitted. In these cases, the smaller (by 4 octets) message digest value provided by SHA-224 is important.

These observations lead to the following guidance:

* When selecting a suite of cryptographic algorithms that all offer 112 bits of security strength, SHA-224 is an appropriate choice for the one-way hash function.

* When terseness is not a selection criterion, the use of SHA-256 is a preferred alternative to SHA-224.

1.2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [STDWORDS].

2. SHA-224 Description

SHA-224 may be used to compute a one-way hash value on a message whose length is less than 2^64 bits. SHA-224 makes use of SHA-256 [SHA2]. To compute a one-way hash value, SHA-256 uses a message schedule of sixty-four 32-bit words, eight 32-bit working variables, and produces a hash value of eight 32-bit words.
The function is defined in the exact same manner as SHA-256, with the following two exceptions:

First, for SHA-224, the initial hash value of the eight 32-bit working variables, collectively called H, shall consist of the following eight 32-bit words (in hex):

      H_0 = c1059ed8   H_4 = ffc00b31
      H_1 = 367cd507   H_5 = 68581511
      H_2 = 3070dd17   H_6 = 64f98fa7
      H_3 = f70e5939   H_7 = befa4fa4

Second, SHA-224 simply makes use of the first seven 32-bit words in the SHA-256 result, discarding the remaining 32-bit words.
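The two-step construction is easy to observe with a standard library that implements both functions; the sketch below uses Python's hashlib. The SHA-224 digest is 28 octets, and because of the different initial value it is not simply a truncation of the SHA-256 digest of the same message:

```python
import hashlib

msg = b"abc"  # the standard FIPS 180-2 test message
d224 = hashlib.sha224(msg).digest()
d256 = hashlib.sha256(msg).digest()

print(len(d224))          # 28 octets = 224 bits
print(d224 == d256[:28])  # False: the distinct IV prevents any confusion
print(hashlib.sha224(msg).hexdigest())
```

This is exactly the property motivating the different initial value: a truncated SHA-256 value can never be mistaken for a SHA-224 value computed on the same data.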
ALEX Lesson Plans

Subject: Mathematics (6 - 12)
Title: What are You - Prime or Composite?
Description: This lesson is on factors and greatest common factors. The students will view presentations on prime numbers, composite numbers, and prime factorization (some using factor trees). The students will participate in interactive games on various websites and discover, by using the initials of their names, whether they are prime or composite.

Subject: Mathematics (4 - 6)
Title: My Favorite Number
Description: This activity allows students to review many of the number theory concepts. Students will pick a composite number, write verbal expressions about the number, find the factors and the prime factorization, list multiples, draw a cartoon character of the number, and write a word problem. The purpose of this activity is to be a review of the number theory topics taught during middle school. This lesson plan was created as a result of the Girls Engaged in Math and Science University (GEMS-U) Project.

Thinkfinity Lesson Plans

Subject: Mathematics
Title: Playing the Product Game
Description: In this lesson, one of a multi-part unit from Illuminations, students learn the rules of and play the Product Game. In this game, players start with a set of given factors and then multiply to find the product. Students create their own game boards and develop strategies for winning the game.
Thinkfinity Partner: Illuminations
Grade Span: 6, 7, 8

Subject: Mathematics
Title: Look for Patterns
Description: In this lesson, one of a multi-part unit from Illuminations, students further develop their understanding of ratio, proportion, and least common multiple.
Thinkfinity Partner: Illuminations
Grade Span: 6, 7, 8

Subject: Mathematics
Title: The Factor Game
Description: In this Illuminations lesson, students play the Factor Game, which engages students in a friendly contest where strategies involve distinguishing between numbers with many factors and numbers with few factors. Students are then guided through an analysis of game strategies and introduced to the definitions of prime and composite numbers.
Thinkfinity Partner: Illuminations
Grade Span: 3, 4, 5, 6, 7, 8

Subject: Cross-Disciplinary - Informal Education, Mathematics - Algebra, Mathematics - Applied Mathematics, Mathematics - Arithmetic, Mathematics - Calculus, Mathematics - Discrete Mathematics, Mathematics - Functions, Mathematics - Geometry, Mathematics - Measurement, Mathematics - Number Sense, Mathematics - Number Theory, Mathematics - Patterns, Mathematics - Probability, Mathematics - Statistics, Mathematics - Trigonometry, Informal Education - Homework Help/Tutoring
Title: What Is a Prime Number?
Description: Division will definitely factor into today's mathematical Wonder of the Day. April is Math Education Month, so join us on a numerical journey while you're still in the prime of your life!
Thinkfinity Partner: Wonderopolis
Grade Span: K, PreK, 1, 2, 3, 4, 5

Subject: Mathematics
Title: Making Your Own Product Game
Description: In this lesson, one of a multi-part unit from Illuminations, students review strategies for playing the Product Game. In this game, players start with a set of given factors and multiply to find the product. Students then work in groups to create their own game boards.
Thinkfinity Partner: Illuminations
Grade Span: 3, 4, 5, 6, 7, 8

Subject: Mathematics
Title: Classifying Numbers
Description: In this lesson, one of a multi-part unit from Illuminations, students use Venn diagrams to organize information about numbers. Using the Product Game board, students look for relationships and characteristics of numbers to determine what numbers belong to a descriptor and what numbers belong to more than one descriptor.
Thinkfinity Partner: Illuminations
Grade Span: 6, 7, 8

Subject: Mathematics
Title: The Factor Game: Activity Sheet
Description: This reproducible activity sheet, from an Illuminations lesson, guides students in their exploration of factors. They consider strategy for playing the Factor Game on a 49-board and reflect on the importance of factors in the establishment of a 24-hour day.
Thinkfinity Partner: Illuminations
Grade Span: 6, 7, 8

Subject: Mathematics
Title: Factor Game
Description: This interactive game, from Illuminations, allows students to exercise their factoring ability. They can play against a friend or the computer as they take turns choosing a number and identifying all of its factors.
Thinkfinity Partner: Illuminations
Grade Span: 3, 4, 5, 6, 7, 8
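The initials activity from the first lesson is simple enough to automate. A small sketch in Python (the function names are mine): map each initial to its alphabet position, then classify it as prime, composite, or neither:

```python
def is_prime(n):
    # Numbers below 2 are neither prime nor composite.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def classify_initial(letter):
    pos = ord(letter.upper()) - ord("A") + 1   # A=1, B=2, ..., Z=26
    if pos == 1:
        return "neither"   # 1 is neither prime nor composite
    return "prime" if is_prime(pos) else "composite"

print(classify_initial("C"))  # C = 3, a prime
print(classify_initial("F"))  # F = 6, a composite
```

A student whose initials map to positions like 3 and 6 would find one prime and one composite, which is the kind of result the lesson asks them to discover by hand.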
Recursive Fibonacci sequence optimization

June 23rd, 2012, 07:59 PM

Recursive Fibonacci sequence optimization

I've written an algorithm for finding terms in a Fibonacci sequence using a recursive method. It's fairly quick until around the 45th term, which takes roughly 5 seconds to calculate using a primitive type; it goes until the 42nd term using the BigInteger class in about 10 seconds. I need to find the first term that contains 1000 digits, which also means I have to use the BigInteger class, which slows it down. Is there any way I can speed up my algorithm, short of hard-coding all the terms in the sequence up to a specific point?

Code java:

public static BigInteger fibSeq(int x)
{
    if (x <= 2)
        return BigInteger.ONE;
    return fibSeq(x - 1).add(fibSeq(x - 2));
}

public static long fibSeqL(long x)
{
    if (x <= 2)
        return 1;
    return fibSeqL(x - 1) + fibSeqL(x - 2);
}

public static void main(String[] args)
{
    long start = System.currentTimeMillis();
    System.out.println(fibSeq(42));   // time the 42nd term with BigInteger
    long end = System.currentTimeMillis();
    System.out.println("Elapsed time: " + (end - start) + "ms");
}

June 24th, 2012, 12:11 AM

Re: Recursive Fibonacci sequence optimization

Here's the thing: Lots of things are "easy" to define and explain, and maybe even easy to understand, as recursive processes. But there is a lot of overhead (time and memory) in deep recursive calls. In the case of the Fibonacci sequence, each term requires two recursive calls. Implement iteratively instead of recursively. Make a loop, not a recursive call.

Code :

public static BigInteger fibSeq(int x)
{
    // Declare BigInteger variables for sum, nminus1 and nminus2
    // Initialize sum to zero and the others to 1
    if (x <= 2)
    {
        return BigInteger.ONE;
    } // x <= 2
    else
    {
        for (int i = 3; i <= x; i++)
        {
            // Set sum equal to nminus1 + nminus2
            // Set nminus2 equal to nminus1
            // Set nminus1 equal to sum
        } // for
    } // else
    return sum;
}

Now, this can get an answer much quicker. But, here's the thing: Your problem is not to calculate, say, the 3000th term of the sequence.
It is to keep calculating until there are 1000 digits in the result and then stop. I mean, the whole problem is to determine how many terms it takes to get up to the 1000-digit thing, right?

Here's what I suggest: Declare a Fibonacci class. It has private BigInteger variables for sum, nminus1 and nminus2. Because of the quirkiness of the startup routine, I might make a variable named "called" that takes care of the first two terms. Then, instead of trying to tell it how many terms to give, just make a "next()" method that gives the next term. After creating the object, the first call to next() gives F1, the second call gives F2. Just keep calling next() until you get a term with the number of digits you need. (I would probably just keep count of the terms in main(), although you could use the "called" variable in the object if you wanted to create a "getter" method for it.)

Code java:

class Fibonacci
{
    private int called;
    private BigInteger nm2;
    private BigInteger nm1;

    // Constructor
    public Fibonacci()
    {
        nm2 = new BigInteger("1");
        nm1 = new BigInteger("1");
        called = 0;
    }

    public BigInteger next()
    {
        // Special condition to get the first two terms
        if (called < 2)
        {
            called++;
            return BigInteger.ONE;
        } // if (called < 2)

        // Higher terms. Note that nm1 and nm2 have been "primed"
        // with values of 1, so it's ready to go.
        // Set sum equal to nm1 + nm2
        // Set nm2 equal to nm1
        // Set nm1 equal to sum
        return sum;
    } // next
} // Fibonacci

Now in main(), create a Fibonacci instance and call its next() method in a loop where the loop control condition is the length of the string that the BigInteger.toString() function gives you for the terms as they flow out from the object.

Of course, instead of making a separate class, the whole Fibonacci loop could be put into main(), and it would save the time-consuming function calls.

My methodology: Implement first. Optimize later, if appropriate. (It's not a speed contest, right?
I mean, if a clean, modular object-oriented approach does the deed in a second or so, is there any practical reason to try to get it down to ten milliseconds? Maybe so; maybe not.)

June 24th, 2012, 08:49 AM

Re: Recursive Fibonacci sequence optimization

I thought about doing it in a loop, but I needed the practice in recursion. Thanks a lot.
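For reference, the iterative idea from the reply solves the original 1000-digit question in well under a second. A quick sketch in Python rather than Java, just to keep it short (the function name is mine):

```python
def first_fib_with_digits(n_digits):
    # Iterative Fibonacci: constant memory, no recursion overhead.
    a, b = 1, 1          # F(1), F(2)
    term = 2
    while len(str(b)) < n_digits:
        a, b = b, a + b  # advance one term
        term += 1
    return term

print(first_fib_with_digits(1000))  # index of the first 1000-digit term
```

The same loop translates directly to the suggested Java class: keep the two previous BigInteger terms, add, shift, and stop when toString() reaches the required length.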
The BurStFin R package

February 16, 2012
By Pat

Version 1.01 of BurStFin is now on CRAN. It is written entirely in R, and meant to be compatible with S+. The package is aimed at quantitative finance, but the variance estimation functions could be of use in other applications as well. Also of general interest is threeDarr, which creates a three-dimensional array out of matrices.

Variance estimation

The most important functions in the package are:

• var.shrink.eqcor to estimate a variance matrix using Ledoit-Wolf shrinkage towards equal correlation.
• factor.model.stat to create a statistical factor model of a variance matrix.

Both of these functions can estimate variances when there are more variables (assets in finance) than observations. Both of them also allow missing values in the input. The tawny package has a function for Ledoit-Wolf shrinkage, but it does not allow missing values. Also in tawny is a function to estimate variance based on random matrix theory.

Variance manipulation

• var.add.benchmark takes a variance matrix plus a named vector of weights and returns a variance matrix with the additional asset which is the linear combination of the existing assets given by the weight vector.
• var.relative.benchmark takes a variance matrix and returns a variance matrix of one less asset that is the variance relative to the dropped asset.

Both of these functions allow the variance to be a three-dimensional array representing multiple variance matrices.

• threeDarr takes one or more matrices and creates a three-dimensional array out of them.
• alpha.proxy shows the effect that volatility and correlation have on the utility of an investor in a certain setting.

threeDarr is new to the package. It was written to streamline some tasks with Portfolio Probe, but is of general use.

By default there is now a warning in both var.shrink.eqcor and factor.model.stat if the input x is all non-negative. This asks the question: Were prices accidentally given rather than returns?
There is now a sum.to.one argument to var.add.benchmark which can be set to FALSE if the "benchmark" is something with weights that do not sum to one. The use case that prompted this was a vector of portfolio weights minus benchmark weights.

Research projects

The estimation of variance matrices in finance is (perhaps amazingly) not especially well researched. The functionality in this package suggests several questions to which it would be nice to have answers:

• When is Ledoit-Wolf shrinkage better than a factor model (or anything else)? This is explored a little in some blog posts.
• What is the best way to handle missing values? This occurs, for example, when stocks did not exist for the entire historical period. Is it different for different estimation techniques? The application does matter — what you should shrink toward is different with or without a benchmark, for instance.
• What is the best time weighting to use? Is it different for different estimation techniques? Is it different for different applications?

Getting it

R: 2.13 and 2.14 on Windows

It is on CRAN, so you can just do:

install.packages("BurStFin")

This will also be the way to get it for new versions of R and/or BurStFin.

R: older versions on Windows

The 2.14 build is spread around the Burns Statistics repository. So you can do:
{"url":"http://www.r-bloggers.com/the-burstfin-r-package/","timestamp":"2014-04-20T13:22:49Z","content_type":null,"content_length":"38760","record_id":"<urn:uuid:36ef7acb-484a-418d-8fda-f85f86fce136>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
RC Time Constant

The time required to charge a capacitor to 63 percent (actually 63.2 percent) of full charge, or to discharge it to 37 percent (actually 36.8 percent) of its initial voltage, is known as the TIME CONSTANT (TC) of the circuit. The charge and discharge curves of a capacitor are shown in figure 3-11. Note that the charge curve is like the curve in figure 3-9, graph (D), and the discharge curve like the curve in figure 3-9, graph (B).

Figure 3-11. - RC time constant.

The value of the time constant in seconds is equal to the product of the circuit resistance in ohms and the circuit capacitance in farads. The value of one time constant is expressed mathematically as t = RC. Some forms of this formula used in calculating RC time constants are:

t = RC        R = t / C        C = t / R

Q.14 What is the RC time constant of a series RC circuit that contains a 12-megohm resistor and a 12-microfarad capacitor?

Because the impressed voltage and the values of R and C or R and L in a circuit are usually known, a UNIVERSAL TIME CONSTANT CHART (fig. 3-12) can be used to find the time constant of the circuit. Curve A is a plot of both capacitor voltage during charge and inductor current during growth. Curve B is a plot of both capacitor voltage during discharge and inductor current during decay.

Figure 3-12. - Universal time constant chart for RC and RL circuits.

The time scale (horizontal scale) is graduated in terms of the RC or L/R time constants so that the curves may be used for any value of R and C or L and R. The voltage and current scales (vertical scales) are graduated in terms of percentage of the maximum voltage or current so that the curves may be used for any value of voltage or current. If the time constant and the initial or final voltage for the circuit in question are known, the voltages across the various parts of the circuit can be obtained from the curves for any time after the switch is closed, either on charge or discharge. The same reasoning is true of the current in the circuit.
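Q.14 can be answered directly from t = RC. A quick check in Python (illustrative function name):

```python
def time_constant(r_ohms, c_farads):
    # One RC time constant, in seconds: t = R * C.
    return r_ohms * c_farads

# Q.14: 12 megohms and 12 microfarads
print(time_constant(12e6, 12e-6))  # one time constant is 144 seconds
```

Note how the megohm and microfarad prefixes cancel: 10^6 ohms times 10^-6 farads leaves plain seconds.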
The following problem illustrates how the universal time constant chart may be used. An RC circuit is to be designed in which a capacitor (C) must charge to 20 percent (0.20) of the maximum charging voltage in 100 microseconds (0.0001 second). Because of other considerations, the resistor (R) must have a value of 20,000 ohms. What value of capacitance is needed? Find: The capacitance of capacitor C. Solution: Because the only values given are in units of time and resistance, a variation of the formula t = RC is used. Find the value of RC by referring to the universal time constant chart in figure 3-12 and proceed as follows: • Locate the 20 point on the vertical scale at the left side of the chart (percentage). • Follow the horizontal line from this point to intersect curve A. • Follow an imaginary vertical line from the point of intersection on curve A downward to cross the RC scale at the bottom of the chart. Note that the vertical line crosses the horizontal scale at about .22 RC. The value selected from the graph means that a capacitor (including the one you are solving for) will reach twenty percent of full charge in twenty-two one-hundredths (.22) of one RC time constant. Remember that it takes 100 microseconds for the capacitor to reach 20% of full charge. Since 100 microseconds is equal to .22 RC, the time required to reach one RC time constant must be equal to: RC = 100 microseconds ÷ .22 ≈ 455 microseconds. Now use the following formula to find C: C = RC ÷ R = (455 × 10^-6 second) ÷ (20,000 ohms) ≈ 0.023 microfarad. To summarize the above procedures, the problem and solution are shown below without the step-by-step explanation. Transpose the RC time constant formula as follows: C = RC ÷ R. Substitute the R and RC values into the formula: C = (455 × 10^-6) ÷ 20,000 ≈ 0.023 microfarad. The graphs shown in figures 3-11 and 3-12 are not entirely complete. That is, the charge or discharge (or the growth or decay) is not quite complete in 5 RC or 5 L/R time constants.
However, when the values reach 0.99 of the maximum (corresponding to 5 RC or 5 L/R), the graphs may be considered accurate enough for all practical purposes. Q.15 A circuit is to be designed in which a capacitor must charge to 40 percent of the maximum charging voltage in 200 microseconds. The resistor to be used has a resistance of 40,000 ohms. What size capacitor must be used? (Use the universal time constant chart in figure 3-12.)
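The chart-based arithmetic above, and Q.15, can be cross-checked against the exact exponential charge law rather than the chart. This sketch is not part of the NEETS text: it uses t/RC = −ln(1 − v/V), of which the chart reading of about .22 RC (for 20 percent) is an approximation.

```python
import math

def capacitance_for_charge(fraction, t, R):
    """Capacitance needed for a capacitor to reach `fraction` of full charge
    after t seconds when charging through resistance R (ohms)."""
    t_over_RC = -math.log(1.0 - fraction)  # exact value the chart approximates
    return t / t_over_RC / R

# Worked example above: 20% of full charge in 100 microseconds, R = 20,000 ohms.
# -ln(0.8) ~ 0.223, close to the chart reading of .22 RC.
C1 = capacitance_for_charge(0.20, 100e-6, 20_000.0)
print(round(C1 * 1e6, 4))   # ~0.0224 microfarad

# Q.15 check: 40% of full charge in 200 microseconds, R = 40,000 ohms.
C2 = capacitance_for_charge(0.40, 200e-6, 40_000.0)
print(round(C2 * 1e6, 4))   # ~0.0098 microfarad, i.e. about 0.01 uF
```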
Calculate the work against gravity..? April 29th 2010, 02:28 PM #1 Feb 2010 Calculate the work against gravity..? Calculate the work against gravity required to build a tower of height 35 ft in the shape of a right circular cone with base of radius 7 ft out of brick. Assume that brick has density $\rho$. April 30th 2010, 06:38 AM #2 Grand Panjandrum Nov 2005 The work required to raise a mass $m$ to a height $h$ is $w=mgh$. The radius of the cone at a height $0\le h\le 35$ feet is: $r(h)=7-\frac{h}{5}$ The volume of a slice of the cone perpendicular to the axis at a height $h$ and of thickness $\Delta h$ is: $V_{\Delta h}(h)=\pi r^2 \Delta h=\pi (7-(h/5))^2 \Delta h$ Now the work can be seen to be: $\int_{h=0}^{35} \pi r^2 \rho\, g\, h \; dh=\int_{h=0}^{35}\pi (7-(h/5))^2 \rho\, g\, h \; dh$
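The definite integral in the reply can be evaluated numerically. The density value is cut off in the post, so the sketch below assumes a weight density of 80 lb/ft³ purely for illustration (in U.S. customary units the weight density already incorporates g, so no separate g factor is needed):

```python
import math

delta = 80.0  # ASSUMED weight density of brick, lb/ft^3 (value truncated in the post)

def integrand(h):
    r = 7.0 - h / 5.0        # cone radius at height h: 7 ft at the base, 0 at 35 ft
    return delta * math.pi * r**2 * h

# Simpson's rule on [0, 35]; exact (up to rounding) for this cubic integrand
n = 10_000                   # even number of subintervals
a, b = 0.0, 35.0
step = (b - a) / n
total = integrand(a) + integrand(b)
for i in range(1, n):
    total += (4 if i % 2 else 2) * integrand(a + i * step)
W = total * step / 3.0
print(round(W))              # total work against gravity, ft-lb (~1.26e6)
```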
External Influences on U.S. Undergraduate Mathematics Curricula: 1950-2000 - Applications and Modeling Courses
Applications and Modeling Courses
Above, Solomon Garfunkel, founder of COMAP and co-author, with Lynn Steen, of For All Practical Purposes, in 2013 (Photo provided to MAA by Sol Garfunkel in 2013)
Below, Walter Meyer, co-author with Joseph Malkevitch of Graphs, Models and Finite Mathematics, leader of the Cajori Two Project, and author of the present article, in 2013 (Photo provided to MAA by Walter Meyer in 2013)
The ultimate form in which the mathematical world can engage with the external world is to solve its problems and to teach students to do so as well. Although applied undergraduate mathematics courses were common in the late 19^th century, by the middle of the 20^th century there was considerably less emphasis on applications [43]. CUPM was not happy with this disregard of the external world. Between 1962 and 1972, CUPM issued seven reports [29, 30, 32-36] that recommended a change in this situation in the form of upper division modeling courses, courses where real-world problems would be confronted with what might be a varied set of mathematical tools. Despite the prestige of CUPM and the respected leaders who sat on it, the community was slow to respond to the calls issued as early as 1962. The first example we know of a modeling course in the upper division category was a course offered at Middlebury College by Michael Olinick in 1971. Also noteworthy was the Mathematics Clinic at Harvey Mudd College, starting up in 1973, but with roots in a seminar running in 1970-72. Maynard Thompson and Daniel Maki published an early influential modeling text in 1973 [27]. Finally, Henry Pollak, in “A History of the teaching of modeling” in [49], discussed possible influences from mathematical modeling clinics in England.
Applications outside upper division modeling courses By Fall 2000, upper division mathematical modeling course enrollments had reached about 2000 students. Considering that the number of graduating mathematics majors in that era was about seven times as large, it does not seem as if upper division modeling courses were a big success. But the influence of the greater interest in applications went beyond upper division courses in modeling to include calculus reform, lower division courses in modeling, and lectures and discussions at professional meetings. In the wake of upper division modeling courses, lower division modeling courses aimed at those not majoring in any of the sciences also arrived. Pioneers here were the COMAP volume For All Practical Purposes [16] and its forerunner Graphs, Models and Finite Mathematics [28]. These texts indicated a major new direction for “liberal arts math.” By Fall 2000, there were about 14,000 students enrolled in lower division mathematical modeling courses. The temper of the times in our discipline is found not only in our courses. One can see it in professional mathematics meetings. Consider the programs of the 2007 and 2008 MathFest meeting of MAA. We attended and analyzed all invited hour addresses at these meetings with the intention of counting those that were mainly pure mathematics and those that were mainly applied mathematics. Our classification depended greatly on the speaker’s presentation of the motivation for the work, not merely whether some use could be made of it in the end. 
Some talks were hard to classify for one reason or another: some seemed irrelevant to the issue (e.g., the Leitzel lectures which concerned how mathematicians ought to behave and organize themselves, talks on history or pedagogy); some dealt with recreational mathematics, raising the question whether mathematics used for paper folding and other recreations really is applied mathematics; and two were hard to classify since they touched on pure and applied mathematics rather evenly. Table 1 shows how we classified the talks.

MAA MathFest Invited Addresses: Pure vs. Applied      | 2007           | 2008
# Mostly pure                                         | 2              | 1
# Mostly applied                                      | 6              | 3
# Recreational but with no other evident applications | 1              | 4
# Pure and applied somewhat evenly                    | 1 (Hedrick #3) | 1 (Hedrick #2)
# Other (history, organizational, pedagogical)        | 3              | 2
# Total                                               | 13             | 11

Table 1. Invited Hour Addresses at MathFest 2007 and 2008
Clearly, applied mathematics was emphasized more than pure mathematics, and even more so if applying mathematics to recreations is considered applied mathematics. Another example of how pervasive advocacy of applications became is the frequency with which applications were cited as a main objective of calculus reform. This will be discussed more fully later. Influences leading to modeling and applications Wanting to teach applications from a variety of other disciplines – especially in the high-profile form of a modeling course – is, to begin with, an example of interest in the world outside mathematics. But why did this interest in other disciplines not manifest itself in modeling courses earlier, say in 1920? The simple fact is that for much of the 20^th century, if not before, there has been enough mathematics and enough applications to have made modeling courses with varied topics, including some outside the physical sciences, possible.
Model of Sputnik 1, launched into space by the Soviet Union in 1957, National Air and Space Museum, Washington, D.C., U.S.A. (Photo source: NASA via Wikimedia Commons)
One might suppose that the 1957 launching of Sputnik, the world’s first space satellite, by the Soviet Union would have been influential in unleashing interest in applications and modeling in the U.S. However, the two notable “applied” textbooks closest in time after Sputnik, Ben Noble’s 1966 Applied Linear Algebra and Garrett Birkhoff’s and Thomas Bartee’s Modern Applied Algebra, do not seem to have been influenced by Sputnik. These are not modeling textbooks, but according to WorldCat, they were the first two books in the 20^th century to use the words “algebra” and “applied” (or a similar word) together in their titles. Although Birkhoff’s and Bartee’s text may not be suitable for undergraduates, it was followed by other undergraduate level applied algebra books which were, and it was surely a stimulus for these other authors. Nowadays one can find undergraduate courses with the title “Applied Algebra” (or similar) at diverse institutions, such as Harvard University, Saginaw Valley State University, University of Wyoming, Prescott College, and SUNY at Stony Brook. Noble’s book was conceived as an undergraduate text, as its preface makes clear, and, according to a later co-author, James Daniel, was definitely used for undergraduates as intended [9]. For more about the evolution of linear algebra, see [42]. Although there is no doubt that the Sputnik launch spurred an intense effort by the U.S. to catch up with and surpass Soviet space technology, we have had considerable interaction with authors Noble and Bartee by email, phone, and letter, and Sputnik never came up. Noble’s interest in applied linear algebra dates to his difficulty grasping an abstract linear algebra course he took as part of his graduate work at Cambridge University in 1946 and is well-described in the preface of [48]. Bartee’s interest in applications dates from the late 1940s when he worked on defense-related issues at the Firestone Corporation.
There is no doubt that Sputnik had a huge influence on science and mathematics generally, raising levels of interest and funding, but we have seen no definite evidence of any specific immediate stimulus to applied mathematics in undergraduate teaching. However, it may well have contributed to a cumulative impact along with CUPM reports and other factors that were not, by themselves, immediately decisive. The quantum leaps in mathematics represented by linear programming, applied combinatorics, and graph theory occurred decades before modeling courses arose to include them, so they also cannot be regarded as immediate causes of the development of modeling courses. Therefore, it is reasonable to hunt even further for the crucial spark that ignited the aforementioned accumulated pressures and brought modeling courses into existence. In [43] we argued that the following aspects of our environment in the 1970s and 1980s may have turned mathematics educators toward the uses of mathematics: difficulties new Ph.D.’s had in getting jobs; enrollment declines of mathematics majors, leading many to want more directly useful curricula; and the heady winds of experiment and change that were blowing through American society, and in particular the educational sector, during the late 1960s and the 1970s.
why it actually matters It’s been a while since I’ve posted anything here – I’ve been pretty busy with work. In earlier posts I’d mentioned interesting articles on mathematics by Bill Thurston, Terry Tao, and Paul Lockhart. Today I’m going to try and relate these articles to the topic of educating children in mathematics generally. Let me start with Lockhart’s article. In that article, Paul takes the aggressive position that math is more like art than anything else; because we treat it in a rote fashion, we rob children of the beauty of the subject matter. I find this an interesting point because I think it perfectly demonstrates our collective bipolar attitude towards math. There are two simultaneous memes about mathematics out there: 1) Math is rigorous, precise, mechanical, devoid of spirituality, dry, cold, logical, spock-ish, etc. 2) Math is beautiful, creative, artistic, elegant, wondrous, God keeps its most amazing results in a little book, etc. I assert that most people have had exposure to *both* memes; unfortunately, most people merely have evidence for the first meme, and not the second, and so the first one is perhaps the more commonly-held belief. But my main point here is that both memes are one-dimensional: they fail to capture the entirety of mathematics, and (more importantly) they fail to capture the spectrum of reasons why children should learn math. There are at least two dimensions to this problem. One dimension might represent the range of attractions of mathematics itself: the stereotypical engineer loves the pragmatic value of the Fast Fourier Transform or of error correcting codes; the stereotypical head-in-the-clouds number theorist loves ancient riddles about whole numbers (ok, yes, this is foreshadowing – turns out the lover of error-correcting codes now has a lot in common with the lover of arithmetic – but that’ll be a *much* later post!). 
But the other dimension to this problem is that there’s a hierarchy of specialization: not every citizen needs to get a PhD in mathematics to be deemed mathematically literate. So, in effect, a math curriculum will by necessity partition the body of math knowledge into ~3 strata: 1) The math that is valuable for the general public (basic math literacy) 2) The math that is needed for certain vocations (say, for undergrad majors in engineering, science, etc) 3) The math that is needed for specialists in mathematics itself. Paul makes a very compelling argument that, in partitioning math as above, we’ve inadvertently also removed all the interesting and beautiful stuff about math from the first and second tiers of this strata. I would generally agree with this point. But there is a flip side to this: focusing on the beautiful patterns, and on selling the wondrous-ocity of these patterns to kids. In my experience, middle-school kids can be quite pragmatic, too, and you want to appeal to that side as well. Richard Feynman once wrote that he’d aimed his physics lectures at both the theorist and the experimentalist: he wanted to provide something for each personality type. Similarly, if you teach math purely as an art form, and fail to note the “unreasonable effectiveness of mathematics,” then some of your class will miss the point. BTW, I’m very confident that Paul personally provides something for everyone in his class, I am pointing this out mainly to reference what I think is a weakness in his thesis. To expound on this point some more, let me suggest that math is in many ways like English. Consider the problem of teaching English to the masses. There are two parallel goals: basic proficiency with the language (both literacy and writing skills) and exposure to literature as an art. 
Great English teachers understand that reading and discussing great works of literature is a powerful and effective way of developing a child’s ability to understand the existing corpus and reason effectively about it. At the same time, diagramming sentences and “mechanically” writing 5 paragraph essays – these are great etudes, a focused bit of deliberate practice that (when properly motivated) hone a student’s craft. Great English teachers juxtapose both kinds of work throughout their curriculum. The same is of course true for music: a great music education trains you in musicianship, in instrument-specific technique, in theory and in developing a rich understanding of the history of music and the body of work that has been done before. You study the great works, and understand why they are wonderful, and you strive to copy them, and then later to find your own musical voice. So, big surprise: math education is like many other human intellectual endeavors, and getting great at it requires developing facility with the tools and techniques of the trade, while also deepening one’s conceptual understanding of the subject. This conceptual understanding is in turn achieved thru a combination of carefully studying specific examples in great depth, along with then abstracting out core principles, patterns, and connections to other topics. At various times, the subject of mathematics has been developed and expanded by folks who were motivated in dramatically different ways: some were motivated by very pragmatic, even mundane or quotidian problems; others were driven purely by aesthetic considerations, whether it was beauty, fun, farce, or perhaps even a combination of these and other driving forces.
Help with Quadrilaterals! December 15th 2009, 01:33 AM #1 Dec 2009 Help with Quadrilaterals! I have a few questions that I need help with. In which quadrilaterals do both diagonals bisect the corner angles? (Square, kite, trapezium, parallelogram, rectangle, etc.) How many lines of symmetry do the square, kite, trapezium, parallelogram, and rectangle each have? What is the order of rotational symmetry for the same quadrilaterals listed above? Which of the following would be sufficient to prove that a quadrilateral is a parallelogram? a) Both pairs of opposite sides are parallel b) A pair of adjacent angles are supplementary How do we determine if a rectangle is a square? How do we determine if a rhombus is a square? How do we determine if a trapezium is a parallelogram? Complete the following tree diagram: Quadrilateral -> Trapezium -> Parallelogram -> Rectangle --> ?? Thanks in advance!
Math Forum Discussions Topic: if statement Replies: 3 Last Post: Aug 20, 2013 12:48 PM Re: if statement Posted: Aug 20, 2013 12:48 PM "Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message news:kv03b8$mcl$1@newscl01ah.mathworks.com... > "S C.Carl" wrote in message <kuvtvb$gv5$1@newscl01ah.mathworks.com>... >> a = >> 0.0188 >> b = >> 0.0188 > - - - - - - - - - > The fact that the 'format short' displays of a and b are the same doesn't > mean that a and b are equal. That display is rounded off to four decimal > places and they could very easily be unequal. I suggest you try 'format > hex' on them. Or just execute: difference = a-b For == to say that they are the same, difference must be EXACTLY 0 (ignoring nonfinite and nondouble edge cases involving saturation.) Close doesn't count here; == is neither horseshoes nor hand grenades. In this case difference will likely be a small number, but not exactly 0. Steve Lord
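Steve Lord's point is not specific to MATLAB; it holds in any language using IEEE 754 doubles. An illustration in Python (not MATLAB code): two values that display identically at low precision can still fail an exact `==` comparison, and a tolerance-based check is the usual fix.

```python
import math

a = 0.1 + 0.2
b = 0.3

# Both display as 0.3 at low precision, but == demands bit-for-bit equality.
print(a == b)   # False
print(a - b)    # a tiny nonzero difference (~5.6e-17)

# Compare with a tolerance instead of ==
print(math.isclose(a, b, rel_tol=1e-9))  # True
```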
A question about Chi-Squared test May 22nd 2011, 07:19 AM #1 May 2011 I have done a study and found that the prevalence of a particular disease is higher in my study population (11 out of 429 patients) than the UK population (0.3% prevalence). How would I be able to compare these using Chi-squared test to find out whether the prevalence of the disease is higher in my population than the UK population? Thank you if there is only 1 disease state (ie, people are "sick" or "not") then you dont need a chi square test for this. I think a test with the normal distribution is sufficient. google threw up this (more or less) step by step guide: Statistics Tutorial: Hypothesis Test for a Proportion You may want to check its accuracy with your stats textbook before using it for anything important! Thanks, so would you not be able to use Chi-squared to calculate P value in this case? I thought Chi-squared could be used to compare proportions. I don't believe you can use $\chi^2$ distribution for your test. As this discussion details, among other things, the degree of freedom of the test corresponds to the number of cells in your analysis. Their example is to compare three distributions of fish, call them A, B, and C. They each have certain proportions a, b, and c. The expected proportions (E) were equally 1/3 for each cell. Thus, the test statistic is $\chi^2 = \frac{(a - E)^2}{E} + \frac{(b - E)^2}{E} + \frac{(c - E)^2}{E}$ The degrees of freedom for this test is given by: $df = "number\ of\ cells" - 1$ For the example, df = 2. In your case, you could only have a test of cell size = 1. Then your df = 0. How do you do a test with zero degrees of freedom? The answer is you cannot. Now, if you were to split your sample into males and females and have the population expected (empirical?) proportions for males and females. Then you could do the test with cell size = 2 for df = 1. The test is then straight-forward. Last edited by bryangoodrich; May 23rd 2011 at 10:09 AM. 
Reason: corrected latex and added URL I don't believe you can use $\chi^2$ distribution for your test. As this discussion details, among other things, the degree of freedom of the test corresponds to the number of cells in your analysis. Their example is to compare three distributions of fish, call them A, B, and C. They each have certain proportions a, b, and c. The expected proportions (E) were equally 1/3 for each cell. Thus, the test statistic is $\chi^2 = \frac{(a - E)^2}{E} + \frac{(b - E)^2}{E} + \frac{(c - E)^2}{E}$ The degrees of freedom for this test is given by: $df = "number\ of\ cells" - 1$ For the example, df = 2. In your case, you could only have a test of cell size = 1. Then your df = 0. How do you do a test with zero degrees of freedom? The answer is you cannot. Now, if you were to split your sample into males and females and have the population expected (empirical?) proportions for males and females. Then you could do the test with cell size = 2 for df = 1. The test is then straight-forward. wouldn't there be two cells (sick, not sick) and 1 degree of freedom in the proposed test? Not that i think it is the appropriate test, but it seems feasible enough to me. What does it mean to be not-sick when we already have the incidence known? His sample would then be 11/429 and (429-11)/429. Equivalently by proportions, 2.57% and 97.44%. The population proportion is then the pair (0.3%, 99.7%). Let's look at the statistic: $\chi^2 = \frac{(.0257 - 0.003)^2}{0.003} + \frac{(.9744 - .997)^2}{.997} = 0.1722756$ Using R with 95% confidence, the distribution has quantiles for $\chi^2 (1 - \alpha, df) = 3.841, (\alpha = 0.05)$. The null hypothesis is that the two are the same, and the test statistic falls within the acceptance region (fail to reject). Yet, this doesn't seem right given the drastic difference we observed in sick people (>2% vs 0.3%). Why would this be? The reason is what I alluded to above. 
It is a false appearance that we gained a degree of freedom by partitioning "sick and nonsick" people. The reason is that the other is wholly determined by the available information. Maybe I'm wrong, though. My understanding was that the reason the test statistic has 1 less degree of freedom than the number of cells is that the total for the cells always adds up to 100% of the sample size. i.e., the fact that one of the cells is determined by the others is already allowed for when setting the number of degrees of freedom. The test may have low power but that does not show it is unfeasible or that its distribution is asymptotically incorrect. I never intended to imply that the test was a good one (as per my first post in this thread, where I drew the OP's attention to an alternative). Edit Minor edits were made before I saw the reply below Last edited by SpringFan25; May 23rd 2011 at 11:38 AM. You may be correct, but aren't the cells supposed to be independent? If one is determined by the other, we don't have that independence. Thus, we really have one estimate and we lose its degree of freedom, making the test impotent. If I am wrong, then your critique is spot on, and my calculations above would be the result. This derivation appears to assume that the probability of being in the cells must sum to 1. That is only the case if we include 2 cells (sick, not sick) in the analysis. Last edited by SpringFan25; May 23rd 2011 at 12:27 PM. Reason: fixed ambiguity Of course you can use a chi-square test for this. It's standard. I would be shocked if the usual chi-square test wasn't exactly equal to the square of the usual Z test (by "usual" I mean the one where you use the exact null standard deviation in the denominator, as opposed to estimating it). One degree of freedom, of course. There are implicitly two cells in the data: the successes and the failures. You lose one degree of freedom so you have one left over.
Obviously this has to be true since an equivalent Z test can be formed, and squaring the Z gives a chi-square with one degree of freedom. It's a little bit misleading to speak of "the" chi-square test. The usual tests - the Wald, score, and likelihood ratio tests - are all chi-square tests. IIRC the chi-square test that most people think of is equivalent to the score test in this particular case. Incidentally, if you invert the score test to get a confidence interval, it turns out to be the same as adding two successes and two failures, which is where that trick comes from for small samples. What does it mean to be not-sick when we already have the incidence known? His sample would then be 11/429 and (429-11)/429. Equivalently by proportions, 2.57% and 97.44%. The population proportion is then the pair (0.3%, 99.7%). Let's look at the statistic: $\chi^2 = \frac{(.0257 - 0.003)^2}{0.003} + \frac{(.9744 - .997)^2}{.997} = 0.1722756$ Using R with 95% confidence, the distribution has quantiles for $\chi^2 (1 - \alpha, df) = 3.841, (\alpha = 0.05)$. The null hypothesis is that the two are the same, and the test statistic falls within the acceptance region (fail to reject). Yet, this doesn't seem right given the drastic difference we observed in sick people (>2% vs 0.3%). Why would this be? The reason is what I alluded to above. It is a false appearance that we gained a degree of freedom by partitioning "sick and nonsick" people. The reason is that the other is wholly determined by the available information. Maybe I'm wrong, though. Shouldn't you be using expected cell counts, not expected proportions? It makes a huge difference. I also checked in R that this test is equivalent to the Z test and, sure enough, if you square the Z test you get this one. If you replace the proportions with expected counts you get 73.5, so the result is highly significant. 
$\displaystyle Z = \frac{\hat p - p_0}{\sqrt{p_0 (1 - p_0) / 429}} = 8.5747 \Rightarrow Z^2 = 73.5$ $\displaystyle \chi^2 = \frac{(11 - (.003)429)^2}{(.003)429} + \frac{(418 - (.997)429)^2}{(.997)429} = 73.5$ Thanks for the details. I was about to comment that SpringFan was right, and if we think of it in terms of the Z test we should see the parallel. I don't know why I was using proportions, though. As you pointed out, you're supposed to use the counts, and you aptly show the test comes out significant as we should have expected. I prefer the Z, which is approximate by the CLT, because you can do a one-sided test here. When you square the test stat, it is now a 2-sided test. Same with the t and F when you have 1 df.
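The equivalence worked out above can be reproduced directly. This sketch (in Python rather than the thread's R and hand calculation) uses the thread's numbers — 11 sick out of 429 versus a 0.3% population prevalence — and checks that the chi-square statistic built from expected counts equals the square of the Z statistic.

```python
import math

n, sick = 429, 11   # study sample
p0 = 0.003          # UK population prevalence

p_hat = sick / n

# One-sample Z test for a proportion (null standard deviation in the denominator)
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# Chi-square goodness-of-fit with EXPECTED COUNTS (not proportions)
expected_sick = p0 * n
expected_well = (1 - p0) * n
chi2 = ((sick - expected_sick) ** 2 / expected_sick
        + (n - sick - expected_well) ** 2 / expected_well)

print(round(z, 2))     # ~8.57
print(round(chi2, 1))  # ~73.5, equal to z**2 as claimed
```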
st: RE: RE: RE: RE: RE: where did my matrix go after calling -diagt- From "Nick Cox" <n.j.cox@durham.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject st: RE: RE: RE: RE: RE: where did my matrix go after calling -diagt- Date Thu, 18 Feb 2010 14:38:48 -0000 Agreed that the finger does point in that direction. The implications for Moleps appear to be one or both of 1. Use some other method for storing results. 2. Clone -diagti- and try changing the line -clear- to -version 10: clear-. The revised program is your responsibility! Martin Weiss -preserve- does not seem to have much of an effect in this example, though: mat a=(1,1) mat l a vers 9: clear mat l a Nick Cox But that -clear- is preceded by -preserve-, so I'm not sure that's it. Martin Weiss "Somewhere along that chain, matrices are being cleared." -diagti- says: -clear- in its line 228, which these days would not hurt Moleps' matrices. Yet in this case, -diagt- being old, it is called under -version- 9 where a destruction of matrices was still part of -clear-'s Nick Cox You've got much more than that going on. -diagt- calls -diagti- which calls many other things, and so on. Somewhere along that chain, matrices are being cleared. It's not your problem below, but I did notice that -diagt- overwrites any existing matrix called T. As it destroys such a matrix any way, that's incidental, but using a temporary name would be widely considered better I'm creating a table using diagt (available from ssc) for different cutoff values of p.
However the matrix is lost after calling diagt (version 2.0.5) and the same happens using this example-code: (stata v10.1) (ssc install diagt) sysuse auto.dta gen ind=price>10000 diagt ind foreign mat a=(r(sens),r(spec)) mat list a diagt ind foreign mat dir viewsource diagt.ado --no reference as far as I can tell to dropping of matrices...Neither with diagti. There is a previous post with diagt for bootstrap purposes, but there is no reference to dropping of matrices. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
Functor instance for Set?

Daniel Gorín dgorin at dc.uba.ar
Wed Feb 29 19:54:01 CET 2012

I was always under the impression that the fact that Data.Set.Set cannot be made an instance of Functor was a sort of unavoidable limitation. But if we look at the definition of Data.Set.map:

map :: (Ord a, Ord b) => (a -> b) -> Set a -> Set b
map f = fromList . List.map f . toList

we see that 1) "Ord a" is not really used and 2) "Ord b" is only needed for the final fromList. Since every interesting function in Data.Set requires an Ord dictionary anyway, one could implement fmap using only the "List.map f . toList" part and leave the "fromList" to the successive function calls.

It appears to me, then, that if "Set a" were implemented as the sum of a list of a and a BST, it could be made an instance of Functor, Applicative and even Monad without affecting asymptotic complexity (proof of concept below). Am I right here? Would the overhead be significant? The one downside I can think of is that one would have to sacrifice the Foldable instance.

import qualified Data.Set as Internal
import Data.Monoid
import Control.Applicative

data Set a = This (Internal.Set a) | List [a]

toInternal :: Ord a => Set a -> Internal.Set a
toInternal (This s) = s
toInternal (List s) = Internal.fromList s

toAscList :: Ord a => Set a -> [a]
toAscList (This s) = Internal.toAscList s
toAscList (List s) = Internal.toAscList $ Internal.fromAscList s

toList :: Set a -> [a]
toList (This s) = Internal.toList s
toList (List s) = s

-- Here we break the API by requiring (Ord a).
-- We could require (Eq a) instead, but this would force us to use
-- nub in certain cases, which is horribly inefficient.
instance Ord a => Eq (Set a) where
  l == r = toInternal l == toInternal r

instance Ord a => Ord (Set a) where
  compare l r = compare (toInternal l) (toInternal r)

instance Functor Set where
  fmap f = List . map f . toList

instance Applicative Set where
  pure = singleton
  f <*> x = List $ toList f <*> toList x

instance Monad Set where
  return = pure
  s >>= f = List $ toList s >>= (toList . f)

empty :: Set a
empty = This Internal.empty

singleton :: a -> Set a
singleton = This . Internal.singleton

insert :: Ord a => a -> Set a -> Set a
insert a = This . Internal.insert a . toInternal

delete :: Ord a => a -> Set a -> Set a
delete a = This . Internal.delete a . toInternal

instance Ord a => Monoid (Set a) where
  mempty = This mempty
  mappend (This l) (This r) = This (mappend l r)
  mappend l r = This . Internal.fromAscList $ mergeAsc (toAscList l) (toAscList r)
    where
      mergeAsc :: Ord a => [a] -> [a] -> [a]
      mergeAsc [] ys = ys
      mergeAsc xs [] = xs
      mergeAsc ls@(x:xs) rs@(y:ys) = case compare x y of
        EQ -> x : mergeAsc xs ys
        LT -> x : mergeAsc xs rs
        GT -> y : mergeAsc ls ys

More information about the Libraries mailing list
Riverdale, MD Algebra 1 Tutor

Find a Riverdale, MD Algebra 1 Tutor

"...Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals. I feel very strongly about helping students succeed in math because I believe a true understanding of math can make many other subjects and future studies easier and more rewarding. I graduated from University of Virginia with a degree in economics and mathematics." 22 subjects: including algebra 1, calculus, geometry, GRE

"...I have been teaching for the past three years and I have vast years of tutoring experience. Last summer, I worked as an Algebra/Pre-Calculus instructor for rising college freshmen at Univ. of MD, College Park." 6 subjects: including algebra 1, GED, elementary math, algebra 2

"...My practical experience includes my two years in financial analysis, one year in the local legislature, and three cumulative years in the legal field. I believe in challenging students to achieve the highest outcomes by using methods of problem solving and guided decision making. I played soccer..." 23 subjects: including algebra 1, English, geometry, grammar

"...I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus, Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Chemistry..." 11 subjects: including algebra 1, chemistry, French, calculus

"...Right now I am a full-time SAS programmer, and I have finished the first SAS programming course provided by SAS Institute. I was born and raised in China.
I lived in Qingdao, China for 17..." 13 subjects: including algebra 1, calculus, geometry, Chinese
For a finite sequence of nonzero numbers, the number of variations in sign

jimmyjamesdonkey (23 May 2008, 13:02):
For a finite sequence of nonzero numbers, the number of variations in sign is defined as the number of pairs of consecutive terms of the sequence for which the product of the two consecutive terms is negative. What is the number of variations in sign for the sequence 1, -3, 2, 5, -4, -6?
a. one
b. two
c. three
d. four
e. five

YihWei:
Variation in sign = consecutive numbers whose product is negative.
(1,-3), (-3,2), and (5,-4) satisfy the definition.
Answer: C.

alpha_plus_gamma:
Answer: C. Couldn't put it in better words than YihWei.

jimmyjamesdonkey:
YihWei, that is the correct answer... I got this simple one wrong because I ordered the set. Why would one not order the set in this case? How would you reword this question if they DID want you to order the set?

YihWei:
I didn't order the set because the question didn't ask me to do that. I just try to "play dumb" and do exactly what the question asks me to do and nothing more. If they did want us to order the set, I would probably just add a statement to the end of the question saying, "Put the sequence in ascending/descending order prior to solving the question". I think this is just a simple case of you overthinking the question. Stop being smarter than the GMAT.

chengliu:
Jimmy, because this is NOT a set... it clearly states SEQUENCE. There are either finite or infinite sequences. In this case it was finite, so there is a set number of values.

gmat blows:
I've been stumped on this question for so long (I just memorized the answer and somehow convinced myself that the answer is 3 and not 1). I did the same thing you did. Thanks for the clarification, all.

tritium6:
Or just think about what they are asking for in common-sense terms. The "number of variations in sign" is how many times the sequence flips between positive and negative numbers. So it starts positive at 1, then flips once at -3, then flips a second time at 2, then stays the same at 5, then flips a third time at -4, then stays the same at -6. You could get the answer without even looking at the numerals, just the signs: +-++-- = 3 flips.

Current Student (New York City):
Nice way of doing this, but I guarantee that on exam day, under the stress of the exam, one will most likely get this wrong, because they will overlook one of the - or + signs or, in a hurry, make a careless mistake. The best approach is the one laid out by YihWei.

AlinderPatel:
Jimmy, I did the same thing as well; I ordered the sequence, which screwed me! A few takeaways from this problem:
1) Don't assume anything. In this case, the assumption made was to order the sequence in ascending order when not explicitly told to do so.
2) Pay attention to detail. As chengliu pointed out, this is NOT a set, rather a sequence. I can see how ordering might make sense for a set, but once again something should state/trigger that action. (For this problem, "consecutive" was the keyword that screwed me. In a set, consecutive numbers are ordered in ascending/descending order; in a sequence, depending on the sequence pattern, consecutive terms are NOT necessarily in ascending/descending order.)
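The definition in the question (count the consecutive pairs whose product is negative) and tritium6's sign-flip shortcut both reduce to one line of code. A quick illustration (not part of the thread) in Python:

```python
def sign_variations(seq):
    # A "variation in sign" is a pair of consecutive terms
    # whose product is negative, per the question's definition.
    return sum(1 for x, y in zip(seq, seq[1:]) if x * y < 0)

print(sign_variations([1, -3, 2, 5, -4, -6]))          # -> 3, answer C
print(sign_variations(sorted([1, -3, 2, 5, -4, -6])))  # -> 1, the trap answer
```

Sorting the sequence first changes which terms are consecutive, which is exactly why the posters who ordered it got 1 instead of 3.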
[SciPy-dev] stats.distributions.poisson loc parameter : is it wise ?

nicky van foreest vanforeest@gmail....
Thu Aug 6 16:21:40 CDT 2009

I agree. Anything that makes the behavior of the distribution functions more intuitive is helpful, at least to me. BTW, I find the term loc already by itself very confusing---what does it actually mean? For instance:

>>> Help on gamma_gen in module scipy.stats.distributions object
 |  cdf(self, x, *args, **kwds)
 |      Cumulative distribution function at x of the given RV.
 |
 |      Parameters
 |      ----------
 |      x : array-like
 |          quantiles
 |      arg1, arg2, arg3,... : array-like
 |          The shape parameter(s) for the distribution (see docstring of the
 |          instance object for more information)
 |      loc : array-like, optional
 |          location parameter (default=0)
 |      scale : array-like, optional
 |          scale parameter (default=1)

I am inclined to characterize the gamma distribution by means of n (the number of stages, if one is used to the Erlang distribution) and the rate parameter lambda, say, and I am clueless as to the meaning of scale and location here. Actually, I am not alone in this: see for

Of course, this is not to say that I am not happy with the distribution package. It makes me a happier man every day :-)

2009/8/6 Pierre GM <pgmdevlist@gmail.com>:
> All,
> Consider the poisson distribution in stats.distributions: it requires
> a mandatory argument, `mu`, as the mean/variance of the distribution.
> All is fine, but the `loc` parameter is still available, and that's my
> problem. When `loc` is not 0, the mean becomes `mu+loc`,
> `.cdf(range(loc))==0`, but the variance stays `mu`. That's a bit
> confusing.
> I thought I could use `loc` as a way to control truncation, but that
> doesn't seem to work either: emulating zero-truncation by using
> `loc=1` gives a distribution with a mean `mu+1` when it should be
> `mu/(1-exp(-mu))` (the exact expression for zero-truncation).
> In short, I don't really see any advantage in having a location > parameter for the Poisson distribution. AAMOF, for any discrete > distribution. I suggest we would implement some mechanism to force loc > to 0 while outputting a warning. > Any comment ? > P. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev@scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev More information about the Scipy-dev mailing list
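The behavior Pierre describes is easy to check numerically, even without SciPy: shifting the Poisson support by `loc` adds `loc` to the mean while the variance stays `mu`, which is not the same thing as zero-truncation. A standard-library-only sketch (the helper name is mine, not SciPy's):

```python
import math

def poisson_pmf(k, mu):
    # P(X = k) for a Poisson random variable with rate mu
    return math.exp(-mu) * mu ** k / math.factorial(k)

mu, loc = 3.0, 1
ks = range(100)  # the tail beyond 100 is negligible for mu = 3

# Poisson with loc=1: all probability mass is shifted from k to k + loc
mean = sum((k + loc) * poisson_pmf(k, mu) for k in ks)
var = sum((k + loc - mean) ** 2 * poisson_pmf(k, mu) for k in ks)
print(mean, var)  # ~4.0 and ~3.0: the mean is mu + loc, the variance is still mu

# A genuinely zero-truncated Poisson has a different mean:
print(mu / (1 - math.exp(-mu)))  # ~3.157, not mu + 1
```

This matches Pierre's complaint: the shifted distribution has mean `mu + loc` but unchanged variance, whereas the exact zero-truncated mean is `mu/(1-exp(-mu))`.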
PAL (Pareto Active Learning)

Algorithm for Multi-objective Optimization

What is it?

PAL is an algorithm designed to quickly predict the Pareto-optimal solutions in a multi-objective design space. This is particularly useful when measuring the target objectives for a design point is expensive, so that only a few samples of the design space should be drawn for measurement.

All our work on learning for optimization and autotuning.

The challenge of multi-objective design spaces is that, rather than having a single optimal solution, several solutions may simultaneously optimize the target objective functions. A concrete example from the domain of hardware design are the design spaces generated by the Spiral DFT and sorting network hardware generators when exploring the different settings provided by the tool. The user needs to choose between candidate designs that perform the same task but trade off throughput and area differently. This challenge is aggravated by the fact that evaluating each design is extremely expensive, since complex synthesis processes must be executed. Thus, it is impractical and sometimes infeasible to evaluate every possible design in order to find the Pareto-optimal solutions that maximize throughput and minimize area at the same time.

Having drawn a few samples of the design space, PAL builds models using Gaussian processes to predict the objective functions for the rest of the space. Given these predictions and their corresponding uncertainties, PAL speculatively classifies designs as Pareto-optimal or not Pareto-optimal. Training samples are selected adaptively until all points can be classified with the desired accuracy. This accuracy can be specified with the parameter epsilon. More details can be found in [1,2]. An illustrative example of an iteration of PAL is shown below.
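The core notion PAL builds on is Pareto dominance: a design is Pareto-optimal if no other design is at least as good in both objectives and different from it. As an illustration only (this sketch is not taken from the PAL release, and the point format and values are made up), here is a brute-force Pareto front for the throughput/area trade-off, maximizing the first objective and minimizing the second:

```python
def pareto_front(points):
    # Keep each (throughput, area) point that no other point dominates.
    # Here q dominates p when q has throughput >= p's, area <= p's, and q != p.
    # Assumes all objective vectors are distinct.
    front = []
    for p in points:
        dominated = any(q != p and q[0] >= p[0] and q[1] <= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front

designs = [(10, 5), (8, 3), (12, 9), (7, 8), (12, 4)]
print(pareto_front(designs))  # -> [(8, 3), (12, 4)]
```

PAL itself avoids this exhaustive evaluation: it predicts both objectives with Gaussian processes and only measures designs whose confidence regions leave the dominance question unsettled.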
The implementation of PAL provided here is written in Matlab and uses the GPML libraries written by Carl Edward Rasmussen and Chris Williams [3]. PAL can be used for any number of objective functions; however, this implementation is limited to two objective functions.

In the plots below we show that different accuracy/training-cost tradeoffs can be obtained by varying the value of the parameter epsilon. Accuracy in predicting the Pareto front of a design space is measured using the logarithmic hypervolume error. The plots only show the errors obtained with objective function 1. The plots also show the effect of choosing epsilon on the termination of PAL: a larger epsilon causes PAL to stop earlier. The design spaces used are based on data sets obtained in [4], [5] and [6].

We compare the efficiency of PAL with that of the evolutionary algorithm ParEGO [7]. These results show that PAL in almost all cases significantly improves over ParEGO. A detailed explanation of these experiments can be found in [1].

Our implementation of PAL can be downloaded; the archive also contains the datasets used for our experiments. Detailed instructions on how to use the program are included in the README file.

1. Marcela Zuluaga, Andreas Krause, Guillaume Sergent and Markus Püschel. Active Learning for Multi-Objective Optimization. To appear in Proc. International Conference on Machine Learning (ICML), 2013.
2. Marcela Zuluaga, Andreas Krause, Peter A. Milder and Markus Püschel. "Smart" Design Space Sampling to Predict Pareto-Optimal Solutions. Proc. Languages, Compilers, Tools and Theory for Embedded Systems (LCTES), pp. 119-128, 2012.
3. Marcela Zuluaga, Peter A. Milder, and Markus Püschel. Computer Generation of Streaming Sorting Networks. Proc. Design Automation Conference (DAC), 2012.
4. Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
5. Oscar Almer, Nigel Topham, and Björn Franke. Learning-Based Approach to the Automated Design of MPSoC Networks. Proc. Architecture of Computing Systems (ARCS), 2011.
6. N. Siegmund, S. S. Kolesnikov, C. Kastner, S. Apel, D. Batory, M. Rosenmüller, and G. Saake. Predicting Performance via Automated Feature-Interaction Detection. Proc. Int'l Conference on Software Engineering (ICSE), 2012.
7. J. Knowles. ParEGO: a Hybrid Algorithm with Online Landscape Approximation for Expensive Multiobjective Optimization Problems. IEEE Trans. on Evolutionary Computation, 2006.
Half adder circuit

Half adder

To understand what a half adder is, you first need to know what an adder is. An adder is a combinational digital circuit that is used for adding two numbers. A typical adder circuit produces a sum bit (denoted by S) and a carry bit (denoted by C) as the output. Typically adders are realized for adding binary numbers, but they can also be realized for other formats like BCD (binary coded decimal), XS-3, etc. Besides addition, adder circuits can be used for many other applications in digital electronics, like address decoding, table index calculation, etc. Adder circuits are of two types: half adder and full adder. Full adders have already been explained in a previous article, and in this topic the stress is on half adders.

A half adder is a combinational arithmetic circuit that adds two numbers and produces a sum bit (S) and a carry bit (C) as the output. If A and B are the input bits, then the sum bit (S) is the XOR of A and B, and the carry bit (C) is the AND of A and B. From this it is clear that a half adder circuit can be easily constructed using one XOR gate and one AND gate.

The half adder is the simplest of all adder circuits, but it has a major disadvantage: it can add only the two input bits (A and B) and cannot handle a carry at the input. So if the input to a half adder includes a carry, it will be neglected, and only the A and B bits are added. That means the binary addition process is not complete, and that is why it is called a half adder.

The truth table, schematic representation and XOR/AND realization of a half adder are shown in the figure below. NAND gates or NOR gates can be used for realizing the half adder in universal logic, and the relevant circuit diagrams are shown in the figure below.

8 Responses to "Half adder"

• great …..
• oops sry!! its giving the right result
• the half adder circuit with nor gates isnt giving the desired output
• nice
• Fine …..
• half adder circuit
• full adder circuit
• nice…!!!
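The XOR/AND description in the article is easy to check in software. A small Python model of the half adder and its truth table (an illustration, not part of the original article):

```python
def half_adder(a, b):
    # Sum bit S is the XOR of the inputs; carry bit C is the AND.
    return a ^ b, a & b

print(" A B | S C")
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f" {a} {b} | {s} {c}")
```

Note that 1 + 1 gives S = 0 with C = 1, and that there is no carry input at all, which is exactly the limitation that distinguishes the half adder from a full adder.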