Having issues with my FFT

04-05-2010 #1

I coded up an FFT, and I'm having issues with it. I can't figure out what the problem is. It's not the first time I've coded the FFT, and I've compared my new FFT code (in C++) to a previous time I coded the FFT (in C#), and I don't personally see any mistakes in the way I've coded it (although the two implementations have some minor semantic differences).

Here is the graph of the signal I am sending into the function: [image]

Here is the graph I should get as output: [image]

And here is what I really am getting as output: [image]

The output of the FFT looks oddly like the sinc function... which shouldn't be the result for a simple sine wave. Here is my code:

    std::vector< std::complex<double> > InternalFastFourierTransform(std::vector< std::complex<double> > input)
    {
        static std::complex<double> i = std::complex<double>(0, 1);
        static std::complex<double> NegativeTwoPiI = -2.0 * PI * i;

        int N = (int)input.size();
        int halfN = N / 2;

        if (N == 1)
            return input;

        //Recurse and get evens/odds
        std::vector< std::complex<double> > evens = InternalFastFourierTransform( GetVectorRange(input, 0, N, 2) );
        std::vector< std::complex<double> > odds  = InternalFastFourierTransform( GetVectorRange(input, 1, N, 2) );

        std::vector< std::complex<double> > result = std::vector< std::complex<double> >(N);

        for (int k = 0; k < halfN; k++)
        {
            std::complex<double> exponent = std::exp(NegativeTwoPiI * double(k / N));
            result[k]         = evens[k] + exponent * odds[k];
            result[k + halfN] = evens[k] - exponent * odds[k];
        }

        return result;
    }

Do you guys see anything wrong? The only thing I could see is that possibly my function GetVectorRange, which takes a slice of a vector from some starting index to some ending index at a given stride, could be functioning incorrectly... I haven't thoroughly tested that function... but is that really the problem, or do you think it could be some math mistake, based on the graph?

Last edited by DavidP; 04-05-2010 at 10:03 AM.
Reason: grammatical correction

04-05-2010 #2

I might as well include the code for the GetVectorRange function if that is suspicious as being the culprit, although I don't think it is. Here it is anyway:

    std::vector< std::complex<double> > GetVectorRange(std::vector< std::complex<double> > input, int startIndex, int endIndex, int stride)
    {
        std::vector< std::complex<double> > output;
        for (int k = startIndex; k < endIndex && k < (int)input.size(); k += stride)
            output.push_back(input[k]);
        return output;
    }

Last edited by DavidP; 04-05-2010 at 10:29 AM.

04-05-2010 #3

    std::complex<double> exponent = std::exp(NegativeTwoPiI * double(k / N));

I have completely no idea how the math works, but this part stood out as a possible culprit. I think it ought to be

    std::complex<double> exponent = std::exp(NegativeTwoPiI * double(k) / double(N));

if you want a decimal rather than an int there. Consider this post signed genius!

04-05-2010 #4

Thanks, it works!

04-05-2010 #5

Might I ask what library it is you are using to do the graphing in, in C++?

04-05-2010 #6

Agreed. I need to study the concept of roots of unity more. When I took an algorithms class about 2 years ago, we discussed the complex roots of unity, and after lots of studying I had a basic, but useable, understanding of them. Now I've lost much of my understanding of the roots of unity... so I need to go back over them.

04-05-2010 #7

I'm actually not outputting the graphs in my C++ program in this case. I output my array data to a file, and then I open up a program called Octave (an open-source clone of Matlab) and use its plot function to do the plotting of the data for me. Octave uses gnuplot as its plotting utility.

04-05-2010 #8

Okay, even though the problem posted at the beginning of the thread has already been solved, in follow-up to brewbuck's suggestion I did some research on roots of unity and refreshed myself a little bit.
I believe this optimizes out the call to std::exp correctly:

    std::vector< std::complex<double> > InternalFastFourierTransform(std::vector< std::complex<double> > input)
    {
        static std::complex<double> i = std::complex<double>(0, 1);
        static std::complex<double> NegativeTwoPiI = -2.0 * PI * i;

        int N = (int)input.size();
        int halfN = N / 2;

        std::complex<double> omega_n = std::exp(NegativeTwoPiI / double(N));
        std::complex<double> omega = std::complex<double>(1, 0);

        if (N == 1)
            return input;

        //Recurse and get evens/odds
        std::vector< std::complex<double> > evens = InternalFastFourierTransform( GetVectorRange(input, 0, N, 2) );
        std::vector< std::complex<double> > odds  = InternalFastFourierTransform( GetVectorRange(input, 1, N, 2) );

        std::vector< std::complex<double> > result = std::vector< std::complex<double> >(N);

        for (int k = 0; k < halfN; k++)
        {
            result[k]         = evens[k] + omega * odds[k];
            result[k + halfN] = evens[k] - omega * odds[k];
            omega = omega * omega_n;
        }

        return result;
    }

04-05-2010 #9

Quote: "Agreed. I need to study the concept of roots of unity more. When I took an algorithms class about 2 years ago, we discussed the complex roots of unity, and after lots of studying I had a basic, but useable, understanding of them. Now I've lost much of my understanding of the roots of unity... so I need to go back over them."

It might be easier to imagine them as phasors, little stopwatches where the second hand rotates around some integral number of revolutions -- the value of k dictates how quickly the phasor hand rotates. Essentially, what it's doing is correlating your signal with a harmonic basis vector, and the number of revolutions of the phasor goes up by integral amounts for each basis -- the first one makes one full revolution over the transform period, the second makes two revolutions, etc.
North Bethesda, MD

Find a North Bethesda, MD Math Tutor

...In-person tutoring session: $35/hr (depending on travel distance). Sessions at my home are $30/hr. If you want to boost your grades or get ready for college, give me a call. Gene, PE.
31 Subjects: including ACT Math, probability, discrete math, SAT math

...I find that teaching a few simple processes allows the student to simplify even the most complex accounting problems. I have worked with a variety of students in all age groups on organization, study skills and test-taking strategy. I focus on repetitive skills that can be used in any subject t...
28 Subjects: including calculus, physics, business, QuickBooks

John received his Bachelor's Degree in Computer Science from Morehouse College and a Master of Business Administration (MBA) from Georgia Tech with concentrations in Finance and Information Technology. He has served as a Life Leadership Adviser for the NBMBAA Leaders of Tomorrow Program (LOT) for the past 7 years, and has provided students with instruction in financial literacy.
18 Subjects: including differential equations, geometry, Microsoft Word, Microsoft PowerPoint

...In addition, I always try to relate seemingly hard/abstract mathematical ideas to concrete/pragmatic illustrations. I am currently working as a T.A. in Algebra 1 and as a tutor in my school's mathematics department's tutorial lab, where I help students with subjects ranging from Basic Algebra to Vec...
16 Subjects: including trigonometry, economics, GRE, linear algebra

My love of teaching Mathematics comes from a lifelong love of learning. As an Addis Ababa University graduate, I know that hard work and dedication to learning can pay off. As one of the top Montgomery College tutors for the last year, I have had the pleasure of helping hundreds of students realize their own potential and see their grades and test scores improve.
18 Subjects: including algebra 2, discrete math, differential equations, linear algebra
Labor not included

Ryan McGuire: Here, let me be the first to suck the humor out of this: I think the price comes out to more like $2.97 per square foot. Anyone care to either confirm or refute that? If I recall, a penny is 19mm in diameter, and they appear to be packing them like hexagons. I'm trying to come up with a comment about the labor cost that is both socially relevant and a clever play on words, but I got nothing.

fred rosenberger: According to several websites, "The penny is 19.05 mm in diameter".

Henry Wong: Regardless, how does the math work out to $2.97 per square foot? It obviously isn't $1.44, which I am thinking was calculated assuming pennies that are one inch in diameter, in a square pattern (which clearly, they are not).

fred rosenberger: Area of a hexagon = 1/2 (perimeter)(apothem). The apothem is the distance from the center point to the center of an edge. I calculate the apothem at 9.525. That makes a side about 11mm, so the area of one penny - if it were a perfect hexagon - is about 314.325 mm^2. Google says 1 square foot = 92,903.04 square mm. That means 295.5-ish pennies. Factor in a few rounding differences between Ryan and me, and I'd say we agree.

Ryan McGuire: If the pennies are packed in a hexagon pattern (or do you call that a triangular pattern?), we can pretend that they are hexagons with a side-to-side (as opposed to corner-to-corner) diameter of 19.05mm, or 0.75in. The area of that hexagon is 0.75^2 * sqrt(3) / 2, or 0.48713928962 in^2.* How many of those would fit in 144 in^2? 144/0.487 = 295.6-something. 295 pennies would cost approximately $2.95.
* As luck would have it, I remember from high school the formula for the area of a hexagon given its side-to-side diameter, but I still have to work it out when given the length of one side. A = D^2 * sqrt(3)/2

Henry Wong: Oh, I get it!! I was wondering why hexagon approximations were used. You are not concerned with the area of the penny; you are concerned with the amount of coverage it provides. And that means you need to account for the space between the pennies that is not being covered. Great job. I am giving cows all around!! BTW, I never knew the formula for a hexagon. I had to calculate it as six equilateral triangles, where the height is the radius of the penny, and the base is calculated via trigonometry (of a 30/60/90 triangle).

fred rosenberger: I had to look up the formula, and the page I found did exactly what you describe to derive it.

Amit Ghorpade: If it were per square foot, then how does it matter if it is arranged as a hexagon? I admit that geometry was never my cup of tea, but still. So I calculated using the total area in a square foot given by fred, which is 92,903.04 square mm, and divided it by the area of the penny, which is 285.022 square mm. That gives me $3.26. Am I missing something obvious here??

Henry Wong: Yes. You are forgetting about the gaps between the pennies. The reason that the pennies are treated as hexagons, and not as circles, is because that is the shape each one takes up on the floor (from the picture). In your case, since you are not allowing for that, the extra 28 or 29 cents is needed to get enough metal, which presumably needs to be melted down, to fill in those gaps.

fred rosenberger: In other words... a penny is a circle. You cannot get solid coverage with circles, as there are spaces between them. You can with hexagons, or squares, or many other shapes... Take a look at this. You can see some shapes allow complete coverage with no overlaps or gaps, and others don't. There are also better ways to 'pack' them in. If you aligned them in a square grid, you'd have larger gaps than if you offset each row - you can kind of squeeze down the size of the gaps. I believe that packing them this way minimizes the gaps, and is best represented by a hexagon.

Ryan McGuire: If you did pack pennies (0.75 in diameter) in a square grid, the math works out nicely. You'd end up with a 16x16 grid of pennies worth $2.56 in each square foot. Square grid -> larger gaps -> fewer pennies -> cheaper per square foot.

Matthew Brown: But you would have to factor in greater wear-and-tear on your shoes.

Amit Ghorpade: Yes, I had this in mind. I also recall studying problems like this, but was unsure how the gaps are compensated for. Got it now.

(another poster): You could somewhat reduce the shoe wear by placing a thick carpet of one-dollar bills over the penny floor.

(the thread's author): You guys are totally forgetting that pennies have two sides, so you have to divide by two; you use both sides of the pennies. Look carefully at the photo and you'll see both heads and tails. $2.95/2 = $1.44 (within error bars). Sheesh.
East Orange Math Tutor

Find an East Orange Math Tutor

...I spent two years straight in a classroom teaching 9th graders Algebra 1. I have been working with students since 2006. If you want to convince yourself that math is the easiest subject to learn, then you should make the right choice by electing me to be your tutor.
13 Subjects: including algebra 1, algebra 2, calculus, geometry

...I have taught college algebra at the local community college. I have tutored several students and prepared them for the advanced algebra and trig Regents. I am going for training in the new Common Core curriculum.
20 Subjects: including geometry, business, economics, finance

...My GPA is a 3.4. My goal is to become a Math professor for students in either middle or high school. I can tutor students in Algebra 1 and 2.
4 Subjects: including algebra 1, algebra 2, prealgebra, Spanish

...I have also been tutoring all levels of math from elementary through college for the past two years. I have a Bachelor's Degree in Math and a Master's Degree in Math Education. I am a certified teacher with three years of experience teaching high school math.
9 Subjects: including algebra 1, algebra 2, calculus, geometry

...It is not wrong to ask for help, and I promise that if you ask I will do my best to help in any way I can, in any subject. I have worked with students of all ages in different subjects, and I loved working with every one of them. Hopefully I get to help more as the years go by and have the chance...
26 Subjects: including algebra 1, SAT math, trigonometry, statistics
interpreting graphs of limits...

February 26th 2009, 03:36 PM #1

OK, so say you have: the limit of f(x) as x approaches -1 from the right equals f(-3). Apparently it's true by looking at the graph. The point at f(-3) is 1, and as x approaches -1 from the right there is an open circle at 1 and a closed circle at 2. I don't know why that is true, since f(x) has two points there, one open and one closed... I thought that if you had an open and a closed point, that would mean it isn't continuous or differentiable or something?

February 26th 2009, 04:52 PM #2

The right-hand and left-hand limits can be different, and they are different for the graph you've described. It doesn't matter what the value is at the point for a one-sided limit, only the value the graph is approaching. You are right that at x = -1 this graph isn't differentiable, because the true (two-sided) limit doesn't exist at that point.
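The situation described can be made concrete with a hypothetical function matching the graph (my own example; the original poster's graph is not shown):

```latex
f(x) = \begin{cases} 2, & x \le -1 \\ 1, & x > -1 \end{cases}
\qquad\Longrightarrow\qquad
\lim_{x \to -1^{+}} f(x) = 1, \quad
\lim_{x \to -1^{-}} f(x) = 2, \quad
f(-1) = 2.
```

Here the open circle at height 1 and the closed circle at height 2 both sit at x = -1. The right-hand limit picks out the open-circle value 1 regardless of where the closed dot is, and because the two one-sided limits disagree, the two-sided limit at -1 does not exist, so f is neither continuous nor differentiable there.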
Classics in the History of Psychology -- Menabrea (1842)

An internet resource developed by Christopher D. Green, York University, Toronto, Ontario

Sketch of the Analytical Engine invented by Charles Babbage, Esq.

By L. F. MENABREA, of Turin, Officer of the Military Engineers.

Originally published in French in 1842 in the Bibliothèque Universelle de Genève, No. 82. Translation originally published in 1843 in the Scientific Memoirs, 3, 666-731.

[Classics Editor's note: The following document has a rather complicated publication history. In August of 1840 Charles Babbage gave a series of lectures on his Analytical Engine in Turin. Luigi Menabrea -- then an obscure military engineer, but later to become a general in Garibaldi's army and prime minister of Italy -- published an account of Babbage's lectures in French in the Bibliothèque Universelle de Genève in October of 1842. In early 1843 this article was translated into English by Augusta Ada Byron King, Countess of Lovelace, who then added extensive "Notes" written in close collaboration with Babbage. The translation and Notes were published in Richard Taylor's Scientific Memoirs in October 1843, and were preceded by a short account of Babbage's exploits written by Taylor himself (viz., the initial portion of the document in square brackets and signed "Editor"). Lovelace left the translation anonymous (as was customary at the time) but signed the Notes "A.A.L." so that readers would know that they were written by the same person as future works of hers to be identified in the same way. As it turned out, however, she would publish no other scholarly work.

Since its original publication, the article and Notes have been republished several times, with various "corrections" and changes. The complete piece was republished by H.P. Babbage in 1889 as part of a collection entitled Babbage's Calculating Engines. The following document is based on that edition.
It was also republished in B.V. Bowden's Faster than Thought in 1953, and in P. & E. Morrison's Charles Babbage and his Calculating Engines in 1961. The key differences between these texts are discussed in the footnotes included in the version below. CDG]

[Before submitting to our readers the translation of M. Menabrea's memoir 'On the Mathematical Principles of the ANALYTICAL ENGINE' invented by Mr. Babbage, we shall present to them a list of the printed papers connected with the subject, and also of those relating to the Difference Engine by which it was preceded. For information on Mr. Babbage's "Difference Engine," which is but slightly alluded to by M. Menabrea, we refer the reader to the following sources:-

1. Letter to Sir Humphry Davy, Bart., P.R.S., on the Application of Machinery to Calculate and Print Mathematical Tables. By Charles Babbage, Esq., F.R.S. London, July 1822. Reprinted, with a Report of the Council of the Royal Society, by order of the House of Commons, May 1823.
2. On the Application of Machinery to the Calculation of Astronomical and Mathematical Tables. By Charles Babbage, Esq.- Memoirs of the Astronomical Society, vol. i. part 2. London, 1822.
3. Address to the Astronomical Society by Henry Thomas Colebrooke, Esq., F.R.S., President, on presenting the first Gold Medal of the Society to Charles Babbage, Esq., for the invention of the Calculating Engine.- Memoirs of the Astronomical Society. London, 1822.
4. On the Determination of the General Term of a New Class of Infinite Series. By Charles Babbage, Esq.- Transactions of the Cambridge Philosophical Society.
5. On Mr. Babbage's New Machine for Calculating and Printing Mathematical Tables.- Letter from Francis Baily, Esq., F.R.S., to M. Schumacher. No. 46, Astronomische Nachrichten. Reprinted in the Philosophical Magazine, May 1824.
6. On a Method of expressing by Signs the Action of Machinery. By Charles Babbage, Esq.- Philosophical Transactions. London, 1826.
7. On Errors common to many Tables of Logarithms. By Charles Babbage, Esq.- Memoirs of the Astronomical Society. London, 1827.
8. Report of the Committee appointed by the Council of the Royal Society to consider the subject referred to in a communication received by them from the Treasury respecting Mr. Babbage's Calculating Engine, and to report thereon. London, 1829.
9. Economy of Manufactures, chap. xx. 8vo. London, 1832.
10. Article on Babbage's Calculating Engine.- Edinburgh Review, July 1834. No. 120. vol. lix.

The present state of the Difference Engine, which has always been the property of Government, is as follows:- The drawings are nearly finished, and the mechanical notation of the whole, recording every motion of which it is susceptible, is completed. A part of the Engine, comprising sixteen figures, arranged in three orders of differences, has been put together, and has frequently been used during the last eight years. It performs its work with absolute precision. This portion of the Difference Engine, together with all the drawings, are at present deposited in the Museum of King's College, London.

Of the ANALYTICAL ENGINE, which forms the principal object of the present memoir, we are not aware that any notice has hitherto appeared, except a Letter from the Inventor to M. Quetelet, Secretary to the Royal Academy of Sciences at Brussels, by whom it was communicated to that body. We subjoin a translation of this Letter, which was itself a translation of the original, and was not intended for publication by its author.

Royal Academy of Sciences at Brussels. General Meeting of the 7th and 8th of May, 1835.

"A Letter from Mr. Babbage announces that he has for six months been engaged in making the drawings of a new calculating machine of far greater power than the first. 'I am myself astonished,' says Mr. Babbage, 'at the power I have been enabled to give to this machine; a year ago I should not have believed this result possible.
This machine is intended to contain a hundred variables (or numbers susceptible of changing); each of these numbers may consist of twenty-five figures, v[1], v[2], . . . v[n] being any numbers whatever, n being less than a hundred; if f(v[1], v[2], v[3], . . . v[n]) be any given function which can be formed by addition, subtraction, multiplication, division, extraction of roots, or elevation to powers, the machine will calculate its numerical value; it will afterwards substitute this value in the place of v, or of any other variable, and will calculate this second function with respect to v. It will reduce to tables almost all equations of finite differences. Let us suppose that we have observed a thousand values of a, b, c, d, and that we wish to calculate them by the formula p = √((a + b)/(cd)); the machine must be set to calculate the formula; the first series of the values of a, b, c, d must be adjusted to it; it will then calculate them, print them, and reduce them to zero; lastly, it will ring a bell to give notice that a new set of constants must be inserted. When there exists a relation between any number of successive coefficients of a series, provided it can be expressed as has already been said, the machine will calculate them and make their terms known in succession; and it may afterwards be disposed so as to find the value of the series for all the values of the variable.'

"Mr. Babbage announces, in conclusion, 'that the greatest difficulties of the invention have already been surmounted, and that the plans will be finished in a few months.'"

In the Ninth Bridgewater Treatise, Mr. Babbage has employed several arguments deduced from the Analytical Engine, which afford some idea of its powers. See Ninth Bridgewater Treatise, 8vo, second edition. London, 1834.

Some of the numerous drawings of the Analytical Engine have been engraved on wooden blocks, and from these (by a mode contrived by Mr. Babbage) various stereotype plates have been taken.
They comprise -

1. Plan of the figure wheels for one method of adding numbers.
2. Elevation of the wheels and axis of ditto.
3. Elevation of framing only of ditto.
4. Section of adding wheels and framing together.
5. Section of the adding wheels, sign wheels and framing complete.
6. Impression from the original wooden block.
7. Impressions from a stereotype cast of No. 6, with the letters and signs inserted. Nos. 2, 3, 4 and 5 were stereotypes taken from this.
8. Plan of adding wheels and of long and short pinions, by means of which stepping is accomplished. N.B. This process performs the operation of multiplying or dividing a number by any power of ten.
9. Elevation of long pinions in the position for addition.
10. Elevation of long pinions in the position for stepping.
11. Plan of mechanism for carrying the tens (by anticipation), connected with long pinions.
12. Section of the chain of wires for anticipating carriage.
13. Sections of the elevation of parts of the preceding carriage.

All these were executed about five years ago. At a later period (August 1840) Mr. Babbage caused one of his general plans (No. 25) of the whole Analytical Engine to be lithographed at Paris.

Although these illustrations have not been published, on account of the time which would be required to describe them, and the rapid succession of improvements made subsequently, yet copies have been freely given to many of Mr. Babbage's friends, and were in August 1838 presented at Newcastle to the British Association for the Advancement of Science, and in August 1840 to the Institute of France through M. Arago, as well as to the Royal Academy of Turin through M. Plana. - EDITOR.]

Sketch of the Analytical Engine invented by Charles Babbage, Esq. By L. F. MENABREA, of Turin, Officer of the Military Engineers.

[From the Bibliothèque Universelle de Genève, No. 82. October 1842.]
Those labours which belong to the various branches of the mathematical sciences, although on first consideration they seem to be the exclusive province of intellect, may, nevertheless, be divided into two distinct sections; one of which may be called the mechanical, because it is subjected to precise and invariable laws, that are capable of being expressed by means of the operations of matter; while the other, demanding the intervention of reasoning, belongs more specially to the domain of the understanding. This admitted, we may propose to execute, by means of machinery, the mechanical branch of these labours, reserving for pure intellect that which depends on the reasoning faculties. Thus the rigid exactness of those laws which regulate numerical calculations must frequently have suggested the employment of material instruments, either for executing the whole of such calculations or for abridging them; and thence have arisen several inventions having this object in view, but which have in general but partially attained it. For instance, the much-admired machine of Pascal is now simply an object of curiosity, which, whilst it displays the powerful intellect of its inventor, is yet of little utility in itself. Its powers extended no further than the execution of the first four[1] operations of arithmetic, and indeed were in reality confined to that of the first two, since multiplication and division were the result of a series of additions and subtractions. The chief drawback hitherto on most of such machines is, that they require the continual intervention of a human agent to regulate their movements, and thence arises a source of errors; so that, if their use has not become general for large numerical calculations, it is because they have not in fact resolved the double problem which the question presents, that of correctness in the results, united with economy of time. Struck with similar reflections, Mr. 
Babbage has devoted some years to the realization of a gigantic idea. He proposed to himself nothing less than the construction of a machine capable of executing not merely arithmetical calculations, but even all those of analysis, if their laws are known. The imagination is at first astounded at the idea of such an undertaking; but the more calm reflection we bestow on it, the less impossible does success appear, and it is felt that it may depend on the discovery of some principle so general, that, if applied to machinery, the latter may be capable of mechanically translating the operation which may be indicated to it by algebraical notation. The illustrious inventor having been kind enough to communicate to me some of his views on this subject during a visit he made at Turin, I have, with his approbation, thrown together the impressions they have left on my mind. But the reader must not expect to find a description of Mr. Babbage's engine; the comprehension of this would entail studies of much length; and I shall endeavour merely to give an insight into the end proposed, and to develope the principles on which its attainment depends. I must first premise that this engine is entirely different from that of which there is a notice in the 'Treatise on the Economy of Machinery,' by the same author. But as the latter gave rise[2] to the idea of the engine in question, I consider it will be a useful preliminary briefly to recall what were Mr. Babbage's first essays, and also the circumstances in which they originated. It is well known that the French government, wishing to promote the extension of the decimal system, had ordered the construction of logarithmical and trigonometrical tables of enormous extent. M. de Prony, who had been entrusted with the direction of this undertaking, divided it into three sections, to each of which was appointed a special class of persons. 
In the first section the formulae were so combined as to render them subservient to the purposes of numerical calculation; in the second, these same formulae were calculated for values of the variable, selected at certain successive distances; and under the third section, comprising about eighty individuals, who were most of them only acquainted with the first two rules of arithmetic, the values which were intermediate to those calculated by the second section were interpolated by means of simple additions and subtractions. An undertaking similar to that just mentioned having been entered upon in England, Mr. Babbage conceived that the operations performed under the third section might be executed by a machine; and this idea he realized by means of mechanism, which has been in part put together, and to which the name Difference Engine is applicable, on account of the principle upon which its construction is founded. To give some notion of this, it will suffice to consider the series of whole square numbers, 1, 4, 9, 16, 25, 36, 49, 64, &c. By subtracting each of these from the succeeding one, we obtain a new series, which we will name the Series of First Differences, consisting of the numbers 3, 5, 7, 9, 11, 13, 15, &c. On subtracting from each of these the preceding one, we obtain the Second Differences, which are all constant and equal to 2. 
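The two stages of differencing just described admit a short check; the sketch below (Python, used here purely as illustration and forming no part of the engine itself) reproduces the columns of squares, first differences, and the constant second difference:

```python
def differences(series):
    """Return the differences between consecutive terms of a series."""
    return [b - a for a, b in zip(series, series[1:])]

squares = [n * n for n in range(1, 9)]   # 1, 4, 9, 16, 25, 36, 49, 64
first = differences(squares)             # first differences: 3, 5, 7, 9, 11, 13, 15
second = differences(first)              # second differences: all equal to 2

print(squares)
print(first)
print(second)
```

Differencing twice reduces the squares to a constant, which is exactly what allows the series to be rebuilt by additions alone.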
We may represent this succession of operations, and their results, in the following table: -

    A (squares):             1     4     9    16    25   ...
    B (first differences):      3     5     7     9   ...
    C (second differences):        2     2     2   ...

From the mode in which the last two columns B and C have been formed, it is easy to see, that if, for instance, we desire to pass from the number 5 to the succeeding one 7, we must add to the former the constant difference 2; similarly, if from the square number 9 we would pass to the following one 16, we must add to the former the difference 7, which difference is in other words the preceding difference 5, plus the constant difference 2; or again, which comes to the same thing, to obtain 16 we have only to add together the three numbers 2, 5, 9, placed obliquely in the direction a b. Similarly, we obtain the number 25 by summing up the three numbers placed in the oblique direction d c: commencing by the addition 2 + 7, we have the first difference 9 consecutively to 7; adding 16 to the 9 we have the square 25. We see then that the three numbers 2, 5, 9 being given, the whole series of successive square numbers, and that of their first differences likewise, may be obtained by means of simple additions. Now, to conceive how these operations may be reproduced by a machine, suppose the latter to have three dials, designated as A, B, C, on each of which are traced, say a thousand divisions, by way of example, over which a needle shall pass. The two dials, C, B, shall have in addition a registering hammer, which is to give a number of strokes equal to that of the divisions indicated by the needle. For each stroke of the registering hammer of the dial C, the needle B shall advance one division; similarly, the needle A shall advance one division for every stroke of the registering hammer of the dial B. Such is the general disposition of the mechanism. This being understood, let us, at the beginning of the series of operations we wish to execute, place the needle C on the division 2, the needle B on the division 5, and the needle A on the division 9.
Let us allow the hammer of the dial C to strike; it will strike twice, and at the same time the needle B will pass over two divisions. The latter will then indicate the number 7, which succeeds the number 5 in the column of first differences. If we now permit the hammer of the dial B to strike in its turn, it will strike seven times, during which the needle A will advance seven divisions; these added to the nine already marked by it will give the number 16, which is the square number consecutive to 9. If we now recommence these operations, beginning with the needle C, which is always to be left on the division 2, we shall perceive that by repeating them indefinitely, we may successively reproduce the series of whole square numbers by means of a very simple mechanism. The theorem on which is based the construction of the machine we have just been describing, is a particular case of the following more general theorem: that if in any polynomial whatever, the highest power of whose variable is m, this same variable be increased by equal degrees; the corresponding values of the polynomial then calculated, and the first, second, third, &c. differences of these be taken (as for the preceding series of squares); the mth differences will all be equal to each other. So that, in order to reproduce the series of values of the polynomial by means of a machine analogous to the one above described, it is sufficient that there be (m + 1) dials, having the mutual relations we have indicated. As the differences may be either positive or negative, the machine will have a contrivance for either advancing or retrograding each needle, according as the number to be algebraically added may have the sign plus or minus. 
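The working of the three dials can be mimicked in a few lines; this is a toy model of the repeated additions described above, not of the actual wheelwork:

```python
# Simulate the three dials: C holds the constant second difference,
# B the current first difference, A the current square.  Each cycle,
# the hammer of C strikes c times (advancing needle B), then the
# hammer of B strikes b times (advancing needle A) -- pure addition.
def run_dials(a=9, b=5, c=2, cycles=5):
    produced = [a]
    for _ in range(cycles):
        b += c          # hammer of dial C strikes; needle B advances
        a += b          # hammer of dial B strikes; needle A advances
        produced.append(a)
    return produced

print(run_dials())      # the successive squares, starting from 9
```

Starting the needles at 2, 5, 9 as in the text, the dial A passes through 16, 25, 36, ... indefinitely.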
If from a polynomial we pass to a series having an infinite number of terms, arranged according to the ascending powers of the variable, it would at first appear, that in order to apply the machine to the calculation of the function represented by such a series, the mechanism must include an infinite number of dials, which would in fact render the thing impossible. But in many cases the difficulty will disappear, if we observe that for a great number of functions the series which represent them may be rendered convergent; so that, according to the degree of approximation desired, we may limit ourselves to the calculation of a certain number of terms of the series, neglecting the rest. By this method the question is reduced to the primitive case of a finite polynomial. It is thus that we can calculate the succession of the logarithms of numbers. But since, in this particular instance, the terms which had been originally neglected receive increments in a ratio so continually increasing for equal increments of the variable, that the degree of approximation required would ultimately be affected, it is necessary, at certain intervals, to calculate the value of the function by different methods, and then respectively to use the results thus obtained, as data whence to deduce, by means of the machine, the other intermediate values. We see that the machine here performs the office of the third section of calculators mentioned in describing the tables computed by order of the French government, and that the end originally proposed is thus fulfilled by it. Such is the nature of the first machine which Mr. Babbage conceived. 
We see that its use is confined to cases where the numbers required are such as can be obtained by means of simple additions or subtractions; that the machine is, so to speak, merely the expression of one[3] particular theorem of analysis; and that, in short, its operations cannot be extended so as to embrace the solution of an infinity of other questions included within the domain of mathematical analysis. It was while contemplating the vast field which yet remained to be traversed, that Mr. Babbage, renouncing his original essays, conceived the plan of another system of mechanism whose operations should themselves possess all the generality of algebraical notation, and which, on this account, he denominates the Analytical Engine. Having now explained the state of the question, it is time for me to develope the principle on which is based the construction of this latter machine. When analysis is employed for the solution of any problem, there are usually two classes of operations to execute: first, the numerical calculation of the various coefficients; and secondly, their distribution in relation to the quantities affected by them. If, for example, we have to obtain the product of two binomials (a + bx) (m + nx), the result will be represented by am + (an + bm) x + bnx^2, in which expression we must first calculate am, an, bm, bn; then take the sum of an + bm; and lastly, respectively distribute the coefficients thus obtained amongst the powers of the variable. In order to reproduce these operations by means of a machine, the latter must therefore possess two distinct sets of powers: first, that of executing numerical calculations; secondly, that of rightly distributing the values so obtained. 
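The two powers just named - calculating the coefficients am, an, bm, bn, and distributing them among the powers of the variable - can be illustrated on the binomial example; a minimal sketch, with arbitrary sample values:

```python
def binomial_product(a, b, m, n):
    """Coefficients of (a + b*x)(m + n*x) = am + (an + bm)x + bn*x^2,
    listed by ascending power of x."""
    return [a * m, a * n + b * m, b * n]

print(binomial_product(2, 3, 5, 7))   # coefficients of x^0, x^1, x^2
```

Computing the products is the first class of operations; placing each result against its proper power of x is the second.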
But if human intervention were necessary for directing each of these partial operations, nothing would be gained under the heads of correctness and economy of time; the machine must therefore have the additional requisite of executing by itself all the successive operations required for the solution of a problem proposed to it, when once the primitive numerical data for this same problem have been introduced. Therefore, since, from the moment that the nature of the calculation to be executed or of the problem to be resolved have been indicated to it, the machine is, by its own intrinsic power, of itself to go through all the intermediate operations which lead to the proposed result, it must exclude all methods of trial and guess-work, and can only admit the direct processes of calculation[4]. It is necessarily thus; for the machine is not a thinking being, but simply an automaton which acts according to the laws imposed upon it. This being fundamental, one of the earliest researches its author had to undertake, was that of finding means for effecting the division of one number by another without using the method of guessing indicated by the usual rules of arithmetic. The difficulties of effecting this combination were far from being among the least; but upon it depended the success of every other. Under the impossibility of my here explaining the process through which this end is attained, we must limit ourselves to admitting that the first four operations of arithmetic, that is addition, subtraction, multiplication and division, can be performed in a direct manner through the intervention of the machine. This granted, the machine is thence capable of performing every species of numerical calculation, for all such calculations ultimately resolve themselves into the four operations we have just named. To conceive how the machine can now go through its functions according to the laws laid down, we will begin by giving an idea of the manner in which it materially represents numbers.
Let us conceive a pile or vertical column consisting of an indefinite number of circular discs, all pierced through their centres by a common axis, around which each of them can take an independent rotatory movement. If round the edge of each of these discs are written the ten figures which constitute our numerical alphabet, we may then, by arranging a series of these figures in the same vertical line, express in this manner any number whatever. It is sufficient for this purpose that the first disc represent units, the second tens, the third hundreds, and so on. When two numbers have been thus written on two distinct columns, we may propose to combine them arithmetically with each other, and to obtain the result on a third column. In general, if we have a series of columns[5] consisting of discs, which columns we will designate as V[0], V[1], V[2], V[3], V[4], &c., we may require, for instance, to divide the number written on the column V[1] by that on the column V[4], and to obtain the result on the column V[7]. To effect this operation, we must impart to the machine two distinct arrangements; through the first it is prepared for executing a division, and through the second the columns it is to operate on are indicated to it, and also the column on which the result is to be represented. If this division is to be followed, for example, by the addition of two numbers taken on other columns, the two original arrangements of the machine must be simultaneously altered. If, on the contrary, a series of operations of the same nature is to be gone through, then the first of the original arrangements will remain, and the second alone must be altered. Therefore, the arrangements that may be communicated to the various parts of the machine may be distinguished into two principal classes: First, that relative to the Operations. Secondly, that relative to the Variables. By this latter we mean that which indicates the columns to be operated on. 
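The columns of discs amount to a units-first decimal encoding of a number; a sketch under the assumption of a fixed number of discs per column:

```python
# Represent a number as a column of decimal discs: the first disc
# holds the units, the second the tens, and so on (an illustrative
# encoding, not Babbage's mechanism; forty discs assumed).
def to_column(number, discs=40):
    digits = []
    for _ in range(discs):
        digits.append(number % 10)   # read the lowest disc
        number //= 10                # move up to the next disc
    return digits

def from_column(digits):
    return sum(d * 10 ** i for i, d in enumerate(digits))

col = to_column(1842)
print(col[:5], from_column(col))     # [2, 4, 8, 1, 0] 1842
```

An arithmetical operation between two columns is then an operation on two such digit sequences, the result being written on a third.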
As for the operations themselves, they are executed by a special apparatus, which is designated by the name of mill, and which itself contains a certain number of columns, similar to those of the Variables. When two numbers are to be combined together, the machine commences by effacing them from the columns where they are written, that is, it places zero[6] on every disc of the two vertical lines on which the numbers were represented; and it transfers the numbers to the mill. There, the apparatus having been disposed suitably for the required operation, this latter is effected, and, when completed, the result itself is transferred to the column of Variables which shall have been indicated. Thus the mill is that portion of the machine which works, and the columns of Variables constitute that where the results are represented and arranged. After the preceding explanations, we may perceive that all fractional and irrational results will be represented in decimal fractions. Supposing each column to have forty discs, this extension will be sufficient for all degrees of approximation generally required. It will now be inquired how the machine can of itself, and without having recourse to the hand of man, assume the successive dispositions suited to the operations. The solution of this problem has been taken from Jacquard's apparatus[7], used for the manufacture of brocaded stuffs, in the following manner:- Two species of threads are usually distinguished in woven stuffs; one is the warp or longitudinal thread, the other the woof or transverse thread, which is conveyed by the instrument called the shuttle, and which crosses the longitudinal thread or warp. When a brocaded stuff is required, it is necessary in turn to prevent certain threads from crossing the woof, and this according to a succession which is determined by the nature of the design that is to be reproduced.
Formerly this process was lengthy and difficult, and it was requisite that the workman, by attending to the design which he was to copy, should himself regulate the movements the threads were to take. Thence arose the high price of this description of stuffs, especially if threads of various colours entered into the fabric. To simplify this manufacture, Jacquard devised the plan of connecting each group of threads that were to act together, with a distinct lever belonging exclusively to that group. All these levers terminate in rods, which are united together in one bundle, having usually the form of a parallelopiped with a rectangular base. The rods are cylindrical, and are separated from each other by small intervals. The process of raising the threads is thus resolved into that of moving these various lever-arms in the requisite order. To effect this, a rectangular sheet of pasteboard is taken, somewhat larger in size than a section of the bundle of lever-arms. If this sheet be applied to the base of the bundle, and an advancing motion be then communicated to the pasteboard, this latter will move with it all the rods of the bundle, and consequently the threads that are connected with each of them. But if the pasteboard, instead of being plain, were pierced with holes corresponding to the extremities of the levers which meet it, then, since each of the levers would pass through the pasteboard during the motion of the latter, they would all remain in their places. We thus see that it is easy so to determine the position of the holes in the pasteboard, that, at any given moment, there shall be a certain number of levers, and consequently of parcels of threads, raised, while the rest remain where they were. Supposing this process is successively repeated according to a law indicated by the pattern to be executed, we perceive that this pattern may be reproduced on the stuff. 
For this purpose we need merely compose a series of cards according to the law required, and arrange them in suitable order one after the other; then, by causing them to pass over a polygonal beam which is so connected as to turn a new face for every stroke of the shuttle, which face shall then be impelled parallelly to itself against the bundle of lever-arms, the operation of raising the threads will be regularly performed. Thus we see that brocaded tissues may be manufactured with a precision and rapidity formerly difficult to obtain. Arrangements analogous to those just described have been introduced into the Analytical Engine. It contains two principal species of cards: first, Operation cards, by means of which the parts of the machine are so disposed as to execute any determinate series of operations, such as additions, subtractions, multiplications, and divisions; secondly, cards of the Variables, which indicate to the machine the columns on which the results are to be represented. The cards, when put in motion, successively arrange the various portions of the machine according to the nature of the processes that are to be effected, and the machine at the same time executes these processes by means of the various pieces of mechanism of which it is constituted. In order more perfectly to conceive the thing, let us select as an example the resolution of two equations of the first degree with two unknown quantities. Let the following be the two equations, in which x and y are the unknown quantities: -

mx + ny = d
m'x + n'y = d'

We deduce x = (dn' - d'n) / (n'm - nm'), and for y an analogous expression. Let us continue to represent by V[0], V[1], V[2], &c. the different columns which contain the numbers, and let us suppose that the first eight columns have been chosen for expressing on them the numbers represented by m, n, d, m', n', d', n and n', which implies that V[0] = m, V[1] = n, V[2] = d, V[3] = m', V[4] = n', V[5] = d', V[6] = n, V[7] = n'.
The series of operations commanded by the cards, and the results obtained, may be represented in the following table:- Since the cards do nothing but indicate in what manner and on what columns the machine shall act, it is clear that we must still, in every particular case, introduce the numerical data for the calculation. Thus, in the example we have selected, we must previously inscribe the numerical values of m, n, d, m', n', d', in the order and on the columns indicated, after which the machine when put in action will give the value of the unknown quantity x for this particular case. To obtain the value of y, another series of operations analogous to the preceding must be performed. But we see that they will be only four in number, since the denominator of the expression for y, excepting the sign, is the same as that for x, and equal to n'm - nm'. In the preceding table it will be remarked that the column for operations indicates four successive multiplications, two subtractions, and one division. Therefore, if desired, we need only use three operation-cards; to manage which, it is sufficient to introduce into the machine an apparatus which shall, after the first multiplication, for instance, retain the card which relates to this operation, and not allow it to advance so as to be replaced by another one, until after this same operation shall have been four times repeated. In the preceding example we have seen, that to find the value of x we must begin by writing the coefficients m, n, d, m', n', d', upon eight columns, thus repeating n and n' twice. According to the same method, if it were required to calculate y likewise, these coefficients must be written on twelve different columns. But it is possible to simplify this process, and thus to diminish the chances of errors, which chances are greater, the larger the number of the quantities that have to be inscribed previous to setting the machine in action. 
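The operation-cards for x order four multiplications, two subtractions and one division; the printed table is not reproduced in this text, so the exact ordering below is an assumption consistent with the formula x = (dn' - d'n) / (n'm - nm'):

```python
# Straight-line program mimicking the card-ordered steps that
# compute x = (d*n' - d'*n) / (n'*m - n*m'); the ordering of the
# seven operations is assumed, not taken from the original table.
def solve_x(m, n, d, m1, n1, d1):
    p1 = d * n1       # multiplication
    p2 = d1 * n       # multiplication
    p3 = n1 * m       # multiplication
    p4 = n * m1       # multiplication
    num = p1 - p2     # subtraction
    den = p3 - p4     # subtraction
    return num / den  # division

# System: 2x + 3y = 8 and x + y = 3, whence x = 1
print(solve_x(2, 3, 8, 1, 1, 3))
```

As the text remarks, y would need only four further operations, its denominator n'm - nm' being already in hand.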
To understand this simplification, we must remember that every number written on a column must, in order to be arithmetically combined with another number, be effaced from the column on which it is, and transferred to the mill. Thus, in the example we have discussed, we will take the two coefficients m and n', which are each of them to enter into two different products, that is m into mn' and md', n' into mn' and n'd. These coefficients will be inscribed on the columns V[0] and V[4]. If we commence the series of operations by the product of m into n', these numbers will be effaced from the columns V[0] and V[4], that they may be transferred to the mill, which will multiply them into each other, and will then command the machine to represent the result, say on the column V[6]. But as these numbers are each to be used again in another operation, they must again be inscribed somewhere; therefore, while the mill is working out their product, the machine will inscribe them anew on any two columns that may be indicated to it through the cards; and as, in the actual case, there is no reason why they should not resume their former places, we will suppose them again inscribed on V[0] and V[4], whence in short they would not finally disappear, to be reproduced no more, until they should have gone through all the combinations in which they might have to be used. We see, then, that the whole assemblage of operations requisite for resolving the two[8] above equations of the first degree may be definitely represented in the following table:- In order to diminish to the utmost the chances of error in inscribing the numerical data of the problem, they are successively placed on one of the columns of the mill; then, by means of cards arranged for this purpose, these same numbers are caused to arrange themselves on the requisite columns, without the operator having to give his attention to it; so that his undivided mind may be applied to the simple inscription of these same numbers.
According to what has now been explained, we see that the collection of columns of Variables may be regarded as a store of numbers, accumulated there by the mill, and which, obeying the orders transmitted to the machine by means of the cards, pass alternately from the mill to the store and from the store to the mill, that they may undergo the transformations demanded by the nature of the calculation to be performed. Hitherto no mention has been made of the signs in the results, and the machine would be far from perfect were it incapable of expressing and combining amongst each other positive and negative quantities. To accomplish this end, there is, above every column, both of the mill and of the store, a disc, similar to the discs of which the columns themselves consist. According as the digit on this disc is even or uneven, the number inscribed on the corresponding column below it will be considered as positive or negative. This granted, we may, in the following manner, conceive how the signs can be algebraically combined in the machine. When a number is to be transferred from the store to the mill, and vice versa, it will always be transferred with its sign, which will be effected by means of the cards, as has been explained in what precedes. Let any two numbers then, on which we are to operate arithmetically, be placed in the mill with their respective signs. Suppose that we are first to add them together; the operation-cards will command the addition: if the two numbers be of the same sign, one of the two will be entirely effaced from where it was inscribed, and will go to add itself on the column which contains the other number; the machine will, during this operation, be able, by means of a certain apparatus, to prevent any movement in the disc of signs which belongs to the column on which the addition is made, and thus the result will remain with the sign which the two given numbers originally had. 
When two numbers have two different signs, the addition commanded by the card will be changed into a subtraction through the intervention of mechanisms which are brought into play by this very difference of sign. Since the subtraction can only be effected on the larger of the two numbers, it must be arranged that the disc of signs of the larger number shall not move while the smaller of the two numbers is being effaced from its column and subtracted from the other, whence the result will have the sign of this latter, just as in fact it ought to be. The combinations to which algebraical subtraction gives rise, are analogous to the preceding. Let us pass on to multiplication. When two numbers to be multiplied are of the same sign, the result is positive; if the signs are different, the product must be negative. In order that the machine may act conformably to this law, we have but to conceive that on the column containing the product of the two given numbers, the digit which indicates the sign of that product has been formed by the mutual addition of the two digits that respectively indicated the signs of the two given numbers; it is then obvious that if the digits of the signs are both even, or both odd, their sum will be an even number, and consequently will express a positive number; but that if, on the contrary, the two digits of the signs are one even and the other odd, their sum will be an odd number, and will consequently express a negative number. In the case of division, instead of adding the digits of the discs, they must be subtracted one from the other, which will produce results analogous to the preceding; that is to say, that if these figures are both even or both uneven, the remainder of this subtraction will be even; and it will be uneven in the contrary case.
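The parity rules for the sign-discs can be checked directly; in this toy model (not the mechanism itself) an even digit on the sign-disc denotes a positive number and an odd digit a negative one:

```python
# Sign-disc arithmetic: the sign of a column is read from the parity
# of the digit standing on its sign-disc (even = +, odd = -).
def sign_of(disc_digit):
    return +1 if disc_digit % 2 == 0 else -1

def multiply_sign(disc_a, disc_b):
    # multiplication adds the two sign digits: even + even or
    # odd + odd gives an even sum, hence a positive product
    return sign_of(disc_a + disc_b)

def divide_sign(disc_a, disc_b):
    # division subtracts one digit from the other; the parity of
    # the remainder decides the sign in the same way
    return sign_of(disc_a - disc_b)

print(multiply_sign(3, 5), multiply_sign(3, 4))   # 1 -1
```

Two odd (negative) sign digits sum to an even number, so the product of two negatives comes out positive, exactly as algebra demands.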
When I speak of mutually adding or subtracting the numbers expressed by the digits of the signs, I merely mean that one of the sign-discs is made to advance or retrograde a number of divisions equal to that which is expressed by the digit on the other sign-disc. We see, then, from the preceding explanation, that it is possible mechanically to combine the signs of quantities so as to obtain results conformable to those indicated by algebra[9]. The machine is not only capable of executing those numerical calculations which depend on a given algebraical formula, but it is also fitted for analytical calculations in which there are one or several variables to be considered. It must be assumed that the analytical expression to be operated on can be developed according to powers of the variable, or according to determinate functions of this same variable, such as circular functions, for instance; and similarly for the result that is to be attained. If we then suppose that above the columns of the store, we have inscribed the powers or the functions of the variable, arranged according to whatever is the prescribed law of development, the coefficients of these several terms may be respectively placed on the corresponding column below each. In this manner we shall have a representation of an analytical development; and, supposing the position of the several terms composing it to be invariable, the problem will be reduced to that of calculating their coefficients according to the laws demanded by the nature of the question. In order to make this more clear, we shall take the following[10] very simple example, in which we are to multiply (a + bx^1) by (A + B cos^1 x). 
We shall begin by writing x^0, x^1, cos^0 x, cos^1 x, above the columns V[0], V[1], V[2], V[3]; then since, from the form of the two functions to be combined, the terms which are to compose the products will be of the following nature, x^0.cos^0 x, x^0.cos^1 x, x^1.cos^0 x, x^1.cos^1 x, these will be inscribed above the columns V[4], V[5], V[6], V[7]. The coefficients of x^0, x^1, cos^0 x, cos^1 x being given, they will, by means of the mill, be passed to the columns V[0], V[1], V[2] and V[3]. Such are the primitive data of the problem. It is now the business of the machine to work out its solution, that is, to find the coefficients which are to be inscribed on V[4], V[5], V[6], V[7]. To attain this object, the law of formation of these same coefficients being known, the machine will act through the intervention of the cards, in the manner indicated by the following table[11]:- It will now be perceived that a general application may be made of the principle developed in the preceding example, to every species of process which it may be proposed to effect on series submitted to calculation. It is sufficient that the law of formation of the coefficients be known, and that this law be inscribed on the cards of the machine, which will then of itself execute all the calculations requisite for arriving at the proposed result. If, for instance, a recurring series were proposed, the law of formation of the coefficients being here uniform, the same operations which must be performed for one of them will be repeated for all the others; there will merely be a change in the locality of the operation, that is, it will be performed with different columns.
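The coefficients destined for V[4], V[5], V[6], V[7] are simply the four pairwise products; a minimal sketch with arbitrary sample coefficients (the referenced table is not reproduced in this text):

```python
# Coefficients of (a + b*x)(A + B*cos x): each product lands under
# the column headed by the corresponding term of the development.
def product_coefficients(a, b, A, B):
    return {"x^0.cos^0 x": a * A,    # destined for V[4]
            "x^0.cos^1 x": a * B,    # destined for V[5]
            "x^1.cos^0 x": b * A,    # destined for V[6]
            "x^1.cos^1 x": b * B}    # destined for V[7]

print(product_coefficients(2, 3, 5, 7))
```

The positions of the terms being fixed in advance, the machine's whole task reduces to computing these coefficients.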
Generally, since every analytical expression is susceptible of being expressed in a series ordered according to certain functions of the variable, we perceive that the machine will include all analytical calculations which can be definitively reduced to the formation of coefficients according to certain laws, and to the distribution of these with respect to the variables. We may deduce the following important consequences from these explanations, viz. that since the cards only indicate the nature of the operations to be performed, and the columns of Variables with which they are to be executed, these cards will themselves possess all the generality of analysis, of which they are in fact merely a translation. We shall now further examine some of the difficulties which the machine must surmount, if its assimilation to analysis is to be complete. There are certain functions which necessarily change in nature when they pass through zero or infinity, or whose values cannot be admitted when they pass these limits. When such cases present themselves, the machine is able, by means of a bell, to give notice that the passage through zero or infinity is taking place, and it then stops until the attendant has again set it in action for whatever process it may next be desired that it shall perform. If this process has been foreseen, then the machine, instead of ringing, will so dispose itself as to present the new cards which have relation to the operation that is to succeed the passage through zero and infinity. These new cards may follow the first, but may only come into play contingently upon one or other of the two circumstances just mentioned taking place. 
Let us consider a term of the form ab^n; since the cards are but a translation of the analytical formula, their number in this particular case must be the same, whatever be the value of n: that is to say, whatever be the number of multiplications required for elevating b to the nth power (we are supposing for the moment that n is a whole number). Now, since the exponent n indicates that b is to be multiplied n times by itself, and all these operations are of the same nature, it will be sufficient to employ one single operation-card, viz. that which orders the multiplication. But when n is given for the particular case to be calculated, it will be further requisite that the machine limit the number of its multiplications according to the given values. The process may be thus arranged. The three numbers a, b and n will be written on as many distinct columns of the store; we shall designate them V[0], V[1], V[2]; the result ab^n will place itself on the column V[3]. When the number n has been introduced into the machine, a card will order a certain registering-apparatus to mark (n - 1), and will at the same time execute the multiplication of b by b. When this is completed, it will be found that the registering-apparatus has effaced a unit, and that it only marks (n - 2); while the machine will now again order the number b written on the column V[1] to multiply itself with the product b^2 written on the column V[3], which will give b^3. Another unit is then effaced from the registering-apparatus, and the same processes are continually repeated until it only marks zero. Thus the number b^n will be found inscribed on V[3], when the machine, pursuing its course of operations, will order the product of b^n by a; and the required calculation will have been completed without there being any necessity that the number of operation-cards used should vary with the value of n. 
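The counting procedure described above — a single multiplication card replayed under a decrementing registering-apparatus, so that the number of cards never depends on n — can be paraphrased in modern terms. This is an illustrative sketch, not anything the Engine literally ran; the function and variable names are invented:

```python
def power_then_multiply(a, b, n):
    """Compute a * b**n with one repeated 'multiplication card'.

    The registering-apparatus is modelled as a counter set to n - 1;
    the same multiplication is replayed, effacing one unit each time,
    until the counter marks zero.  The card count is independent of n.
    """
    counter = n - 1          # the registering-apparatus marks (n - 1)
    v3 = b                   # column V[3] holds b, then b^2, b^3, ...
    while counter > 0:
        v3 *= b              # the single multiplication card, replayed
        counter -= 1         # one unit is effaced
    return a * v3            # final card: multiply b^n by a
```

For n = 1 the loop body never runs and the result is simply a·b, matching the text's claim that the same fixed card set serves for every whole-number exponent.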
If n were negative, the cards, instead of ordering the multiplication of a by b^n, would order its division; this we can easily conceive, since every number, being inscribed with its respective sign, is consequently capable of reacting on the nature of the operations to be executed. Finally, if n were fractional, of the form p/q, an additional column would be used for the inscription of q, and the machine would bring into action two sets of processes, one for raising b to the power p, the other for extracting the qth root of the number so obtained. Again, it may be required, for example, to multiply an expression of the form ax^m + bx^n by another Ax^p + Bx^q, and then to reduce the product to the least number of terms, if any of the indices are equal. The two factors being ordered with respect to x, the general result of the multiplication would be Aax^m+p + Abx^n+p + Bax^m+q + Bbx^n+q. Up to this point the process presents no difficulties; but suppose that we have m = p and n = q, and that we wish to reduce the two middle terms to a single one (Ab + Ba)x^m+q. For this purpose, the cards may order m + q and n + p to be transferred into the mill, and there subtracted one from the other; if the remainder is nothing, as would be the case on the present hypothesis, the mill will order other cards to bring to it the coefficients Ab and Ba, that it may add them together and give them in this state as a coefficient for the single term x^n+p = x^m+q. This example illustrates how the cards are able to reproduce all the operations which intellect performs in order to attain a determinate result, if these operations are themselves capable of being precisely defined. Let us now examine the following expression [expression not reproduced], which we know becomes equal to the ratio of the circumference to the diameter, when n is infinite.
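The reduction rule just described — subtract the indices, and if the remainder is nothing, add the coefficients into a single term — is exactly the merging of like terms in a polynomial product. A small sketch (function name and dictionary representation are mine):

```python
from collections import defaultdict

def multiply_and_reduce(f, g):
    """Multiply two polynomials in x and merge terms with equal indices.

    f and g map exponent -> coefficient.  Accumulating into the same
    exponent key mirrors the card test: when two indices subtract to
    nothing, their coefficients are added into a single term.
    """
    product = defaultdict(int)
    for m, a in f.items():
        for p, A in g.items():
            product[m + p] += a * A   # contributes a*A * x^(m+p)
    return dict(product)
```

For instance, (2x^3 + 5x)(4x^3 + 7x) has m = p and n = q, and the two middle terms merge into a single x^4 term.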
We may require the machine not only to perform the calculation of this fractional expression, but further to give indication as soon as the value becomes identical with that of the ratio of the circumference to the diameter when n is infinite, a case in which the computation would be impossible. Observe that we should thus require of the machine to interpret a result not of itself evident, and that this is not amongst its attributes, since it is no thinking being. Nevertheless, when the cos[12] of n = 1/0 has been foreseen, a card may immediately order the substitution of the value of π (π being the ratio of the circumference to the diameter), without going through the series of calculations indicated. This would merely require that the machine contain a special card, whose office it should be to place the number π in a direct and independent manner on the column indicated to it. And here we should introduce the mention of a third species of cards, which may be called cards of numbers. There are certain numbers, such as those expressing the ratio of the circumference to the diameter, the Numbers of Bernoulli, &c., which frequently present themselves in calculations. To avoid the necessity for computing them every time they have to be used, certain cards may be combined specially in order to give these numbers ready made into the mill, whence they afterwards go and place themselves on those columns of the store that are destined for them. Through this means the machine will be susceptible of those simplifications afforded by the use of numerical tables. It would be equally possible to introduce, by means of these cards, the logarithms of numbers; but perhaps it might not be in this case either the shortest or the most appropriate method; for the machine might be able to perform the same calculations by other more expeditious combinations, founded on the rapidity with which it executes the first four operations of arithmetic.
To give an idea of this rapidity, we need only mention that Mr. Babbage believes he can, by his engine, form the product of two numbers, each containing twenty figures, in three minutes. Perhaps the immense number of cards required for the solution of any rather complicated problem may appear to be an obstacle; but this does not seem to be the case. There is no limit to the number of cards that can be used. Certain stuffs require for their fabrication not less than twenty thousand cards, and we may unquestionably far exceed even this quantity[13]. Resuming what we have explained concerning the Analytical Engine, we may conclude that it is based on two principles: the first, consisting in the fact that every arithmetical calculation ultimately depends on four principal operations - addition, subtraction, multiplication, and division; the second, in the possibility of reducing every analytical calculation to that of the coefficients for the several terms of a series. If this last principle be true, all the operations of analysis come within the domain of the engine. To take another point of view: the use of the cards offers a generality equal to that of algebraical formulae, since such a formula simply indicates the nature and order of the operations requisite for arriving at a certain definite result, and similarly the cards merely command the engine to perform these same operations; but in order that the mechanisms may be able to act to any purpose, the numerical data of the problem must in every particular case be introduced. Thus the same series of cards will serve for all questions whose sameness of nature is such as to require nothing altered excepting the numerical data. In this light the cards are merely a translation of algebraical formulae, or, to express it better, another form of analytical notation. 
Since the engine has a mode of acting peculiar to itself, it will in every particular case be necessary to arrange the series of calculations conformably to the means which the machine possesses; for such or such a process which might be very easy for a calculator may be long and complicated for the engine, and vice versâ. Considered under the most general point of view, the essential object of the machine being to calculate, according to the laws dictated to it, the values of numerical coefficients which it is then to distribute appropriately on the columns which represent the variables, it follows that the interpretation of formulae and of results is beyond its province, unless indeed this very interpretation be itself susceptible of expression by means of the symbols which the machine employs. Thus, although it is not itself the being that reflects, it may yet be considered as the being which executes the conceptions of intelligence[14]. The cards receive the impress of these conceptions, and transmit to the various trains of mechanism composing the engine the orders necessary for their action. When once the engine shall have been constructed, the difficulty will be reduced to the making out of the cards; but as these are merely the translation of algebraical formulae, it will, by means of some simple notations, be easy to consign the execution of them to a workman. Thus the whole intellectual labour will be limited to the preparation of the formulae, which must be adapted for calculation by the engine. Now, admitting that such an engine can be constructed, it may be inquired: what will be its utility? To recapitulate; it will afford the following advantages: - First, rigid accuracy. We know that numerical calculations are generally the stumbling-block to the solution of problems, since errors easily creep into them, and it is by no means always easy to detect these errors. 
Now the engine, by the very nature of its mode of acting, which requires no human intervention during the course of its operations, presents every species of security under the head of correctness: besides, it carries with it its own check; for at the end of every operation it prints off, not only the results, but likewise the numerical data of the question; so that it is easy to verify whether the question has been correctly proposed. Secondly, economy of time: to convince ourselves of this, we need only recollect that the multiplication of two numbers, consisting each of twenty figures, requires at the very utmost three minutes. Likewise, when a long series of identical computations is to be performed, such as those required for the formation of numerical tables, the machine can be brought into play so as to give several results at the same time, which will greatly abridge the whole amount of the processes. Thirdly, economy of intelligence: a simple arithmetical computation requires to be performed by a person possessing some capacity; and when we pass to more complicated calculations, and wish to use algebraical formulae in particular cases, knowledge must be possessed which presupposes preliminary mathematical studies of some extent. Now the engine, from its capability of performing by itself all these purely material operations, spares intellectual labour, which may be more profitably employed. Thus the engine may be considered as a real manufactory of figures, which will lend its aid to those many useful sciences and arts that depend on numbers. Again, who can foresee the consequences of such an invention? In truth, how many precious observations remain practically barren for the progress of the sciences, because there are not powers sufficient for computing the results! 
And what discouragement does the perspective of a long and arid computation cast into the mind of a man of genius, who demands time exclusively for meditation, and who beholds it snatched from him by the material routine of operations! Yet it is by the laborious route of analysis that he must reach truth; but he cannot pursue this unless guided by numbers; for without numbers it is not given us to raise the veil which envelopes the mysteries of nature. Thus the idea of constructing an apparatus capable of aiding human weakness in such researches, is a conception which, being realized, would mark a glorious epoch in the history of the sciences. The plans have been arranged for all the various parts, and for all the wheel-work, which compose this immense apparatus, and their action studied; but these have not yet been fully combined together in the drawings[15] and mechanical notation[16]. The confidence which the genius of Mr. Babbage must inspire, affords legitimate ground for hope that this enterprise will be crowned with success; and while we render homage to the intelligence which directs it, let us breathe aspirations for the accomplishment of such an undertaking. [1] This remark seems to require further comment, since it is in some degree calculated to strike the mind as being at variance with the subsequent passage (page 10), where it is explained that an engine which can effect these four operations can in fact effect every species of calculation. The apparent discrepancy is stronger too in the translation than in the original, owing to its being impossible to render precisely into the English tongue all the niceties of distinction which the French idiom happens to admit of in the phrases used for the two passages we refer to. 
The explanation lies in this: that in the one case the execution of these four operations is the fundamental starting-point, and the object proposed for attainment by the machine is the subsequent combination of these in every possible variety; whereas in the other case the execution of some one of these four operations, selected at pleasure, is the ultimatum, the sole and utmost result that can be proposed for attainment by the machine referred to, and which result it cannot any further combine or work upon. The one begins where the other ends. Should this distinction not now appear perfectly clear, it will become so on perusing the rest of the Memoir, and the Notes that are appended to it. - NOTE BY TRANSLATOR. [2] The idea that the one engine is the offspring and has grown out of the other, is an exceedingly natural and plausible supposition, until reflection reminds us that no necessary sequence and connexion need exist between two such inventions, and that they may be wholly independent. M. Menabrea has shared this idea in common with persons who have not his profound and accurate insight into the nature of either engine. In Note A. (see the Notes at the end of the Memoir) it will be found sufficiently explained, however, that this supposition is unfounded. M. Menabrea's opportunities were by no means such as could be adequate to afford him information on a point like this, which would be naturally and almost unconsciously assumed, and would scarcely suggest any inquiry with reference to it. - NOTE BY TRANSLATOR. [3] See Note A. [4] This must not be understood in too unqualified a manner. The engine is capable, under certain circumstances, of feeling about to discover which of two or more possible contingencies has occurred, and of then shaping its future course accordingly. - NOTE BY TRANSLATOR. [5] See Note B. [6] Zero is not always substituted when a number is transferred to the mill. 
This is explained further on in the memoir, and still more fully in Note D. - NOTE BY TRANSLATOR. [7] See Note C. [8] See Note D. [9] Not having had leisure to discuss with Mr. Babbage the manner of introducing into his machine the combination of algebraical signs, I do not pretend here to expose the method he uses for this purpose; but I considered that I ought myself to supply the deficiency, conceiving that this paper would have been imperfect if I had omitted to point out one means that might be employed for resolving this essential part of the problem in question. [10] See Note E. [11] For an explanation of the upper left-hand indices attached to the V's in this and in the preceding Table, we must refer the reader to Note D, amongst those appended to the memoir. - NOTE BY TRANSLATOR. [12] Classics Editor's note: Lovelace has here literally translated a printer's error that appeared in the original French edition of the article. It should read ". . .in the case of n = 1/0. . ." The edition published in B.V. Bowden's Faster than thought (New York, Pittman, 1953) corrects the mistake without comment. It has, however, caused some to question Lovelace's mathematical competence (cf. Stein, D. (1985). Ada: A life and a legacy. Cambridge, MA: MIT Press.). [13] See Note F. [14] See Note G. [15] This sentence has been slightly altered in the translation in order to express more exactly the present state of the engine. - NOTE BY TRANSLATOR. [16] The notation here alluded to is a most interesting and important subject, and would have well deserved a separate and detailed Note upon it amongst those appended in the Memoir. It has, however, been impossible, within the space allotted, even to touch upon so wide a field. - NOTE BY TRANSLATOR.
Example of Shear Force and Bending Moment Diagram, Mechanical Engineering

Q: Draw the shear force and bending moment diagrams for the simply supported beam loaded as shown in the figure given below.

Sol.: Let the reactions at supports A and B be R_A and R_B.

First find the support reactions. Resolving the vertical forces,
R_A + R_B = 2 + 4 + 2 = 8 kN ...(1)
Taking moments about the point A, ΣM_A = 0:
2×1 + 4×2 + 2×3 - R_B×4 = 0
R_B = 4 kN ...(2)
From equation (1), R_A = 4 kN ...(3)

Calculation for the shear force diagram
Draw the section lines; here there are four in total, separating the loads R_A and 2 kN (between A and C), 2 kN and 4 kN (between C and D), 4 kN and 2 kN (between D and E), and 2 kN and R_B (between E and B). Consider the left portion of the beam.

Take section 1-1. The only force on the left of section 1-1 is R_A:
SF_1-1 = 4 kN (constant value)
A constant value means the shear force is equal at both points nearest the section, that is SF_A = SF_C = 4 kN ...(4)

Take section 2-2. The forces on the left of section 2-2 are R_A and 2 kN:
SF_2-2 = 4 - 2 = 2 kN (constant value), that is SF_C = SF_D = 2 kN ...(5)

Take section 3-3. The forces on the left of section 3-3 are R_A, 2 kN and 4 kN:
SF_3-3 = 4 - 2 - 4 = -2 kN (constant value), that is SF_D = SF_E = -2 kN ...(6)

Take section 4-4. The forces on the left of section 4-4 are R_A, 2 kN, 4 kN and 2 kN:
SF_4-4 = 4 - 2 - 4 - 2 = -4 kN (constant value), that is SF_E = SF_B = -4 kN ...(7)

Plot the SFD with the help of the shear force values above.
Calculation for the bending moment diagram
Let the distances of sections 1-1, 2-2, 3-3 and 4-4 from A be X_1, X_2, X_3 and X_4. Take the left portion of the beam.

Take section 1-1 and take moments about it:
BM_1-1 = 4·X_1
This is the equation of a straight line (Y = mX + C), i.e. inclined linear: the bending moment varies linearly between the two points nearest the section. The same linear variation holds between the remaining sections. Plot the BMD with the help of the bending moment values obtained.

Posted Date: 10/20/2012 1:38:18 AM | Location : United States
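The section-by-section bookkeeping above can be tabulated mechanically. A small sketch with this example's values hard-coded (RA = RB = 4 kN, point loads 2, 4 and 2 kN at x = 1, 2 and 3 m):

```python
# Simply supported beam, span 4 m.
LOADS = [(1.0, 2.0), (2.0, 4.0), (3.0, 2.0)]   # (position in m, load in kN)
R_A = 4.0

def shear_force(x):
    """Shear force just to the right of x: R_A minus all loads already passed."""
    return R_A - sum(P for a, P in LOADS if a < x)

def bending_moment(x):
    """Bending moment at x, taken from the left portion of the beam."""
    return R_A * x - sum(P * (x - a) for a, P in LOADS if a < x)
```

Evaluating shear_force between each pair of loads reproduces the constant values 4, 2, -2 and -4 kN found at sections 1-1 through 4-4, and bending_moment vanishes at both supports, as it must for a simply supported beam.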
Why we need numbers?
Re: Why we need numbers?
How do you know he has not already done that millions of times before...? Always with the same result. I can only hope that I am appointed to the panel to evaluate him when he tries to wiggle back into existence. In the immortal words of my neighbor, an extreme nihilist, "he ain't gettin anymore
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians, are contemptuous about proof.
Jacques Tits
Jacques Tits, (born Aug. 12, 1930, Uccle, Belg.), Belgian mathematician awarded the 2008 Abel Prize by the Norwegian Academy of Sciences and Letters, which cited him for having "created a new and highly influential vision of groups as geometric objects." Tits, the son of a mathematician, passed the entrance exam to the Free University of Brussels at age 14. In 1950 he earned a doctorate from that institution, where he stayed to teach until 1964, when he moved to the University of Bonn in West Germany. In 1973 he accepted the chair of group theory at the Collège de France in Paris, where he remained until his retirement in 2000. Tits became a French citizen in 1974, the same year that he became a member of the French Academy of Sciences. In awarding him the Abel Prize, the academy noted especially that "he introduced what is now known as a Tits building, which encodes in geometric terms the algebraic structure of linear groups." In addition to his teaching and research, Tits was the editor in chief for mathematical publications at the Institut des Hautes Études Scientifiques (1980–99) near Paris and served on the committees that awarded the Fields Medals in 1978 and 1994. Among dozens of other awards, in 1993 Tits received the prestigious Wolf Prize in Mathematics, an annual international award presented in recognition of outstanding work in the field of mathematics.
Online Dictionary of Crystallography
From Online Dictionary of Crystallography

Pyroélectricité (Fr). Pyroelectrizität (Ge). Pyroelectricidad (Sp). Piroelettricità (It). ピロ電気 (Ja)

Pyroelectricity is the property presented by certain materials that exhibit an electric polarization P_i when a temperature variation δΘ is applied uniformly:

P_i = p_i^T δΘ

where p_i^T is the pyroelectric coefficient at constant stress. Pyroelectric crystals actually have a spontaneous polarization, but the pyroelectric effect can only be observed during a temperature change. If the polarization can be reversed by the application of an electric field, the crystal is ferroelectric. If the crystal is also piezoelectric, the polarization due to an applied temperature variation is also partly due to the piezoelectric effect. The coefficient describing the pure pyroelectric effect is the pyroelectric coefficient at constant strain, p_i^S. The two coefficients are related by:

p_i^T = c_ijkl d_kln α_jn + p_i^S

where the c_ijkl are the elastic stiffnesses, the d_kln the piezoelectric coefficients and the α_jn the linear thermal expansion coefficients. The converse effect is the electrocaloric effect. If a pyroelectric crystal is submitted to an electric field, it will undergo a change of entropy Δσ:

Δσ = p_i E^i

and will release or absorb a quantity of heat given by Θ V Δσ, where Θ is the temperature of the specimen and V its volume.

Pyroelectric point groups
The geometric crystal classes for which the pyroelectric effect is possible are determined by symmetry considerations (see Curie laws). They are the classes whose symmetry is a subgroup of the symmetry associated with that of the electric field, A_∞ ∞M:
1, 2, 3, 4, 6, m, 2mm, 3m, 4mm, 6mm

The appearance of electrostatic charges upon changes of temperature has been observed since ancient times, in particular on tourmaline.
It is Sir David Brewster (1781-1868) who coined the term 'pyroelectricity' (Brewster D., 1824, Edinburgh J. Sci., 1, 208-215, Observations on the pyroelectricity of minerals; translated into German, Poggendorf Ann. Phys., 1824, 2, 297-307, Beobachtungen über die, in den Mineralien, durch Wärme erregte Electricität). See also • An introduction to crystal physics (Teaching Pamphlet of the International Union of Crystallography) • Section 10.2 of International Tables of Crystallography, Volume A • Section 1.1.4 and part 3 of International Tables of Crystallography, Volume D
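Since P_i = p_i^T δΘ, a steady heating rate drives a pyroelectric current i = p·A·dΘ/dt through an external circuit connected across the electrodes. A minimal numerical sketch; the coefficient value below is illustrative only, not taken from the entry:

```python
def pyroelectric_current(p_coeff, area, dtheta_dt):
    """i = p * A * dTheta/dt.

    SI units: p in C m^-2 K^-1, area in m^2, heating rate in K s^-1,
    giving the short-circuit current in amperes.
    """
    return p_coeff * area * dtheta_dt

# Illustrative numbers: p = 2e-4 C m^-2 K^-1, a 1 mm^2 electrode,
# heated uniformly at 1 K per second.
i = pyroelectric_current(2e-4, 1e-6, 1.0)   # -> 2e-10 A
```

The smallness of the result is why pyroelectric detectors are read out with high-impedance amplifiers; the illustrative coefficient is of the order reported for common pyroelectric crystals.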
Bremen, GA Algebra 1 Tutor
Find a Bremen, GA Algebra 1 Tutor
...I am however, highly qualified to tutor in Study Skills and Test Preparation for the CRCT, ACT, and SAT. As a tutor for your child, I am dedicated to their academic improvement and success. I recognize and accept that each child has their own learning style. 47 Subjects: including algebra 1, chemistry, reading, biology ...I am experienced in Pre-Algebra and Algebra. I have taken my math courses and have always passed with a B or higher. In my college Algebra class I received a grade of an A. 2 Subjects: including algebra 1, prealgebra ...I never bill for a tutoring session if the student or parent is not completely satisfied. While I have a 24 hour cancellation policy, I often provide make-up sessions. I usually tutor students in a public library close to their home, however I will travel to another location if that is more convenient for the student. 8 Subjects: including algebra 1, statistics, trigonometry, algebra 2 ...I prefer to learn what my students' weak points, strong suits, and interests are. With this system, I am able to determine what concepts they are not comprehending fully, isolating and eventually eradicating these issues. 14 Subjects: including algebra 1, chemistry, reading, biology ...My knowledge of German was obtained both through education as well as immersion. I owe my knowledge and love of writing to Dr. Fraser, a professor of mine in my freshman year of college. 40 Subjects: including algebra 1, reading, English, geometry
Theory and Techniques of Data Assimilation
October 31st 2013, 10:57 AM #1
Oct 2013

Suppose we have a dynamical system for a vector x = (u, v, p)^T, where u, v and p are scalar quantities. Let the dynamical system be represented by the equations

u_{k+1} = u_k + v_k + 2 p_k
v_{k+1} = 2 u_k + v_k + 2 p_k
p_{k+1} = 3 u_k + 3 v_k + p_k

where k indicates the time index. We wish to apply a four-dimensional data assimilation scheme to determine the vector x_0 at time t_0. Suppose that we take observations of both u and p together at the two times t_0 and t_1. Determine whether we have enough information to reconstruct the vector x_0 uniquely.
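One way to answer a question like this is an observability check: writing the model as x_{k+1} = M x_k and the observation operator as H (which picks out u and p), the two observation times give H x_0 and H M x_0, so x_0 is uniquely determined exactly when the stacked matrix [H; HM] has full column rank. A numpy sketch of that check, not a full assimilation scheme:

```python
import numpy as np

# One-step model x_{k+1} = M x_k, from the stated equations.
M = np.array([[1.0, 1.0, 2.0],
              [2.0, 1.0, 2.0],
              [3.0, 3.0, 1.0]])

# Observation operator: we observe u and p (first and third components).
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

# Observations at t0 give H x_0; observations at t1 give H x_1 = H M x_0.
O = np.vstack([H, H @ M])

# x_0 is uniquely recoverable iff the 4x3 stacked matrix has rank 3.
full_rank = np.linalg.matrix_rank(O) == 3
```

Here the four rows include (1,0,0), (0,0,1) and (1,1,2), whose combination recovers (0,1,0) as well, so the stacked matrix has rank 3 and the observations do suffice.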
Contact resistance in transistors
From Wikiversity
Contact resistance in transistors is the resistance between the metal and the silicon in a contact. It arises from the difference in the band-gap energies of the two materials: the closer the band gaps, the lower the resistance. Early in the semiconductor industry aluminium was used as the metal; however, it was later replaced by copper, which gives a better contact resistance. Considering the example of a 1 micron and a 0.1 micron manufacturing process with aluminium contacts makes clear how important this is becoming. The resistance of the device (R) is equal to the specific contact resistivity (ρ_c) divided by the contact area (A): R = ρ_c/A. For aluminium on heavily doped silicon the specific contact resistivity is around 1×10⁻⁶ Ω·cm². For the 1 micron example the area is (1×10⁻⁴ cm)² = 1×10⁻⁸ cm², so the resistance is 1×10⁻⁶ / 1×10⁻⁸ = 100 Ω. For the 0.1 micron example the area is (1×10⁻⁵ cm)² = 1×10⁻¹⁰ cm², so the resistance is 1×10⁻⁶ / 1×10⁻¹⁰ = 10,000 Ω. The standard test structure for measuring contact resistance has been the Cross Bridge Kelvin Resistor; however, in small devices the results obtained with the standard calculation have not been very accurate. A better understanding, improved calculations and a new measuring technique were introduced in the 2002 paper "A simple approach to understanding measurement errors in the cross-bridge Kelvin resistor and a new pattern for measurements of specific contact resistivity" (Mizuki Ono, Akira Nishiyama, Akira Toriumi: Solid-State Electronics 46 (2002) 1325-1331).
Cross Bridge Kelvin Resistor (CKR)[edit]
Above shows the structure of the CKR. In an ideal situation the arms of the silicon and metal layers would be the same width as the contact, but owing to alignment margins this is impossible under normal manufacturing processes.
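The R = ρ_c/A scaling argument above is easy to reproduce numerically; a small sketch (function name is mine):

```python
def contact_resistance(rho_c_ohm_cm2, side_cm):
    """R = rho_c / A for a square contact of the given side length (in cm)."""
    return rho_c_ohm_cm2 / (side_cm ** 2)

# Aluminium on heavily doped silicon: rho_c ~ 1e-6 ohm*cm^2.
r_1um  = contact_resistance(1e-6, 1e-4)   # 1 micron contact  -> 100 ohms
r_01um = contact_resistance(1e-6, 1e-5)   # 0.1 micron contact -> 10,000 ohms
```

Shrinking the contact side by 10x raises the resistance by 100x, which is the scaling problem the rest of the article is concerned with.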
It was seen that by moving the contact area around the arm's would provide different resistance measurements which has provided a lot of mystery around the device until the 2002 paper by Mizuki Ono. This paper showed how the effect of the extra silicon affected the results and hence why the measurements changed when the contact area was moved around. Ono Equation[edit] Pc= specific contact resistivity Ps= sheet resistance Using this equation, we can more accuratly calculate the contact resistance for the metal's using the standard CKR and hence are able to use premade version by accuratly finding out the measuremnets required using an appropriate electron microscope. Ono Design[edit] This design was created as a replacement to the CKR, due to the large length the design irradicates a lot of the errors, the CKR is prone to, and thus gives a more accurate result. For this to work, the length L must be longer than (Pc/Ps)1/2 in which case the equation I=(W.Vo)/sqrroot(PcPs) where Vo is the voltage at the right edge of the contact hole. Vo can be found by extrapolating the ratio of voltage between the potential terminal of the metal and each potential terminal in the silicon active layer (ie Vsa, Vsb and Vsc). Additional files[edit] Media:contactresistance.sxw - my Final year project report on contact resistance (At Queens University, belfast) in open office format Contact resistance tool mentioned in final year project is avaible upon request
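The scaling arithmetic above (R = ρc / A for the two process nodes) can be sketched in a few lines. A hedged illustration: `RHO_C` is the article's example value for aluminium on heavily doped silicon, and the function name is my own, not from the article.

```python
# R = rho_c / A for a square ohmic contact, using the article's
# example value rho_c ~ 1e-6 ohm * cm^2 (Al to heavily doped Si).
RHO_C = 1e-6  # specific contact resistivity, ohm * cm^2

def contact_resistance(side_cm: float) -> float:
    """Resistance of a square contact with side length side_cm (in cm)."""
    area = side_cm ** 2  # contact area in cm^2
    return RHO_C / area

print(contact_resistance(1e-4))  # 1 micron contact -> ~100 ohms
print(contact_resistance(1e-5))  # 0.1 micron contact -> ~10,000 ohms
```

Shrinking the contact side by 10x raises the resistance by 100x, which is the point the article makes about scaling.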
Prove that if the 3 altitudes of a triangle are equal then the triangle is equilateral. See attachment.

I tried to do this and have got so far that AB = AC.
I took the following triangles: ABF and AEC. By AAS I got both of these triangles congruent, and therefore by CPCT AB = AC.
(CPCT = Corresponding Parts of Congruent Triangles are equal.)

That idea should work to get that AB = BC as well.

So far we have got ABC as isosceles. Now I took triangles ABF and AFC; both of these triangles are congruent by RHS, so BF = FC (by CPCT). Therefore F is the midpoint of BC, and AF is both a median and an altitude. This is a property of an equilateral triangle, so ABC is an equilateral triangle.
Is this correct?

You mean to take triangles ABE and BEC, @joemath314159?
Yep, and do exactly what you did with the other triangles again.
I meant this: ADC and ABD in the above comment.
OK wait... I think I had tried that earlier but I didn't get any favorable result; let me do it again.

There are many combinations you could take. The pair I see is triangles BEC and BDA.
Oh yes, that will work... BEC and BDA gives some output to me.
Thanks a lot @joemath314159 for your help.
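For reference, there is also a shorter route than the congruence chase above, via the area of the triangle. This alternative is my addition, not part of the original discussion:

```latex
% Let a, b, c be the side lengths and h_a, h_b, h_c the altitudes
% dropped onto them. Computing the area T three ways gives
\[
T = \tfrac{1}{2}\, a\, h_a = \tfrac{1}{2}\, b\, h_b = \tfrac{1}{2}\, c\, h_c .
\]
% If $h_a = h_b = h_c = h$, then $a = b = c = 2T/h$,
% so the triangle is equilateral.
```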
Markov chain
Hello all, I have a question about a Markov chain I've obtained in an application. There is no need to mention the application or the details of the Markov chain, because my question is simple: the transition probabilities are derived from equations that depend on the stationary probability. I know it's something complicated... The questions are:
1. Do you know what the class of these Markov chains is?
2. How do you solve it numerically? Does it depend on the power method?
If you have any paper or book it will be great.
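On question 2: the power method does apply for the ordinary case. Iterate π ← πP until the update is negligible. A minimal sketch; the 3-state matrix below is a made-up stand-in (the question gives no concrete chain), and when P itself depends on π one common approach is to wrap this in an outer fixed-point loop, re-deriving P from the current π each round:

```python
# Hypothetical row-stochastic transition matrix (each row sums to 1);
# a stand-in, since the question gives no concrete chain.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def step(pi, P):
    """One power-method step: pi <- pi P (row vector times matrix)."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary_power_method(P, tol=1e-12, max_iter=100_000):
    """Iterate pi <- pi P from a uniform start until convergence."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = step(pi, P)
        if max(abs(a - b) for a, b in zip(nxt, pi)) < tol:
            return nxt
        pi = nxt
    return pi

pi = stationary_power_method(P)
# pi is (numerically) a fixed point: pi P = pi, and it sums to 1.
```

For an irreducible, aperiodic chain this converges geometrically at a rate set by the second-largest eigenvalue modulus of P.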
Physics Forums - Visualizing integration

quietrain Feb22-12 02:16 AM
Visualizing integration
Let's say I want to integrate sin from 0 to pi. The answer is 2. But how do I visualize it in terms of the graph? Am I summing up the area under the graph? So it's like a max value of 1 on the y axis, while the x axis stretches from 0 to 3.14? If so, then why does doing the calculus in terms of cos (after integrating) give me the same result? What is the reason behind it? What am I doing essentially?

HallsofIvy Feb22-12 08:50 AM
Re: Visualizing integration
Most of what you are saying is almost correct: you can visualize the integral as the signed area below the graph of y = sin(x) and above y = 0 between x = 0 and [itex]x= \pi[/itex]. If the graph is below y = 0, the "area" is negative. But I don't understand your statement "why does doing the calculus in terms of cos (after integrating) give me the same result?". That's not true at all:
[tex]\int_0^{\pi} \sin(x)\,dx= \left[-\cos(x)\right]_0^{\pi}= -(-1)-(-1)= 2[/tex]
[tex]\int_0^{\pi} \cos(x)\,dx= \left[\sin(x)\right]_0^{\pi}= 0- 0= 0[/tex]
With y = cos(x), the part of the graph for [itex]0< x< \pi/2[/itex] is above the x-axis and the part for [itex]\pi/2< x< \pi[/itex] is below, so the two cancel.

quietrain Feb24-12 09:36 AM
Re: Visualizing integration
Er no, that's not what I meant. I mean, if integrating sin from 0 to pi means counting the area underneath the graph, then I will get 2 as the answer, right? So now my other question is: why does doing the same integration, sin from 0 to pi, BUT using the method of ∫ sin x = -cos x over 0 to pi, give me 2 too? Essentially, why is the integral of sin equal to -cos? And why does evaluating -cos from 0 to pi equal counting the area underneath the graph?

Re: Visualizing integration
Fundamental theorem of calculus.

tiny-tim Feb24-12 09:54 AM
Re: Visualizing integration
Hi quietrain! :smile:
Quote by quietrain (Post 3781905): essentially, why is the integral of sin, -cos? and why does summing 0 to pi for -cos = counting the area underneath the graph?
Because if the area from 0 to x is A(x), then A(x + dx) - A(x) is the area of a little almost-rectangle with width dx and height sin x. I.e. approximately A(x + dx) - A(x) = sin x dx, or approximately [A(x + dx) - A(x)]/dx = sin x … in the limit, dA/dx = sin x :wink:
(And cos(x+dx) - cos x = -2 sin(x + dx/2) sin(dx/2) ~ -sin x dx.)

Re: Visualizing integration
"Why" is a question for Religion, not Science. "What" and "How" are the province of Science and Mathematics. Now your question: "What am I doing essentially?" Answer: you are finding the area under the curve by multiplying the equation for the curve by dx and adding all those slim vertical rectangular areas while taking the limit as dx => 0. The limit gives you the most accurate answer possible. So integration is multiplication [in the limit] while one of the multiplicands is changing. That is why integration also yields volume or work or distance, and thus is so useful: any quantity under change that is calculated by multiplication requires integration. Remember, calculus is the mathematics of change.

quietrain Feb27-12 06:09 AM
Re: Visualizing integration
Alright, thanks everyone!
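The picture used in this thread (thin strips of height sin x and width dx, added up) can be checked numerically. A quick sketch; the function name is my own:

```python
import math

def riemann_sum(f, a, b, n=100_000):
    """Midpoint sum: add up f(x) * dx over n thin strips of width dx."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

print(riemann_sum(math.sin, 0.0, math.pi))  # ~2.0, matching [-cos x] over [0, pi]
print(riemann_sum(math.cos, 0.0, math.pi))  # ~0.0, the two half-areas cancel
```

The strip sum agrees with the antiderivative evaluation, which is exactly what the fundamental theorem of calculus guarantees.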
How to solve Justin's Pocket (Corner) Cube?

Post subject: How to solve Justin's Pocket (Corner) Cube?
Pete the Geek Posted: Sun Nov 10, 2013 2:06 am
Pocket-Corner-Cube-Solved_sm.png [ 1.27 MiB ]
This was a lot of fun to figure out and solve, though I still have to work out an efficient method for the return-to-cubic step. I think a six-colour version would be cool, though not much different from solving the 4-colour version that has two logo corners. One thing I'm curious about: can the Corner Pocket Cube be solved into other rotations of the special corner? For example, can we have yellow on the front, blue on top and green on the right?
EDIT to clarify my question: the solve would include the logo cubes in their normal solved positions.
PeteTheGeek196 on YouTube
Last edited by Pete the Geek on Sun Nov 10, 2013 11:08 am, edited 1 time in total.

Post subject: How to solve Justin's Pocket (Corner) Cube?
TheCubingKyle Posted: Sun Nov 10, 2013 2:34 am
Pete the Geek wrote: One thing I'm curious about: can the Corner Pocket Cube be solved into other rotations of the special corner? For example, can we have yellow on the front, blue on top and green on the right?
I don't think it can, mostly because the "corners" are all 3x3 centers and, because of the way this is bandaged, the other "cubies" wouldn't maintain cubic shape, if it even allowed the centers hidden within to be rotated in this fashion.
My budding baby blog, Twisted Interests!

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Sun Nov 10, 2013 4:27 am
I have moved the two posts above from here.
This puzzle was first announced by Justin. cisco guessed the nature of this puzzle correctly:
cisco wrote:
Justin wrote: I used a 32mm cube as a base, 2mm styrene sheet to extend, and hand-cut the metallic stickers.
So I guess this is an offset-extended 3x3x3.
The result is awesome! I'm happy that even today there are some beautiful yet simple puzzles to create.
I wrote in the other thread:
Konrad wrote: I have made the underlying bandaged 3x3x3 (3 Quads, 3 Stripes by Andreas). At first I found it hard to scramble, now I find it hard to solve. I'm looking forward to a mass-produced version of this; hopefully it is as nice as the original.
And Andreas and I had a long conversation about it, starting here. I have got the easier red and gold version, because I liked the original pictures from Justin so much.
Konrad wrote: The bandaged 3x3x3 was Andreas Nortmann's 2013930080085E aka "3 Quads, 3 Stripes". For the two colour version the solution is: Reconstruct the cubic shape.
Like Pete, I do not have a "method" for doing this; I do it just intuitively. Andreas' method is based on the findings of his computer program. For the "3 Quads, 3 Stripes", the method starts once you have got it back into its normalized form (= signature).
As Andreas, the inventor of the "3 Quads, 3 Stripes", admits, he struggled with this first step too.
Pete the Geek wrote: ..., though I still have to work out an efficient method for the return to cubic step. ... One thing I'm curious about: can the Corner Pocket Cube be solved into other rotations of the special corner? For example, can we have yellow on the front, blue on top and green on the right?
You can do this by rotating the whole cube around the corner UFR. The big cubie at UFL, actually an extended Quad, would become DFR, and Justin's signature would be on the F face. Looking at the underlying nature, it is clear that a Quad can never leave its face (because it contains a bandaged 3x3x3 centre) unless you turn the whole cube.
Here follow a few pictures showing how Justin's puzzle corresponds to the "3 Quads, 3 Stripes". I have chosen a situation where the two single 3x3 corners are swapped. The three "Q" denote the Quads, the three "S" the 3 Stripes. If you consider a form like the following as the "regular" form of the "3 Quads, 3 Stripes", then Justin's Pocket Corner in the two colour version has just two different shapes that cannot be distinguished from other permutations of the stripes and the 3x3 edges: the one above and the cubic shape. Here you see both swapped 3x3 corners:
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Pete the Geek Posted: Sun Nov 10, 2013 11:06 am
TheCubingKyle wrote: I don't think it can, mostly because the "corners" are all 3x3 centers and, because of the way this is bandaged, the other "cubies" wouldn't maintain cubic shape, if it even allowed the centers hidden within to be rotated in this fashion.
I took a closer look at it this morning and I'm sure you are correct. Now, if I don't care about the position of the logos, the whole puzzle can just be rotated into two alternate positions to "rotate" that corner.
PeteTheGeek196 on YouTube

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Pete the Geek Posted: Sun Nov 10, 2013 11:35 am
Solving the 4-colour version (and the 6-colour one, if anyone ever makes it) requires a few lengthy but straightforward algorithms. Even the 2-colour version may require a few algorithms to place the logo corners and possibly swap a small corner with a large corner. As Konrad noted earlier, this puzzle is a shape-mod of Andreas' 3 Quads, 3 Stripes. The rest is in spoiler tags.
[spoiler]
On page 4 of the thread that Konrad mentioned are some posts from Andreas and Konrad. One is a post from Andreas with 9 algorithms (separated into phases) for solving 3 Quads, 3 Stripes, and a few posts down Konrad provided an English translation. I made a 3 Quads, 3 Stripes with my CubeTwist 3x3 Bandage Kit and started to work out what each of the 9 algorithms does. After a while, I noticed that later in the thread Konrad had also posted a beautiful diagram showing exactly what each of the algorithms does.
To solve the Corner Pocket Cube: I returned it to cubic state (even though I had one small cube swapped with a big cube). Then I started applying the algorithms. I ignored the "phases" and just looked at the diagrams, selected one that looked useful, and applied it. Only a few are needed.
Some tips: In the 3 Quads, 3 Stripes material, edges and corners are referred to by pairs and triples of the adjacent face names. Having the 3 Quads, 3 Stripes puzzle for reference helps show that edge BD is actually the BDR corner on the Pocket Corner Cube. So any algorithm that moves BD on 3 Quads, 3 Stripes will move BDR on the Pocket Corner Cube. It can take some strategic application of algorithms to avoid splitting the logo corners, as the algorithms that work on LB will move the Meffert's Logo corner (good if it is not positioned yet, bad if it is). Use the 3 Quads, 3 Stripes to test things out.
PeteTheGeek196 on YouTube

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Sun Nov 10, 2013 3:45 pm
Pete the Geek wrote: Solving the 4-Colour version (and 6-Colour if anyone ever makes one) requires a few lengthy but straightforward algorithms. Even the 2-Colour version may require a few algorithms to place the logo corners and possibly swap a small corner with a large corner. As Konrad noted earlier, this puzzle is a shape-mod of Andreas' 3 Quads, 3 Stripes. The rest is in spoiler tags ...
Pete, why do you call these algorithms "straightforward"? I would not say that they are easy to find, right? If you base a method on the information pointed to in your spoiler, you can certainly state that performing them is straightforward. When I solved the "3 Quads, 3 Stripes" for the first time, it took me a long time to create a situation with two stripes swapped. I was not able to solve it from there without Andreas' help! And Andreas got his move sequences out of his program. Has anybody found a different method?
For the two colour version, a move sequence is enough to place the one distinguishable piece onto its original location, once you have achieved the "cubic" form. (The term "cubic" includes the real cubic shape and the shape above with the two swapped 3x3 corners.) The very same sequence (a [6,1]) can be used to swap the two corners.
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Pete the Geek Posted: Sun Nov 10, 2013 4:47 pm
Konrad wrote: Pete, why do you call these algorithms "straightforward"? I would not say that they are easy to find, right? If you base a method on the information pointed to in your spoiler, you can certainly state that performing them is straightforward.
Yes, I should have been much clearer! I meant that applying the algorithms is straightforward.
I find they have nice symmetric patterns that are easy for me to follow, and your diagrams make them very easy to select. Generating these algorithms was definitely not a simple task and, as someone who used to write software for a living, I appreciate Andreas' effort and genius in making them and your work to illustrate them.
PeteTheGeek196 on YouTube

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andreas Nortmann Posted: Mon Nov 11, 2013 1:35 am
Konrad wrote: As Andreas, the inventor of the "3 Quads, 3 Stripes", admits, he struggled with this first step too.
And therefore I enhanced the program (back in those days) with an automatic signature solver. It creates a sequence for every signature which leads back to the solved signature. I didn't post it earlier because I considered such a lengthy list of no practical use. I couldn't find an easier way to guide a solver through this first step ...

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Pete the Geek Posted: Mon Nov 11, 2013 10:18 am
Andreas Nortmann wrote: And therefore I enhanced the program (back in those days) with an automatic signature solver. It creates a sequence for every signature which leads back to the solved signature. I didn't post it earlier because I considered such a lengthy list of no practical use. I couldn't find an easier way to guide a solver through this first step ...
I'm glad you posted this idea and the file. Yesterday, I was making a visual guide on how to get the Corner Pocket Cube back to the cubic state. The idea is not to have every possible state (signature) shown, just enough that some twisting and turning of an unrecognized state will lead to one in the guide, and then the algorithm to get back to cubic.
If it turns out to be useful, I will add a few more entries to this chart.
Generator-Sample_w_Text_sm.jpg [ 130.67 KiB ]
PeteTheGeek196 on YouTube

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Mon Nov 18, 2013 1:55 am
Hi friends,
I built the cube with the CubeTwist set. Thanks for your work, Andreas, Konrad, Pete. I'm unsure whether to order the pocket cube; I cannot solve the version built with the CubeTwist set. This puzzle looks very beautiful.

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Tue Nov 19, 2013 12:53 pm
I found a little technique to explore sequences without a program. The idea is to find symmetric positions, symmetric in relation to the shape.
pocket.jpg [ 204.86 KiB ]
The first 2 moves of a sequence are always equal; the symmetry is the reason. After that, not many sensible moves are possible. An example is the shape of the picture. Hold the cube so that the 2x2x2 subcube with single stickers is at the left/down/front position. The yellow-blue-red corner is at back/right/top.
The sequence: A = U R' F2 U F U' R' F2 R
So it's possible to do an F2, then undo the sequence: A' = R' F2 R U F' U' F2 R U'
This causes a swap of 2 opposite corners, 2 stripes and 2 single edges.
One more technique is to get a (shape-)symmetric position, then mirror the sequence; instead of the inverse of the original sequence you can do the inverse of the mirror. I still work on it; the success so far is small.

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Tue Nov 19, 2013 5:31 pm
Solving the pocket cube:
I will explain the technique with mirrored sequences. In the last posting the sequence A = U R' F2 U F U' R' F2 R causes a face with symmetry. After doing A you can do sequence B = R' F' R U F' U'. Then the shape is symmetric.
You can replace U with R' and vice versa; F becomes F'. The sequence is A + B:
U R' F2 U F U' R' F2 R R' F' R U F' U'
The inverse with mirror is:
R' F' R U F' U' U F2 U' R' F R F2 U' R
All together:
1) U R' F2 U F U' R' F2 R R' F' R U F' U' R' F' R U F' U' U F2 U' R' F R F2 U' R
This flips 2 edges, makes a 3-cycle of stripes and turns 2 opposite corners.
A third sequence is: C = R U'
A + B + C causes a new symmetry. Now you can replace U with B' and F with D'.
U R' F2 U F U' R' F2 R R' F' R U F' U' R U'
The mirrored inverse is:
B' R B' D' B R D' R' R D2 R' B' D B D2 R' B
All together:
U R' F2 U F U' R' F2 R R' F' R U F' U' R U' B' R B' D' B R D' R' R D2 R' B' D B D2 R' B
This turns 2 opposite corners and makes an unflipped 3-cycle of edges.
It's possible to solve all positions with these sequences. Sequence 1) three times makes a clean flip of two edges. Basic for all this are the three sequences:
A: U R' F2 U F U' R' F2 R
B: R' F' R U F' U'
C: R U'

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Pete the Geek Posted: Tue Nov 19, 2013 5:51 pm
Andrea wrote: It's possible to solve all positions with these sequences. Sequence 1) three times makes a clean flip of two edges. Basic for all this are the three sequences: A: U R' F2 U F U' R' F2 R, B: R' F' R U F' U', C: R U'
Wow, that's a great analysis, Andrea. I've been looking for a solution and I never expected it to be so simple and elegant.
PeteTheGeek196 on YouTube

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Wed Nov 20, 2013 3:14 pm
Hi Andrea,
thank you very much for sharing your ideas. I'm afraid I have not yet understood everything. I shall have a detailed look at your method, but I would like to ask a question first. My impression is that your sequences work after the "3 Quads, 3 Stripes" is put back into its regular form.
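As an aside, the inverses of Andrea's sequences (e.g. A' = R' F2 R U F' U' F2 R U' from A = U R' F2 U F U' R' F2 R) follow a purely mechanical rule: reverse the move list and invert each turn. A tiny sketch; these helpers are my own, not from the thread:

```python
def invert_move(m: str) -> str:
    """U -> U', U' -> U, U2 -> U2 (a half turn is its own inverse)."""
    if m.endswith("2"):
        return m
    return m[:-1] if m.endswith("'") else m + "'"

def invert_sequence(seq: str) -> str:
    """Reverse the whole sequence and invert every move in it."""
    return " ".join(invert_move(m) for m in reversed(seq.split()))

A = "U R' F2 U F U' R' F2 R"
print(invert_sequence(A))  # R' F2 R U F' U' F2 R U'  (Andrea's A')
```

Applying the function twice returns the original sequence, since inversion is an involution.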
Andreas' program has found 580 "signatures", states of the puzzle. How do I get back from one of those to the "regular form"? That was the question discussed by Andreas, Pete and me (and this is more or less the only challenge on a two colour version of the Pocket Cube). Obviously, you assume that your sequences are performed on a "regular 3 Quads, 3 Stripes" (which is a "solved" two colour Pocket (Corner) Cube)? E.g. U R' F2 U F U' R' F2 R R' F' R U F' U' R U' B' R B' D' B R D' R' R D2 R' B' D B D2 R' B will not change anything recognizable on a two colour version. Still, this is quite an achievement that you have found these three basic sequences without the help of a computer!
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Wed Nov 20, 2013 5:12 pm
Have you seen this in the other thread?
leonid wrote: I received mine! Was so excited that I accidentally pushed a block back a bit too hard.
The same happened to me today. OK, I admit, it was hard work to produce this special state.
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Wed Nov 20, 2013 5:21 pm
Hi Konrad,
Quote: How do I get back from one of those to the "regular form"?
I didn't discuss it. My analysis is only based on a regular shape. I explored sequences to permute pieces without destroying the shape.
Quote: Andreas' program has found 580 "signatures", states of the puzzle.
Is it possible to download this program? I don't know the functionality of this program.
Quote: How do I get back from one of those to the "regular form"? That was the question discussed by Andreas, Pete and me.
Ok, my posting was different to this intention. Pete did good work with his table from Mon Nov 11, 2013 4:18 pm. I solve the shape with intuition.
The problem is the permuted pieces in the correct shape (4-colour version or CubeTwist). I ordered the 4-colour version of this cube. I hope the nice patterns you posted here are possible with it. If the 2 opposite cubes are swapped, my sequence A F2 A' is the solution.
Quote: Do you assume that your sequences are performed on a "regular 3 Quads, 3 Stripes"?
I didn't understand this exactly. I'll try to answer. I checked all the sequences with a CubeTwist version; they are ok. But the single corner is at right/top/back and the 2x2x2 subcube at left/front/down. I used a different start position: a 180-degree rotation about the z axis from your start position.

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Wed Nov 20, 2013 5:28 pm
Hi Konrad,
are the nice patterns possible with the 4-colour version? That means that the colours on each face of the 2x2x2 subcube are equal.

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Wed Nov 20, 2013 6:33 pm
Andrea wrote: Hi Konrad, are the nice patterns possible with the 4-colour version? That means that the colours on each face of the 2x2x2 subcube are equal.
Thanks for your answers. (I have edited my post above a bit.) When I typed my reply I had not looked at all your sequences. Nice finding! To answer your question: I believe it is possible, but I would have to check it with the CT Cube.
EDIT: I tried to get the special pattern on a "3 Quads, 3 Stripes" CT cube. I could not do it in a way that the 2x2x2 corner of 1x1x1 cubies remains solved. I'm not sure if it is possible at all.
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Thu Nov 21, 2013 9:52 am
Hi Konrad,
Quote: E.g.
U R' F2 U F U' R' F2 R R' F' R U F' U' R U' B' R B' D' B R D' R' R D2 R' B' D B D2 R' B will not change anything recognizable on a two colour version.
Yes! On a 2-colour cube this has no effect.
My solution: try intuitively to build the shape. There are some difficult cases with sequences. Two opposite corners are exchanged: A F2 A'. On a 2-colour cube this makes a nice pattern. A corner and a stripe are exchanged: (A + B + C)'. Alternatively, the second half of the posted sequence, B' R B' D' B R D' R' R D2 R' B' D B D2 R' B, exchanges a stripe with a corner; on a 2-colour cube this shows a (move/push) of 2 big corners. Then the shape is correct but edges are flipped or permuted, and for these situations I developed the sequences. Perhaps all other cases are intuitively solvable. Perhaps these sequences are useful for nice shape patterns. I cannot wait until my pocket cube arrives.

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Thu Nov 21, 2013 1:42 pm
Andrea, thank you for the additional clarification. As I wrote above, looking at your sequences and your reply made it perfectly clear how you will solve the puzzle. I hope it arrives soon.
Congratulations that you have mastered the "3 Quads, 3 Stripes" puzzle without the help of a computer program! You seem to be the first, as far as I know.
Just to make clear what I meant with the "regular form" of the 3 Quads, 3 Stripes:
Konrad wrote: The bandaged 3x3x3 was Andreas Nortmann's 2013930080085E aka "3 Quads, 3 Stripes". For the two colour version the solution is: Reconstruct the cubic shape.
The colours of the tiles are not relevant; any permutation of the pieces that shows this state is "regular" and has Andreas' signature 2013930080085E. Naturally, it has nothing to do with the location of the 2x2x2 sub-cube. Personally, I prefer diagrams where it is located at UFR. This shows the overall symmetry quite nicely.
BTW, I can now prove that the nice pattern of leonid can be made on a 4-colour version. (I'm currently not at home, but I'll write more later or by tomorrow.)
PS: Your A F2 A' is a bit similar to one of Andreas Nortmann's sequences.
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Thu Nov 21, 2013 3:17 pm
Hi Andrea and others who might be interested,
you had asked about the program of Andreas Nortmann. Actually, Andreas has made available an Analyser program in this thread:
Andreas Nortmann wrote: Hi guys, after the presentation of the 3X3X3 DIY Bandaged Cubes I thought it a good moment to start another thread (my third during the years after 2004 and 2010) in which I present my work about bandaged 3x3x3s and other bandaged puzzles. This program presents English language on every Windows system except those with German settings. What has changed? The most important thing is this: I succeeded in generalizing the algorithm for "dead ends" for all puzzles, and therefore the number of essentially different 3x3x3s is now 5844, of which 5705 are non-trivial, of which 3563 can be implemented with the 3X3X3 DIY Bandaged Cubes.
The 3 Quads, 3 Stripes is one of the 3563. You can download this Analyser program in the other thread.
Additionally, Andreas wrote in that thread:
Andreas Nortmann wrote: The project was about programming a generator for human-readable solution strategies. I wanted to create a solution strategy for every one of the 5844 bandaged cubes in the program. I froze that project because of the museum. Before I froze it I succeeded in some preliminary steps. These allowed me to calculate:
- the number of different signatures each bandaged variant can have by being turned.
- the number of permutations for each bandaged variant.
the restriction factor (the quotient between the naive and true number of permutations) for each bandaged variant;
an automatic solution strategy for the ca. 3900 bandaged cubes whose number of permutations (ca. 4 million, if I remember right) is low enough.
Some days ago I just dug into my archives and grabbed the file for the "3 Quads and 3 Stripes" cube. I even outlined an article for CFF in which I wanted to present all kinds of statistics about the 5844 bandaged variants.
BTW, CFF = "Cubism For Fun" is the magazine of the NKC = Nederlandse Kubus Club. (The NKC has organized the DCD = Dutch Cube Day since 1981 without interruption.) Probably you can get more details about this solution program directly from Andreas via PM.
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Thu Nov 21, 2013 4:58 pm
Joined: Wed Apr 13, 2011 8:37 am
Location: Germany

Hi Konrad,
Quote: Congratulations that you have mastered the "3 Quads, 3 Stripes" puzzle without the help of a computer program! You seem to be the first, as far as I know
Sorry, this is not correct. I found some configurations which are not solvable with my sequences. One example is a clean swap of two stripes. This is equivalent to a swap of two edges and two opposite corners. Perhaps it's not possible to solve this puzzle without paper; it's difficult to memorize many states and sequences without paper. My analysis was not complete, but a first step.
Cheers,

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
doctor twist Posted: Thu Nov 21, 2013 7:55 pm
Joined: Tue Jun 11, 2013 12:48 pm

Dear puzzlers,
the corner pocket cube is a really fascinating puzzle. I must admit that I underestimated it in the very beginning. At the homepage of Meffert's puzzle shop the skill level for the 2-coloured version is rated with 2 stars and the 4-coloured version with 4 stars. Just for comparison: the Megaminx is rated with 6 stars (maximum difficulty).
If it is true, as Andrea assumed, that it might not be possible to solve the corner pocket cube without pen and paper, then I would say that this is one of the hardest puzzles I know. I have the 4-coloured version and I was fiddling around with it for quite a while. Then I solved it accidentally, but actually I have no real idea how I did it. While solving it I realised that there are certain "key" states which repeated very often. I also found some easy moves to switch between certain states. I guess maybe it could be possible to subsume similar states into groups, sort of analogous to the Fridrich method, where you can decrease the 54 OLL cases (not sure if this number is correct) down to nine "2-look OLL" algorithms. I wish I had the time to analyse that puzzle more deeply. It is kind of new for me. I wish everyone good luck!

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andreas Nortmann Posted: Fri Nov 22, 2013 1:30 am
Joined: Mon Aug 02, 2004 7:03 am
Location: Koblenz, Germany

Konrad wrote: you had asked about the program of Andreas Nortmann. Andreas has made available an Analyser program in this thread.
Thank you, Konrad, for posting this. I was absent for some days. Family goes first. As hinted by Konrad, I haven't made the solver public yet. It is improbable that I will.

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Fri Nov 22, 2013 6:25 am

Back to Leonid's special pattern on a 4-coloured version:
Konrad wrote: ... I tried to get the special pattern on a "3 Quads, 3 Stripes" CT cube. I could not do it in a way that the 2x2x2 corner of 1x1x1 cubies remains solved.
The state on a "3 Quads, 3 Stripes" would look like this, in Andreas' representation (please be aware of the difference between Andrea's and Andreas' names!). Please note, Andreas uses a different colour scheme!
Performing this sequence on a solved "3 Quads, 3 Stripes" (please note that I hold the 2x2x2 block at UFR):
L', D, F, L, D', B', L, U, B, L', B', L, U', L', B, D, L', F', D', L, B, D'2, R', D, B, R', B', R'2, D, B', L, B', U'2, L, U, L', B', U'2, B, U'2, B', U'2, B, L, U', L', U, B, L', D, L', F'
I can produce this close-to-target state:
We see that two 3x3 edges need to be swapped; they are in an odd permutation. As we know, 3x3 edges can be in an odd permutation only if there exists an odd permutation of corners. The two single 3x3 corners (on the Pocket Cube: one 1x1x1 cubie and one extended) are located correctly. The remaining six 3x3 corners are bandaged with 3x3 edges to 2x1x1 corner-edges and can only be permuted together. The aggregated permutation of corners and edges when permuting corner-edges will always be an even permutation. This means that the target pattern can never be reached by legal turns!
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Fri Nov 22, 2013 9:11 am

Hi Konrad,
Quote: L', D, F, L, D', B', L, U, B, L', B', L, U', L', B, D, L', F', D', L, B, D'2, R', D, B, R', B', R'2, D, B', L, B', U'2, L, U, L', B', U'2, B, U'2, B', U'2, B, L, U', L', U, B, L', D, L', F'
Great job. Thank you for the answer. How did you find this difficult sequence?

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Fri Nov 22, 2013 3:12 pm

Andrea wrote: ... How did you find this difficult sequence?
Hi Andrea, actually I just combined a few sequences from Andreas' input. Here is a sequence that puts the 2-coloured version into Leonid's nice pattern state:
L, B', U', B, L'2, D, F'2, D', L, D, L', F'
doctor twist wrote: ...
If it is true what Andrea assumed, that it might not be possible to solve the corner pocket cube without pen and paper, then I would say that this is one of the hardest puzzles I know.
I guess Andrea meant that it needs a lot of work to find a solution on your own. If you use all the information contained in this thread, it is not so extremely difficult any longer. The spoiler in Pete's post from November 10th contains all the hints you need. A major part is in my diagram and in Andreas' method (based on his solution program). Finding your own method without the help of a computer program is the big challenge.
If you want help, you can use the spoiler on this thread in my post from July 2nd and the diagrams in my post from July 4th. As Pete has pointed out in his post on November 10th, the 4-colour version requires a smaller part of the overall solution of a "3 Quads, 3 Stripes" after you have reached the cubic form.
If you accept all the help you can get (that's what I have done):
1. Try to understand the relationship between the Pocket (Corner) Cube and the "3 Quads, 3 Stripes". Just try to cut off (mentally) the extended parts.
2. Scramble and solve it to the cubic shape (intuitively). There is only one special case, where the single corners (the 1x1x1 corner and the extended single 3x3x3 corner) are swapped. In this case you can swap the two single corners by L, B', U', B, L2, D, F2, D', L2, B', U, B, L'. It is a relatively easy to memorize [6:1] conjugate; F2 is the "1" part.
3. Use some of the sequences in my diagram from July 4th (in the other thread) to permute and flip the four pieces in the 2x2x2 sub-block.
Obviously you are done with the 2-colour version when it is back in cubic shape. No step 3 for this one!
BTW, I have ordered the 4-colour version today. This puzzle is really something special.
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Sun Nov 24, 2013 6:15 pm
Joined: Wed Apr 13, 2011 8:37 am
Location: Germany

Hi puzzlers,
I wrote a little C program to analyse the 3quads3stripes / pocket cube. The algorithm is an iterative deepening depth-first search without a pruning table. The calculation takes only a few seconds.
Open a command window where the program pocket.exe is and type pocket. The program then calculates some sequences. The position of the subcube is L/D/F. If you type "pocket ruf", then the 2x2x2 subcube is on R-U-F, like Konrad's position. The most difficult sequence is exchanging only 2 stripes (parity). The last pattern is the nice pattern from Leonid. The source code is included; it's possible to compile it under Linux / BSD etc.
The output from "pocket ruf":
Pocket Cube Analyse Program by Andrea
Exchange 2 stripes : D L' F2 D F D' L' F2 L F D' L B' L U L U' L' U' L' B D L' F' D' L B D' R' B' D
Exchange 3 stripes : L B' U' B L' D L' F2 D' L D L' F2 L D' L' D F2 L D' L' D F2 D' L2 B' U B L'
Exchange 2 opposite corners : L B' U' B L2 D F2 D' L2 B' U B L'
Turn 2 opposite corners : L B' U2 L U L' B' U B2 L' D' B R D B' L' D F D' L D L' F' D' L B D' R' B' D
3-cycle of 3 edges : L B' U' B L2 D F2 D' L D L' F' D' L B D' R D B' D' B R2 B' D2 L' F L D'
Flip 2 edges : L B' U' L' B D L' F' D' L B D' R' B' D L B' U' L' B D L' F' D' L B D' R' B' D
Exchange 3 stripes and turn corners : L B' U' B L' D L' F' L D' B' L U B L' B D' R D B'
Nice pattern (pushed blocks) : L' D F D' L D L' F' L' F L
Attachment: pocket.zip

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
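The search strategy Andrea describes, iterative deepening depth-first search with no pruning table, can be sketched generically. This is a toy illustration in Python; the state space and move set below are invented stand-ins, not Andrea's actual pocket.c representation:

```python
# Minimal sketch of iterative deepening DFS (IDDFS) as described above.
# The "puzzle" is a stand-in: a strip of 4 stickers with two face turns.
# Deepening one ply at a time makes the first solution found a shortest one,
# which is why the approach needs no pruning table for small state spaces.

def iddfs(start, goal, moves, max_depth=10):
    """Return a list of move names reaching `goal`, or None."""
    def dfs(state, depth, path, last):
        if state == goal:
            return path
        if depth == 0:
            return None
        for name, fn in moves.items():
            if name == last:  # the toy moves are involutions; avoid undoing
                continue
            found = dfs(fn(state), depth - 1, path + [name], name)
            if found is not None:
                return found
        return None

    for depth in range(max_depth + 1):
        result = dfs(start, depth, [], None)
        if result is not None:
            return result
    return None

# Toy moves: swap the left or the right pair of a 4-sticker strip.
MOVES = {
    "L": lambda s: (s[1], s[0], s[2], s[3]),
    "R": lambda s: (s[0], s[1], s[3], s[2]),
}
```

The real program would replace the toy state and moves with the bandaged-cube state and the legal face turns, but the deepening loop is the same idea.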
Andreas Nortmann Posted: Mon Nov 25, 2013 2:03 am
Joined: Mon Aug 02, 2004 7:03 am
Location: Koblenz, Germany

Konrad wrote: In Andreas' representation (please be aware of the difference between Andrea's and Andreas')
If you look again into that lengthy text file, you can find this line:
22039310000BC2 L=008 F L D' L B' U B L' (my notation, obviously)
That means you can restore the target signature with only 8 moves.

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Mon Nov 25, 2013 4:21 am

Andreas Nortmann wrote: ... That means you can restore the target signature with only 8 moves.
Exactly. The shown signature is that of Leonid's nice pattern. After applying the sequence quoted by Andreas, you are back to the regular form. You still need to permute the 1x1x1 cubies, though. That's why I had shown this pretty lengthy sequence (a combination of Andreas' sequences): to create a pattern where the odd permutation of 1x1x1 edges is obvious. I was answering Andrea's question whether the 4-colour Pocket Cube can be permuted to Leonid's nice pattern. The answer is: not exactly. The shape can be reached naturally, but the 1x1x1 edges will be swapped.
P.S. Nice to see two posts by Andrea and Andreas in a row. @Andrea: This shows again your programming skills.
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Konrad Posted: Mon Nov 25, 2013 4:05 pm

Andrea has found a lot of good sequences with her program. Thanks a lot for sharing it with us. (Maybe, when I have more time, I'll set up a C environment and look at your program. For now I want to show how much I appreciate your skills by adding diagrams to your sequences.)
I thought that I should make a little compendium of A&A's sequences (A&A = Andrea & Andreas).
It is interesting that there is some overlap (several sequences found by Andreas were found by Andrea as well) and that some from Andrea are new. Here is all (or more than) you need to solve the four-colour version. The first step (back to the cubic form) is still to be done intuitively.
First (because his solution was earlier in the "3 Quads, 3 Stripes" topic), the most important sequences found by Andreas Nortmann's program, and then the sequences found by Andrea. If you want to check something by "copy and paste" into Gelatinbrain, here follow the sequences in plain text:

3 Quads, 3 Stripes 2013930080085E
We use David Singmaster's notation (normal turns clockwise): U=Orange F=Green L=Yellow D=Red B=Blue R=White
580 signatures, 432 permutations

Position Corner-Edges
a) L, B', U', B, L2, D, F2, D', L2, B', U, B, L' (13); inverted is identical

Position 1x1x1 Edges, 3-cycle
b) L, B', U', B, L', D, L', F2, D', L, D', B, R', B', D (15)
D', B, R, B', D, L', D, F'2, L, D', L, B', U, B, L'

Position 1x1x1 Edges, swap of two edges
c) B', L, U, L', B, L, B', U', B', U, B2, L', D', B, R, D, B', D, L', F, L, D' (22)
D, L', F', L, D', B, D', R', B', D, L, B'2, U', B, U, B, L', B', L, U', L', B

Flip 1x1x1 Edges
d) D', B, R, D, B', L', D, F, L, D', B', L, U, B, L', D', B, R, D, B', L', D, F, L, D', B', L, U, B, L' (30)
L, B', U', L', B, D, L', F', D', L, B, D', R', B', D, L, B', U', L', B, D, L', F', D', L, B, D', R', B', D

Orientate 1x1x1 corners
e) L', D, F, L, D', B', L, U, B, L', B', L, U', L', B, D, L', F', D', L, B, D2, R', D, B, R', B', R2, D, B' (30)
B, D', R'2, B, R, B', D', R, D'2, B', L', D, F, L, D', B', L, U, L', B, L, B', U', L', B, D, L', F', D', L

Andrea's Sequences

Exchange 2 stripes
D, L', F2, D, F, D', L', F2, L, F, D', L, B', L, U, L, U', L', U', L', B, D, L', F', D', L, B, D', R', B', D (31)
D', B, R, D, B', L', D, F, L, D', B', L, U, L, U, L', U', L', B, L', D, F', L', F'2, L, D, F', D', F'2, L, D'

3-cycle of Stripes
L, B', U', B, L', D, L', F2, D', L, D, L', F2, L, D', L', D, F2, L, D', L', D, F2, D', L2, B', U, B, L' (29)
L, B', U', B, L'2, D, F'2, D', L, D, L', F'2, D', L, D, L', F'2, L, D', L', D, F'2, L, D', L, B', U, B, L'

Exchange 2 opposite corners: identical to "a)" above
Turn 2 opposite corners: identical to "e) inverted" above

Pure 3-cycle of 3 edges (the impure version in b) above is just 15 turns):
L, B', U', B, L2, D, F2, D', L, D, L', F', D', L, B, D', R, D, B', D', B, R2, B', D2, L', F, L, D' (28)
D, L', F', L, D'2, B, R'2, B', D, B, D', R', D, B', L', D, F, L, D', L', D, F'2, D', L'2, B', U, B, L'

Flip 2 edges: identical to "d) inverted" above

Exchange 3 stripes and turn corners:
L, B', U', B, L', D, L', F', L, D', B', L, U, B, L', B, D', R, D, B' (20)
B, D', R', D, B', L, B', U', L', B, D, L', F, L, D', L, B', U, B, L'

Nice Pattern (pushed blocks), from solved to pattern:
L', F', L, F, L, D', L', D, F', D', L

We can still offer the position of the "first solver without the help of a program". Andrea has achieved a good part, but the rest was done by programming (which is an achievement by itself). Do not underestimate the first step: back to the cubic form of the Pocket Corner just using your intuition. The text file generated by Andreas can be a help if you are completely lost. If you want to use it, the problem remains to generate the hexadecimal signature.
Yesterday I reduced the 580 cases shown in Andreas' table to 200. (Please note that you can turn the cubic form around the corner RUF, generating three cases out of one sequence. I'm not sure why Andreas has 579 cases in his text file, but only 279 were duplicates when turning the whole cube in space.) I started with a document in which I included pictures of the "3 Quads, 3 Stripes" for half of the 200 cases. I gave up this idea, because it is definitely not so easy to mentally map a shape-shifted Pocket Corner Cube to a 2D picture of a "3 Quads, 3 Stripes".
My collection at: http://sites.google.com/site/twistykon/home

Post subject: Re: How to solve Justin's Pocket (Corner) Cube?
Andrea Posted: Mon Nov 25, 2013 5:31 pm

Hi Konrad,
great job. You put it together very visually. One more sequence is
L' D F D' L B' L U2 B L' B D' R D B'
This is equivalent to b): turn corners and a 3-cycle of edges. A good idea is to use short sequences, e.g. use a) to exchange stripes, use b) to make an edge 3-cycle, and turn the corners later. The worst case is the swap of 2 stripes. Perhaps it's easier to solve this case with intuition / trial and error than to memorize 31 moves.
Γ-Extension of Binary Matroids
ISRN Discrete Mathematics, Volume 2011 (2011), Article ID 629707, 8 pages
Research Article
Department of Mathematics, Faculty of Sciences, University of Urmia, P.O. Box 57135, Urmia, Iran
Received 13 August 2011; Accepted 22 September 2011
Academic Editors: Y. Hou and T. Prellberg
Copyright © 2011 Habib Azanchiler. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We extend the notion of a point-addition operation from graphs to binary matroids. This operation can be expressed in terms of an element-addition operation and the splitting operation. We consider a special case of this construction and study its properties. We call the resulting matroid of this special case a Γ-extension of the given matroid. We characterize circuits and bases of the resulting matroids and explore the effect of this operation on the connectivity of the matroids.

1. Introduction

Slater [1] defined a few operations for graphs which preserve connectedness of graphs. One such operation is a point-addition (vertex-addition) operation. This operation is defined in the following way. Let G be a graph and let W be a subset of the vertex set of G. Let G' be the graph obtained from G by adding a new vertex adjacent to the vertices of W. The graph G' is said to be obtained from G by the point-addition operation. Letting the new vertex be w, for convenience we denote the resulting graph by G_W; thus V(G_W) = V(G) ∪ {w} and E(G_W) = E(G) ∪ {wv : v ∈ W}. Point-addition has several applications in graph theory. For example, Slater classified 4-connected graphs using point-addition along with some other operations [2]. If |V(G)| = n, then the new vertex can be joined to at most n vertices of the graph. That means we can add at most n edges to the original graph.

Definition 1.1. Let M be a binary matroid of rank r on a set E.
Let A be a matrix representing M over GF(2), and let A' be the matrix obtained from A in the following way.
(1) Adjoin k columns to A, with labels γ1, …, γk; denote the resulting matrix by A''.
(2) Adjoin a new row to A'' whose entries are zero except in the columns corresponding to γ1, …, γk, where it takes the value 1.
Let M' be the vector matroid of the matrix A'. We say that M' is obtained from M by the point-addition operation, and we call M' the point-addition matroid or Γ-extension of M. Denote by Γ the set of columns adjoined to A in the first step, that is, Γ = {γ1, …, γk}. The second step then consists of splitting the matroid with respect to the set Γ (see [3, 4]). In fact, the matroid M' is obtained by element addition and the generalized splitting operation [5].
As an immediate consequence of the definition, we have the following result. Let u and v be two vertices of G. Then the addition of an edge uv results in the smallest supergraph of G containing the edge uv.

Proposition 1.2. Let M(G) be a cycle matroid of rank r, and let G' be the graph obtained from G by adding k adjacent edges. Then the point-addition matroid is graphic, and it is the cycle matroid of G'.

Proof. Let A be a representation matrix of M(G) over GF(2). Let the matrix A'' be obtained from A by adding k column vectors, say γ1, …, γk. Suppose that A' is obtained from A'' by adding a new row whose entries are zero except in the columns corresponding to γ1, …, γk, where it takes the value 1. Thus the vector matroid of A' is a binary matroid with ground set E(G) ∪ Γ. Since the added edges are adjacent in G', the splitting of the matroid with respect to Γ is graphic (see [5]), and it equals the cycle matroid of the graph obtained by the splitting operation with respect to Γ. It follows that the point-addition matroid is M(G').

We assume that the reader is familiar with elementary notions in matroid theory, including minors, binary matroids, and connectivity. For an excellent introduction to the subject, read Oxley [6].

2. Γ-Extension of a Binary Matroid

If a matroid N is obtained from a matroid M by adding a nonempty set of elements, then N is called an extension of M. In particular, if a single element is added, then N is a single-element extension of M (see [6]).
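The two-step matrix construction above (adjoin new columns, then adjoin a "splitting" row that is 1 exactly on the new columns) is purely mechanical, so it can be sketched in code. This is a hedged illustration over GF(2) with rows as 0/1 lists; the function name and data layout are mine, not the paper's, and only the matrix-building step is modelled, not the matroid-level consequences:

```python
# Sketch of the Gamma-extension matrix construction: for each chosen
# column index, adjoin a parallel copy, then adjoin one new row that is
# 1 exactly on the adjoined columns (the splitting row).

def gamma_extension(A, base_cols):
    """A: list of rows over GF(2) (lists of 0/1);
    base_cols: indices of the columns to duplicate (elements of a base)."""
    n_new = len(base_cols)
    # Step 1: adjoin, for each chosen column, a parallel copy.
    out = [row + [row[j] for j in base_cols] for row in A]
    # Step 2: the splitting row -- zero on old columns, one on new columns.
    split_row = [0] * len(A[0]) + [1] * n_new
    out.append(split_row)
    return out
```

Note that after step 2 each new column γ_i is no longer parallel to its source column: the two differ in the last coordinate, which is what makes the construction more than a trivial parallel extension.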
Another term that is sometimes used instead of single-element extension is addition (see [7]). Now we consider a special case of the operation introduced in the first section.

Definition 2.1. Let M be a binary matroid of rank r on a set E, and let A be the standard representation of M over GF(2). Let B be a base of M, and let T be a subset of B. We obtain a matrix A_Γ in the following way.
(1) Obtain a matrix A'' from A by adjoining columns, say γ1, …, γk, parallel to the columns of the elements of T, respectively.
(2) Split the matrix A'' with respect to the set Γ, where Γ = {γ1, …, γk}. Denote the resulting matrix by A_Γ.
Let M_Γ be the vector matroid of the matrix A_Γ. We say that M_Γ is a Γ-extension of M.

Note that M_Γ is a binary matroid with ground set E ∪ Γ. The transition from M to M_Γ is called the Γ-extension operation on M. In particular, if |Γ| = k, it is called a Γk-extension operation, and, for k = 1, we call it a single-Γ-extension operation. The next example illustrates this construction for the dual of the Fano matroid.

Example 2.2. Let M be the dual of the Fano matroid F7, and let E be the ground set of M. Take the standard representation A of M over GF(2) and a subset T contained in a base of M. The corresponding matrix A_Γ is obtained as in Definition 2.1, and the vector matroid of A_Γ is the matroid M_Γ.

Corollary 2.3. Let M be a binary matroid on E, let T be a subset of a base of M, and let M_Γ be the Γ-extension of M on the set T. Then the restriction of M_Γ to E is M; that is, M_Γ is an extension of M.

Corollary 2.4. Let r and r_Γ be the rank functions of the matroids M and M_Γ, respectively. Then r_Γ(M_Γ) = r(M) + 1.

With the help of Lemma 2.5, we characterize the circuits of the matroid M_Γ.

Lemma 2.5. (1) Every circuit of M is a circuit of M_Γ. (2) Every circuit of M_Γ that is not a circuit of M contains at least one element of Γ. (3) Every circuit of M_Γ contains an even number of elements of Γ.

The proof follows from the construction of the matrix A_Γ.

Remark 2.6. Let M_Γ be a single-Γ-extension of M (i.e., |Γ| = 1). Then every circuit of M_Γ is a circuit of M and vice versa. In fact, the added element is a coloop in the resulting matroid.

Theorem 2.7. Let M be a binary matroid on E with representation matrix A over GF(2), and let T be a subset of a base of M.
Then a subset C of E ∪ Γ is a circuit of M_Γ if and only if one of the following conditions holds:
(1) C = {t_i, γ_i, t_j, γ_j}, where t_i and t_j are distinct elements of T and γ_i, γ_j are the columns adjoined parallel to them;
(2) C = C1 ∪ Γ1, where C1 ⊆ E, Γ1 ⊆ Γ has even cardinality, and C1 is such that C1 Δ T1 is a circuit in M, where T1 is the set of elements of T parallel to the elements of Γ1 and Δ denotes symmetric difference.

Proof. If C is as in (1), then, by the construction of A_Γ, the corresponding four columns sum to zero over GF(2) and no proper subset does, so C is a circuit of M_Γ. Now let C be as stated in (2). If Γ1 is empty, then C = C1 is a circuit of M and hence of M_Γ. Suppose that Γ1 is nonempty and C1 Δ T1 is a circuit of M. Then the columns of C sum to zero in A_Γ, and C is minimal with this property, so C is a circuit of M_Γ.
Conversely, let C be a circuit of M_Γ. We have two cases.
(I) C ∩ Γ = ∅. Then C ⊆ E, so C is a circuit in M and condition (2) holds with Γ1 = ∅.
(II) C ∩ Γ ≠ ∅. If C consists of two parallel pairs, then condition (1) holds. Otherwise, replacing the elements of C ∩ Γ by their parallel partners in T (cancelling elements that occur twice) yields a circuit C1 Δ T1 of M, and condition (2) holds.

We characterize the independent sets of M_Γ in terms of the independent sets of M. Firstly, we have the following lemma.

Lemma 2.8. (1) Every independent set of M is independent in M_Γ. (2) Every subset of Γ is independent in M_Γ.

The proof is straightforward.

Remark 2.9. Let M_Γ be a single-Γ-extension of M. Then every independent set in M is also independent in M_Γ and vice versa.

Theorem 2.10. Let M be a binary matroid on E, and let M_Γ be the Γ-extension matroid of M with respect to T. Let I(M) be the collection of independent sets of M. Then a subset I of E ∪ Γ is an independent set of M_Γ if and only if one of the following conditions holds:
(1) I ⊆ E and I ∈ I(M);
(2) I = I1 ∪ Γ1, where I1 ∈ I(M), Γ1 is a nonempty subset of Γ, and I contains no circuit of M_Γ.

Proof. If I ⊆ E with I ∈ I(M), then clearly I is an independent set in M_Γ. Now suppose that I = I1 ∪ Γ1 contains no circuit of M_Γ; then I is independent in M_Γ by definition.
Conversely, let I be an independent set in M_Γ. We have two cases.
(I) I ∩ Γ = ∅. Then I ⊆ E, and I is independent in M.
(II) I ∩ Γ ≠ ∅. Let Γ1 = I ∩ Γ and I1 = I \ Γ1. We prove that I1 is an independent set in M. On the contrary, suppose that I1 contains a circuit of M, say C. Then C is a circuit of M_Γ contained in I, a contradiction, since I is independent in M_Γ. Hence I1 ∈ I(M), and I contains no circuit of M_Γ. This completes the proof of the theorem.

Corollary 2.11. Let I(M) and I(M_Γ) denote the collections of independent sets of M and M_Γ, respectively. Then I(M) ⊆ I(M_Γ).

Corollary 2.12. A subset X of E is independent in M if and only if X is independent in M_Γ.

Corollary 2.13. Let r and r_Γ be the rank functions of M and M_Γ, respectively. Then r_Γ(X) = r(X) for X ⊆ E.

In the next theorem, we characterize the bases of the matroid M_Γ in terms of the bases of M. Firstly, we have the following lemma.

Lemma 2.14. Let X ⊆ E. Then X is an independent set in M if and only if X is an independent set in M_Γ.

The proof is straightforward.

Corollary 2.15. Let X be a subset of E. Then r_Γ(X) = r(X), where r and r_Γ are the rank functions of M and M_Γ, respectively.

Theorem 2.16. A subset B' of E ∪ Γ is a base for M_Γ if and only if B' = B1 ∪ Γ1, where Γ1 is a nonempty subset of Γ, B1 is an independent set of M with |B1| + |Γ1| = r(M) + 1, and B' contains no circuit of M_Γ.

Proof. Suppose B' = B1 ∪ Γ1 is as stated. Since B' contains no circuit of M_Γ, it is independent in M_Γ, and |B'| = r(M) + 1 = r_Γ(M_Γ) by Corollary 2.4. We conclude that B' is a base for M_Γ.
Conversely, let B' be a base for M_Γ. Firstly, we show that B' ∩ Γ ≠ ∅. On the contrary, suppose that B' ⊆ E. Then, by Lemma 2.14, B' is independent in M, and, by Corollary 2.15, r_Γ(B') = r(B') ≤ r(M) < r_Γ(M_Γ), so B' cannot span M_Γ, a contradiction. Now let Γ1 = B' ∩ Γ and B1 = B' \ Γ1. Then B1 is independent in M by Theorem 2.10, |B1| + |Γ1| = |B'| = r(M) + 1, and, being independent in M_Γ, B' contains no circuit of M_Γ. This completes the proof of the theorem.

Corollary 2.17. Every base of M_Γ contains at least one element of Γ.

3. Connectivity of M_Γ

Let M be a binary matroid on a set E and let A be the representation matrix of M over GF(2). If M is bridgeless, then the Γ-extension of M with respect to a singleton subset of a base yields a disconnected matroid.

Lemma 3.1. Let e be a coloop in a matroid M and e ∉ T. Then e is a coloop in M_Γ.

The proof is straightforward.

Corollary 3.2.
Suppose that no element of E is a coloop of M. Then M_Γ has no coloops.

Theorem 3.3. Let |Γ| ≥ 2. If M is a connected matroid, then so is M_Γ.

Proof. Assume that M is connected. We show that for every pair of elements x, y ∈ E ∪ Γ there is a circuit of M_Γ containing x and y. We have three cases.
(1) Let x, y ∈ E. By hypothesis, M is connected, so there is a circuit of M, say C, containing x and y. Since C is a circuit in M_Γ, we are through.
(2) Let x = γ_i and y = γ_j, with parallel partners t_i, t_j ∈ T. Then the 4-circuit {t_i, γ_i, t_j, γ_j} in M_Γ contains x and y.
(3) Let x ∈ E and y = γ_i ∈ Γ. By assumption |Γ| ≥ 2, so there is an element γ_j ∈ Γ with j ≠ i. Since M is connected, there is a circuit of M, say C, containing x and t_i. The 4-circuit {t_i, γ_i, t_j, γ_j} of M_Γ meets C in t_i, so, by the circuit-exchange property, there is a circuit C' in M_Γ such that x ∈ C' and γ_i ∈ C'. This completes the proof of the theorem.

Remark 3.4. The converse of the above theorem is not true.

Theorem 3.5. Let M_Γ be a Γ3-extension matroid of M. If M is a 3-connected matroid on E, then M_Γ is 3-connected.

Proof. On the contrary, suppose that M_Γ is not 3-connected. Then M_Γ has a 1-separated or 2-separated partition. Let (X, Y) be a 2-separated partition of E ∪ Γ; that is, min{|X|, |Y|} ≥ 2 and r_Γ(X) + r_Γ(Y) − r_Γ(M_Γ) ≤ 1. (*)
We consider three cases.
(i) Let X ⊆ Γ and Y ⊇ E. By Lemma 2.8, X is independent in M_Γ, so r_Γ(X) = |X| ≥ 2. Also, by Lemma 2.14, r_Γ(Y) ≥ r(M), so r_Γ(X) + r_Γ(Y) − r_Γ(M_Γ) ≥ 2 + r(M) − (r(M) + 1) ≥ 2 would fail only if |X| ≤ 1. This is a contradiction to (*).
(ii) Let Y ⊆ Γ and X ⊇ E. This case is symmetric to (i).
(iii) Let X = X1 ∪ Γ1 and Y = Y1 ∪ Γ2, where X1, Y1 ⊆ E are nonempty and Γ1, Γ2 ⊆ Γ. Then (X1, Y1) is a partition of E, and (*) forces r(X1) + r(Y1) − r(M) ≤ 0, so (X1, Y1) is a 1-separated partition for M. This is a contradiction to the fact that M is 3-connected.
By the same argument, one shows that M_Γ does not have a 1-separated partition.

In the last theorem, the condition |Γ| = 3 is necessary. Consider the following example.

Example 3.6. Take a 3-connected binary matroid M with representation matrix A, and let T be a two-element subset of a base, so that Γ = {γ1, γ2}. By row operations on A_Γ, one can exhibit a 2-separation, so M_Γ is not 3-connected. If |Γ| = 1, then M_Γ has a coloop, and it is not 3-connected.

In general, we state the following result whose proof is immediate.

Corollary 3.7. Let M be a k-connected binary matroid and |Γ| = 1. Then M_Γ is not k-connected.

References
1. P. J. Slater, “A classification of 4-connected graphs,” Journal of Combinatorial Theory, Series B, vol. 17, pp. 281–298, 1974.
2. P. J. Slater, “Soldering and point splitting,” Journal of Combinatorial Theory, Series B, vol. 24, no. 3, pp. 338–343, 1978.
3. T. T. Raghunathan, M. M. Shikare, and B. N. Waphare, “Splitting in a binary matroid,” Discrete Mathematics, vol. 184, no. 1–3, pp. 267–271, 1998.
4. M. M. Shikare, “Splitting operation and connectedness in binary matroids,” Indian Journal of Pure and Applied Mathematics, vol. 31, no. 12, pp. 1691–1697, 2000.
5. M. M. Shikare, G. Azadi, and B. N. Waphare, “Generalized splitting operation and its applications to binary matroids,” preprint.
6. J. G. Oxley, Matroid Theory, Oxford Science Publications, The Clarendon Press / Oxford University Press, New York, NY, USA, 1992.
7. K. Truemper, Matroid Decomposition, Academic Press, Boston, Mass, USA, 1992.
Mplus Discussion >> Time-varying Covariates

Stat_love posted on Wednesday, November 16, 2005 - 8:12 am
Dear Muthén,
These days I use Mplus a lot. First, I ran a latent class analysis and got 2 classes, without considering any covariates. But I have an age variable for each wave:
ID Sex Age1 Y1 Age2 Y2 Age3 Y3 ... Age5 Y5
1 F 8 0 10 0 11 1 ... 16 1
2 M 10 0 12 1 15 1 ... 18 1
I want to consider an age effect, even though the age range is not equal across subjects. I found Ex. 6.10 in the User's Guide 2004. I think the Age variable affects the Y's, so I want to apply this format, but that example uses time-invariant and time-varying covariates. I think the time-varying covariates here are the ages at each wave. Is that correct or not? I really want to account for age at each wave. When I just use the age variable, I have lots of missing data, because the age range is not equal: subject 1 has starting age 8, but subject 2 has starting age 10, and so on. Could you give me a good suggestion?

bmuthen posted on Wednesday, November 16, 2005 - 8:53 am
Perhaps ex 6.12 is useful. It sounds like you have "individually-varying times of observation", so that people are not of the same age at a given measurement occasion. Ex 6.12 also has random slopes for the time-varying covariates, which you don't have to use.

Stat_love posted on Wednesday, November 16, 2005 - 11:26 am
Dear Muthén,
You said that ex 6.12 is useful, because a1-a5 are individually-varying times of observation. I got an error message.
Data format: ID FID SubID S R a1 a2 a3 a4 a5 y1 y2 y3 y4 y5
A1 1 1 F 1 8 10 12 17 18 0 0 1 0 1
A2 2 1 F 0 7 10 11 15 17 0 0 0 0 1
Q1 79 2 M 1 9 12 15 17 18 0 1 1 0 1
• a1-a5 are age variables; the range is not equal for each subject.
• y1-y5 are binary: 0 = absent / 1 = present.
• Each subject has 5 waves (longitudinal data).

Input:
Data: File is ex612.txt;
Variable: Names are ID FID SubID S R a1-a5 y1-y5;
USEVARIABLE ARE a1-a5 y1-y5;
Categorical = y1-y5;
TSCORES = a1-a5;
Class = C (2);
Analysis: Type = Mixture;
Starts = 50 2;
Model: %overall%
i s | y1@0 y2@1 y3@2 y4@3 y5@4 AT a1-a5;
Output: Tech1 Tech8;

Purpose: find how many classes there are; that is why I use 'Type = Mixture' and 'Class ='. If I use the age variable directly, it has lots of missing values, so I just consider its effect on y1-y5. The error concerns the 'Class =' and 'Type = Mixture' commands, but I want to analyse the mixture model. How can I solve this problem? I also want to use the age-variable information.

Stat_love posted on Wednesday, November 16, 2005 - 12:36 pm
I fixed the input:
Data: File is ex612.txt;
Variable: Names are ID FID SubID Sex Risk a1-a5 y1-y5;
USEVARIABLE ARE a1-a5 y1-y5;
Categorical = y1-y5;
TSCORES = a1-a5;
Class = c(2);
Analysis: Type = Mixture Random;
Model: %Overall%
i s | y1-y5 AT a1-a5;
Output: Tech1 Tech8;
(Q1) Is this correct?
(Q2) I got this message: "Unperturbed starting value run did not converge. ... FOR PARAMETER 2 IS 0.11420395D-01." So in the end I didn't get a correct result. How can I fix it? I tried to change the start values; it's not working. Could you give me your advice?

bmuthen posted on Wednesday, November 16, 2005 - 4:50 pm
This type of question is better answered by sending your input, output, data, and license number to support@statmodel.com.
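For readers unfamiliar with the TSCORES / AT syntax in the inputs above: with individually varying times of observation, each person contributes their own time scores (here, ages), and the linear growth part of the model is intercept + slope × time evaluated at those person-specific times. A toy illustration of that arithmetic (plain Python with made-up growth-factor values, not Mplus):

```python
# Toy illustration of "individually varying times of observation":
# the linear growth part for person j is eta_jt = i_j + s_j * a_jt,
# where a_jt are that person's own time scores (ages).  In Mplus the
# growth factors i_j, s_j are latent and estimated; here they are given.

def growth_curve(intercept, slope, times):
    """Model-implied linear trajectory at person-specific time scores."""
    return [intercept + slope * t for t in times]
```

Two people with the same growth factors but different ages at each wave (e.g. starting at 8 vs 10) are thus evaluated at different points on the same line, which is exactly what fixed time scores like y1@0 y2@1 ... cannot express.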
Lines n and p lie in the xy-plane. Is the slope of line n less than the slope of line p? (posted 11 Nov 2006)

(1) Lines n and p intersect at the point (5,1).
(2) The y-intercept of line n is greater than the y-intercept of line p.

artshep: I keep ending up with answer E, i.e. both statements together are not sufficient. However, this is not the correct answer. Here's my work, using the equation for a line, y = Mx + B, where M is slope and B is y-intercept.
(1) 5*Mn + Bn = 5*Mp + Bp. Not sufficient.
(2) Obviously not sufficient.
(1 & 2 together) 5*Mn + Bn = 5*Mp + Bp. Simplifying this equation gives: Mp / Mn = Bn - Bp. So, to know if the slope of line n is less than the slope of line p, we have to know whether the left-hand side of this equation is greater than 1. For it to be greater than 1, Bn has to be at least 1 greater than Bp, but we don't know that for sure. We only know that Bn is greater than Bp, not to what extent. Where am I going wrong here?

yogeshsheth: Each statement is individually insufficient. Combining, here is the algebraic method. Let the equations of the two lines be y1 = m1*x1 + c1 and y2 = m2*x2 + c2. The point (5,1) lies on both lines, so it must satisfy both equations:
1 = 5*m1 + c1
1 = 5*m2 + c2
thus 5*m1 + c1 = 5*m2 + c2 ... (A)
From statement two we get c1 > c2 ... (B)
From (A) and (B), m1 < m2; thus sufficient. Hence the answer is C. Hope this helps.

SVP: The answer is (C). By a physical reasoning: both lines pass through the same point (5,1), and each also passes through a second point that differs between the two lines and is fixed by a rule (the y-intercept of n is greater than the y-intercept of p), so we can conclude how the two slopes are related to one another. By the mathematical approach:
For line n: y = a(n)*x + b(n)
For line p: y = a(p)*x + b(p)
We know from (2) that b(n) > b(p), and we also have:
1 = a(n)*5 + b(n) <=> b(n) = 1 - 5*a(n)
1 = a(p)*5 + b(p) <=> b(p) = 1 - 5*a(p)
So 1 - 5*a(n) > 1 - 5*a(p) <=> a(p) > a(n).

artshep: Ahhh... yep, I see it now. The answer via physical reasoning is the best way for me to look at it. Thanks, Artis.

Manager: You can also draw the lines on the x-y plane and find out that, taking both statements together, the slope of line n is always less than that of line p (whether both slopes are negative or positive).

josh478: I have a question about this: I don't get the algebraic method. However, I was thinking that if the lines intersect, then the line originating higher, or sufficiently lower, than the other would result in a greater slope (assuming the slope can be positive or negative). But if the y-intercept for n is +1 and the y-intercept for p is -5, isn't the slope greater for p? At first it was straight C, until I drew a picture on my scrap paper and realized that a higher y-intercept doesn't seem to necessarily mean a greater slope. I am going wrong somewhere!! Help!

Senior Manager: I think we have to assume it is asking whether, using both statements together, the slope of line n is less positive than that of line p. Otherwise, if we were simply looking at the absolute value of the slope, then the slope of line n could be more or less than that of line p.

mitul: Can anyone please tell me: if there are two slopes with values 1 and -3, which one is greater?

Viperace: You have to tell us greater in what? Gradient, slope, y-intercept, x-intercept?

mitul: Sorry guys, let me rephrase my question. If the y-intercept of one line is 3 and the y-intercept of another line is -10, which y-intercept is greater: -10 (which is negative) or 3 (which is positive)?
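The combined-statements conclusion in the thread can be sanity-checked numerically. A quick sketch (function and variable names are mine): any line through (5,1) satisfies b = 1 - 5m, so a larger y-intercept forces a smaller slope.

```python
import random

def slope_through_5_1(b):
    # A line y = m*x + b passing through (5, 1) satisfies 1 = 5*m + b,
    # so its slope is pinned down by its y-intercept:
    return (1 - b) / 5

random.seed(0)
for _ in range(1000):
    b_n = random.uniform(-10, 10)
    b_p = random.uniform(-10, 10)
    if b_n > b_p:                        # statement (2)
        assert slope_through_5_1(b_n) < slope_through_5_1(b_p)
print("checked: for lines through (5, 1), a larger y-intercept means a smaller slope")
```

This also answers josh478's example directly: with intercepts +1 and -5, the slopes are 0 and 1.2, so the line with the greater intercept does indeed have the smaller slope.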
Finding symmetries of a triangular prism

Question (March 25th 2013, 06:32 AM): The above prism has three identical rectangular faces and equilateral triangles at the top and base. The positions of the faces of the prism have been numbered so that we may represent the elements of the group G of all symmetries of the prism as permutations of the set {1,2,3,4,5}. Write down all the symmetries of the prism in cycle form as permutations of the set {1,2,3,4,5} and describe each symmetry geometrically.

Re: Finding symmetries of a triangular prism (March 25th 2013, 10:51 AM): Here's just one: $2 \rightarrow 3, 3 \rightarrow 5, 5 \rightarrow 2, 1 \rightarrow 1, 4 \rightarrow 4$, i.e. a counter-clockwise rotation about the z-axis: (2 3 5)(1)(4) = (2 3 5). Can you find the others?

Re: Finding symmetries of a triangular prism (March 25th 2013, 11:01 AM): I've got eight with that one included.
Rotations: the identity; one counter-clockwise rotation about the z-axis = (235); two counter-clockwise rotations about the z-axis = (253); rotation by pi (where 1 and 4 change places) = (14)(23).
Reflections: (25), which cuts through the middle of 3, 1 and 4; (35), which cuts through the middle of 2, 1 and 4; (23), which cuts through the middle of 5, 1 and 4; and (14), a reflection in the horizontal plane.

Re: Finding symmetries of a triangular prism (April 3rd 2013, 04:29 AM): Very naughty: this is an Open University T.M.A. assessment question. Don't get caught!
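As a cross-check (my addition, not part of the thread): assuming, as the cycles quoted above indicate, that faces 1 and 4 are the triangles and 2, 3, 5 the rectangles, the symmetries are exactly the face permutations that preserve the triangle/rectangle partition, and they can be counted in a few lines.

```python
from itertools import permutations

triangles = {1, 4}       # top and base faces (labeling assumed from the cycles above)
rectangles = {2, 3, 5}   # the three rectangular side faces

# A rigid symmetry must send triangular faces to triangular faces and
# rectangular faces to rectangular faces; for a triangular prism every
# such permutation of the five faces is realized by an element of the
# full symmetry group, which has order 12.
symmetries = []
for p in permutations(range(1, 6)):
    perm = dict(zip(range(1, 6), p))
    if {perm[f] for f in triangles} == triangles:
        symmetries.append(perm)

print(len(symmetries))  # 12
```

The count comes out to 12, which suggests the list of eight above is missing four elements: the two remaining rotations by pi, (14)(25) and (14)(35), and the two rotary reflections (14)(235) and (14)(253), which combine the horizontal flip with a rotation about the z-axis.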
Coplay Trigonometry Tutors ...In addition to all the calculus courses I had in college, I also taught a calc course while student teaching. While it has been a while since I taught this subject, I do still feel I know it, and I am willing to put in the time to refresh my memory ahead of any tutoring sessions. I have taught ... 12 Subjects: including trigonometry, calculus, statistics, geometry ...My scores on the SAT and ACT were perfect (2400 and 36 respectively), though I too had to work hard to get there, so I know what it's like when you're just starting out. I've worked with kids suffering from ADD and ADHD, dyslexia, depression and speech disorders. I personally grew up inured to social anxiety so I can always empathize. 34 Subjects: including trigonometry, English, physics, calculus ...My background includes Master's degrees in Mathematics and Statistics, as well as Master's degrees in Computer Science and Electrical Engineering. I have been a practicing engineer for many years and thus I am familiar with many practical applications of math concepts to real world examples. M... 12 Subjects: including trigonometry, calculus, geometry, statistics ...Once learned, anything is possible. Precalculus is another subject I've been teaching at the college level for about four years now. The concepts are a bit more challenging, but I've learned how to make them clear. 11 Subjects: including trigonometry, physics, statistics, geometry ...I graduated high school in 1983 from William Allen High School, and I graduated with a degree in Mathematics from Cedar Crest College. I also earned a Secondary Teaching Certificate. I have been working in education since 1988 when I started substitute teaching. 9 Subjects: including trigonometry, geometry, algebra 1, algebra 2
Annotations for Piece = Part = Portion : Fractions = Decimals = Percents

Baker & Taylor: Explains how, in the language of mathematics, fractions, decimals and percents are three different ways of describing the same parts of things.

Random House, Inc.: Just as hola and bonjour mean "hello," in the language of math, fractions, decimals, and percents describe the same thing in slightly different ways. So why are so many kids bewildered by this math basic? Because rarely is the explanation of this important concept presented so clearly. Now there's PIECE = PART = PORTION, to offer clarity with hip graphic presentation to boot. Finally! Engaging antidote to mathphobia. Photography will appeal to visual learners.
The n-Category Café

Progic V
Posted by David Corfield

I've come across something promising for the Progic project. Apparently there is a way to complete the analogy: propositional logic : predicate logic :: Bayesian networks : ? The answer, it is claimed, is 'probabilistic relational models'. Now before we take a look at them, it would be worth considering in what sense Bayesian networks are propositional. And before that we need to know what a Bayesian network is. Well, first of all, it's a directed acyclic graph (DAG), so a graph with directed arrows, and no cycles. Each vertex corresponds to a variable which may take any of a set of values, e.g., $A$ taking values in $\{a_i\}$. A 'parent' of vertex $A$ is a vertex with an edge pointing from it to $A$. Then the final ingredient for a Bayesian network is a conditional probability distribution for each vertex given its parents, $Pr(a_i | Pa(A)_j)$. A Bayesian network with vertices, say, $A, B, C, D$, gives a joint probability distribution over the variables. We know we can factorise any joint distribution as, say,

$P(a, b, c, d) = P(d) \cdot P(c | d) \cdot P(b | c, d) \cdot P(a | b, c, d).$

This corresponds to a network with arrows from $D$ to $A$, $B$ and $C$, from $C$ to $A$ and $B$, and from $B$ to $A$. $A$ is childless. Of course, we could permute the variables and have, say, $D$ the childless vertex. But what we hope happens is that there are some independencies between variables. So for instance, perhaps $P(b | c, d) = P(b | d)$ and $P(a | b, c, d) = P(a | b, c)$. Or in words, $B$ is independent of $C$ conditional on $D$, and $A$ is independent of $D$ conditional on $B$ and $C$. Then a DAG may be drawn to represent these independences, with arrows going from $D$ to $B$ and $C$, and arrows converging from $B$ and $C$ to $A$. The distribution is encoded far more efficiently if the DAG is sparse.
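The efficiency point can be made concrete with a toy sketch in code (my illustration, not from the post; all probability numbers are invented). For the sparse DAG just described (arrows from $D$ to $B$ and $C$, and from $B$ and $C$ to $A$), the joint over four binary variables is assembled from four small conditional tables:

```python
from itertools import product

# Invented conditional probability tables for binary variables on the
# sparse DAG from the text: D -> B, D -> C, and B, C -> A.
P_d = {0: 0.6, 1: 0.4}
P_c_given_d = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # P_c_given_d[d][c]
P_b_given_d = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}  # P_b_given_d[d][b]
P_a_given_bc = {(b, c): {0: 0.5 + 0.1 * b - 0.2 * c,
                         1: 0.5 - 0.1 * b + 0.2 * c}
                for b, c in product((0, 1), repeat=2)}     # P_a_given_bc[(b, c)][a]

def joint(a, b, c, d):
    # P(a, b, c, d) = P(d) P(c|d) P(b|d) P(a|b, c): the factorisation
    # the DAG encodes, using the two independencies stated in the text.
    return P_d[d] * P_c_given_d[d][c] * P_b_given_d[d][b] * P_a_given_bc[(b, c)][a]

total = sum(joint(*v) for v in product((0, 1), repeat=4))
print(round(total, 10))  # 1.0 -- the factors assemble a legitimate joint
```

The sparse version needs 1 + 2 + 2 + 4 conditional rows instead of the 15 free parameters of an arbitrary joint over four binary variables, which is the efficiency gain in miniature.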
If Bayesian networks are to be thought of as 'propositional', you'd imagine it's because each variable is an atomic proposition. But this suggests that each vertex ought to be binary valued. While there are binary Bayesian networks, there is no need for them to have this property. Even restricting to discrete variables, they may take values in a set of any countable cardinality. One natural response is to wonder why in propositional logic we restrict ourselves to pairs of propositions, $\{P, \neg P\}$, precisely one of which is true. Don't we encounter situations where we're reasoning about an entity which may be in one of three states, say, a gear stick in drive, neutral or reverse? Why treat them as three separate atoms, only to add the premise that precisely one holds? Another issue for us at the Café is that a Bayesian network seems to be very much like an arrow from the product of the parentless vertices to probability distributions over the product of the childless vertices, an arrow in the Kleisli category. This suggests that if we have two networks, the childless vertices of the first matching the parentless vertices of the second, then they could be composed. So with a pair of simple graphs, $C \to B$ and $B \to A$, we would have:

$P(a_i | c_k) = \sum_{b_j}P(a_i | b_j) \cdot P(b_j | c_k).$

What doesn't quite fit however is that a Bayesian network represents a distribution over the product of all the node variables. In the case above we ought to have composed a joint distribution over $C$ and $B$ with one over $B$ and $A$ resulting in one over $C, B$ and $A$. Maybe the issue is that there's an invisible 1, out of which arrows feed into the parentless vertices, giving an (unconditional) distribution over them. If so, one ought not to compose as I suggested as the domain and codomain would not match. If we include these unwritten arrows, e.g., $1 \to C \to B$ as a network representing a joint distribution over $B$ and $C$, the final node doesn't mention $C$.
So we might think to draw an edge out of the parent $C$ to a copy of itself at the bottom (right, here) of the network, with obvious distribution $P(c_i | c_j) = \delta_{ij}$. Then we'd have a network $1 \to C \to B \times C$. Something a bit odd is going on here. Should we be after multispans, rather than arrows in a monoidal category?

Posted at December 6, 2007 12:16 PM UTC

Re: Progic V
I guess an important part of what's going on here is that there is no category with sets as objects, and morphisms joint probability distributions over the product of domain and codomain. In a category like Rel we can compose $R: A \to B$ and $S: B \to C$ to give a relation $S \cdot R: A \to C$. But given distributions over $A \times B$ and $B \times C$, there's no obvious composition. It's only when we have conditional distributions $P(B | A)$ and $P(C | B)$ that we can compose, in the Kleislian way. So maybe the graphical representation of the Bayesian network is rather misleading.
Posted by: David Corfield on December 7, 2007 10:11 AM

Re: Progic V
Would we ever represent a relation between, say, sets $A$, $B$, $C$ and $D$, as the subset of $D$ featuring in the relation, followed by for each $d$ the subset of $C$, for each $d$ and $c$ the subset of $B$, and for each $d$, $c$ and $b$ the subset of $A$?
Posted by: David Corfield on December 14, 2007 9:24 AM

Re: Progic V
Ooh, Coecke and Spekkens have brought their category theoretical picture calculus to bear on Bayesian inference here.
Posted by: David Corfield on February 14, 2011 8:33 PM
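The "Kleislian" composition of conditional distributions mentioned in the comments above is, concretely, multiplication of row-stochastic matrices. A small sketch (mine, with invented numbers):

```python
def compose(k1, k2):
    # Kleisli composition of stochastic kernels:
    # result[k][i] = sum_j k2[j][i] * k1[k][j]  =  P(a_i | c_k)
    return [[sum(k1[k][j] * k2[j][i] for j in range(len(k2)))
             for i in range(len(k2[0]))]
            for k in range(len(k1))]

K1 = [[0.5, 0.5],
      [0.1, 0.9]]   # K1[k][j] = P(b_j | c_k), invented values
K2 = [[0.2, 0.8],
      [0.7, 0.3]]   # K2[j][i] = P(a_i | b_j), invented values

K = compose(K1, K2)  # K[k][i] = P(a_i | c_k)
print(K)
print(all(abs(sum(row) - 1.0) < 1e-12 for row in K))  # rows remain stochastic
```

Note that this composes conditional kernels, not joints: given only distributions over $A \times B$ and $B \times C$ there is no canonical composite, which is exactly the point made in the first comment.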
Clarkdale, GA SAT Math Tutor Find a Clarkdale, GA SAT Math Tutor ...I presently teach Precalculus at Chattahoochee Tech. I have also taught Calculus at the college level, so I am able to help prepare students for the highest levels of math. What sets me apart is the ability to provide simple and clear explanations of advanced concepts. 13 Subjects: including SAT math, calculus, statistics, geometry ...I have made hundreds of speaking engagements before adults, students, parents, and school boards. In addition, I have trained students on how to make presentations before their peers and professors. I have performed these activities for more than 20 years. 47 Subjects: including SAT math, chemistry, English, physics ...In keeping with that goal, I monitor new academic research in the cognitive & education fields on a weekly basis for different methodologies. I have been a private tutor since my freshman year of college in 2009 and have tutored more than 90 students in the last five years. Because I am motivated by my own acquisition of knowledge, I have been able to tutor in many different subjects. 26 Subjects: including SAT math, reading, English, geometry ...It is easy for me to teach or tutor on this subject. In addition to everything else I mentioned, I also got a perfect score on the math section of the SAT when I took it many years ago. I'm a mechanical engineering professor at a top university. 12 Subjects: including SAT math, calculus, algebra 2, geometry ...EBD 2. Specific LD 3. Dyslexia 4.
22 Subjects: including SAT math, reading, English, GED
Electric "Brain" Weighs Three Tons - Science And Mechanics (Aug, 1935) Electric “Brain” Weighs Three Tons (Aug, 1935) Electric “Brain” Weighs Three Tons Computing Machine Can Run Rings Around Einstein in Solving Mathematical Kinks of the Way that the Universe Operates THE “Brain Trust” now runs a risk in the competition of the big, complex machine shown above, which was recently built in the school of electrical engineering at the University of Pennsylvania, in Philadelphia, by C. W. A. workers with government funds. Now, it is said, the U. S. Army wants another like it, and would ask to take this over in case of war. The explanation is that it is a machine for solving the most complicated mathematical problems, and doing this in a hurry. In fact, it can solve problems too complicated for any living mathematician to work out—with an answer not always guaranteed mathemically exact, but at least good enough for practical purposes. The purpose is to show what will happen when several continually changing factors enter into the case; in other words, to perform operations in the calculus, when they become exceedingly intricate. The “fire, control” machines, now used to plot the flight of shells from modern guns in moving ships, against moving targets, deal with practical conditions like this; and the machine pictured could answer a question of this nature, as well as a good many others less specialized. For instance, three or more heavenly bodies (like Earth, Sun, and Moon) are moving in their orbits at different rates of speed and varying distances, attracting each other. What will be the combined result of their forces, in changing the positions of each, in a given period? It is an enormously difficult proposition for the best mathematician in the world. With this machine, its ten “integrators” would be adjusted (by setting dials) to represent the varying factors of the problem, and then started turning. 
The friction discs and gears of the machine would operate on each other, each of them with an effect proportioned to the energy and speed it represented; and, on the final chart at the “answer table” of the machine (see illustration) a curve would be drawn by a metal pen, representing the formula desired (not necessarily a physical picture of the motion of one of the heavenly bodies, but a mathematical picture of it). The machine operates on the principle of one designed by Dr. Bush, of the Massachusetts Institute of Technology, three years ago (as illustrated in this magazine at the time) but is much larger.

15 comments

1. And it worked by turning gears. It was about ten years later that the electronic ENIAC was built.

2. I have always had a warm spot in my heart for the DA. They competed with computers up until the early ’60s for cost and reliability. One of our local universities still had a working one in the early ’70s. If’n you would like to see one work I suggest the George Pal movies ‘Destination Moon’ and ‘When Worlds Collide’.

3. “an answer not always guaranteed mathematically exact, but at least good enough for practical purposes.” Sounds like a Pentium predecessor for sure. Neat.

4. jmyint Good catch! I should have referenced those movies myself (I’m a Pal fan from way back). Here is a website that has animated pictures of a differential analyzer in action. http://web.mit.edu/klun…

5. “Computing Machine Can Run Rings Around Einstein in Solving Mathematical Kinks” Albert was notoriously bad at advanced math but knew it and always sought mathematicians’ help in finding the proofs he needed.

6. I used to have the instruction manual for the DA, and I could have had the thing itself at one point. But, I was 23 and foolish and didn’t do that. Anyway, it is not in Destination Moon, but in “When Worlds Collide”. In that movie it plays itself. In “Earth vs the Flying Saucers” it plays a translation machine.
To build something like it, one should look in the Amateur Scientist column of Scientific American–now available on CD. One needs to make a “force amplifier”. Bush’s original was made from a British Meccano Set, by the way. He was a true mechanical genius.

7. An electronic differential integrator is easy to make with op amps, and Heathkit used to sell one–which I owned. It was a lot of fun.

8. Jmyint, Would that local uni be UCLA–where I could have had mine in 1978-79? It was being thrown out at the bottom floor junkpile of the Engineering building–where I was a Hedrick math prof–and I was astonished to see it. It was like meeting an old friend, as I had built a small scale version earlier–after reading the SA article, and I had seen those movies.

9. Bush was traveling in England, and Norbert Wiener suggested the idea to him–as an MIT project. He then tested the idea by going to a toy store and buying a Meccano Set. That anyone could build a test D.A. from a toy construction kit blew my mind. Later MIT built the one shown in the article. Bush and Norbert were the people who made MIT a great place. Norbert was a great mathematician who was not above dealing with engineers in a period when this was quite infra dig. Another example of the same thing was Charles Proteus Steinmetz, but Wiener was a much greater mathematician.

10. Penny I have the “Destination Moon” DVD and a (if not THE) DA appears early in chapter 4 at 19:44 to 19:47. I doubt if the soundman used the actual sound and used Yahtzee dice in a cup instead.

11. It is my understanding that a moth in one of the relays of a Navy gun computer caused a failure… thereby originating the use of the term “bug”.

12. Widely believed but a bit more involved than that http://en.wikipedia.org…

13. Dear J, Thanks. I will look. It is not good movie-writing, because later, when they change the launch window, the celestial mechanics expert is shown recomputing the trajectory using a SLIDE RULE.
“Give him a cup of hot coffee and all the assistance that he can use.” Always happy to see the DA in another movie!

14. At least they understood the CONCEPT of a mathematical trajectory computed from differential equations!!! We still had good high schools in America back then. Could you imagine that in Star Trek? “Mr Sulu, a course to Starbase 11 365 mark 3.” “Sulu, what is he talking about? I will compute the usual maximizing geodesic in our 11 dimensional warped product Lorentzian Hypermanifold.” “Aye, Aye Sir”.

15. Hello, I am planning to publish a paper on the state of computers in 1935. I presented this paper in April 2011, at a symposium held by Hofstra University to celebrate their founding in 1935. I included the photo above that shows three men using the Big Brain. The photo, however, will not reproduce well. Can you e-mail me a high-quality photo of Big Brain? I would also like your written permission to utilize this better photograph in the published paper. I would greatly appreciate your providing this information. Thank you very much. Philip M. Sherman [email protected]
Sizing Conductors, Part X Selecting the correct size conductor is not a difficult task, but there is more to it than just picking a conductor from Table 310.15(B)(16) in the National Electrical Code (NEC). The conductor must be selected and installed in accordance with all applicable provisions pertaining to conductors. For example, the ampacity of a conductor must not exceed the terminal connection temperature limitations in 110.14(C). Another consideration pertains to a conductor’s conditions of use. The number of current-carrying conductors in a raceway or cable is one condition of use. Ambient temperature is another condition of use. A conductor also shall not be used in a manner that its operating temperature exceeds the limit designated for the type of insulated conductor involved [310.15(A)(3)]. There are other things to consider when selecting a conductor. Is the load, or any part of the load, continuous? Is the small-conductor rule in 240.4(D) applicable? Is the next higher standard overcurrent device rating (above the ampacity of the conductors being protected) below, equal to or above 800 amperes (A)? Will the conductors be feeder taps or transformer secondary conductors? Will the conductors be used as motor feeder or motor branch-circuit conductors? I have previously discussed some of these considerations in this series; I will discuss others in this and upcoming issues. Last month’s column concluded by covering requirements for counting (or not counting) neutral conductors as current-carrying conductors. This month, the discussion continues with adjustment factors for more than three current-carrying conductors in a raceway, cable or earth (directly buried). It may or may not be necessary to count neutral conductors as current-carrying conductors. Neutral conductor provisions in 310.15(B)(5) are divided into three sections. 
The first section, covered last month, states that if the neutral conductor carries only the unbalanced current from other conductors of the same circuit, it is not necessary to count the neutral as a current-carrying conductor. The second section in 310.15(B)(5) pertains to a specific electrical system. The neutral conductor must be counted if it is supplied from a three-phase, 4-wire, wye-connected system, but only if it is in a 3-wire circuit that consists of two-phase conductors and the neutral. For example, three multiwire branch circuits supplying power to incandescent lighting will be installed in a raceway. An equipment grounding conductor will also be installed in the raceway. Each multiwire branch circuit will consist of a neutral conductor and only two ungrounded (hot) conductors. The power system is a three-phase, 4-wire, wye-connected system, and the voltage is 208/120 volts (V). Each ungrounded conductor draws 12A at 120V. Counting the three 3-wire, multiwire branch circuits and the equipment ground, there will be 10 conductors in the raceway. What is the adjustment factor for the conductors in this example? Because each multiwire branch circuit will consist of a neutral conductor and only two ungrounded conductors and they are supplied from a three-phase, 4-wire, wye-connected system, each neutral must be counted. The equipment ground does not count. Because of the six ungrounded conductors and three neutrals, there will be nine current-carrying conductors in this raceway. The Table 310.15(B)(3)(a) adjustment factor for nine current-carrying conductors is 70 percent (see Figure 1). The reason the neutral conductor must be counted in this type of circuit and system is because the neutral (or common conductor) carries approximately the same current as the line-to-neutral load currents of the other conductors. This is even stated in 310.15(B)(5)(b). 
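The counting-and-derating arithmetic of the example above can be sketched in code. This is only an illustration: the function name is mine, and the percentages are the Table 310.15(B)(3)(a) values as I recall them, which should be verified against the current edition of the Code before relying on them.

```python
def adjustment_factor(n_current_carrying):
    # Table 310.15(B)(3)(a) adjustment factors (values as I recall them;
    # confirm against the current NEC before relying on this sketch).
    if n_current_carrying <= 3:
        return 1.00
    if n_current_carrying <= 6:
        return 0.80
    if n_current_carrying <= 9:
        return 0.70
    if n_current_carrying <= 20:
        return 0.50
    if n_current_carrying <= 30:
        return 0.45
    if n_current_carrying <= 40:
        return 0.40
    return 0.35

# The Figure 1 example: three 3-wire multiwire branch circuits on a
# three-phase, 4-wire, wye-connected system -- six ungrounded conductors
# plus three neutrals that must be counted; the equipment ground does not.
n = 3 * 2 + 3
print(n, adjustment_factor(n))  # 9 0.7
```

The same function gives 80 percent for the four-conductor nonlinear-load example later in the article.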
This theory can be verified by the electrical formula for finding neutral current when the system is three-phase, 4-wire, wye-connected:

I_N = √(I_A² + I_B² + I_C² − (I_A × I_B) − (I_B × I_C) − (I_A × I_C))

The letter I represents current, and the subscript letters represent phases A, B and C. The superscript 2 means that the current (or number) must be squared. (The square of a number is the product of a number multiplied by itself.)

For example, each incandescent lighting circuit in Figure 1 draws 12A at 120V. Each 3-wire multiwire branch circuit will consist of a neutral conductor and two ungrounded conductors. The 3-wire circuits will be supplied from a three-phase, 4-wire wye-connected system. Each 3-wire circuit will use two of the three phases to supply power to the lights. One of the multiwire branch circuits supplying power to the 120V lighting circuits will be terminated on phases B and C. What is the current draw of the neutral in this multiwire branch circuit? Replace the letters in the formula with the known factors and solve for neutral current. Since this multiwire branch circuit does not use phase A, the current on phase A will be 0A. Replace the subscript A with 0. Replace the subscript B with 12. Replace the subscript C with 12. Since there is no current on phase A, the current of A squared is 0. The current of B squared is 144 (12 × 12 = 144). The current of C squared is also 144 (12 × 12 = 144). The current of A multiplied by the current of B is 0 (0 × 12 = 0). The current of B multiplied by the current of C is 144 (12 × 12 = 144). The current of A multiplied by the current of C is 0 (0 × 12 = 0). After adding and subtracting, the sum is 144 (0 + 144 + 144 − 0 − 144 − 0 = 144). The square root of 144 is 12. Each 3-wire multiwire branch circuit will have a neutral current of 12A, which is the same current as the line-to-neutral load currents of the other conductors (see Figure 2).

The third neutral conductor provision pertains to specific loads in a specific electrical system.
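The arithmetic worked through above can be checked with a few lines of code. This is a simple sketch of the neutral-current formula for a three-phase, 4-wire, wye-connected system, not part of the article itself:

```python
import math

def neutral_current(i_a, i_b, i_c):
    """Neutral current on a three-phase, 4-wire, wye-connected system:
    I_N = sqrt(A^2 + B^2 + C^2 - A*B - B*C - A*C)."""
    return math.sqrt(i_a**2 + i_b**2 + i_c**2
                     - i_a * i_b - i_b * i_c - i_a * i_c)

# The article's example: a 3-wire multiwire branch circuit on phases B and C,
# each drawing 12A, with no load on phase A.
print(neutral_current(0, 12, 12))   # 12.0 -- same as the line-to-neutral load current

# A fully balanced load across all three phases: the neutral carries nothing.
print(neutral_current(12, 12, 12))  # 0.0
```

The balanced case shows why the first provision of 310.15(B)(5) lets a neutral carrying only unbalanced current go uncounted.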
On a 4-wire, three-phase wye circuit where the major portion of the load consists of nonlinear loads, harmonic currents are present in the neutral conductor; the neutral conductor shall, therefore, be considered a current-carrying conductor [310.15(B)(5)(c)]. Electronic equipment, electronic/electric-discharge lighting, adjustable-speed drive systems, and similar equipment may be nonlinear loads.

For example, a multiwire branch circuit consisting of five conductors has been installed in a raceway. Three of the conductors are ungrounded (hot) conductors, one conductor is a neutral, and one conductor is an equipment ground. The multiwire branch circuit supplies power to fluorescent lighting. The power system is a three-phase, 4-wire wye-connected system, and the voltage is 208/120V. Each phase or leg of the multiwire branch draws 13A at 120V. What is the adjustment factor for the conductors in this example? Because this is a three-phase, 4-wire, wye-connected system and the loads are nonlinear, the neutral must be counted as a current-carrying conductor. Since the equipment ground does not count, there are four current-carrying conductors. The Table 310.15(B)(3)(a) adjustment factor for four current-carrying conductors is 80 percent (see Figure 3).

Next month's column continues the discussion of sizing conductors.

MILLER, owner of Lighthouse Educational Services, teaches classes and seminars on the electrical industry. He is the author of "Illustrated Guide to the National Electrical Code" and "The Electrician's Exam Prep Manual." He can be reached at 615.333.3336, charles@charlesRmiller.com and www.charlesRmiller.com.
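The adjustment-factor lookups used in both examples can be sketched as a small function. The percentage breakpoints below are transcribed from Table 310.15(B)(3)(a) as commonly published; verify them against the Code edition in force before relying on them:

```python
def adjustment_factor(current_carrying):
    """Ampacity adjustment factor (percent) for more than three
    current-carrying conductors in a raceway or cable, following
    the breakpoints of Table 310.15(B)(3)(a)."""
    if current_carrying <= 3:
        return 100  # no adjustment required
    if current_carrying <= 6:
        return 80
    if current_carrying <= 9:
        return 70
    if current_carrying <= 20:
        return 50
    if current_carrying <= 30:
        return 45
    if current_carrying <= 40:
        return 40
    return 35

# The two examples above: nine current-carrying conductors (70 percent)
# and four current-carrying conductors (80 percent).
print(adjustment_factor(9))  # 70
print(adjustment_factor(4))  # 80
```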
Resource Allocation for Epidemic Control in Metapopulations

PLoS One. 2011; 6(9): e24577.
Michael George Roberts, Editor

Deployment of limited resources is an issue of major importance for decision-making in crisis events. This is especially true for large-scale outbreaks of infectious diseases. Little is known when it comes to identifying the most efficient way of deploying scarce resources for control when disease outbreaks occur in different but interconnected regions. The policy maker is frequently faced with the challenge of optimizing efficiency (e.g. minimizing the burden of infection) while accounting for social equity (e.g. equal opportunity for infected individuals to access treatment). For a large range of diseases described by a simple SIRS model, we consider strategies that should be used to minimize the discounted number of infected individuals during the course of an epidemic. We show that when faced with the dilemma of choosing between socially equitable and purely efficient strategies, the choice of the control strategy should be informed by key measurable epidemiological factors such as the basic reproductive number and the efficiency of the treatment measure. Our model provides new insights for policy makers in the optimal deployment of limited resources for control in the event of epidemic outbreaks at the landscape scale.

The management of diseases involves the expenditure of limited resources, which more often than not are outstripped by the demand for controlling all infected individuals [1]–[3]. This is often the case when disease occurs simultaneously in different but inter-connected regions [2], [4], [5].
Treatment of infection in one region such as a state, city, or hospital may affect the potential for spread to another region when there is movement of individuals between the regions. Seeking to control disease outbreaks in more than one region poses a dilemma for epidemiologists and health administrators of how best to deploy limited resources, such as drugs or trained personnel, amongst the different regions [6]–[11]. One common objective is to minimise the numbers of infected individuals and hence to minimize the burden of infection during the course of an epidemic [4], [12].

For epidemics of the SIS (Susceptible-Infected-Susceptible) form, in which individuals can be re-infected, Rowthorn et al. [10] showed that rather than targeting the region with most infecteds, as might have been intuitively expected, it is instead optimal to give preference to treating the region with the lower levels of infecteds: the remaining regions are treated as residual claimants, receiving treatment only when there is resource left over. The epidemiological intuition underpinning the optimal strategy is understood by noting that since there are only two types of host (susceptible or infected), preferential treatment in a region with low level of infection is equivalent to giving preference to the region with the highest level of susceptibles available for infection. Since, on average, an infected individual infects more than one susceptible, removing infecteds where susceptibles are plentiful reduces the force of infection of the epidemic and so is likely to bring the epidemic under control.

But what happens when there are more than two epidemiological classes? For many diseases, reinfection is often preceded by a period of temporary immunity, yielding a third class of 'removed' individuals in the population that complicates the identification of an optimal strategy for control. In this paper, we focus on this much broader class of epidemics described by an SIRS model.
We consider an SIRS-type epidemic in which infected individuals cease to be infectious and move into a temporary immune (R) class, after which they become susceptible once again. This is characteristic of many diseases, such as malaria [13], [14], tuberculosis [15] and syphilis [16], in which infecteds (I) recover naturally or after treatment. Infected individuals gain a temporary immunity to the pathogen, after which they rejoin the susceptible class (S) and can be reinfected. We assume that treatment is not used as a prophylactic so that only infected individuals receive treatment. Hence, the proportion of treated individuals is given as

To address the problem of resource allocation for disease management in multiple regions, we use a combination of optimization methods from economic theory of disease control [17], [18] with a metapopulation model from epidemiological theory [19], [20]. This enables us to formalize the problem and to derive criteria for optimality so as to minimize the total number of infections over time. Not infrequently, strict criteria for optimization identify strategies that may be logistically impractical, for example by requiring a change in pattern of control at a switching time that may be difficult to monitor [17]. Strictly optimal strategies may also be challenged on grounds of social equity, whereby every infected individual does not have an equal chance of being treated [21], [22]. Accordingly, we assess the tractability of optimal control strategies and consider also how adaptations may be made to balance optimality, tractability and social equity. For the sake of simplicity, the analysis is initially carried out for two interconnected regions (e.g. cities, towns or states) and the robustness of the results to spatial structure is later tested for two other simple and realistic spatial configurations.
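The paper's propagation equations (1)–(3) are not reproduced in this excerpt, so the following is only a minimal sketch of a standard two-patch SIRS model with treatment of infecteds. All symbols are illustrative assumptions (beta for transmission, gamma for recovery under treatment, omega for loss of immunity, m for coupling between regions, f for treated fractions); the authors' actual formulation may differ in detail:

```python
def sirs_two_patch(state, beta=0.5, gamma=0.2, omega=0.05, m=0.01, f=(0.5, 0.5)):
    """Right-hand side of a two-patch SIRS model with treated fractions f.
    state = (S1, I1, R1, S2, I2, R2) as proportions of each sub-population.
    Parameter names are illustrative, not taken from the paper."""
    deriv = [0.0] * 6
    for p in (0, 1):
        q = 1 - p
        S, I, R = state[3 * p], state[3 * p + 1], state[3 * p + 2]
        I_other = state[3 * q + 1]
        # force of infection mixes local and (weakly coupled) remote infecteds
        foi = beta * ((1 - m) * I + m * I_other)
        treated_recovery = gamma * f[p] * I    # treatment moves infecteds to R
        deriv[3 * p] = -foi * S + omega * R                # dS/dt
        deriv[3 * p + 1] = foi * S - treated_recovery      # dI/dt
        deriv[3 * p + 2] = treated_recovery - omega * R    # dR/dt
    return deriv

d = sirs_two_patch((0.8, 0.15, 0.05, 0.6, 0.3, 0.1))
# Each patch conserves its population: dS + dI + dR is (numerically) zero.
print(sum(d[0:3]), sum(d[3:6]))
```

The conservation check reflects the fixed sub-population size assumed in the text: individuals only move between the S, I and R classes within a patch.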
We consider two coupled sub-populations (regions) of susceptible individuals, each with a fixed size.

Optimal control

We suppose that expenditure on control is subject to a budget constraint. The discount rate ([17]. The optimization approach we adopt is based upon the Hamiltonian method [23], which is a device for minimizing the objective function subject to the economic constraints and the epidemiological dynamics of the model. We assume that if it were possible to treat all infected individuals, disease eradication would be achieved in the long term ([24], it is possible to show that the optimal control problem does have a solution. To solve the problem of optimal deployment of limited resources (i.e., when there are insufficient resources to treat all individuals that may become infected), we use the Pontryagin maximum principle (PMP) [23], a mathematical tool widely used to solve optimal control problems for dynamical systems. This method takes into account the influence of current infection on the future evolution of disease as given by the propagation equations (1)–(3). The influence is embodied in the co-state variables that appear in a mathematical expression known as the Hamiltonian (see Materials and Methods). PMP enables us to derive necessary conditions for optimality, from which it is possible to build up a set of candidate strategies; ultimately, extensive numerical simulation makes it possible to identify an optimal solution.

Efficiency maximization

The Pontryagin maximum principle (PMP) was used to derive necessary conditions for optimal resource allocation when there are insufficient resources to treat all infected individuals.
Using these necessary conditions together with exploratory numerical analysis, we identify the following as candidate strategies for optimality (see Materials and Methods):

• preferential treatment of the more infected sub-population - to equalize disease burden within the regions as fast as possible and thereafter to treat each region equally;
• preferential treatment of the less infected sub-population - initially 'sacrificing' the sub-population with the higher level of infecteds;
• preferential treatment of the more susceptible sub-population - initially 'sacrificing' the sub-population with the lower level of susceptibles;
• a strategy involving at least one switch between preferential treatment of the more infected to either the less infected or the more susceptible sub-population.

Although it is not possible to prove analytically that a given path is optimal, after extensive numerical simulation we identify the single switch strategy from giving preference to the more infected sub-population to giving preference to the less infected sub-population as the best allocation strategy that minimizes the discounted total numbers of infected individuals in both sub-populations (Figs. 1 & 2). However, attempts to implement the switching strategy are prone to the risk of missing the optimal switching time. This risk is enhanced by the fact that the optimal switching time depends upon the values of epidemiological parameters and the initial levels of infection that are unlikely to be accurately known in advance.

Figure 1: Comparison of disease progress curves for a strategy that gives preferential treatment to the more infected sub-population (A,D,G), preferential treatment to the less infected sub-population (B,E,H) and the most efficient strategy (C,F,I).
Figure 2: Difference between the outcome of the different policies for the whole range of initial conditions.

To conclude our analysis on efficiency maximization, we investigate the effect of the rate of loss of immunity (Table 1).
Using numerical simulation, we compare the candidate strategies for optimality (see Materials and Methods) and show that for very large values of [10] who show this policy to be the best strategy for the control of an SIS-type epidemic. Whereas for very small values of

Table 1: Effect of the rate of loss of immunity.

The single switch strategy, though the best policy, is not easily implementable. Numerical simulation shows that the second-best policy in terms of simplicity and efficiency maximization is either to give preference to the more susceptible sub-population or preference to the less infected sub-population, depending on the initial state of the system (Fig. 3). We compare the performance of these policies for different values of the rate of loss of immunity (Fig. 4). For (Fig. 4).

Figure 3: Difference between the outcome of the different policies for the whole range of initial conditions.
Figure 4: Difference between the outcome of 'preferential treatment to the less infected sub-population' and that of 'preferential treatment to the more susceptible sub-population'.

Efficiency and social equity

Since the optimal strategy is very difficult to implement, two robust alternative strategies would be either to give preference to the more susceptible sub-population or to give preference to the less infected sub-population. However, these strategies are likely to be regarded as highly socially inequitable from the perspective of the chance that any infected individual receives treatment. For the initial state of the system satisfying:

• a pro-rata policy designed to give equal opportunity for any infected individual to receive treatment

We compare the performance of this strategy with the three tractable strategies considered above (i.e. not involving switching). We do this for different values of the basic reproductive number (Fig. 5).

Figure 5: Difference between the outcome of selected strategies for different values of R0.
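The allocation rules being compared can be made concrete with a minimal sketch. The function names and the treatment-capacity parameter below are illustrative, not the paper's notation:

```python
def allocate(I1, I2, capacity, policy):
    """Split a fixed treatment capacity between two sub-populations.
    Returns (t1, t2), the number of treatments given to each.
    Policy names follow the text; 'capacity' is an assumed parameter."""
    if policy == "pro_rata":
        # equal opportunity: every infected individual has the same chance
        # of treatment, so capacity is split in proportion to infecteds
        total = I1 + I2
        if total == 0:
            return (0.0, 0.0)
        share = min(capacity, total)
        return (share * I1 / total, share * I2 / total)
    if policy == "prefer_less_infected":
        # treat the less infected sub-population first; the other region is
        # a residual claimant, receiving only what capacity is left over
        first, second = (I1, I2) if I1 <= I2 else (I2, I1)
        t_first = min(capacity, first)
        t_second = min(capacity - t_first, second)
        return (t_first, t_second) if I1 <= I2 else (t_second, t_first)
    raise ValueError(policy)

print(allocate(30, 10, 20, "pro_rata"))              # (15.0, 5.0)
print(allocate(30, 10, 20, "prefer_less_infected"))  # (10, 10)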
Given a threshold value (d%) of the difference between the outcome of a given control strategy and that of the pro-rata strategy, above which the use of inequitable policies may be justifiable, Fig. 5 shows that there exists a threshold value (Figs. 5 and 6). We also compare the policies for different values of the coupling strength between the two sub-populations [10] (result not shown).

Figure 6: Difference between the outcome of 'preferential treatment to the less infected sub-population' and that of the pro-rata policy for symmetrical global connection between regions.

To investigate the robustness of the result to spatial structure, we consider two further spatial configurations: 10 identical regions with symmetrical global coupling, and 10 identical regions arranged in a circle with each population interacting only with its two nearest neighbours. For small values of (Fig. 7). But for high values of (Fig. 7). Simulation shows that the threshold value of (Fig. 6), and decreases for increasing values of the rate of loss of immunity (Fig. 8). Given that the choice of the discount rate affects the relative valuation of current and future disease, one would expect a correlation between the choice of the discount rate and the value of the percentage error above which social inequity is justifiable.

Figure 7: Difference between the outcome of selected policies for different values of R0 for multiple sub-populations with different coupling between sub-populations.
Figure 8: Difference between the outcome of 'preferential treatment to the less infected sub-population' and that of the pro-rata policy for symmetrical global connection between regions.

When pro-rata is not a good candidate strategy in terms of efficiency [22]. However, determining what fraction of resource is to be allocated for equity concerns, while retaining a good level of overall efficiency, requires further debate and greater interrogation of epidemiological models with insight from social sciences [22], [26].
We have addressed the problem of allocation of limited resources for the control of an SIRS-type epidemic in different but interconnected regions. Using a combination of optimization methods from economic theory with a metapopulation model from epidemiological theory for disease management, we have formalized the problem of resource allocation and derived criteria for optimality so as to minimize the discounted number of infected individuals in both sub-populations over time, during the course of the epidemic. Using extensive numerical simulations, we have shown that the best strategy in terms of efficiency maximization is a switching strategy, whereby resources are initially preferentially allocated to the more infected sub-population then to the less infected sub-population. However, this strategy is seldom tractable, due to the fact that the switching time depends upon the value of epidemiological parameters and the initial state of the system, which are unlikely to be accurately known [17], [27]. Given that a practical strategy for disease control must account for various factors such as efficiency maximization and social equity amongst others, we have extended previous studies on dynamic resource allocation by investigating how to account for optimality (minimizing the burden of infection), social equity (equal opportunity for infected individuals to access treatment), and simplicity (ease of implementation) in identifying strategies for disease control. We have shown that when faced with the dilemma of choosing between a socially equitable strategy for resource allocation (e.g. a pro-rata allocation strategy) and a purely efficient but inequitable strategy (e.g. by giving preference to the more susceptible sub-population or preference to the less infected sub-population), the decision should be informed by the value of key epidemiological and economic parameters. 
In particular, we have shown that given a certain percentage of difference between the outcomes of different strategies (i.e. relative discounted number of infections that are not averted under the pro-rata policy) above which the use of an inequitable policy may be justifiable, there exists a threshold value of the basic reproductive number.

Interest in the optimal allocation of resources for epidemic control in structured populations has recently been renewed due to the threat of pandemic influenza [11], [28]–[30]. These studies primarily focus on the optimal deployment of mass vaccination to prevent or mitigate the spread of an outbreak of influenza within a population. Among other things, they show that when vaccine supplies are limited and the public health objective is to minimize infections, it is optimal to target vaccination toward the more epidemiologically important sub-populations (those that suffer the greatest per capita burden of infection) [11], [28]–[30]. The other sub-populations would thus be indirectly protected through herd immunity [11], [28]. These results agree with our analysis, which shows that a good control strategy in terms of simplicity and efficiency maximization would be to give preference to the more susceptible sub-population. This sub-population may be regarded as the more epidemiologically important as it is potentially the main contributor to future infections.

Several areas of investigation suggest themselves for future work. Foremost amongst these are allowance for heterogeneity in the size of sub-populations, and the rates of transmission of infection, both of which are recognized to be important factors in metapopulation theory.
Further work will also investigate the robustness of the results for different measures of efficiency of control and to uncertainty about the likely values of epidemiological parameters, given that optimal strategies are often very sensitive to the epidemiological parameters [27], [31]–[33], which may not be accurately known before control is implemented.

Materials and Methods

The objective is to minimize the discounted burden of infection during the course of the epidemic, subject to the propagation equations (1)–(3) and the following epidemiological and economic constraints: Each sub-population, of a fixed size When there are more infecteds than can be treated, [23]. This yields the following result: And it must be the case that That [34], which here is a necessary and sufficient criterion to prevent invasion of an epidemic. Upon equation 10, any admissible path (disease dynamic curves obtained for a given value of the control functions; see [10] and [27] for details). Therefore, besides the general transversality conditions [23]. We define a function where the integral is evaluated along the path defined by the propagation equations (1)–(3) when Given an initial state of the system, [24] shows that a solution to the optimal control problem exists. This is done using Theorem 10.1 from [24], and the compactness of the set of points

The singular solution

We suppose that there exists an allowable path that satisfies the above maximal conditions on the Hamiltonian, and for which there exists an open interval where we have

From an economic viewpoint, the co-state variables can be interpreted as shadow prices. Thus [35], [36].
Because infection is harmful, and increasing the proportion of infectious individuals will result in decreasing the proportion of susceptibles, the shadow price From (14), it follows that From the symmetry of the system, we have which satisfies the following equations: The singular solution is achieved by preferential treatment of infecteds in the region with the higher prevalence of infecteds (see Eq. 17). The policy is called the MRAP since it involves the Most Rapid Approach Path to the singular solution, in which infection is equalized in both sub-populations. When (Fig. 9). In other words, when the average proportion of treated individuals in each sub-population (Fig. 9).

Figure 9: Bifurcation diagram for the singular solution (Eq. 16).

Candidates for optimality

From the above results, it follows that the optimal control strategy depends on the effect of a marginal change in the value of

From the interpretation of the co-state variables as shadow prices, equation (8) can be interpreted as follows: if increasing the amount of infected individuals in sub-population 1 (sub-population 2) by one unit would generate more infection in the whole population than an increase of the same amount in sub-population 2 (sub-population 1), then preference in treatment must be given to sub-population 1 (sub-population 2). From equation (15) and equation (8), it follows that an optimal solution is either a switching strategy of preference between sub-population 1 and sub-population 2, or the MRAP (the most rapid approach path to the singular solution). The MRAP solution, which is equivalent to 'preferential treatment of the more infected sub-population', is given by the following

As for the switching strategies between sub-population 1 and 2, they can be constructed in an infinite number of ways.
Here we consider two plausible candidate solutions for optimality (based upon exploratory numerical analysis): 'preferential treatment of the more susceptible sub-population' and 'preferential treatment of the less infected sub-population'. These strategies are respectively defined by the following equations:

The strategies giving preference to the more susceptible sub-population, preference to the less infected sub-population, as well as the single and double switching strategies between one of the above strategies and the MRAP strategy, are all candidates for optimality. Moreover, we consider an 'alternative' strategy which consists, in the first instance, of equalizing the level of infection in both sub-populations as fast as possible. This is done by implementing the strategy giving preference to the more infected sub-population. When equality of the levels of infection is first reached, preference is then given to the more susceptible sub-population.

We compare the above strategies. For any value of the initial condition, simulation shows that the smallest value of the objective function (Eq. 5) is obtained with the single switch strategy from giving preference to the more infected sub-population to giving preference to the less infected sub-population. Implementing the single switch strategy is subject to the risk of missing the optimal switching time. We were also able to show that the switching strategy satisfies the Hamiltonian and transversality conditions. We were not able to rule out the possibility that there are other paths, such as multiple switching strategies, which outperform the above strategy. Simulation shows that the optimal switching strategy varies with the rate of loss of immunity [10]. For

Details of the numerical explorations

Numerical simulation was done using a fourth-order Runge-Kutta scheme with 0.01 time intervals. Experiments were done for different values of the period of integration and time intervals.
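A fourth-order Runge-Kutta step of the kind described can be sketched generically; this is not the authors' code, just the standard scheme with the stated step size:

```python
def rk4_step(f, y, t, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    where y is a list of state variables; h is the time interval
    (0.01 in the paper's simulations)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Sanity check on y' = y with y(0) = 1: after 100 steps of h = 0.01,
# y(1) should be very close to e.
y = [1.0]
for step in range(100):
    y = rk4_step(lambda t, y: [y[0]], y, step * 0.01, 0.01)
print(y[0])  # ~2.718281828
```

RK4's global error scales as h^4, which is why a 0.01 step is enough for the paper's stated three-decimal-place accuracy.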
The accuracy of our method was established up to three decimal places. The state variables were scaled with respect to the fixed sub-population size

To compare different control strategies, simulations were done using a large set of initial conditions (the state of the epidemic in each sub-population before resources are first allocated). For every single initial condition, we compared the value of the objective function for each of the control strategies described above. To build the set of initial conditions, we proceeded as follows: for each sub-population, we spanned the surface

Comparing the proposed candidates for optimality is not enough to establish the optimality of a given solution. We used the same method as Rowthorn et al. [10]. We consider the paths that eventually reach set [23]).

Acknowledgments: We wish to thank Robert E. Rowthorn for thoughtful discussion and valuable comments on the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

Funding: This work was supported by a Gates Cambridge Trust Scholarship (MLNM) and a BBSRC (Biotechnology and Biological Sciences Research Council) Professorial Fellowship (CAG), which we gratefully acknowledge. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Lipsitch M, Bergstrom CT, Levin B. The epidemiology of antibiotic resistance in hospitals: Paradoxes and prescriptions. Proc Natl Acad Sci USA. 2000;97:1938–1943. [PMC free article] [PubMed]
2. Kiszewski A, Johns B, Schapira A, Delacollette C, Crowell V, et al. Estimated global resources needed to attain international malaria control goals. Bull World Health Organ. 2007;85:623–630. [PMC free article] [PubMed]
3. Monto A. Vaccines and antiviral drugs in pandemic preparedness. J Infect Dis. 2006;12:55–60. [PMC free article] [PubMed]
4. Dye C, Gay N. Epidemiology: modeling the SARS epidemic. Science. 2003;300:1884–1885. [PubMed]
5. Sani A, Kroese D.
Controlling the number of HIV infectives in a mobile population. Math Biosci. 2008;213:103–112. [PubMed]
6. May R, Anderson R. Spatial heterogeneity and design of immunization programs. Math Biosci. 1984;72:83–111.
7. Hethcote H, van Ark J. Epidemiological models for heterogeneous populations: proportionate mixing, parameter estimation, and immunization programs. Math Biosci. 1987;84:85–118.
8. Zaric G, Brandeau M. Dynamic resource allocation for epidemic control in multiple populations. IMA J Math Appl Med Biol. 2002;19:235–255. [PubMed]
9. Brandeau M, Zaric G, Ricther A. Resource allocation for control of infectious diseases in multiple independent populations: beyond cost-effectiveness analysis. J Health Econ. 2003;22:575–598. [PubMed]
10. Rowthorn R, Laxminaryan R, Gilligan C. Optimal control of epidemics in metapopulations. J R Soc Interface. 2009;6:1135–1144. [PMC free article] [PubMed]
11. Keeling M, White P. Targeting vaccination against novel infections: risk, age and spatial structure for pandemic influenza in Great Britain. J R Soc Interface; 2010. doi: 10.1098. [PMC free article]
12. Dushoff J, Plotkin J, Viboud C, Simonsen L, Miller M, et al. Vaccinating to protect a vulnerable subpopulation. PLoS Med. 2007;4:e174. [PMC free article] [PubMed]
13. Aron J. Mathematical modeling of immunity to malaria. Mathematical Biosciences. 1988;90:385–396.
14. Filipe J, Riley E, Drakeley C, Sutherland C, Ghani A. Determination of the processes driving the acquisition of immunity to malaria using a mathematical transmission model. PLoS Comput Biol. 2007;3:e255. [PMC free article] [PubMed]
15. Castillo-Chavez C, Feng Z. To treat or not to treat: the case of tuberculosis. J Math Biol. 1997;35:629–645. [PubMed]
16. Grassly N, Fraser C, Garnett G. Host immunity and synchronized epidemics of syphilis across the United States. Nature. 2005;433:417–421. [PubMed]
17. Forster G, Gilligan C. Optimizing the control of disease infestations at the landscape scale. Proc Natl Acad Sci USA. 2007;104:4984–4989.
[PMC free article] [PubMed]
18. Goldman S, Lightwood J. Cost optimization in the SIS model of infectious disease with treatment. Top Econ Anal Policy. 2002;2:1–22.
19. Hanski I. Metapopulation dynamics. Nature. 1998;396:41–49.
20. Keeling M, Grenfell BT. Individual-based perspectives on R0. J Theor Biol. 2000;203:51–61. [PubMed]
21. Strosberg M. Allocating scarce resources in a pandemic: Ethical and public policy dimensions. Virtual Mentor (Ethics J Am Med Ass). 2006;8:241–244. [PubMed]
22. Kaplan E, Merson M. Allocating HIV-prevention resources: balancing efficiency and equity. Am J Pub Health. 2002;92:1905–1907. [PMC free article] [PubMed]
23. Seierstad A, Sydsaeter K. Optimal control theory with economic applications. New York, NY, USA: Elsevier North-Holland, Inc; 1986.
24. Agrachev A, Sachkov Y. Control theory from the geometric viewpoint. Encyclopedia of Mathematical Sciences, vol. 87. New York: Springer-Verlag; 2004.
Wu J, Riley S, Leung G. Spatial considerations for the allocation of pre-pandemic influenza vaccination in the United States. Proc R Soc Lond B. 2007;274:2811–2817. [PMC free article] [PubMed]
Ndeffo-Mbah M, Gilligan C. Optimization of control strategies for epidemics in heterogeneous populations with symmetric and asymmetric transmission. J Theor Biol. 2010;262:757–763. [PubMed]
Medlock J, Galvani A. Optimizing influenza vaccine distribution. Science. 2009;325:1705–1708. [PubMed]
Wallinga J, van Boven M, Lipsitch M. Optimizing infectious disease interventions during an emerging epidemic. PNAS. 2010;107:923–928. [PMC free article] [PubMed]
Goldstein E, Apolloni A, Lewis B, Miller J, Macauley M, et al. Distribution of vaccine/antivirals and the 'least spread line' in a stratified population. J R Soc Interface. 2010;7:755–764. [PMC free article] [PubMed]
Tanner M, Sattenspiel L, Ntaimo L. Finding optimal vaccination strategies under parameter uncertainty using stochastic programming. Math Biosci. 2008;215:144–151.
[PubMed] Merl D, Johnson R, Gramacy B, Mangel M. A statistical framework for the adaptive management of epidemiological interventions. PLoS ONE. 2009;4:e5087. [PMC free article] [PubMed] Ndeffo-Mbah M, Forster G, Wesseler J, Gilligan C. Economically optimal timing of crop disease control in the presence of uncertainty: an options approach. JRSoc Interface. 2010;7:1421–1428. [PMC free article] [PubMed] Heffernan J, Simth R, Wahl L. Perspectives on the basic reproductive ratio. J R Soc Interface. 2005;2:281–293. [PMC free article] [PubMed] 35. Behncke H. Optimal control of deterministic epidemics. Optim Contr Appl Meth. 2000;21:269–285. 36. Dorfman R. An economic interpretation of optimal control theory. Amer Econ Rev. 1969;59:817–831. Articles from PLoS ONE are provided here courtesy of Public Library of Science • Simulation Models for Socioeconomic Inequalities in Health: A Systematic Review[International Journal of Environmental Rese...] Speybroeck N, Van Malderen C, Harper S, Müller B, Devleesschauwer B. International Journal of Environmental Research and Public Health. 2013 Nov; 10(11)5750-5780 • Development of a resource modelling tool to support decision makers in pandemic influenza preparedness: The AsiaFluCap Simulator[BMC Public Health. ] Stein ML, Rudge JW, Coker R, van der Weijden C, Krumkamp R, Hanvoravongchai P, Chavez I, Putthasri W, Phommasack B, Adisasmito W, Touch S, Sat LM, Hsu YC, Kretzschmar M, Timen A. BMC Public Health. 12870 See all... • PubMed PubMed citations for these articles Your browsing activity is empty. Activity recording is turned off. See more...
Learning Poisson Binomial Distributions

C. Daskalakis, I. Diakonikolas, and R. Servedio. 44th Annual Symposium on Theory of Computing (STOC), 2012, to appear.

We consider a basic problem in unsupervised learning: learning an unknown \emph{Poisson Binomial Distribution}. A Poisson Binomial Distribution (PBD) over $\{0,1,\dots,n\}$ is the distribution of a sum $X = X_1 + \cdots + X_n$ of $n$ independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 \cite{Poisson:37} and are a natural $n$-parameter generalization of the familiar Binomial Distribution. We work in a framework where the learner is given access to independent draws from the distribution and must (with high probability) output a hypothesis distribution which has total variation distance at most $\epsilon$ from the unknown target PBD.

Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal. We essentially settle the complexity of the learning problem for this basic class of distributions. As our main result we give a highly efficient algorithm which learns to $\epsilon$-accuracy using $\tilde{O}(1/\epsilon^3)$ samples, \emph{independent of $n$}. The running time of the algorithm is \emph{quasilinear} in the size of its input data, i.e., $\tilde{O}(\log(n)/\epsilon^3)$ bit-operations (observe that each draw from the distribution is a $\log(n)$-bit string). This is nearly optimal, since any algorithm must use $\Omega(1/\epsilon^2)$ samples. We also give positive and negative results for some extensions of this learning problem.

pdf of conference version
pdf of full version
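To make the definition concrete, here is a small illustrative sketch (my own, in Python with NumPy — not from the paper) of drawing from a PBD with heterogeneous means and checking its first two moments against the closed forms $\mathbb{E}X=\sum_i p_i$ and $\mathrm{Var}(X)=\sum_i p_i(1-p_i)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A Poisson Binomial Distribution: X = X_1 + ... + X_n with independent
# Bernoulli X_i of arbitrary, potentially non-equal, means p_i.
n = 20
p = rng.uniform(0.0, 1.0, size=n)             # arbitrary per-trial probabilities
samples = (rng.random((100_000, n)) < p).sum(axis=1)

# The mean of a PBD is sum(p_i); the variance is sum(p_i * (1 - p_i)).
print(samples.mean(), p.sum())
print(samples.var(), (p * (1 - p)).sum())
```

Each draw is an integer in $\{0,\dots,n\}$, i.e. a $\log(n)$-bit string, as the abstract notes.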
Euclide: dynamic geometry

Euclide 0.6.5 has been released!

A new version of Euclide has been released since the end of 2011. It includes many bug fixes, new constructions, and a new toolbar that make it much easier to create geometric constructions!

Among the new features:
• New operations on polygons: difference and exclusive or
• Ellipses can now be constructed from 2 foci and 1 point
• A toolbar for better interaction
• Added support for menu accelerators
• Several bug fixes

The latest version can be downloaded from this page. As always, do not hesitate to send feedback.

What is Euclide?

Euclide is a dynamic geometry software package which tries to be as flexible as possible. You start by placing some free points, then you create objects using these points: lines, circles, line segments, circle arcs... Each new figure can in turn be used to compose new shapes. By moving the points placed at the beginning, one can observe the evolution of the figure. An example is given below.

The software is still largely in development, and many functions are not yet implemented. In the current version, Euclide allows the following actions:
• Creation of free points, points on shapes, or intersection points
• Interactive displacement of free points
• Creation and storage of various affine transforms: line symmetry, point symmetry, translation, rotation
• Zooming of the figure
• Measurement of lengths, distances and angles
• Use of measures to create other shapes
• Creation of shape loci
• Context help giving the next thing to do for each action ("click a point", "choose a curve"...)

A more complete list of features is given on the corresponding page. Information on how to use the software can be found on the documentation page.

How to get it?

The best way is to use the download page of the project hosted on SourceForge. You can get more information on the installation page.

How to get help?

The first place to check is the documentation page. If this does not suffice, please ask a question in one of the forums.
Electron. J. Diff. Eqns., Vol. 2000(2000), No. 02, pp. 1-8. Dynamics of logistic equations with non-autonomous bounded coefficients M. N. Nkashama Abstract: We prove that the Verhulst logistic equation with positive non-autonomous bounded coefficients has exactly one bounded solution that is positive, and that does not approach the zero-solution in the past and in the future. We also show that this solution is an attractor for all positive solutions, some of which are shown to blow-up in finite time backward. Since the zero-solution is shown to be a repeller for all solutions that remain below the afore-mentioned one, we obtain an attractor-repeller pair, and hence (connecting) heteroclinic orbits. The almost-periodic attractor case is also discussed. Our techniques apply to the critical threshold-level equation as well. Submitted October 21, 1999. Published January 1, 2000. Math Subject Classifications: 34C11, 34C27, 34C35, 34C37, 58F12, 92D25. Key Words: Non-autonomous logistic equation, threshold-level equation, positive and bounded solutions, comparison techniques, $\omega$-limit points, maximal and minimal bounded solutions, almost-periodic functions, separated solutions. Show me the PDF file (119K), TEX file, and other files for this article. M. N. Nkashama Department of Mathematics, University of Alabama at Birmingham Birmingham, Alabama 35294-1170, USA e-mail: nkashama@math.uab.edu Return to the EJDE web page
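As a quick numerical illustration of the attractor statement (a toy sketch of my own, in Python; the bounded positive coefficients $a(t)$, $b(t)$ below are an arbitrary choice, not from the paper), two different positive initial conditions of the non-autonomous logistic equation $x' = a(t)x - b(t)x^2$ end up on the same bounded positive trajectory:

```python
import math

# Positive, bounded, non-autonomous coefficients (illustrative choice).
def a(t): return 1.5 + 0.5 * math.sin(t)          # a(t) in [1, 2]
def b(t): return 1.0 + 0.3 * math.cos(2.0 * t)    # b(t) in [0.7, 1.3]

def integrate(x0, t_end=50.0, dt=1e-3):
    """Forward-Euler integration of x' = a(t) x - b(t) x^2."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (a(t) * x - b(t) * x * x)
        t += dt
    return x

# Two well-separated positive starting values converge together,
# consistent with the unique bounded positive solution being an attractor.
x1, x2 = integrate(0.1), integrate(5.0)
print(abs(x1 - x2))   # tiny
```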
Q: Can someone please explain to me why the answer to the following question is $(-\infty, \infty)$?
\[\lim_{x \to -\infty} \frac{\ln(\sqrt[3]{x})}{\sin x}\]
(cbrt = cube root)

A: $\ln x^{1/3}/\sin x = \frac{(1/3)\ln x}{\sin x}$, I think.

Q: Actually, apparently $(-\infty, \infty)$ is not the correct answer — Wolfram lied — so help me come to the correct answer.

A: You mean \[\lim_{x\to -\infty}\frac{\ln(\sqrt[3]{x})}{\sin(x)}\]? That makes no sense, since you cannot take the log of a negative number. So there is no limit: the numerator is undefined if $x\leq 0$.

Q: Oh, sorry — it should be $\lim_{x \to +\infty}$.

A: $\sin(x)$ has no limit either, as it is periodic and takes on all values between $-1$ and $1$ infinitely often. So there is still no limit: the numerator goes to infinity, but the denominator does not approach any specific number. In other words, the quotient swings wildly between large positive and large negative values as sine varies between $-1$ and $1$.

Q: So ultimately it's undefined?

A: Yes, for sure undefined.

Q: So with that being said, any time $\sin x$ is in my denominator, I know that the function must be undefined?

A: Well, no — the numerator could go to zero, and then the whole thing could go to zero. For example, \[\lim_{x\to\infty}\frac{e^{-x}}{2+\sin(x)}=0.\]

Q: Are there any good tips and tricks you could tell me to help me understand this sort of material?

A: No tricks, really; you have to make sure you know what is going on. For example, don't use l'Hôpital's rule unless it is applicable, i.e., unless you are in an indeterminate form — which you are not here. In your example you do not have $\frac{\infty}{\infty}$, so you cannot use l'Hôpital and get zero for an answer! Plug in the number and check, or imagine what happens as $x$ gets large if you are going to infinity. In your example, as $x$ gets large, so does $\sqrt[3]{x}$ and therefore so does $\ln(\sqrt[3]{x})$, but the problem is that sine does not get large and does not have a limit. So in this problem there is no trick or gimmick — you just have to imagine what happens as $x$ gets bigger and bigger.

Q: OK, thanks!
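A quick numerical check (my own sketch in Python with NumPy, not part of the thread) makes the "swings wildly" behavior concrete: sampling $\ln(\sqrt[3]{x})/\sin(x)$ over a window of large $x$ produces huge values of both signs, because the numerator grows like $(1/3)\ln x$ while $\sin x$ keeps passing through zero.

```python
import numpy as np

# Sample f(x) = ln(x^(1/3)) / sin(x) on a window of large x.  Near each zero
# of sin(x) the quotient blows up, alternating in sign, so no limit exists.
x = np.linspace(1e6, 1e6 + 100, 200_001)
f = np.log(np.cbrt(x)) / np.sin(x)

print(f.min())   # a very large negative value
print(f.max())   # a very large positive value
```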
Small Prime Solutions of Quadratic Equations

Canad. J. Math. 54 (2002), 71-91
Printed: Feb 2002
• Kwok-Kwong Stephen Choi
• Jianya Liu

Let $b_1,\dots,b_5$ be non-zero integers and $n$ any integer. Suppose that $b_1 + \cdots + b_5 \equiv n \pmod{24}$ and $(b_i,b_j) = 1$ for $1 \leq i < j \leq 5$. In this paper we prove that
\begin{enumerate}
\item[(i)] if the $b_j$ are not all of the same sign, then the quadratic equation $b_1 p_1^2 + \cdots + b_5 p_5^2 = n$ has prime solutions satisfying $p_j \ll \sqrt{|n|} + \max\{|b_j|\}^{20+\varepsilon}$; and
\item[(ii)] if all $b_j$ are positive and $n \gg \max\{|b_j|\}^{41+\varepsilon}$, then the quadratic equation $b_1 p_1^2 + \cdots + b_5 p_5^2 = n$ is soluble in primes $p_j$.
\end{enumerate}

MSC Classifications:
11P32 - Goldbach-type theorems; other additive questions involving primes
11P05 - Waring's problem and variants
11P55 - Applications of the Hardy-Littlewood method [See also 11D85]
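The congruence hypothesis on $n$ is natural: every prime $p>3$ is a unit mod 24, and every unit mod 24 squares to 1, so $b_1 p_1^2 + \cdots + b_5 p_5^2 \equiv b_1 + \cdots + b_5 \pmod{24}$ whenever all $p_j > 3$. A quick check of this fact (my own illustration in Python):

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    is_p = [True] * (limit + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit + 1, i):
                is_p[j] = False
    return [i for i, v in enumerate(is_p) if v]

# Every prime p > 3 is coprime to 24, and the unit group mod 24 has
# exponent 2, so p^2 ≡ 1 (mod 24) for every such prime.
assert all(p * p % 24 == 1 for p in primes_up_to(1000) if p > 3)
print("p^2 ≡ 1 (mod 24) verified for all primes 5..997")
```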
Is the supremum of continuous functions integrable?

Let $f_\alpha$ be a family of continuous positive functions $\mathbb R\to \mathbb R$, where the index $\alpha$ runs over a compact metric space and the map $\alpha\mapsto f_\alpha$ is continuous with respect to the compact-open topology on the target. Suppose there is a uniform upper bound on the integrals of the $f_\alpha$'s over $\mathbb R$.

Question. Is $\sup_{\alpha} f_\alpha$ necessarily an integrable function?

Apology. This sure sounds like a homework-level question, but after looking at it for a while I am not even sure what the answer is.

Comment (Pietro Majer): Note that in your hypotheses the family $\{f_\alpha\}$ is equicontinuous on bounded intervals, so the supremum is a continuous function (possibly non-integrable).

Comment (George Lowther): Also, every nonnegative continuous function $f\colon\mathbb{R}\to\mathbb{R}$ can occur as $\sup_\alpha f_\alpha$. Consider the case where the metric space is the extended reals $\mathbb{\bar R}$, $f_\alpha=0$ for $\alpha=\pm\infty$ and $f_\alpha(x)=f(x)(1-g(\alpha)\vert\alpha-x\vert)_+$ otherwise, with $g\colon\mathbb{R}\to\mathbb{R}$ a continuous function tending to infinity as $\vert\alpha\vert\to\infty$ fast enough that $\int f_\alpha(x)\,dx$ is bounded.

Answer (George Lowther, accepted): No. Let the compact metric space be $[0,1]$ with the standard topology and define
$$ f_\alpha(x) = \alpha\max(1-\alpha \vert x\vert,0) $$
for $\alpha\in[0,1]$. This satisfies the properties asked for, with the upper bound $\int f_\alpha(x)\,dx\le1$ (and equality for $\alpha\not=0$). But
$$ \sup_\alpha f_\alpha(x)=1_{\{\vert x\vert\ge1/2\}}\frac{1}{4\vert x\vert}+1_{\{\vert x\vert < 1/2\}}(1-\vert x\vert) $$
is not integrable.

Answer (Pietro Majer): As you stated, no: take e.g. the closed unit interval $[0,1]$ as the compact metric space and define for all $\alpha\in [0,1]$ the function $f_\alpha(x)= \alpha (1-\alpha|x|)_+$, which depends continuously on $\alpha$ even in the uniform norm on $\mathbb{R}$. Then for $|x|\ge 1$ we have $\sup_{\alpha\in[0,1]} f_\alpha(x)=\frac{1}{4|x|}$, which is not integrable.

Answer: Just one more example: let $\phi \in C_c(\mathbb{R})$ with, say, $\sup_{\mathbb{R}} \phi = 1$. For $\alpha \in [-\frac{\pi}{2}, \frac{\pi}{2}]$, set $$f_\alpha(x) = \begin{cases} \phi(x - \tan \alpha), & \alpha \in (-\frac{\pi}{2}, \frac{\pi}{2}), \\ 0, & \alpha = \pm \frac{\pi}{2}. \end{cases}$$ Then $\int f_\alpha = \int \phi < \infty$ for each $\alpha \ne \pm \frac{\pi}{2}$, and $\int f_\alpha = 0$ otherwise, but $\sup_\alpha f_\alpha = 1$.

Comment (OP): Thanks so much! I accepted George Lowther's answer because he was there first. :)
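A numerical sanity check of the accepted answer's closed form (my own sketch in Python with NumPy): maximize $f_\alpha(x)=\alpha\max(1-\alpha|x|,0)$ over a fine grid of $\alpha\in[0,1]$ and compare with $\frac{1}{4|x|}$ for $|x|\ge 1/2$ and $1-|x|$ for $|x|<1/2$.

```python
import numpy as np

# Grid-maximize f_alpha(x) = alpha * max(1 - alpha*|x|, 0) over alpha in [0,1].
alphas = np.linspace(0.0, 1.0, 10_001)[:, None]
x = np.linspace(-5.0, 5.0, 401)[None, :]

sup_numeric = (alphas * np.maximum(1.0 - alphas * np.abs(x), 0.0)).max(axis=0)

# Closed form: 1/(4|x|) for |x| >= 1/2, and 1 - |x| for |x| < 1/2.
# (np.maximum(ax, 0.5) just guards the unused branch against division by zero.)
ax = np.abs(x[0])
sup_closed = np.where(ax >= 0.5, 1.0 / (4.0 * np.maximum(ax, 0.5)), 1.0 - ax)

print(np.max(np.abs(sup_numeric - sup_closed)))   # essentially zero
```

The $1/(4|x|)$ tail is exactly why the supremum fails to be integrable despite each $\int f_\alpha \le 1$.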
Circuits Question - Thevenin & Norton Equivalents

OK, so I built the circuit in CircuitLab to confirm that the node voltages I found were correct:

Node 9 = -61 V
Node 11 = 5 V
Node 12 = -128.75 V
Node 13 = -76.25 V

This agrees with what I calculated.

$V_{Th} = V_{13} - V_{12} = -76.25\ \text{V} - (-128.75\ \text{V}) = 52.5\ \text{V}$

$V_{Th}$ has the orientation A + - B, where node 13 (A) is more positive than node 12 (B).

I then calculated $R_{Th}$ by the method below. I think the answer I get in this post for the Thevenin equivalent resistance is wrong, because I forgot to disable a current source. As you can see, I get $R_{Th} = 76.19\ \Omega$.

Below is what I get for the Thevenin and Norton equivalents of the circuit. As you can see, I get $I_{Th} = 689.1\ \text{mA}$. I think, however, that I'm doing something wrong, but I'm not sure what.
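As a sanity check on those numbers (a quick script of my own, using the posted values as inputs), the Norton short-circuit current should simply be $V_{Th}/R_{Th}$:

```python
# Consistency check of the Thevenin/Norton values quoted above.
v13, v12 = -76.25, -128.75       # node voltages, volts (from the post)
v_th = v13 - v12                 # Thevenin voltage, A positive w.r.t. B
r_th = 76.19                     # Thevenin resistance, ohms (from the post)
i_n = v_th / r_th                # Norton (short-circuit) current, amps

print(v_th)                      # 52.5
print(round(i_n * 1000, 1))      # 689.1  (mA)
```

So $V_{Th}=52.5$ V and $R_{Th}=76.19\ \Omega$ do reproduce the stated 689.1 mA; any remaining error would be in how $R_{Th}$ itself was obtained (e.g. the undisabled current source mentioned above).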
User Georges Elencwajg

bio · website · member for 4 years, 6 months · seen 4 hours ago
stats: profile views 8,855

Recent activity:
- 15h: revised "what's the cohomological dimension of a Stein space?" (added c)
- 15h: answered "what's the cohomological dimension of a Stein space?"
- 2d: comment on "Are the reals really a fraction field?": Dear @Asaf, every automorphism of $A$ can be extended to an automorphism of $\mathbb R$ (apply the automorphism to the numerator and the denominator of a fraction), but I'm far from sure that every permutation of the $r_i$'s can be extended to $A$.
- 2d: comment on "What is Chern-Simons theory expected to assign to a point?": How interesting, and how lucky you are to have two native languages: when I think of how much effort one has to make as an adult to learn a new language... (You would know: you seem to know many languages.) And congratulations for your staggering mastery of the hard mathematics you have regaled us with for several years.
- Apr 14: comment on "What is Chern-Simons theory expected to assign to a point?": Dear André, will you excuse me if I ask you what your mother tongue is? The only excuse for this indiscreet request is that I am pathologically interested in linguistics [and your English is too perfect to be native :-)]
- Apr 10: awarded Great Answer
- Apr 8: awarded Nice Question
- Apr 7: asked "Are the reals really a fraction field?"
- Apr 7: awarded Popular Question
- Mar 30: awarded Good Answer
- Mar 27: revised "A geometric characterization for arithmetic genus" (added 94 characters in body)
- Mar 10: comment on "Solvable question of dee dee bar lemma": Dear A.T.Saaki, I hope it was completely clear that this was a harmless wordplay and that I was in no way trying to make fun of you. Actually, when I began my career in mathematics I studied the dee bar operator for quite some time and really liked it, but unfortunately I left that domain and am quite unable to answer your question.
- Mar 10: comment on "Solvable question of dee dee bar lemma": Dee Dee Bar sounds like a nice place to enjoy a drink...
- Mar 10: comment on ""Paradoxes" in $\mathbb{R}^n$": +1: I find these results wonderfully paradoxical, with no quotation marks around the adjective, since they violate my intuition. If some people have a more accurate intuition, so much the better for them.
- Mar 4: awarded Notable Question
- Feb 24: comment on "Severi-Brauer variety and finite covering": Many thanks for your quick answer, Sasha, especially considering that my comment comes so late after your answer.
- Feb 24: comment on "Severi-Brauer variety and finite covering": Dear Sasha, why is the universal smooth conic over $X$ not the projectivization of a vector bundle?
- Feb 17: comment on "determinant of normal bundle ample": Well, that new question may be more interesting, but it is not what you asked originally.
- Feb 17: comment on "determinant of normal bundle ample": This is an answer to the original question, which just asked for a formula for the determinant, and I gave such a formula here. The original question did not even mention the word "ample"!
- Feb 17: comment on "determinant of normal bundle ample": @user45766: It is extremely unpleasant that you have completely deleted the original question, which just asked for a formula for the determinant of the normal bundle. I gave you precisely such a formula, but now my answer looks like a complete non sequitur because of your modifications. Please modify your post so that the original question is re-established, and add your question on ampleness below.
The MIIS Eprints Archive: Metadata Visibility matches "Always Show" AND Problem Sectors matches "Energy and utilities"

- Ockendon, H., Stoykov, S., Zorawik, T., Dimova, S., Georgiev, I., Kolkovska, N., Todorov, M. and Vasileva, D. (2013) Prediction of sanding in subsurface hydrocarbon reservoirs. [Study Group Report] http://www.maths-in-industry.org/miis/634/
- Diakite, I., DiLorenzo, T., Edwards, D.A., Emerick, B., Fang, R., Jing, F., Li, L., Miller, J., Panaggio, M., Peace, A., Raymond, R., Sun, Y., Wolff, E. and Zumbrum, M. (2012) Fuel Cell Assembly Process Flow for High Productivity. [Study Group Report] http://www.maths-in-industry.org/miis/612/
- Arculus, R., Christiansen, M., Devine, M., Dempsey, C., Fennell, J., Gleeson, J., Hunter, G., O'Sullivan, K. and Nuttall, B. (2011) Electricity price predictability and quantifying customer suitability. [Study Group Report] http://www.maths-in-industry.org/miis/610/
- van Mourik, S., Bierkens, J. and Stigter, H. (2009) DHV water pumping optimization. [Study Group Report] http://www.maths-in-industry.org/miis/597/
- Palvolgyi, D., Csapo, G. and Wortel, M. (2011) Strategic bidding in a primary reserve auction. [Study Group Report] http://www.maths-in-industry.org/miis/588/
- Dickinson, P., Hulshof, J. and Ran, A. (2011) Optimal Flood Control. [Study Group Report] http://www.maths-in-industry.org/miis/586/
- Poppe, K., Bouwe van den Berg, J. and Blank, E. (2010) Thruster Allocation for Dynamical Positioning. [Study Group Report] http://www.maths-in-industry.org/miis/583/
- van den Akker, M., Bloemhof, G. and Bosman, J. (2010) Optimal Distributed Power Generation Under Network-Load Constraints. [Study Group Report] http://www.maths-in-industry.org/miis/582/
- Brown, M. and MacManus, L. (2011) Effect of distributed energy systems on the electricity grid. [Study Group Report] http://www.maths-in-industry.org/miis/577/
- Hicks, P. and Hall, C. (2011) Modelling the effect of friction on explosives. [Study Group Report] http://www.maths-in-industry.org/miis/575/
Structural Properties of Double-Walled Carbon Nanotubes

V.K. Jindal*, Shuchi Gupta, and K. Dharamvir
Department of Physics, Panjab University, Chandigarh 160014, INDIA

This is an abstract for a presentation given at the Ninth Foresight Conference on Molecular Nanotechnology. There will be a link from here to the full article when it is available on the web.

Carbon nanotubes are being extensively studied for their interesting bulk, structural, mechanical and electronic properties. They exist as single nanotubes of various kinds as well as materials in the form of bunches of either single-wall (SWNT) or multi-wall (MWNT) formations. Recent technological advances exploiting their semiconducting properties have tremendously increased the effort to understand some basic properties of carbon nanotubes, especially those related to their electronic properties. The present paper investigates structural properties of double-walled nanotubes by calculating the rotational and translational freedom during movement of one nanotube with respect to the other. This investigation will lead us to understand the magnitude of the fixation energy of such double-walled nanotubes at any given temperature. It will also help in calculating properties of bunches of such nanotubes.

For our model study, we represent the inter-nanotube interaction potential, derivable from carbon atom-atom interaction potentials, in a form close to the van der Waals potential energy. Such atom-atom interaction potentials have been found very useful for representing various carbon-based molecular crystals, like fullerene solids [1] and bunches of carbon nanotubes in triangular lattices. The basic procedure of the model calculation is to write the total potential energy of long double-wall nanotubes of different radii as dictated by their (n,m) indices.
It turns out that the minimum-energy configuration requires the inter-nanotube distance to be around 0.339 nm, in close agreement with the value measured by Ebbesen and Ajayan [2]. One of the tubes of the double-wall structure is rotated about the long axis to obtain the total energy of the combination as a function of rotational angle, measured from an initial configuration. Similarly, the outer tube is translated along the long axis (z-axis) to obtain the total energy as a function of the shift Δz. The potential energy in the minimum-energy configuration comes out to be 0.023 eV/atom, the number of atoms taken here being the total number of atoms on the surfaces of the two nanotubes. The barrier height in the case of rotation is around 0.007 meV/atom, with a periodicity of around 180°. Similarly, for translation along the z-axis, the barrier height comes out to be around 0.008 meV/atom, with a z-periodicity of around 2.46 Å (equaling the lattice constant of a graphite sheet). The potential used here has also been employed to calculate the interlayer separation, the Young's modulus along the c-axis, and the energy per atom between two graphite sheets; the results for Young's modulus and interlayer separation compare very well with the measured values existing in the literature, indicating that the energy estimate should be acceptable. However, striking differences have been observed with the density-functional (LDA) calculation of Charlier and Michenaud [3]. This discrepancy needs to be understood. We also estimate the potential energy per atom of multi-wall nanotubes by increasing the number of walls, and observe that the energy saturates as walls are added, with nearly 90% of the energy obtained on formation of an 8-wall nanotube. Models for such double- or multi-wall nanotubes provide a platform for forming multiwall nanotube materials in the form of bunches, like SWNT bunches.
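To see how an atom-atom 12-6 potential reproduces a graphitic spacing, here is a minimal continuum sketch of my own (in Python): for a single atom above an infinite graphene sheet, integrating a Lennard-Jones pair potential over the plane gives a closed form whose minimum sits at $d=\sigma$. The LJ parameters below are generic graphitic carbon values chosen for illustration, not the authors' fitted constants.

```python
import numpy as np

# Plane-integrated 12-6 Lennard-Jones:
#   U(d) = 4*pi*eps*rho*sigma^2 * [ (1/5)(sigma/d)^10 - (1/2)(sigma/d)^4 ]
# Parameters are assumed (illustrative), not from the paper.
EPS = 0.0024      # pair well depth, eV
SIGMA = 3.41      # LJ length parameter, angstrom
RHO = 0.38        # graphene areal density, atoms / angstrom^2

def sheet_energy(d):
    """Energy (eV) of one atom a distance d above an infinite graphene sheet."""
    s = SIGMA / d
    return 4.0 * np.pi * EPS * RHO * SIGMA**2 * (s**10 / 5.0 - s**4 / 2.0)

d = np.arange(2.5, 5.0, 0.005)
e = sheet_energy(d)
d_min = d[np.argmin(e)]
print(d_min)   # close to SIGMA = 3.41 angstrom
```

For this 12-6 form, setting $U'(d)=0$ gives $d=\sigma$ exactly, which is why a fitted $\sigma\approx 3.4$ Å lands near the measured graphite interlayer spacing of about 3.35 Å.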
In the case of MWNT bunches, the non-rigidity of the MWNT in the form of rotational and translational motion would be a useful input.

1. V.K. Jindal, Shuchi Gupta and K. Dharamvir, Phantoms (nanotubes) 8, 5 (2000).
2. T.W. Ebbesen and P.M. Ajayan, Nature (London) 358, 220 (1992).
3. J.-C. Charlier and J.-P. Michenaud, Phys. Rev. Lett. 70, 1858 (1993).

*Corresponding Address:
V.K. Jindal, Department of Physics, Panjab University, Chandigarh 160014, INDIA
phone: 91-172-534458, fax: 91-172-783336, email: jindal@pu.ac.in

Foresight Programs
ASSET Test Structure

Test Breakdown of ASSET

The test is required for the accurate identification of a student's level of skills, strengths and weaknesses related to specific subject areas, and consequently for placing the student in postsecondary courses. This is done through a series of tests that demarcate the skills acquired by students in high school. It does not concentrate on testing the memory of students; rather, it helps to determine whether the students have understood the concepts and are able to apply their knowledge. It provides an opportunity to improve one's shortcomings by identifying them at an early stage. The results help in the following ways.
□ Students can be guided towards the most suitable course that can enhance their knowledge and abilities.
□ Students can be placed into postsecondary institutions on the basis of the results.
□ The results give teachers insight into the knowledge that the student has grasped.
□ The results show the subject areas in which the student needs more attention.

The score reveals the abilities of students at an early stage in their academics, which helps to strengthen the foundation. This helps students to prepare for higher levels of education and exams. It helps students improve where they falter, at a stage where the loss is minimal. If the result of this test is ignored, it might cause a greater loss at a later stage, when it becomes extremely difficult to improve. Students appearing for the test should pay attention to the structure and content of the test, since this makes it possible for them to exhibit their skills in the most efficient and effective manner. The structure is explained below.

ASSET Test Structure

The tests included are:
● Basic Skills Tests – Three skills are tested by three tests: the Writing Skills Test, the Numerical Skills Test and the Reading Skills Test.
● Advanced Mathematics Tests – There are four tests that assess advanced mathematical skills in students.
These are Elementary Algebra, Intermediate Algebra, College Algebra and Geometry. Basic Skills Tests Writing Skills Test □ Testing time – 25 minutes □ Number of testing items – 36 items □ Type of testing items – Multiple-choice questions. There are three prose passages followed by 12 multiple-choice questions each. □ Test content – Questions in this test pertain to punctuation, grammar, sentence structure, organization, strategy and style. □ Skills required – With these questions, it is possible to judge the rhetorical skills of students along with usage and mechanics. The ability of students to make judgments regarding the errors in sentence structure and related grammatical mistakes is tested. Students are required to select the best answer option, which can be done to precision only if they have clear ideas about the content. Numerical Skills Test □ Testing time – 25 minutes □ Number of testing items – 32 items (25 – Arithmetic, 7 – Pre-Algebra) □ Type of testing items – Multiple-choice questions □ Test content – The content is taken from topics of Arithmetic like operations with whole numbers, decimals and fractions, factors, common factors, multiples, ratios and order of operations for real numbers. The Pre-Algebra questions include questions on prime numbers, composite numbers, complex fractions, signed numbers, absolute values, scientific notation, and square roots. □ Skills required – The skills tested in this test are the basic numerical skills acquired by students in their high school coursework. In order to answer these questions, students should exhibit word problem solving skills that require basic knowledge of arithmetic. Calculators are not permitted in this test and hence students should be able to work out the problem mentally or with the use of a scratch paper. Reading Skills Test □ Testing time – 25 minutes □ Number of testing items – 24 items (12 referring questions, 12 reasoning questions) □ Type of testing items – Multiple-choice questions. 
There are three passages of about 375 words each, followed by sets of 8 questions. □ Test content – The passages are taken from general topics of prose fiction, business, social studies etc. The questions either refer to the passage directly, or are based on reasoning out the derived and implied meanings. □ Skills required – This test tests basic reading comprehension skills. Hence, students must be able to understand the given text by reading it. Since there are direct as well as indirect questions asked, the students are required to exercise their logical reasoning and thinking abilities to come to the conclusion. However, one has to strictly answer the questions within Advanced Mathematics Tests Elementary Algebra □ Testing time – 25 minutes □ Number of testing items – 25 items (Pre-Algebra – 5, Elementary Algebra – 16, Intermediate Algebra – 4) □ Type of testing items – Multiple-choice questions □ Test content – Evaluation and simplification of algebraic expressions, quadratic equations, polynomials, integer exponents, rational expressions and solutions of linear equations. Use of calculators is allowed, however only certain specific calculators can be used. □ Skills required – The skills required to answer these questions match up to the skills of students typically acquired in algebra course of first year high school. Intermediate Algebra □ Testing time – 25 minutes □ Number of testing items – 25 items (Elementary Algebra – 5, Intermediate Algebra and Coordinate Geometry – 16, College Algebra – 4) □ Type of testing items – Multiple-choice questions □ Test content – Solutions of polynomial equations by factoring, graphs of linear equations, operations with radical and rational expressions, the distance formula, slope of a line, solution of linear inequalities and simplification of radicals. Specific calculators can be used in this test also. 
□ Skills required – The skills tested here are similar to the skills acquired in the second year of high school algebra course. College Algebra □ Testing time – 25 minutes □ Number of testing items – 25 items (Intermediate Algebra – 3, College Algebra – 16, Trigonometry – 6) □ Type of testing items – Multiple-choice questions □ Test content – The content covers exponential functions, factorials, operations with complex numbers, composition of functions, inverses of functions, linear inequalities and graphs of polynomials. Students can make use of specific calculators for this test. □ Skills required – The skills tested are that of advanced level mathematics acquired in first year college algebra course. □ Testing time – 25 minutes □ Number of testing items – 25 items (Triangles - 12, Circles – 6, lines – 3, other figures – 4) □ Type of testing items – Multiple-choice questions □ Test content – The questions asked in this test include the topics that come under circles, lines, triangles and other figures. Use of calculators is allowed. □ Skills required – Skills acquired in high school geometry course are tested in this part. Scoring the ASSET Battery of Tests There is no pass or fail score. However, there could be minimum acceptable score level for each educational program as offered by institutions. If this score is not met, then students may have to take additional developmental courses to bring up their skill level. All testing items carry equal marks. That is why students are also reminded in the guidelines to be followed during the exam that they must not linger too long on a particular question. On the basis of the attempt, students are provided with a detailed analysis of their performance in three reports about advising, educational planning and transfer planning. Succeeding in ASSET Taking ASSET seriously is essential in order to derive the benefits. 
Since it identifies the areas that require more attention in the initial stages of the educational performance of students, it becomes instrumental in their development and future performance. Hence, one must familiarize oneself with the various tests that ASSET comprises, and prepare well for them to make the most of this testing program.
{"url":"http://www.testpreppractice.net/ASSET/asset-test-format.aspx","timestamp":"2014-04-20T21:11:51Z","content_type":null,"content_length":"23037","record_id":"<urn:uuid:cdc51461-327b-4761-b86c-281c0b369fb6>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
entering arrays

04-01-2007 #1
Registered User
Join Date Mar 2007

entering arrays

I am trying to get the first part of my program running and this is what I have so far:

#include <iostream>
using namespace std;

const int Size = 20;

int main ()
{
    char FirstNumber [Size];
    char SecondNumber [Size];
    int Idx;
    int Idx2;
    int Num1 [Size];
    int Num2 [Size];

    cout<<"Enter the first number, less than 20 digits long, please : ";
    int Count = 0;
    while (Count < Size && cin>>FirstNumber [Count])
        for ( Idx = 0; Idx < Count ; Idx++)
        {
            cin>>FirstNumber [Idx];
            int Num1 [Idx] = FirstNumber [Idx] - '0'; //changing char to int
        }

    cout<<"Enter your second number, less than 20 digits long, please";
    int Count2 = 0;
    while (Count2 < Size && cin>>FirstNumber [Count2])
        for (Idx2 = 0; Idx2 < Count2 ; Idx2++)
        {
            cin>>SecondNumber [Idx2];
            int Num2 [Idx2] = SecondNumber [Idx2] - '0';
        }

    return 0;
}

So I enter the first number but then nothing happens, it doesn't go on to ask for the second number. I am trying to count how many single digits are entered and then trying to read those digits into the array one by one. Also when I try to convert the characters to integers I get an error. Anyone have any ideas?

use getline to enter the line then parse the line to extract the number

The first 90% of a project takes 90% of the time, the last 10% takes the other 90% of the time.

not to be sarcastic, but what exactly are you hoping to achieve with this program? Looks like a bunch of over-complicated code for no apparent reason...

for an assignment I am supposed to ask the user for two numbers less than 20 digits long and put them in an array as characters and then change them to integers and then find the sum of the two numbers by adding each digit of the arrays together. I am trying to read in the numbers if the user enters less than 20 digits. If I enter 20 digits it works fine but I need to be able to read it if it is less than 20.

The way you have it written, the two numbers must be exactly 20 characters long, no less.
You need some way to exit the loop if a non-digit character is entered. Also you seem to get each number from cin twice; you have two loops that get the number from cin.

It is too clear and so it is hard to see. A dunce once searched for fire with a lighted lantern. Had he known what fire was, He could have cooked his rice much sooner.

what i would do to get the first number is the following:

char fstNum[28];
for (int i = 0; i < 20; i++) {
    cin >> fstNum[i];
    if (fstNum[i] == '') {
        break;
    }
}

...or something like that... ...and the same could be done for the second number... ...i made a char with [28] jst to be safe, cuz it could b dangerous if u tried to put something into fstNum[21], and u only had up to [20].

PS: I haven't tried that, so it's just a vague idea of what I might attempt to do if I was in your situation.
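Putting the thread's suggestions together, here is a minimal sketch of the digit-reading step: read the whole line once, convert each character as you go, and stop at the first non-digit or at 20 digits. The function name and interface are my own choices, not from the assignment:

```cpp
#include <cctype>
#include <string>

// Convert up to maxDigits leading digit characters of `line` into their
// integer values, storing them in `out`. Returns how many were stored.
// Stops at the first non-digit, so entering fewer than 20 digits is fine.
int readDigits(const std::string& line, int out[], int maxDigits)
{
    int count = 0;
    while (count < maxDigits &&
           count < static_cast<int>(line.size()) &&
           std::isdigit(static_cast<unsigned char>(line[count])))
    {
        out[count] = line[count] - '0'; // same char-to-int trick as above
        ++count;
    }
    return count;
}
```

Call it on a line obtained with std::getline(std::cin, line), as the first reply suggests, so each number is read from cin exactly once. Note that the compile error in the original comes from re-declaring the array inside the loop: `int Num1 [Idx] = ...` attempts to declare a new array, whereas a plain `Num1[Idx] = ...` assigns to an element.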
{"url":"http://cboard.cprogramming.com/cplusplus-programming/88114-entering-arrays.html","timestamp":"2014-04-23T23:56:46Z","content_type":null,"content_length":"62005","record_id":"<urn:uuid:647958fe-7aa9-4a83-8efc-5e5af39244b8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Cog Spread

01-12-05, 10:36 PM #1
King of the Hipsters
Join Date Jan 2005
Bend, Oregon
My Bikes Bianchi Pista

Cog Spread

If a person has two cogs on his rear hub, how great a tooth spread between the two cogs can he have with the same length chain: 16 and 17; 16 and 18; 16 and 19; or...?

depends on how long your rear dropouts are. i only run a difference of 2 teeth on my wheel, on dropouts that are 3cm effective adjustment.

Almost is only for horseshoes and hand grenades.

If you are riding with the tools needed to flip your wheel around, why can't you carry a little chain tool and a few links, too? If you don't plan to make the switch on the fly, why care?

RobbieIG wrote: "If you are riding with the tools needed to flip your wheel around, why can't you carry a little chain tool and a few links, too?"

Right now I ride a geared hybrid commuter. I don't know about these things. It hadn't occurred to me that riders with two cogs would change their chain length. I had assumed they would simply flip their wheel over and adjust the position of the axle in the dropouts for proper chain tension. My question had to do with how great a spread in the number of teeth of the two cogs one could accommodate with typical horizontal dropouts, say as on a Bianchi Pista.

At 58 years of age, I presently follow a rather ambitious plan of exercise given to me by my physical therapist to overcome some earlier injuries. The plan includes a lot of bike riding. In order to keep myself motivated, I hold this image of a fixed gear bike out in front of me like a carrot. This fixed gear carrot gets me up at 0430 in the morning and keeps me riding even in the snow and the ice. I had hoped to start riding a fixed gear bike one day a week, starting some time this spring. The key to my recovery and progress involves making slow, careful progress and listening to my body. I had hoped to retain the ability to drop down in gear inches if my knees started to complain.
I can presently ride all day, anywhere in my community, at 72 inches without any pain issues. I can do this about three days in a row before giving myself a break. I can do a whole day at 81 inches without consequences, but not two days in a row. Anyway, the fixed gear bike really motivates me. I love beautiful machines and nothing, in my mind, compares in beauty to some of the fixed gear bikes I have seen in this forum. I want to do this very much, and I want to do it the smart way without hurting myself. I don't have too many more years left to do this. So, if I have to carry additional tools and some links, I will. Or, if I can choose the right tooth spread, and carry fewer tools and parts and still meet my needs, I'll do that. So, the question remains, on a bike like the Bianchi Pista, with a 48 tooth chain ring, how much of a tooth spread could I accommodate by moving the axle fore and aft? I can figure it out myself, I guess, when I get the bike. Maybe I can figure it out right now. I understand from this forum that one tooth on a cog equals 1/8 inch axle travel. I suppose then, I would build a chain that gave me the most rearward possible axle position in my drop outs, and then measure how far I could still move the axle forward, in inches, and divide by 1/8. That would tell me how many more teeth I could have on my other cog. Does that sound right?

I have set up a rear hub with a 16 and 18 tooth cog. I set the chain with the 18 tooth and the rear hub mounted close to the front of the dropout. When the wheel is flipped the hub only moves back about 1/2". If the bike had a rear brake it would be a drag. But without a rear brake it's a piece of pie.

Ken Cox
If a person has two cogs on his rear hub, how great a tooth spread between the two cogs can he have with the same length chain: 16 and 17; 16 and 18; 16 and 19; or...?

one tooth moves axle exactly 1/8", 2=1/4", 4=1/2", and so on.
so if your drop-outs or track ends are an inch long, they will be able to take max 4, but 3 is probably more realistic. just measure the length of your track-ends, drop-outs, and subtract your axle width, then you know how much difference your bike can take.

another option is to set up your wheel with whatever cogs you want, and use a connex chain. figure out how much extra chain you need for the larger cog, and make up a little section of chain that long, with a connex link on it. then, when you flip your wheel over just add in that length of chain, easy peasy, and no chain tool needed.

Or, set up a double front chainring and then use spacers and redishing on the rear hub so that you can get a straight chainline while running a different gear ratio with the same number of teeth. So, for example, you could run a 14t and 16t cog in back, and a 44t and 42t ring in the front. (I'm actually not sure this is a useful idea.)

First off, congratulations on your interest in FG. I think you will find it to be the most satisfying bike riding experience of your life. I think that I can safely get a 3 tooth difference from 15 to 18 on my Pista (48T front) without adjusting the chain length but I'm not sure right now. I will have to check when I get home.

There is something that you should consider if you are going to run two fixed cogs: A lot of track hubs are inexplicably designed to accept a fixed cog on one side and a freewheel on the other (or two freewheels but not two fixed cogs), including the hub on the stock wheel of a Bianchi Pista. These are commonly referred to as fixed/free flip/flop hubs. To run the setup you are speaking of you would need a fixed/fixed flip/flop hub, which limits your options. You can go with Phil Wood or Shimano Dura Ace and break the bank, or Surly, which is significantly cheaper and which the people in this forum will praise up and down the street. In any case you will need to purchase a rear wheel, which can cost some $.
Or if your riding is going to be really low impact you can have a bum bike fixed setup (fixed cog, lotsa Loctite and a BB lockring) on the freewheel side of the stock Bianchi wheel. This may be dangerous and since you are a beginner I wouldn't recommend it... although some would say you wouldn't have a problem. Good Luck!

One thing you might want to consider is that with the fixed/free option you can give yourself coasting on the easier gear, which, if you really want to rest, is where it's at. An easier uphill gear can turn into a leg mangling spinfest downhill.

Nice to have you around. Proud of you for your upcoming fitness goals. On your pista, are you rockin' a hub with two sides, both fixed? fixed/freewheel? There's gotta be some super cheap chains out there that have master links. These allow removal/installation of chains with minimal tools. An easy fix to your "cog spread" dilemma would be to simply carry an additional chain in a zip loc bag. Now, perhaps I misinterpreted your question. Are you interested in changing gear ratios while out on a ride? Are you interested in changing gear ratios throughout the week? Each scenario could have a completely different solution. You could always be on the lookout for a cheap second wheel, too. In the fixed world, it's always worth having an extra rear wheel...and chain...and cog.

A person never knows how much he has to learn until he starts to learn it.
So, I thought, since the Pista had two sides (I didn't know one side favored a freewheel) I could have the option of an easier gear to learn with, and, similarly, an easy gear to get me home from work if my knees complained. My physical therapist and my knee doc think I can do this if I approach it very carefully. I just want to stack the deck for success. Thechamp's post got me to thinking a freewheel option in the beginning might serve me very well on the careful side. Many good options presented and much to learn. I cannot remember anything that has captured my imagionation as much as has this fixed gear bike idea (I mean, other than girls in my younger days). The concept and the bikes themselves radiate beauty. It amazes me how many variations riders can come up with such a simple idea. Ken Cox Thechamp's post got me to thinking a freewheel option in the beginning might serve me very well on the careful side.... that would probably be the best option, (assuming you're putting a brake on there anyways), just about every basic stock track bikes such as the pista comes with the rear hub as a fixed and then freewheel on the other side Ken Cox I cannot remember anything that has captured my imagionation as much as has this fixed gear bike idea (I mean, other than girls in my younger days). The concept and the bikes themselves radiate beauty. It amazes me how many variations riders can come up with such a simple idea. It amazes me that in the world of 10 speed durace, campy shifty crap, and carpet fiber shiz bikes there are plenty of people content with riding a 1 speed fixed gear steel bike, just as they did 100 years ago... Makes you wonder about the value of technology. that would probably be the best option, (assuming you're putting a brake on there anyways), just about every basic stock track bikes such as the pista comes with the rear hub as a fixed and then freewheel on the other side A small caveat with the bianchi pista track bike. 
The bike is not drilled for a rear brake, so you would have to drill the hole your self, or have your shop do it... some shops will not do it for liability reasons, like a shop near me when I inquired about having them drill the frame for a rear brake.... Well i got home and was motivated to check out the gear situation on my Pista but my new brand spankin' new Colnago fork was waiting for me so i'm a little preoccupied right now. I did however manage to drag myself away from my new best friend for a few minutes to do some measuring. I think the 1/8" theory is a little flawed after measuring some things. It is true that the diameter of a cog changes by 1/8" per tooth. Here are my measurements for what I had lying around: 14T cog = 2 1/2" 16T cog = 2 3/4" Difference = 2/8" I then measured the dropout and the track nut to find the useable amount of space in the dropout and this is what I found: Dropout = a little more than 1 1/2" so for arguments sake I will say 1 5/8" Track nut = 3/4" diameter Useable dropout space = 1" In my calculation of useable space I figured that you wouldn't want any of the track nut hanging out the back of the dropout so I subtracted half of the diameter of the track nut from the total. I also figured you could get the track nut a little farther forward in the dropout so I only subtracted 1/4" for the front. If you draw a line that is 1" long and mark it at each 1/8" you get a total of 9 marks. By the 1/8" theory you would be able to fit 9 different gears in my droputs without lengthening the chain! 3cm sounds a little more reasonable because 3cm is equal to 1.18" which would mean I could squeeze 3 different gears in if I was lucky. This has piqued my interest but I lack the know-how and the motivation to dust off the scientific calculator to figure out how to come up with a formula for exactly how far 1 tooth moves the axle in the dropout, any ideas? Any engineers out there? 
This has piqued my interest but I lack the know-how and the motivation to dust off the scientific calculator to figure out how to come up with a formula for exactly how far 1 tooth moves the axle in the dropout, any ideas? Any engineers out there?

1/8" of an inch.

1/8" of an inch.

So are you saying I can fit 9 different gears on my hub without lengthening my chain or are you saying my calculations were wrong?

A small caveat with the bianchi pista track bike. The bike is not drilled for a rear brake, so you would have to drill the hole yourself, or have your shop do it... some shops will not do it for liability reasons, like a shop near me when I inquired about having them drill the frame for a rear brake....

some shops will go as far as saying that they won't install a front brake on a track bike because the track forks aren't designed to take the intense forward/rearward stresses braking requires of them, but, anyways... track bikes are a lot like a blind man in an orgy. you just gotta get into it and feel your way around.

LOL well put. Where is my cane?

I'm not one for fawning over bicycles, but I do believe that our bikes communicate with us, and what this bike is saying is, "You're an idiot." BikeSnobNYC

Meet my friend pi. Each tooth adds 1/2" linearly (the length of half of a link). However that 1/2" is wrapped around a circle. If you remember from geometry class, the circumference of a circle is 2(pi)r where r is the radius. So if our first cog has circumference x, a cog with one more tooth will have circumference x+1/2".

2(pi)r = x
r = x/2(pi)
d = x/pi (the diameter of the cog)

2(pi)r' = x+0.5 (we'll assume x is a quantity in inches).
(x+0.5)/2(pi) = r' (the radius of the new cog)
(x+0.5)/pi = d' (the diameter of the new cog)

d' - d = (x+0.5)/pi - x/pi = 0.5/pi (difference in diameter, which will be how much your chainline will shorten).

So now we've determined the difference in diameter between a cog and a cog with one more tooth to be (x+0.5)/pi - (x/pi) = 0.5/pi = some irrational number approximately 0.159, which is sort of close to 1/8 = 0.125. Actually, maybe your chainline shortens by 0.25/pi as it's only affected by radial distance... right? So then it would be about 0.0796, which is not at all like 1/8.

Last edited by bostontrevor; 01-13-05 at 05:10 PM.

Meet my friend pi. Each tooth adds 1/2" linearly (the length of half of a link). However that 1/2" is wrapped around a circle. If you remember from geometry class, the circumference of a circle is 2(pi)r where r is the radius. So if our first cog has circumference x, a cog with one more tooth will have circumference x+1/2".

2(pi)r = x+0.5 (we'll assume x is a quantity in inches).
(x+0.5)/2(pi) = r (the radius of the new cog)
(x+0.5)/pi = d (the diameter of the new cog)

So now we've determined the difference in diameter between a cog and a cog with one more tooth to be (x+0.5)/pi - (x/pi) = 0.5/pi = some irrational number approximately 0.159, which is sort of close to 1/8 = 0.125.

You know the funny thing was I almost ended that post with "bostontrevor I know you know the answer." and then another cheesy Pi/Pie reference which you were so set up for but failed to deliver.

edit: even though the difference in diameter is 1/8", that does not necessarily mean that the axle moves 1/8", right?

Last edited by jinx_removing; 01-13-05 at 05:18 PM.

Dammit! I blew it! "Ok, let's assume that you start with a cherry pie with 6 slices. Now compare this to a peach pie with 7 slices...." No good?

Anyhow, I just did a little back of the envelope with a 14 & 15 tooth cog and I'm pretty sure the right answer is each tooth adds pi/4 inches to your chain line. Hmmm..
But it's also more complicated than that because the amount of chain you need is actually the boundary of the shape defined by the hemispheres of your cog and chainwheel connected by straight lines (the pie wedge). So the angles of that wedgish shape will change and it may not be a strictly pi/4 change.... Hmm...

See that's what I was thinking. This seems like it would be pretty complicated to figure out. Even though the diameter of the cog may be 1/8" larger, the chain is engaging it at more than one point so it must be more. Also if you go by the 1/8" theory the chainline would only move back approximately 1/2 of that (1/16") since the chain is only engaging a little more than half of the cog at one time. Wouldn't you need to find the length of the arc around the cog at which the chain engages it and subtract it from that of the arc at which the chain engages the larger cog? This would be pretty hard to figure out since the arc will change slightly depending on which chainring you have on the front and the size of the cog in the back.

BTW: Nice try with the pie thing but it seemed a little forced.

Last edited by jinx_removing; 01-13-05 at 05:36 PM.
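For what it's worth, the "any engineers out there?" question has a tidy back-of-the-envelope answer if you assume the chain wraps about half of each sprocket, so that chain length ≈ 2·D + π·(R + r), where D is the center distance and R, r are the pitch radii. Under that assumption each added tooth moves the axle by exactly a quarter of the chain pitch, i.e. 1/8" — the rule of thumb quoted early in the thread. A quick sketch (function names are mine, and the half-wrap model is an approximation, not exact chain geometry):

```cpp
#include <cmath>

// Back-of-envelope chain geometry. Model: chain length ~ 2*D + pi*(R + r),
// where D is the chainring-to-cog center distance and R, r are pitch radii.
// This assumes the chain wraps roughly half of each sprocket.

const double PI = 3.14159265358979323846;
const double PITCH = 0.5; // chain pitch in inches (standard 1/2" pitch)

// Pitch radius of a sprocket with n teeth: pitch circumference = n * PITCH.
double pitchRadius(int teeth) {
    return teeth * PITCH / (2.0 * PI);
}

// Change in center distance (inches) needed to keep chain length fixed
// when swapping a fromTeeth cog for a toTeeth cog. Negative means the
// axle moves forward, toward the bottom bracket.
double centerDistanceChange(int fromTeeth, int toTeeth) {
    double dr = pitchRadius(toTeeth) - pitchRadius(fromTeeth);
    // 0 = 2*dD + pi*dr  =>  dD = -pi*dr/2, which works out to -PITCH/4
    // per added tooth: exactly the 1/8" rule of thumb.
    return -PI * dr / 2.0;
}
```

So the 0.159" figure in the derivation above is the change in cog diameter, not axle travel; the axle itself moves 1/8" per added tooth regardless of cog size, and jinx_removing's 1" of usable dropout would indeed span 8 one-tooth steps.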
{"url":"http://www.bikeforums.net/singlespeed-fixed-gear/83169-cog-spread.html","timestamp":"2014-04-21T15:41:56Z","content_type":null,"content_length":"128407","record_id":"<urn:uuid:ae2f2a1f-bf46-45a4-9f4a-b443055bafc0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Why aren't weather predictions always accurate?

It comes down to chaos theory, which came about thanks to early weather simulations. In the early 1960s, meteorologist Edward Lorenz was using a computer to work on formulas to predict weather patterns using a simple mathematical model. He found the results he was getting were grossly inaccurate. While the results the computer printed had a precision of six digits past the decimal point (1.000000), the calculations were done to the third digit (1.000). While minute, these changes were enough to throw off the model. Lorenz published his discovery in 1963, and a later talk of his titled "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" popularized the term "butterfly effect." Although his model was relatively simple,* even the most minute change would eventually have major consequences. This meant that accurate weather predictions would be impossible, because there was no way all of the contributing factors could be taken into account.

Computer technology has improved tremendously since then, with some of the most powerful computers in the world used for weather modeling. This has improved weather forecasting, but forecasts still can't take everything into account to guarantee accuracy.

*His formula had twelve variables for predicting overall weather patterns. The formula currently used for calculating the heat index uses eleven variables.
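Lorenz's model was a system of differential equations, but the sensitivity he stumbled on shows up in any chaotic system. As a toy illustration (this uses the logistic map, a textbook chaotic example — not Lorenz's actual equations), compare two runs whose starting values differ only by the kind of truncation his printout introduced:

```cpp
#include <cmath>

// Toy demonstration of sensitive dependence on initial conditions using
// the logistic map x -> 4x(1-x), a standard chaotic system (not Lorenz's
// weather model). Two runs start almost identically; we track how far
// apart they ever get over `steps` iterations.
double maxDivergence(double x, double y, int steps) {
    double worst = std::fabs(x - y);
    for (int i = 0; i < steps; ++i) {
        x = 4.0 * x * (1.0 - x);
        y = 4.0 * y * (1.0 - y);
        double d = std::fabs(x - y);
        if (d > worst) worst = d;
    }
    return worst;
}
```

Starting from 0.123456 and its rounded value 0.123, the two runs agree closely for the first few steps and then separate completely — which is why small, unmeasured factors eventually swamp any long-range forecast.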
{"url":"http://www.strangequestions.com/question/775/Why-arent-weather-predictions-always-accurate.html","timestamp":"2014-04-18T23:16:10Z","content_type":null,"content_length":"57561","record_id":"<urn:uuid:af4aa3a2-fb3a-43b3-aa85-a4b9be61f191>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Univariate GLM, ANOVA, & ANCOVA

Garson, G. D. (2012). Univariate GLM, ANOVA, & ANCOVA. Asheboro, NC: Statistical Associates Publishers.

ASIN number (e-book counterpart to ISBN): B0092TNOGW. © 2012 by G. David Garson and Statistical Associates Publishers. Worldwide rights reserved in all languages and on all media. Permission is not granted to copy, distribute, or post e-books or passwords.

Univariate GLM is the general linear model, now often used to implement such long-established statistical procedures as regression and members of the ANOVA family. It is "general" in the sense that one may implement both regression and ANOVA models. One may have fixed factors, random factors, and covariates as predictors. One may also have multiple dependent variables, as discussed in a separate section on multivariate GLM, and one may have linear transformations and/or linear combinations of dependent variables. Moreover, one can apply multivariate tests of significance when modeling correlated dependent variables, rather than relying on individual univariate tests as in multiple regression. GLM also handles repeated measures designs. Finally, because GLM uses a generalized inverse of the matrix of the independent variables' correlations with each other, it can handle redundant independents which would prevent a solution in ordinary regression models.

Data requirements. In all GLM models, the dependent(s) is/are continuous. The independents may be categorical factors (including both numeric and string types) or quantitative covariates.
Data are assumed to come from a random sample for purposes of significance testing. The variance(s) of the dependent variable(s) is/are assumed to be the same for each cell formed by categories of the factor(s) (this is the homogeneity of variances assumption).

Regression in GLM is simply a matter of entering the independent variables as covariates and, if there are sets of dummy variables (ex., Region, which would be translated into dummy variables in OLS regression, for ex., South = 1 or 0), entering the set variable (ex., Region) as a fixed factor, with no need for the researcher to create dummy variables manually. The b coefficients will be identical whether the regression model is run under ordinary regression (in SPSS, under Analyze, Regression, Linear) or under GLM (in SPSS, under Analyze, General Linear Model, Univariate). Where b coefficients are default output for regression in SPSS, in GLM the researcher must ask for "Parameter estimates" under the Options button. The R-square from the Regression procedure will equal the partial eta squared from the GLM regression model.

The advantages of doing regression via the GLM procedure are that dummy variables are coded automatically, it is easy to add interaction terms, and it computes eta-squared (identical to R-squared when relationships are linear, but greater if nonlinear relationships are present). However, the SPSS regression procedure would still be preferred if the researcher wishes output of standardized regression (beta) coefficients, wishes to do multicollinearity diagnostics, or wishes to do stepwise regression or to enter independent variables hierarchically, in blocks. PROC GLM in SAS has a greater range of options and outputs (SAS also has PROC ANOVA, but it handles only balanced designs/equal group sizes).

The full content is now available from Statistical Associates Publishers. Below is the unformatted table of contents.
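The automatic dummy coding that GLM performs for a fixed factor can be illustrated with a small standalone sketch; the function name, the Region example, and the choice of reference level here are mine, not part of any SPSS or SAS procedure:

```python
def dummy_code(values, reference=None):
    """One-hot (dummy) code a categorical variable, dropping a reference level,
    the way GLM does internally when a fixed factor is entered."""
    levels = sorted(set(values))
    if reference is None:
        reference = levels[0]
    kept = [lv for lv in levels if lv != reference]
    # One indicator column per non-reference level
    return kept, [[1 if v == lv else 0 for lv in kept] for v in values]

cols, coded = dummy_code(["South", "North", "East", "South"], reference="East")
print(cols)   # ['North', 'South']
print(coded)  # [[0, 1], [1, 0], [0, 0], [0, 1]]
```

Each non-reference level gets its own 0/1 column, so a factor with k levels contributes k - 1 coefficients, which is exactly what the "Parameter estimates" table reports.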
Table of Contents Overview 11 Key Concepts 15 Why testing means is related to variance in analysis of variance 15 One-way ANOVA 16 Simple one-way ANOVA in SPSS 16 Simple one-way ANOVA in SAS 20 Two-way ANOVA 23 Two-way ANOVA in SPSS 24 Two-way ANOVA in SAS 27 Multivariate or n-way ANOVA 29 Regression models 29 Parameter estimates (b coefficients) for factor levels 31 Parameter estimates for dichotomies 32 Significance of parameter estimates 32 Research designs 32 Between-groups ANOVA design 32 Completely randomized design 34 Full factorial ANOVA 34 Balanced designs 35 Latin square designs 36 Graeco-Latin square designs 37 Randomized Complete Block Design (RCBD ANOVA) 37 Split plot designs 39 Mixed design models 39 Random v. fixed effects models 41 In SPSS 41 In SAS 42 Linear mixed models (LMM) vs. general linear models (GLM) 43 Effects 43 Treating a random factor as a fixed factor 44 Mixed effects models 44 Nested designs 44 Nested designs 45 In SPSS 46 In SAS 49 Treatment by replication design 49 Within-groups (repeated measures) ANOVA designs 49 Counterbalancing 50 Reliability procedure 51 Repeated measures GLM in SPSS 51 Repeated measures GLM in SAS 51 Interpreting repeated measures output 52 Variables 53 Types of variables 53 Dependent variable 53 Fixed and random factors 54 Covariates 54 WLS weights 54 Models and types of effects 55 Full factorial models 55 Effects 56 Main effects 56 Interaction effects 56 Residual effects 59 Effect size measures 60 Effect size coefficients based on percent of variance explained 60 Partial eta-squared 60 Omega-squared 61 Herzberg's R2 62 Intraclass correlation 62 Effect size coefficients based on standardized mean differences 62 Cohen's d 62 Glass's delta 64 Hedge's g 65 Significance tests 65 F-test 65 Reading the F value 65 Example 1 66 Example 2 66 Significance in two-way ANOVA 67 Computation of F 67 F-test assumptions 67 Adjusted means 68 Lack of fit test 68 Power level and noncentrality parameter 69 Hotelling's T-Square 70 
Planned multiple comparison t-tests 70 Simple t-test difference of means 72 Bonferroni-adjusted t-test 72 Sidak test 74 Dunnett's test 74 HSU's multiple comparison with the best (MCB) test 74 Post-hoc multiple comparison tests 74 The q-statistic 75 Output formats: pairwise vs. multiple range 76 Tests assuming equal variances 76 Least significant difference (LSD) test 76 The Fisher-Hayter test 77 Tukey's test, a.k.a. Tukey honestly significant difference (HSD) test 78 Tukey-b test, a.k.a. Tukey's wholly significant difference (WSD) test 79 S-N-K or Student-Newman-Keuls test 80 Duncan test 81 Ryan test (REGWQ) 81 The Shaffer-Ryan test 83 The Scheffé test 83 Hochberg GT2 test 85 Gabriel test 87 Waller-Duncan test 87 Tests not assuming equal variances 87 Tamhane's T2 test 87 Games-Howell test 88 Dunnett's T3 test and Dunnett's C test 89 The Tukey-Kramer test 89 The Miller-Winer test 89 More than one multiple comparison/post hoc test 89 Example 89 Contrast tests 91 Overview 91 Types of contrasts 92 Deviation contrasts 92 Simple contrasts 92 Difference contrasts 92 Helmert contrasts 92 Repeated contrasts 92 Polynomial contrasts 93 Custom hypothesis tables 93 Custom hypothesis tables index table 93 Custom hypothesis tables 94 Estimated marginal means 96 Overview 96 EMM Estimates table 98 Other EMM output 101 EMM Pairwise comparisons table 101 EMM Univariate tests table 101 Profile plots 101 GLM Repeated Measures 102 Overview 102 Key Terms and Concepts 103 Within-subjects factor 103 Repeated measures dependent variables 104 Between-subjects factors 105 Covariates 105 Models 106 Type of sum of squares 107 Balanced vs. unbalanced models 107 Estimated marginal means 108 Pairwise comparisons 109 Statistics options in SPSS 110 Descriptive statistics 110 Hypothesis SSCP matrices 111 Partial eta-squared 111 Within-subjects SSCP matrix and within-subjects contrast effects. 112 Multivariate tests. 113 Univariate vs. 
multivariate models 114 Box's M test 115 Mauchly's test of sphericity 115 Univariate tests of within-subjects effects 116 Parameter estimates 118 Levene's test 119 Spread-versus-level plots 120 Residual plots 120 Lack of fit test 122 General estimable function 122 Post hoc tests 122 Overview 122 Profile plots for repeated measures GLM 125 Example 125 Contrast analysis for repeated measures GLM 127 Types of contrasts for repeated measures 128 Simple contrasts example 129 Saving variables in repeated measures GLM 130 Cook's distance 131 Leverage values 131 Assumptions 132 Interval data 132 Homogeneity of variances 132 Homogeneity of variance 133 Appropriate sums of squares 137 Multivariate normality 138 Adequate sample size 139 Equal or similar sample sizes 139 Random sampling 139 Orthogonal error 140 Data independence 140 Recursive models 140 Categorical independent variables 140 The independent variable is or variables are categorical. 140 Continuous dependent variables 140 Non-significant outliers 140 Sphericity 141 Assumptions related to ANCOVA: 142 Limited number of covariates 142 Low measurement error of the covariate 142 Covariates are linearly related or in a known relationship to the dependent 142 Homogeneity of covariate regression coefficients 143 No covariate outliers 143 No high multicollinearity of the covariates 144 Additivity 144 Assumptions for repeated measures 144 Frequently Asked Questions 145 How do you interpret an ANOVA table? 146 Isn't ANOVA just for experimental research designs? 148 Should I standardize my data before using ANOVA or ANCOVA? 148 Since orthogonality (uncorrelated independents) is an assumption, and since this is rare in real life topics of interest to social scientists, shouldn't regression models be used instead of ANOVA models? 148 Couldn't I just use several t-tests to compare means instead of ANOVA? 148 How does counterbalancing work in repeated measures designs? 149 How is F computed in random effect designs? 
150 What designs are available in ANOVA for correlated independents? 150 If the assumption of homogeneity of variances is not met, should regression models be used instead? 151 Is ANOVA a linear procedure like regression? How is linearity related to the "Contrasts" option? 151 What is hierarchical ANOVA or ANCOVA? 151 Is there a limit on the number of independents which can be included in an analysis of variance? 152 Which SPSS procedures compute ANOVA? 152 I have several independent variables, which means there are a very large number of possible interaction effects. Does SPSS have to compute them all? 152 Do you use the same designs (between groups, repeated measures, etc.) with ANCOVA as you do with ANOVA? 152 How is GLM ANCOVA different from traditional ANCOVA? 153 What are paired comparisons (planned or post hoc) in ANCOVA? 153 Can ANCOVA be modeled using regression? 153 How does blocking with ANOVA compare to ANCOVA? 153 What is the SPSS syntax for GLM repeated measures? 154 What is a "doubly repeated measures design"? 155 Bibliography 156 Pagecount: 160
{"url":"http://www.statisticalassociates.com/glm_univariate.htm","timestamp":"2014-04-17T10:03:24Z","content_type":null,"content_length":"21762","record_id":"<urn:uuid:73428702-da3c-478d-988a-965d64df21f2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Real analysis

May 13th 2009, 02:56 AM #1

Hi there. Show that $f(x)=\sum_{n=0}^\infty a^n\cos(b^n\pi x)$ is continuous on $\mathbb{R}$, where $0 < a < 1$ and $ab > 1 + \frac{3\pi}{2}$. Can you just tell me how I can prove it, i.e. what theorems or definitions I should use?

May 13th 2009, 09:47 AM #2 (member since Nov 2006)

First let $f_n(x)=a^n\cos(b^n\pi x)$; then $|f_n(x)|\le a^n=M_n$. Clearly each $f_n(x)$ is continuous. Now define $f(x)=\sum_{n=0}^\infty f_n(x)$, so if $\sum f_n\to f$ uniformly we will know that $f$ is continuous. By the M-test, $\sum f_n\to f$ uniformly, since $\sum_{n=0}^\infty M_n<\infty$ (a geometric series with common ratio less than 1). If you don't know what the M-test is, just follow the above link.

As for the condition on $ab$: it is not needed to show continuity, but if I am correct, the next step would be to show that the function is nowhere differentiable. This is where the condition is most likely used.
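The uniform bound used in the reply can be written out explicitly; assuming only $0 < a < 1$, the majorant series is geometric:

```latex
\left| f_n(x) \right| = \left| a^n \cos(b^n \pi x) \right| \le a^n =: M_n,
\qquad
\sum_{n=0}^{\infty} M_n = \sum_{n=0}^{\infty} a^n = \frac{1}{1-a} < \infty .
```

Since the majorant series converges, the Weierstrass M-test gives uniform convergence of $\sum f_n$, and a uniform limit of continuous functions is continuous.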
{"url":"http://mathhelpforum.com/differential-geometry/88831-real-analysis.html","timestamp":"2014-04-17T16:24:35Z","content_type":null,"content_length":"33528","record_id":"<urn:uuid:094aa5dd-1ea9-4437-8df7-82a76bc3690c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming challenge: wildcard exclusion in cartesian products

Dinko Tenev dinko.tenev at gmail.com
Thu Mar 23 11:47:52 CET 2006

Dirk Thierbach wrote:
> If more time during preprocessing is allowed, another idea is to
> treat the wildcard expressions as regular expressions, convert
> each into a finite state machine, construct the "intersection" of
> all these state machines, minimize it and then swap final and non-final
> states.

Given the requirements, did you mean taking the *union* and swapping states? Or maybe swapping states first, and then taking the intersection?

> Then you can use the resulting automaton to efficiently
> enumerate S^n - W. In the above case, the resulting FSM would have just
> three states.

I don't see immediately how exactly this is going to work. Unless I'm very much mistaken, an FSA in the classical sense will accept or reject only after the whole sequence has been consumed, and this spells exponential time. For improved asymptotic complexity in this case, you need to be able to at least reject in mid-sequence, and that calls for a slightly different concept of an FSA -- is this what you meant?
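The construction under discussion (union via a product automaton, then a final-state swap to complement) can be sketched in a few lines. The DFA encoding, function names, and example machines below are mine, and the DFAs must be total for the state swap to be a correct complement:

```python
from itertools import product

# A DFA is (states, alphabet, delta, start, finals); delta maps (state, sym) -> state.

def dfa_union(d1, d2):
    """Product construction: accepts strings accepted by either DFA."""
    (Q1, S, t1, s1, F1), (Q2, _, t2, s2, F2) = d1, d2
    Q = set(product(Q1, Q2))
    delta = {((q1, q2), a): (t1[(q1, a)], t2[(q2, a)]) for (q1, q2) in Q for a in S}
    finals = {q for q in Q if q[0] in F1 or q[1] in F2}
    return Q, S, delta, (s1, s2), finals

def dfa_complement(d):
    """Swap final and non-final states (the DFA must be total)."""
    Q, S, delta, s, F = d
    return Q, S, delta, s, Q - F

def accepts(d, word):
    Q, S, delta, s, F = d
    q = s
    for ch in word:
        q = delta[(q, ch)]
    return q in F

# D1: strings over {a, b} ending in 'a'; D2: strings of even length.
D1 = ({0, 1}, {"a", "b"},
      {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 0}, 0, {1})
D2 = ({0, 1}, {"a", "b"},
      {(0, "a"): 1, (0, "b"): 1, (1, "a"): 0, (1, "b"): 0}, 0, {0})

# Complement of the union: strings matching *neither* pattern.
neither = dfa_complement(dfa_union(D1, D2))
print(accepts(neither, "b"))    # True: odd length, ends in 'b'
print(accepts(neither, "ab"))   # False: even length
```

Because a DFA has a state after every prefix, one can also reject mid-sequence: if no final state is reachable from the current state, the enumeration of S^n - W can prune that branch immediately, which addresses the exponential-time worry above.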
{"url":"https://mail.python.org/pipermail/python-list/2006-March/376851.html","timestamp":"2014-04-16T16:13:13Z","content_type":null,"content_length":"3958","record_id":"<urn:uuid:d2bfb8ea-cb91-4ecb-ba17-f9c151f1bed9>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Lego Scherk Surface

Yet another minimal surface - this one was pretty easy, but it's a nice example of a saddle point built from Lego. Only standard (1xn and 2xn) bricks were used.

As with most of my mathematical surfaces, I made use of some computer assistance. Just in case anyone's interested, here's the raw LDRAW .DAT file generated by my program for this sculpture. Beware - the .DAT file builds it out of 1x1 bricks. Actually constructing this out of larger bricks so that it holds together is a (non-trivial) exercise for the reader!

This model shows (most of) one cell of a doubly-periodic Scherk surface. Actually Scherk discovered more than one minimal surface in 1835, but this one has the particularly simple equation exp(z) = cos(x)/cos(y). This model shows the surface in the region |x|, |y| < π/2 - 0.01.

Here are some links to pages related to the Scherk surface:
● Scherk's Surface - this page has a Java applet that lets you rotate an image of the surface in 3D with the mouse, and a reasonable description of what a minimal surface is.
● MathWorld - Scherk's Minimal Surfaces: actual equations, more Java applets and a list of further reading references here.

The contents of this page are Copyright © A. Lipson 2002
Lego ® is a trademark of The Lego Group, who have nothing to do with this or any of my other Lego-related web pages.

This page last modified 2nd April 2005
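The equation exp(z) = cos(x)/cos(y) can be sampled directly to recover heights for a brick model; a minimal sketch (the grid size and names are my own choices, not taken from the author's program):

```python
import math

def scherk_height(x, y):
    """Height z of Scherk's first surface, from exp(z) = cos(x)/cos(y).
    Defined only where cos(x)/cos(y) > 0."""
    r = math.cos(x) / math.cos(y)
    if r <= 0:
        return None
    return math.log(r)

# Sample the cell |x|, |y| < pi/2 - 0.01 on a coarse grid
n, lim = 5, math.pi / 2 - 0.01
pts = []
for i in range(n):
    for j in range(n):
        x = -lim + 2 * lim * i / (n - 1)
        y = -lim + 2 * lim * j / (n - 1)
        z = scherk_height(x, y)
        if z is not None:
            pts.append((x, y, z))
print(len(pts))  # 25: both cosines are positive everywhere inside this cell
```

Note the anti-symmetry z(x, y) = -z(y, x), which is why the model is a saddle.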
{"url":"http://www.andrewlipson.com/scherk.htm","timestamp":"2014-04-21T12:20:31Z","content_type":null,"content_length":"3678","record_id":"<urn:uuid:e4ebaf41-1fd5-4997-9f53-6f36ef2f41ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof verification for neighbor fractions

Hi Al-allo

I couldn't follow this because of the way you had set out your proof. So, I've converted the fractions, taken out the * and written L instead of l. I have added some words in red that make it clearer to me what you are doing.

In a proof by contradiction you may put equals even though you are hoping to show that the two sides are not equal. So you may replace "equality to be determined" by "=". Instead of the word "let" you could put "suppose".

At the line of ************* I think you need to add one more step. After my word "But" you need to justify why these expressions cannot be equal.

Now, reduce the common numbers:

We must now prove that the left-hand side has irreducible fractions. Let's see what would happen if these fractions were reducible.

... and multiply by Ln.

We have supposed the initial fractions to be reducible and have arrived at a false result: an integer cannot be equal to (+-1/z*p). So, the initial fractions must be irreducible.

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=296708","timestamp":"2014-04-20T00:58:47Z","content_type":null,"content_length":"22423","record_id":"<urn:uuid:d1b9dc5b-fc53-49cd-a2cd-cf2e502579f6>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Cauchy sequence
From Conservapedia

This article/section deals with mathematical concepts appropriate for a student in mid to late high school. The reader should be familiar with the material in the Limit (mathematics) page.

A Cauchy sequence (pronounced CO-she) is an infinite sequence that converges in a particular way. This type of convergence has a far-reaching significance in mathematics. Cauchy sequences are named after the French mathematician Augustin Cauchy (1789-1857).

There is an extremely profound aspect of convergent sequences. A sequence of numbers in some set might converge to a number not in that set. The famous example of this is that a sequence of rationals might converge, but not to a rational number. For example, the sequence of decimal approximations consists only of rational numbers, but it converges to $\sqrt{2}\,$, which is not a rational number. (See real number for an outline of the proof of this.)

The sequence given above was created by a computer, and it could be argued that we haven't really exhibited the sequence. But we can put such a sequence on a firm theoretical footing by using the Newton-Raphson iteration. This would give us

$A_0 = 1\,$

$A_{n+1} = \frac{1}{2}(A_n + 2/A_n)\,$

so that

$A_1= 3/2 = 1.5\,$

$A_2= 17/12 = 1.4166666...\,$

These aren't the same as the sequence given previously, but they are all rational numbers, and they converge to $\sqrt{2}\,$. So if we lived in a world in which we knew about rational numbers but had never heard of the real numbers (the ancient Greeks sort of had this problem), we wouldn't know what to do about this.

Recall that, for a sequence $(a_n)\,$ to converge to a number A, that is,

$\lim_{n\to \infty}a_n = A\,$

we would need to use the definition of a limit: we would need a number A such that, for every ε > 0, there is an integer M such that, whenever $n > M, |a_n-A| < \varepsilon\,$. There is no such rational number A. But there is clearly a sense in which $(a_n)\,$ converges.
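The Newton-Raphson sequence above stays inside the rationals at every step, which Python's fractions module makes easy to verify exactly (a small illustration, not part of the article):

```python
from fractions import Fraction

def newton_sqrt2(n):
    """First n+1 terms of A_0 = 1, A_{k+1} = (A_k + 2/A_k)/2, as exact rationals."""
    a = Fraction(1)
    terms = [a]
    for _ in range(n):
        a = (a + 2 / a) / 2   # stays a Fraction: rationals are closed under +,/
        terms.append(a)
    return terms

terms = newton_sqrt2(4)
print(terms[1], terms[2])   # 3/2 17/12
print(float(terms[4]))      # already extremely close to sqrt(2)
```

Every term is an exact rational, yet the sequence converges to an irrational number, which is exactly the gap the Cauchy construction is designed to fill.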
The definition of Cauchy convergence is this: A sequence $(a_n)\,$ converges in the sense of Cauchy (or is a Cauchy sequence) if, for every ε > 0, there is an integer M such that any two sequence elements that are both beyond M are within ε of each other. Whenever n > M and m > M, $|a_n-a_m| < \varepsilon\,$. Note that there is no reference to the mysterious number A—the convergence is defined purely in terms of the sequence elements being close to each other. The example sequence given above can be shown to be a Cauchy sequence. Construction of the Real Numbers What we did above effectively defined $\sqrt{2}\,$ in terms of the rationals, by saying "The square root of 2 is whatever the Cauchy sequence given above converges to." even though that isn't a "number" according to our limited (rationals-only) understanding of what a number is. The real numbers can be defined this way, by saying that a real number is defined to be a Cauchy sequence of rational numbers. There are many details that we won't work out here; among them are: • There are different Cauchy sequences that converge to the same thing; we gave two sequences above that converged to $\sqrt{2}\,$. So a real number is actually an "equivalence class" of Cauchy sequences, under a carefully defined equivalence. This is a bit tricky. • We have to show how to add, subtract, multiply, and divide Cauchy sequences. This is a bit tricky. • We have to give the Cauchy sequences corresponding to rational numbers. This is easy—5/12 becomes (5/12, 5/12, 5/12, ...). Once we have done that, the payoff is enormous. We have defined an extension to the rationals that is metrically complete—that extension of the rationals is the real numbers. Metrically complete means that every Cauchy sequence made from the set converges to an element which is itself in the set. The reals are the metric completion of the rationals. 
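The definition can be checked empirically on a finite prefix of a sequence; this is only a sanity check of the inequality, never a proof of convergence, and the function name and sample sequence are mine:

```python
def cauchy_index(seq, eps):
    """Smallest M such that every pair of terms beyond M in this finite prefix
    is within eps of each other. (On any finite prefix such an M trivially
    exists near the end, so this is only an illustration of the definition.)"""
    n = len(seq)
    for M in range(n):
        if all(abs(seq[i] - seq[j]) < eps
               for i in range(M, n) for j in range(M, n)):
            return M
    return None

seq = [1 / (k + 1) for k in range(50)]
print(cauchy_index(seq, 0.1))   # 8: beyond the 8th term all pairs differ by < 0.1
```

Note that the check never mentions a limit A: it only compares sequence elements with each other, which is the whole point of Cauchy's formulation.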
The use of Cauchy sequences is one of the two famous ways of defining the real numbers, that is, of completing the rationals. The other method is Dedekind cuts.
{"url":"http://www.conservapedia.com/Cauchy_sequence","timestamp":"2014-04-20T23:38:46Z","content_type":null,"content_length":"19023","record_id":"<urn:uuid:f99c3351-43ae-44c4-85cf-6e9277211839>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: ci question

From    David Airey <david.airey@vanderbilt.edu>
To      statalist@hsphsun2.harvard.edu
Subject Re: st: ci question
Date    Thu, 10 Apr 2008 20:46:28 -0500

Sometimes it helps if you post a picture (not to the list) for readers to see. Note that some of the tricks along the x-axis that Nick mentions are used at this page to make bar and SEM graphs.

On Apr 10, 2008, at 4:17 PM, Chris Witte wrote:

Regarding the second question of my original message: I'd like to be able to have multiple levels by which the plots are separated/grouped along the x-axis (like the "over" option in "graph box" allows). For example, using eclplot, I'd like to be able to include two "parmid_varname" variables; or using ciplot, I'd like to include two variables in the "by" option.

Now, I've also plotted this data by overlaying rcap plots with scatter plots of the means. Using this plotting method, is there a way to horizontally offset the rcap plots within a single x-axis value so that the rcaps do not overlap? This might be an acceptable presentation for me as well, although it would be better if I could do this through ciplot, eclplot, or serrbar.
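For the last question, one common workaround (sketched here with hypothetical variable names; this is plain twoway overlay syntax, not tested eclplot or ciplot options) is to generate slightly offset plotting positions for each series:

```stata
* Hypothetical variables: mean1 lo1 hi1, mean2 lo2 hi2, and numeric group "grp".
* Offset each series a little so the rcap spikes don't overlap at one x value.
generate xpos1 = grp - 0.1
generate xpos2 = grp + 0.1
twoway (rcap hi1 lo1 xpos1) (scatter mean1 xpos1) ///
       (rcap hi2 lo2 xpos2) (scatter mean2 xpos2)
```

Because the offsets are just data, the x-axis labels can still be placed at the integer group positions with xlabel().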
{"url":"http://www.stata.com/statalist/archive/2008-04/msg00465.html","timestamp":"2014-04-17T10:01:46Z","content_type":null,"content_length":"6391","record_id":"<urn:uuid:8c1a17d7-b98b-4ad7-bf88-04c9b9835b24>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Announcing Scan&Solve™ for Rhino: Stress Analysis Made Easy Scan&Solve™ for Rhino is a new plugin from Intact Solutions that completely automates basic structural testing of Rhino solids. No simplification, healing, translating, or meshing is needed. Depending on complexity of your shape and chosen resolution, you may need to wait for a few minutes, but the results are worth the wait! Simply pick the material, choose restraints and specify loads on the faces of the solid model: Hit the go button to see the predicted performance (strength, weakness) of your shape: Download an absolutely free beta version of Scan&Solve™ at No prior experience in structural analysis or finite elements is required!
{"url":"http://www.scan-and-solve.com/profiles/blogs/announcing-scanampsolve-for","timestamp":"2014-04-16T17:07:52Z","content_type":null,"content_length":"47633","record_id":"<urn:uuid:980354b3-6f80-4743-b56d-9505297d8d89>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
One can find the TeX, dvi or pdf file of a work by clicking on the word "article" after the title of the given work. By clicking here, some texts can be found in which certain problems of probability theory are worked out.

List of publications together with their TeX, dvi and pdf files (of the papers written after 1988)

The homepage of Péter Major
{"url":"http://www.renyi.hu/~major/public1.html","timestamp":"2014-04-19T23:02:43Z","content_type":null,"content_length":"2474","record_id":"<urn:uuid:fa59831b-4939-4eec-80a8-be7ad9c0fcc3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Greatest common factor

March 25th 2013, 02:40 AM

I think the greatest common factor is the one I mark in the image, but Wolfram gives me so many different answers I'm not sure if I am correct. My confidence is always shaken when Wolfram gives me multiple results; they may all be totally equivalent, but which one is factored the most?

March 25th 2013, 02:42 AM
Prove It
Re: Greatest common factor

The greatest common factor is the stuff outside the brackets in the answer you chose, and the answer you chose is the fully factorised form of your expression.

March 25th 2013, 02:52 AM
Re: Greatest common factor

Thanks for clearing that up. So the others are not completely factored? Or are they equally completely factored, like the one I chose?

March 25th 2013, 03:02 AM
Prove It
Re: Greatest common factor

The other ones are not completely factored. You can check this yourself: can you see any other common factors inside the brackets that can be taken out?

March 25th 2013, 03:03 AM
Re: Greatest common factor

Oh yeah, that was quite a silly question, as I noticed as soon as I looked at it. Thanks very much.
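Since the actual expression is only in the image, here is a generic illustration of what "completely factored" means for the common factor of monomial terms. The (coefficient, exponents) encoding and names below are mine:

```python
from math import gcd
from functools import reduce

def common_factor(terms):
    """Greatest common factor of monomials given as (coeff, {var: exp}) pairs:
    gcd of the coefficients times each shared variable at its minimum power."""
    coeff = reduce(gcd, (abs(c) for c, _ in terms))
    shared = set.intersection(*(set(e) for _, e in terms))
    exps = {v: min(e[v] for _, e in terms) for v in sorted(shared)}
    return coeff, exps

# GCF of 12x^3y^2 and 18x^2y^4 is 6x^2y^2
print(common_factor([(12, {"x": 3, "y": 2}), (18, {"x": 2, "y": 4})]))
# (6, {'x': 2, 'y': 2})
```

An expression is completely factored when, after dividing this out, the terms in brackets share no further common factor.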
{"url":"http://mathhelpforum.com/algebra/215512-greatest-common-factor-print.html","timestamp":"2014-04-16T19:09:14Z","content_type":null,"content_length":"5684","record_id":"<urn:uuid:5d16b7c7-0d94-46e6-8263-eb8305b205e0>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler's Totient Function Proving

I need some help on how to prove this statement. I don't know where to start!

Prove that if d and n are positive integers such that d|n, then φ(dn) = dφ(n).

A lot of help is appreciated. I tried writing out the prime factorisations of d and n, where the primes are the same for both, but I don't know where to go from there.... Thanks
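Before hunting for a proof, it can help to confirm the identity numerically; a brute-force sketch (the totient implementation here is naive and only for checking small cases):

```python
from math import gcd

def phi(n):
    """Euler's totient by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Numeric check of phi(d*n) == d*phi(n) whenever d divides n
for n in range(1, 60):
    for d in range(1, n + 1):
        if n % d == 0:
            assert phi(d * n) == d * phi(n)
print("verified for n < 60")
```

The check passing for every divisor suggests the right line of attack: since d | n, every prime of d already divides n, so multiplying n by d scales φ by exactly d via the product formula φ(m) = m · Π(1 - 1/p).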
{"url":"http://mathhelpforum.com/number-theory/126676-reload-page-eulers-totient-function-proving-print.html","timestamp":"2014-04-19T11:25:55Z","content_type":null,"content_length":"8726","record_id":"<urn:uuid:3204e5ee-1a6f-4d07-9053-4ee6216f0cd7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Instructor's notes.

The model used in the activity "Using mass balance model to understand CFCs" is based on the simple mass balance relation

dC/dt = S(t) - C/τ     (eqn. 1)

where C is the global atmospheric concentration of CFC-12 (in pptv), S is the emission source strength (ppt/yr), and τ is the atmospheric lifetime (yrs). The general solution to equation 1 for any arbitrary time-dependent emission source S(t) is:

C(t) = C_o e^(-t/τ) + e^(-t/τ) ∫_0^t S(t') e^(t'/τ) dt'     (eqn. 2)

We have solved equation 2 assuming an emission source strength of the form

S(t) = S_o e^(Rt)     (eqn. 3)

[see solution]

An interactive online program has been written in which students can modify the input values (C_o, τ, S_o, and R) and then generate a graph of C vs t. A table of values for C and S as functions of time is also generated. Students first "calibrate" the model to fit recent observations and then use the model to explore future emission scenarios. Although I have used this assignment after a brief in-class discussion of the model basics and online modeling environment, the two activities below provide a solid background of the mass balance concept and its application to global trace gases.

For classes with limited mathematics ability, I describe the mathematics of the model using only the finite difference form of equation 1. Although it is tempting to also discuss Euler's number e, I purposely avoid this for classes with weak math skills, as I believe that it adds little (if anything) to their understanding of the mass balance physical processes.

To give students a better feel for the model you may want to use some or all of the introductory water bucket model activity at: http://www.atmosedu.com/physlets/GlobalPollution/WaterBucket.htm.

I often use a 2-liter pop bottle, with flow from a sink into the top and a hole in the bottom, as a physical model during an in-class discussion of mass balance. The lifetime for this water bucket model is then related to the hole size in the bottle and viscosity of the water.
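The finite difference form of equation 1 mentioned above is easy to sketch numerically; an exponential source like equation 3 is assumed, and the function name and step size are my own choices:

```python
import math

def cfc_concentration(C0, tau, S0, R, t_end, dt=0.1):
    """Forward-difference integration of dC/dt = S(t) - C/tau,
    with S(t) = S0*exp(R*t): the update is C_{k+1} = C_k + dt*(S_k - C_k/tau)."""
    C, t = C0, 0.0
    while t < t_end:
        C += dt * (S0 * math.exp(R * t) - C / tau)
        t += dt
    return C

# Steady emissions (R = 0) approach the equilibrium value S0 * tau
print(cfc_concentration(C0=0.0, tau=100.0, S0=20.0, R=0.0, t_end=500.0))
```

The R = 0 case makes the lifetime concept concrete for students: after a few lifetimes the concentration levels off near S·τ, with no exponential notation needed.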
{"url":"http://www.atmosedu.com/physlets/GlobalPollution/instructorNotesCFCs.htm","timestamp":"2014-04-20T06:05:09Z","content_type":null,"content_length":"4376","record_id":"<urn:uuid:75ceefad-118e-4b20-a3e1-3402c62b60a1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Local dynamics at a fixed point

We can write a complex multiplier λ (in polar form) as λ = |λ| exp(iφ). Then the iterates (or images) of a point z_* + ε in the vicinity of a fixed point z_* = f(z_*) are

z_k = f^∘k(z_* + ε) = z_* + λ^k ε + O(ε^2) ≈ z_* + |λ|^k e^{ikφ} ε.

That is, if we put the coordinate origin at z_*, after every iteration the point z_{k+1} is rotated by the angle φ with respect to the previous position z_k, and its radius is scaled by |λ|. For φ = 2π m/n the points z_k jump exactly m rays in the counter-clockwise direction at each iteration and make n-ray "star" or "petal" structures. These structures are more "visible" for λ = 1 + δ, |δ| << 1 (e.g. near the main cardioid border).

Attracting fixed point

For |λ| < 1 all points in the vicinity of the attractor z_* move smoothly to z_*. You can see "star" structures made by the orbit of the critical point.

Repelling fixed point

For c outside the main cardioid, |λ| > 1 and the fixed point z_* becomes repelling (it lies in J). The connected J set separates the basin of the attracting cycle and the basin of the infinite point. Therefore in the vicinity of z_*, rotations by 2π m/n generate n-petal structures made of these two basins. Points in petals are attracted by the periodic cycle and points in narrow whiskers go to infinity. You will see below that this rotational symmetry near the repeller z_* holds for "dendrite" and Cantor dust J-sets too.

updated 12 Sep 2013
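The rotate-and-scale behaviour of the linearized map can be seen directly in a small numerical sketch; the multiplier value below is an arbitrary illustration, not tied to any particular parameter c:

```python
import cmath

def orbit(lam, eps, steps):
    """Iterates of z -> lam*z, starting at eps, with the origin at the fixed point."""
    z, pts = eps, []
    for _ in range(steps):
        pts.append(z)
        z *= lam
    return pts

# |lam| just under 1 with angle 2*pi/5: the orbit spirals in along a 5-armed star
lam = 0.95 * cmath.exp(2j * cmath.pi / 5)
pts = orbit(lam, 0.1 + 0j, 10)
print(abs(pts[5]) / abs(pts[0]))       # radii shrink by |lam| each step
print(cmath.phase(pts[1] / pts[0]))    # each step rotates by the angle of lam
```

With m/n = 1/5 the orbit lands on the same five rays over and over, which is exactly the "star" pattern described above; for |λ| > 1 the same code spirals outward instead.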
{"url":"http://www.ibiblio.org/e-notes/MSet/local.html","timestamp":"2014-04-16T10:10:38Z","content_type":null,"content_length":"4938","record_id":"<urn:uuid:e2a7d3ed-33b1-4e59-bd19-9c5bb323a62e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Novato Algebra Tutor ...I have over 10 years' experience teaching 6th, 7th, 8th and 9th grade math in middle/high schools. My style is effective because I figure out the real problem quickly. Also, I keep instruction short, then do many assessments to see if a student is ready to move on. 2 Subjects: including algebra 1, prealgebra ...Even in an age of Windows and graphic user interfaces, I often prefer to use a DOS command prompt, since I can often execute commands more quickly by using only the keyboarding (instead of the mouse), which is what DOS is all about. For anyone who wants to learn DOS command and understand what D... 10 Subjects: including algebra 1, geometry, Java, computer science ...Environmental Engineering Science, California Institute of Technology (Caltech) Dr. G.'s qualifications include a Ph.D. in engineering from CalTech (including a minor in numerical methods/ applied math) and over 25 years experience as a practicing environmental engineer/scientist. In addition, ... 13 Subjects: including algebra 1, algebra 2, calculus, physics ...If you wonder if any of these questions relate to your child, have you considered a math tutor? I can help. I will help your child fulfill graduation requirements. 12 Subjects: including algebra 1, algebra 2, calculus, geometry ...I am flexible in my approach and tailor my methods to the needs and abilities of my students. I enjoy working with youth as well as adult learners. I have been tutoring students since I was in high school myself. 12 Subjects: including algebra 2, algebra 1, chemistry, physics
{"url":"http://www.purplemath.com/novato_algebra_tutors.php","timestamp":"2014-04-20T13:56:29Z","content_type":null,"content_length":"23371","record_id":"<urn:uuid:060291f4-0479-4794-a24e-fccf749945a6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Program to emulate calculator

09-02-2008 #1

Hi all, I was told to do up a program to emulate a calculator (+ - * / functions) but I am unable to get the program to show my answer. Attached is the program I typed. Can someone help me out?

#include <stdio.h>
#include <stdlib.h>

double Addition(double num1, double num2){
    return num1+num2;
}
double Subtraction(double num1, double num2){
    return num1-num2;
}
double Multiplication(double num1, double num2){
    return num1*num2;
}
double Division(double num1, double num2){
    return num1/num2;
}
main (int argc, char *argv[]){
    char sign;
    double first_num, second_num;
    printf("Type 2 numbers and an operation\n");
    scanf("%lf %c %lf", &first_num, &sign, &second_num);
    if (&sign == "+")
        printf("%f\n", Addition(first_num, second_num));
    else if (&sign == "-")
        printf("%f\n", Subtraction(first_num, second_num));
    else if (&sign == "*")
        printf("%f\n", Multiplication(first_num, second_num));
    else if (&sign == "/")
        printf("%f\n", Division(first_num, second_num));
}

When you do (&sign == "+") you're actually performing an address comparison. You're comparing the address of the variable sign (which you get by prepending the &) with the address of the global string literal "+". What you really want is a character comparison. So you need to take the variable itself and compare against the *character* +, which you do by using single quotes. Also, two other things to note are that you are missing the curly braces on your if and else if statements, and that the format specifier in your printf statements is missing the l (ell) that makes it print out a double instead of a float.

"No-one else has reported this problem, you're either crazy or a liar" - Dogbert Technical Support
"Have you tried turning it off and on again?" - The IT Crowd

Whilst the rest of your post is indeed correct, this part is not.
1.
All floating point values are promoted to double when passed to var-args functions (and functions with unspecified arguments - old-style code). Thus %f is used for float and double - since they are all doubles anyway.
2. %lf is used in some compilers (C99) to indicate "long double", which would lead to erroneous output. In other compilers, %lf _may_ mean double, but in others it will be "unknown" [and thus in the clearly undefined behaviour arena].

Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.

It finally works! Haha, thanks to the both of you, Pete and mats!

Whilst the rest of your post is indeed correct, this part is not.
1. All floating point values are promoted to double when passed to var-args functions (and functions with unspecified arguments - old-style code). Thus %f is used for float and double - since they are all doubles anyway.
2. %lf is used in some compilers (C99) to indicate "long double", which would lead to erroneous output. In other compilers, %lf _may_ mean double, but in others it will be "unknown" [and thus in the clearly undefined behaviour arena].

You live and learn. I was sure that %f is float and %lf is double...

"No-one else has reported this problem, you're either crazy or a liar" - Dogbert Technical Support
"Have you tried turning it off and on again?" - The IT Crowd

Oh, but it's not required. It's optional. More like it's recommended. But then again, perhaps not, because it takes space. There's no easy answer to that question, but it certainly isn't required. Oh, and you're using implicit main. That's bad.

For information on how to enable C++11 on your compiler, look here. Listen well! I'm a genius, you know! ^_^

"No-one else has reported this problem, you're either crazy or a liar" - Dogbert Technical Support
"Have you tried turning it off and on again?"
- The IT Crowd

"No-one else has reported this problem, you're either crazy or a liar" - Dogbert Technical Support
"Have you tried turning it off and on again?" - The IT Crowd

Same format specifier, meaning different things. Not much sense to do different format specifiers for float and double in printf while both are passed as double. scanf is another story...

The first 90% of a project takes 90% of the time, the last 10% takes the other 90% of the time.

Hi, I also need help with the code... After trying out the modified code, I still receive an error upon running, and I am also unable to get the program to show the answer. Can anyone help? Need it to be done urgently... ):

#include <stdio.h>
#include <stdlib.h>

double Addition(double num1, double num2){
    return num1+num2;
}
double Subtraction(double num1, double num2){
    return num1-num2;
}
double Multiplication(double num1, double num2){
    return num1*num2;
}
double Division(double num1, double num2){
    return num1/num2;
}
main (int argc, char *argv[]){
    char operator;
    double first_num, second_num;
    printf("Type 2 numbers and an operator\n");
    scanf("%f %c %f", &first_num, &operator, &second_num);
    if (&operator == '+') {
        printf("%f\n", Addition(first_num, second_num));
    }else if (&operator == '-') {
        printf("%f\n", Subtraction(first_num, second_num));
    }else if (&operator == '*'){
        printf("%f\n", Multiplication(first_num, second_num));
    }else if (&operator == '/'){
        printf("%f\n", Division(first_num, second_num));
    }
}

And you didn't read Pete's first response because why?

Eh, it's because I don't really understand.. I'm totally new to this.. and I did add in the brackets to the if and else if statements, changing the " " to ' ', %lf to %f, and it still doesn't work..

Last edited by Zihui02; 09-04-2008 at 09:48 AM.
The & operator takes the ADDRESS of something, and the ADDRESS of something generates an extra indirection, i.e. char becomes char*, and char* is not the same as char. The end.

And you still haven't fixed main either. But do say, do you have a book, or is it a course from which you're teaching yourself?

For information on how to enable C++11 on your compiler, look here. Listen well! I'm a genius, you know! ^_^
{"url":"http://cboard.cprogramming.com/c-programming/106703-program-emulate-calculator.html","timestamp":"2014-04-20T09:30:56Z","content_type":null,"content_length":"107216","record_id":"<urn:uuid:055089d2-7d68-4020-ad01-8c825946132d>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Johannes Kepler: His Life, His Laws and Times "... the ways by which men arrive at knowledge of the celestial things are hardly less wonderful than the nature of these things themselves" - Johannes Kepler IYA Kepler A Short Biography A List of Kepler's Firsts Kepler's Laws of Planetary Motion People and Events Contemporary to Kepler (1571-1630) Articles about Kepler Biographies and books Web Sites (music, drama, animations, lectures, museums, sites with biographies) A Short Biography Johannes Kepler was born about 1 PM on December 27, 1571, in Weil der Stadt, Württemberg, in the Holy Roman Empire of German Nationality. He was a sickly child and his parents were poor. But his evident intelligence earned him a scholarship to the University of Tübingen to study for the Lutheran ministry. There he was introduced to the ideas of Copernicus and delighted in them. In 1596, while a mathematics teacher in Graz, he wrote the first outspoken defense of the Copernican system, the Mysterium Cosmographicum. Kepler's family was Lutheran and he adhered to the Augsburg Confession, a defining document for Lutheranism. However, he did not adhere to the Lutheran position on the real presence and refused to sign the Formula of Concord. Because of his refusal he was excluded from the sacrament in the Lutheran church. This and his refusal to convert to Catholicism left him alienated from both the Lutherans and the Catholics. Thus he had no refuge during the Thirty Years' War. Kepler was forced to leave his teaching post at Graz during the Counter-Reformation because he was Lutheran, and moved to Prague to work with the renowned Danish astronomer Tycho Brahe. He inherited Tycho's post as Imperial Mathematician when Tycho died in 1601. Using the precise data that Tycho had collected, Kepler discovered that the orbit of Mars was an ellipse. In 1609 he published Astronomia Nova, delineating his discoveries, which are now called Kepler's first two laws of planetary motion.
And what is just as important about this work, "it is the first published account wherein a scientist documents how he has coped with the multitude of imperfect data to forge a theory of surpassing accuracy" (O. Gingerich in foreword to Johannes Kepler New Astronomy translated by W. Donahue, Cambridge Univ Press, 1992), a fundamental law of nature. Today we call this the scientific method. In 1612 Lutherans were forced out of Prague, so Kepler moved on to Linz. His wife and two sons had recently died. He remarried happily, but had many personal and financial troubles. Two infant daughters died, and Kepler had to return to Württemberg, where he successfully defended his mother against charges of witchcraft. In 1619 he published Harmonices Mundi, in which he describes his "third law."

In spite of more forced relocations, Kepler published the Epitome Astronomiae in 1621. This was his most influential work and discussed all of heliocentric astronomy in a systematic way. He then went on to produce the Rudolphine Tables that Tycho had envisioned long ago. These included calculations using logarithms, which he developed, and provided perpetual tables for calculating planetary positions for any past or future date. Kepler used the tables to predict a pair of transits of the Sun by Mercury and Venus, although he did not live to witness the events. Johannes Kepler died in Regensburg in 1630, while on a journey from his home in Sagan to collect a debt. His grave was demolished within two years because of the Thirty Years' War. Frail of body, but robust in mind and spirit, Kepler was scrupulously honest to the data. Short Biography -|- Kepler's Firsts -|- Kepler's Laws -|- People and Events in Kepler's Time -|- Articles Biographies and books -|- Web Sites -|- IYA Kepler A List of Kepler's Firsts • First to correctly explain planetary motion, thereby becoming the founder of celestial mechanics and formulating the first "natural laws" in the modern sense: universal, verifiable, precise.
In his book Astronomia Pars Optica, for which he earned the title of founder of modern optics he was the: • First to investigate the formation of pictures with a pin hole camera; • First to explain the process of vision by refraction within the eye; • First to formulate eyeglass designing for nearsightedness and farsightedness; • First to explain the use of both eyes for depth perception. In his book Dioptrice (a term coined by Kepler and still used today) he was the: • First to describe: real, virtual, upright and inverted images and magnification; • First to explain the principles of how a telescope works; • First to discover and describe the properties of total internal reflection. In addition: • His book Stereometrica Doliorum formed the basis of integral calculus. • First to explain that the tides are caused by the Moon (Galileo reproved him for this). • Tried to use stellar parallax caused by the Earth's orbit to measure the distance to the stars; the same principle as depth perception. Today this branch of research is called astrometry. • First to suggest that the Sun rotates about its axis in Astronomia Nova • First to derive the birth year of Christ, that is now universally accepted. • First to derive logarithms purely based on mathematics, independent of Napier's tables published in 1614. • He coined the word "satellite" in his pamphlet Narratio de Observatis a se quatuor Iovis sattelitibus erronibus Short Biography -|- Kepler's Firsts -|- Kepler's Laws -|- People and Events in Kepler's Time -|- Articles Biographies and books -|- Web Sites -|- IYA Kepler Kepler's Laws of Planetary Motion Kepler was assigned the task by Tycho Brahe to analyze the observations that Tycho had made of Mars. Of all the planets, the predicted position of Mars had the largest errors and therefore posed the greatest problem. 
Tycho's data were the best available before the invention of the telescope and the accuracy was good enough for Kepler to show that Mars' orbit would precisely fit an ellipse. In 1605 he announced The First Law: Planets move in ellipses with the Sun at one focus. The figure below illustrates two orbits with the same semi-major axis, focus and orbital period: one a circle with an eccentricity of 0.0; the other an ellipse with an eccentricity of 0.8. Prior to this in 1602, Kepler found from trying to calculate the position of the Earth in its orbit that as it sweeps out an area defined by the Sun and the orbital path of the Earth that: The radius vector describes equal areas in equal times. (The Second Law) Kepler published these two laws in 1609 in his book Astronomia Nova. For a circle the motion is uniform as shown above, but in order for an object along an elliptical orbit to sweep out the area at a uniform rate, the object moves quickly when the radius vector is short and the object moves slowly when the radius vector is long. On May 15, 1618 he discovered The Third Law: The squares of the periodic times are to each other as the cubes of the mean distances. This law he published in 1619 in his Harmonices Mundi . It was this law, not an apple, that led Newton to his law of gravitation. Kepler can truly be called the founder of celestial mechanics. Also, see the article on "Kepler and Mars - Understanding How Planets Move" by Edna DeVore Short Biography -|- Kepler's Firsts -|- Kepler's Laws -|- People and Events in Kepler's Time -|- Articles Biographies and books -|- Web Sites -|- IYA Kepler People and Events Contemporary to Kepler (1571-1630) Short Biography -|- Kepler's Firsts -|- Kepler's Laws -|- People and Events in Kepler's Time -|- Articles Biographies and books -|- Web Sites -|- IYA Kepler Articles about Kepler 2008 Oct 5. Searching Heaven and Earth for the Real Johannes Kepler. by Dava Sobel, Discover Magazine Nov 2008 issue. 
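Kepler's third law is easy to check against modern values. The sketch below is an added illustration (a is the semi-major axis in astronomical units, T the orbital period in years; values are rounded from standard references): T²/a³ comes out nearly the same for every planet.

```python
# Kepler's third law: T^2 / a^3 is (nearly) the same constant for every
# planet. In these units the constant is 1 (Earth: a = 1 AU, T = 1 yr).
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

ratios = [T**2 / a**3 for a, T in planets.values()]
print(all(abs(r - 1.0) < 0.01 for r in ratios))  # True
```

It was this proportionality between T² and a³ that Newton later derived from an inverse-square attraction.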
Galileo may be science's most famous martyr, but it was Kepler who solved the mystery of the planets. Excerpt: Zielona Gora, Poland—The great German astronomer Johannes Kepler (1571–1630) arrived in this forested region to serve his last employer exactly 380 years ago—reason enough for some two dozen science historians to gather here and celebrate with a conference. For five days in late June, they regaled each other with the fruits of their own recent research into their hero’s achievements .... His Rudolphine Tables of 1627 (his “crowning publication,” according to Gingerich) enabled him to predict the first observable transits of Mercury and Venus—the passages of those planets across the face of the sun—both in 1631. Kepler, however, never witnessed either event. He died in 1630, on a frustrated journey to collect payments owed him by several patrons.... Short Biography -|- Kepler's Firsts -|- Kepler's Laws -|- People and Events in Kepler's Time -|- Articles Biographies and books -|- Web Sites -|- IYA Kepler Biographies and Books on Johannes Kepler Kepler by Max Caspar, Dover Publications, 1993, 441pp. ISBN 0-486-67605-6 (paperback). This is the most complete and authoritative biography on Johannes Kepler. It is a recent translation by C. Doris Hellman with an introduction, bibliography and list of textual citations by Owen Kepler's Witch: An Astronomer's Discovery of Cosmic Order Amid Religious War, Political Intrigue, and the Heresy Trial of His Mother by James Connor, Harper SanFrancisco, To be published Apr. 2004. 416pp.$24.95 ISBN: 0-06-052255-0 Tycho & Kepler: The Unlikely Partnership that Forever Changed Our Understanding of the Heavens by Kitty Ferguson, Walker New York, 2002, 402pp., $28.00 ISBN: 0-8027-1390-4 (hard cover) Johannes Kepler: Short Biography, Encyclopedia Britannica by Robert S. Westman, Professor of History and Science Studies, UC San Diego. Westman presents a concise biography of Johannes Kepler: his life, discoveries, and publications. 
A bibliography offers further references. Free viewing online; by subscription for download. The Sleepwalkers: A History of Man's Changing Vision of the Universe by Arthur Koestler, Penguin Books, 1959, 623pp. ISBN 0-14-019246-8 (paperback). It also includes material on Copernicus, Tycho and Galileo. Johannes Kepler, John Tiner, Mott Media, 1977, 202pp. ISBN 0-915134-11-X (paperback) ISBN 0-915134-96-9 (hard cover) For high school level reading, a biography which reads more like a story. Johannes Kepler: And the New Astronomy, by James R. Voelkel, Oxford University Press, 1999 144pp., ISBN: 0195116801 (hard cover) ; ISBN: 019515021X (paperback) The Composition of Kepler's Astronomia Nova, by James R. Voelkel, : Princeton University Press, 2001, 308pp. ISBN: 0691007381 (hard cover) In German: Johannes Kepler, Max Caspar, Verlag für Geschichte der Naturwissenschaften und der Technik, Stuttgart, 1995, Vierte Auflage (4th ed.) 591pp. ISBN 3-928186-28-0 (For English translation, see above.) Johannes Kepler Er veränderte das Weltbild, Günter Doebel, Verlag Styria, Graz, 1983, 256pp. ISBN 3-222-11457-9 Johannes Kepler Dokumente zu Lebenszeit und Lebenswerk, by Walther Gerlach and Martha List, Ehrenwirth Verlag, München, 1971, 243pp. ISBN 3 431 01421 6 Johann Kepler Sein Leben in Bildern und eigenen Berichten, by Justus Schmidt, Rudolf Trauner Verlag, Linz, 1970, 308pp. ISBN 3 85320 258 6 Other Information: There is a play on Kepler and Tycho "Reading The Mind of God" by Patrick Gabridge There is a web site for the Kepler Museum in Weil der Stadt by the Kepler-Gesellschaft e. V. 
See also Andrew Fraknoi's resource guides: Short Biography -|- Kepler's Firsts -|- Kepler's Laws -|- People and Events in Kepler's Time -|- Articles Biographies and books -|- Web Sites -|- IYA Kepler Links to Other Sites Relating to Johannes Kepler Music (IYA - 2009) - Listen to AstroCappella songs by The Chromatics with themes about Kepler and Galileo: • Dance of the Planets - about extrasolar planets • Shoulders of Giants, commissioned by the Johannes Kepler Project, written and arranged by Padi Boyd and performed by The Chromatics, specifically for the International Year of Astronomy. We hope the song will strike a harmonious cord with people and be used in a variety of other IYA projects around the globe. To this end, we would like to make this song freely available for such use, asking only that the appropriate credits be given to the composer (Padi Boyd), the performers (The Chromatics/AstroCappella) and the Johannes Kepler Project. (use of the song as part of a commercial project will require a copyright release from the Project.) Museum: (June 2000) - Kepler Museum in Weil der Stadt by the Kepler-Gesellschaft e. V. Drama: (June 2000) - There is a play about Kepler and Tycho "Reading The Mind of God" by Patrick Gabridge Sites describing Kepler's Laws Sites with biographies, biographical material • http://www.johanneskepler.info/ - a comprehensive and up-to-date collection of Kepler related resources, articles and community discussions on the web. Includes Somnium (Dream) by Johannes Kepler, perhaps the first ever work of science fiction, in which an employee of Tycho Brahe takes a journey to the moon with the help of his bewitched mother in order to study the lunar environment and its inhabitants. • Short Biography MacTutor History of Mathematics Archive Univ St Andrews. Has several portraits. (Caution: There are only four known portraits done in Kepler's lifetime. I have not been able to definitely identify which are which. 
The biograghy by Max Caspar discusses the issue at the very end, but does not reproduce them. The cover of the Dover publication by Caspar is a modern • Outline Biography Rice University • Not so Short Biography by Galileo Project • Golden Age of Astronomy in Prague • Kepler's House in Linz 1625 • Johannes Kepler University-Linz Austria • Kepler Astronomy Picture of the Day • Weil der Stadt, Kepler's birth place. Click on Sehenswürdigkeiten. Then click on the red dot for marktplatz to see the Kepler monument in Weil der Stadt. I much prefer the sharpest criticism of a single intelligent man to the thoughtless approval of the masses. --Johannes Kepler
{"url":"http://kepler.nasa.gov/Mission/JohannesKepler/","timestamp":"2014-04-21T04:31:54Z","content_type":null,"content_length":"301274","record_id":"<urn:uuid:87b74534-aa19-42ed-ac5c-a27b3bfae063>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
'The Golden Section' Issue 17 Nov 2001 The Golden Section Hans Walser, trans. Peter Hilton Mathematical Association of America 2001 The golden section (or golden ratio), famously, was used in antiquity, when the ancient Greeks built temples the proportions of whose parts - by accident or design - are often supposed to have fallen in the golden ratio. In this little book, Hans Walser is concerned with the mathematical properties of self-similarity which presumably give this particular ratio its aesthetic appeal. It therefore fills a gap felt by those who have read about the number's supposed mystical properties but would like a more solid introduction to its mathematical ones. The maths is reasonably brisk, so the book will mainly appeal to readers who have, or are studying, maths at A-level, or at least are not afraid of plenty of equations. The book is divided into chapters on various areas (all more or less geometrical, with the exception of one on Number Sequences) where the golden section has hoisted its standard. It is a good mixture of exposition, questions to keep the reader alert, and pretty diagrams, which of course the subject invites. For example there is a chapter on fractals based on the golden section, and another on solid geometry. A pleasant feature is the frequent references back to earlier chapters, showing how different areas of mathematics interconnect. I have only one small complaint to record, and it concerns the production rather than the contents: the glossy cover is inclined to curl back in a peculiarly distressing way. (Yes, all paperbacks do this to an extent - but not to this extent.) If you can overlook that, this is a stimulating and diverting book. 
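The self-similarity the review alludes to comes from the defining identity φ² = φ + 1, equivalently φ = 1 + 1/φ, and it is why ratios of consecutive Fibonacci numbers converge to φ. A small added illustration (mine, not an example from the book):

```python
# The golden ratio phi satisfies phi^2 = phi + 1; ratios of consecutive
# Fibonacci numbers F(n+1)/F(n) converge to phi = (1 + sqrt(5)) / 2.
from math import sqrt

phi = (1 + sqrt(5)) / 2

a, b = 1, 1
for _ in range(30):
    a, b = b, a + b

print(abs(b / a - phi) < 1e-10)         # Fibonacci ratios -> phi
print(abs(phi**2 - (phi + 1)) < 1e-12)  # the self-similar identity
```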
Book details: The Golden Section Hans Walser, Peter Hilton (Translator) Paperback - 158 pages (2001) The Mathematical Association of America ISBN 0883855348 You can buy the book and help Plus at the same time by clicking on the link on the left to purchase from amazon.co.uk, and the link to the right to purchase from amazon.com. Plus will earn a small commission from your purchase.
{"url":"http://plus.maths.org/content/os/issue17/reviews/book2/index","timestamp":"2014-04-18T03:08:56Z","content_type":null,"content_length":"25077","record_id":"<urn:uuid:778db98a-3716-4fde-bce5-af3d87d31f1d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Noun: coefficient of mutual induction 1. A measure of the induction between two circuits; the ratio of the electromotive force in a circuit to the corresponding change of current in a neighbouring circuit; usually measured in henries - mutual inductance Derived forms: coefficients of mutual induction Type of: coefficient
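As a worked illustration of the ratio in the definition (the numbers here are arbitrary): if a current changing at 2 A/s in one circuit induces an emf of 4 V in a neighbouring circuit, the mutual inductance is M = emf / (dI/dt) = 2 henries.

```python
# Mutual inductance from the definition: M = emf / (dI/dt), in henries.
def mutual_inductance(emf_volts, dI_dt_amps_per_sec):
    return emf_volts / dI_dt_amps_per_sec

print(mutual_inductance(4.0, 2.0))  # -> 2.0 henries
```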
{"url":"http://www.wordwebonline.com/en/COEFFICIENTOFMUTUALINDUCTION","timestamp":"2014-04-21T05:07:47Z","content_type":null,"content_length":"7863","record_id":"<urn:uuid:2a1ef494-a549-4b95-8267-e495c38c52d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Infinite number of primitive Pythagorean triples

March 27th 2010, 09:45 AM
Prove that there exists an infinite number of primitive Pythagorean triples x, y, and z with y even such that y is a perfect cube. I can write down the definition, but I have no idea how to prove this... Any help is appreciated!

[also under discussion in math links forum]

March 27th 2010, 09:55 AM
Hint: if $a,b,c\in\mathbb{Z}\,\,\,then\,\,\,a^2+b^2=c^2\Longleftrightarrow a=m^2-n^2\,,\,b=2mn\,,\,c=m^2+n^2\,,\,\,m,n\in\mathbb{Z}$. The triple $(a,b,c)$ is called primitive if $\gcd(m,n)=1$ and exactly one of them is even, so: can you find infinitely many pairs $m,n$ as above? Well, there you have your infinite primitive Pythagorean triples. (Cool)

March 27th 2010, 09:55 AM
WLOG we know $y=2mn$ for some $m>n$. So let $m=4k^3$ and $n=1$. Now observe $y=2\cdot4k^3 = (2k)^3$.

March 28th 2010, 12:41 AM
Hint: if $a,b,c\in\mathbb{Z}\,\,\,then\,\,\,a^2+b^2=c^2\Longleftrightarrow a=m^2-n^2\,,\,b=2mn\,,\,c=m^2+n^2\,,\,\,m,n\in\mathbb{Z}$. The triple $(a,b,c)$ is called primitive if $\gcd(m,n)=1$ and exactly one of them is even, so: can you find infinitely many pairs $m,n$ as above? Well, there you have your infinite primitive Pythagorean triples. (Cool)

Yes, but the question also requires that y must be a perfect cube.

March 28th 2010, 12:46 AM
Thanks, but can you explain how you came up with these answers, and how we can prove that the resulting triples will all be "primitive" Pythagorean triples and that y is a perfect cube?
I have seen the theorem: The positive primitive solutions of $x^2 + y^2 = z^2$ with y even are $x = r^2 - s^2,\ y=2rs,\ z = r^2 + s^2$, where r and s are arbitrary integers of opposite parity with $r>s>0$ and $(r,s)=1$. Thank you!

March 28th 2010, 01:47 AM
Thanks, but can you explain how you came up with these answers, and how we can prove that the resulting triples will all be "primitive" Pythagorean triples and that y is a perfect cube? I have seen the theorem: The positive primitive solutions of $x^2 + y^2 = z^2$ with y even are $x = r^2 - s^2,\ y=2rs,\ z = r^2 + s^2$, where r and s are arbitrary integers of opposite parity with $r>s>0$ and $(r,s)=1$. Thank you!

Well, you can choose ANY pair of coprime $m,n\in\mathbb{N}$ with, say, even m, so he chooses $m=4k^3\,,\,n=1$... it is obvious these pairs are coprime, right? By the way, he could as well choose $n=$ ...

March 29th 2010, 12:30 AM
Is there any restriction on k? Also, are you sure that $m=72k^3$ would work? 144 is not a perfect cube...

March 29th 2010, 07:05 AM
In general, to ensure that the triple is "primitive", we need to ensure TWO things, i.e. (i) m and n must be of opposite parity AND (ii) gcd(m,n)=1, right?
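The construction suggested in the thread is easy to check numerically. The sketch below is an added illustration: with m = 4k³ and n = 1, we get y = 2mn = 8k³ = (2k)³, a perfect cube, while gcd(m, n) = 1 and m, n have opposite parity, so every k yields a distinct primitive triple.

```python
from math import gcd

def triple(k):
    # m = 4k^3, n = 1: coprime, opposite parity, m > n
    m, n = 4 * k**3, 1
    x, y, z = m*m - n*n, 2*m*n, m*m + n*n
    return x, y, z

for k in range(1, 6):
    x, y, z = triple(k)
    assert x*x + y*y == z*z              # Pythagorean
    assert gcd(gcd(x, y), z) == 1        # primitive
    assert (2 * k) ** 3 == y             # y is the perfect cube (2k)^3
print("ok")
```

For k = 1 this gives the familiar triple (15, 8, 17) with y = 8 = 2³.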
{"url":"http://mathhelpforum.com/number-theory/135951-infinite-number-primitive-pythagorean-triples-print.html","timestamp":"2014-04-20T06:06:16Z","content_type":null,"content_length":"23214","record_id":"<urn:uuid:a09b95ba-a8e4-41c5-8f09-3f9123e41bbd>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Tutors Winchester, MA 01890 Qualified (PhD, Harvard) and Patient Tutor in Sciences and Math ...'m a well educated and patient tutor for help with science, math, and English. I have worked as a post-doctoral fellow in neuroscience at Princeton, have a Ph.D. in from Harvard, and graduated with a B. Sc. from Columbia. I was a mathematics olympiad regional... Offering 10+ subjects including physics
{"url":"http://www.wyzant.com/Winthrop_MA_physics_tutors.aspx","timestamp":"2014-04-20T12:28:35Z","content_type":null,"content_length":"59506","record_id":"<urn:uuid:2446f549-3e30-459e-b76e-d152359c8b89>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Automatic texture coordinates [Archive] - OpenGL Discussion and Help Forums

Zulfiqar Malik
10-31-2005, 08:31 AM

I am using automatic texture generation for rendering some geometry. The texture coordinates are generated based on the eye space position of the incoming vertex.

vec4f params(0, 0, 1, 0);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGenfv(GL_S, GL_EYE_PLANE, params);
glTexGenfv(GL_T, GL_EYE_PLANE, params);
glTexGenfv(GL_R, GL_EYE_PLANE, params);
// do rendering here

Notice that the plane is (0, 0, 1, 0). Everything works fine. The plane is multiplied with the inverse modelview matrix and the resulting plane is multiplied with the incoming eye space vertex to generate a 3D texture coordinate. However, I tried getting the same job done using the texture matrix, in which case I pass (1, 1, 1, 1) as the plane equation but set a texture matrix in which only index 33 (third row, third column) was set to 1.0 and the rest of the matrix was zeroed out. This should theoretically produce the same result once the matrix is multiplied with the generated texture coordinates, if it follows:

plane' = plane * mv_inverse
texcoord = plane' * eye_space_vertex
texcoord' = texture_matrix * texcoord

Use texcoord' as 3D texture coordinates.

But, sadly, it doesn't :( . Anyone know what the problem might be? Thanks in advance.
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-148358.html","timestamp":"2014-04-21T10:07:55Z","content_type":null,"content_length":"6159","record_id":"<urn:uuid:76e90c91-a69c-42fc-8bb2-6e5994b437e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Bilinear of dot product

October 7th 2011, 12:38 PM #1
How to prove that if the norm on $( \mathbb{R}^n , \ ||\cdot|| )$ satisfies $||x+y||^2+||x-y||^2=2(||x||^2+||y||^2)$, then $<x,y>=\frac{||x+y||^2-||x-y||^2}{4}$ defines a dot product? Intuitively it is true, but I am having great trouble with the proof of its bilinearity. It doesn't seem so simple. Any clues?

October 7th 2011, 01:08 PM #2
Re: Bilinear of dot product
This is the Jordan–von Neumann theorem, and it certainly isn't simple. Unless you are another von Neumann, you are likely to need more than a few clues to find a proof of it. You can find a proof here. (The proof is given for a complex space, but there is a footnote indicating that the proof for a real space is similar.)
Edit. The J–vN theorem is actually for infinite-dimensional spaces. I suppose there might conceivably be a simpler proof for the finite-dimensional space $\mathbb{R}^n$, but I doubt it.
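Not from the thread, but for anyone attacking the bilinearity directly, the standard opening move of the Jordan–von Neumann argument can be sketched as follows (symmetry and $\langle x,x\rangle = \|x\|^2$ are immediate from the definition):

```latex
% Parallelogram law applied to the pairs (x+z, y+z) and (x-z, y-z):
\|x+y+2z\|^2 + \|x-y\|^2 = 2\bigl(\|x+z\|^2 + \|y+z\|^2\bigr),
\qquad
\|x+y-2z\|^2 + \|x-y\|^2 = 2\bigl(\|x-z\|^2 + \|y-z\|^2\bigr).
% Subtract, divide by 8, and use \|x+y \pm 2z\|^2 = 4\,\|\tfrac{x+y}{2} \pm z\|^2:
\langle x,z\rangle + \langle y,z\rangle
  = \tfrac{1}{8}\bigl(\|x+y+2z\|^2 - \|x+y-2z\|^2\bigr)
  = 2\,\bigl\langle \tfrac{x+y}{2},\, z \bigr\rangle.
% Setting y = 0 gives \langle x,z\rangle = 2\langle x/2, z\rangle; combining the
% two identities yields additivity, \langle x,z\rangle + \langle y,z\rangle
% = \langle x+y, z\rangle. Homogeneity then follows for rational scalars by
% induction and for real scalars by continuity of the norm.
```

This is only the additivity step; the cited proof fills in the continuity argument and the complex case.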
{"url":"http://mathhelpforum.com/advanced-algebra/189756-bilinear-dot-product.html","timestamp":"2014-04-18T17:28:32Z","content_type":null,"content_length":"35592","record_id":"<urn:uuid:76edf3e4-83ab-48f9-91bf-a824c17291e8>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Higher Order Set Theory and ZFC
Dmytro Taranovsky dmytro at mit.edu
Wed Apr 18 18:44:17 EDT 2012

Since ZFC is the current standard set theory (that is, by default, provability means provability in ZFC), a question arises as to what extent higher order set theory can be formalized in a theory conservative over ZFC. A well-known theory conservative over ZFC is obtained by adding a symbol kappa and the schema "V_kappa is an elementary substructure of V". However, we can go much further than that:
- we can have multiple cardinals representing Ord instead of just kappa, and
- we can iterate the notion of higher order set theory any finite number of times.

Recall from my paper (and my FOM postings) that for a definable predicate X, a cardinal kappa is X-reflective iff the theory (V, in, X, kappa) with parameters in V_kappa agrees with the theory of (V, in, X, lambda) for every lambda>kappa with sufficiently strong reflection properties. R_1 is the predicate for reflective cardinals and R_{n+1} is the predicate for R_n-reflective cardinals. The natural basic theory of R_{n+1} has consistency strength between n-ineffable and n+1-subtle, but there is a substantial fragment that is conservative over ZFC, defined as follows.

The language consists of the language of ZFC (first order logic and the membership relation), and a predicate R_i for every positive integer i. The axioms are:
(1) ZFC, plus replacement for formulas in the extended language
(2) (schema, n>0) there is a cardinal kappa satisfying R_n(kappa)
(3) (schema, n>0) R_{n+1}(kappa) ==> R_n(kappa)
(4) (schema, n>0, and elementarity is also a schema) R_n(kappa) implies that (V_kappa, in, R_1, ..., R_{n-1}) is an elementary substructure of (V, in, R_1, ..., R_{n-1}).
(5) (schema, n>=0, phi is a formula in (V, in, R_1, ..., R_n) with two free variables, and S is a set definable in (V, in, R_1, ..., R_n)) R_{n+1}(kappa) and R_{n+1}(lambda) ==> forall s in S (phi(kappa,s) <==> phi(lambda,s)).
[Note: In (5), the schema over S is interpreted as a schema over formulas psi in (V, in, R_1, ..., R_n) with one free variable, using S = {s: psi(s)} and with the main statement conditioned on S being a set.]
Conservation over ZFC is proved by showing that given a finite fragment of the theory, R_i (for every used i) can be defined in ZFC so as to satisfy the fragment. (4) and (5) try to assert that elements of R_{n+1} are correct for higher order set theory with R_1, ..., R_n. (4) asserts that V_kappa is similar to V. (5) asserts that (since kappa and lambda are both correct for higher order set theory with R_1,...,R_n) cardinals satisfying R_{n+1} agree with each other. However, being conservative over ZFC imposes strict limitations on how far we can go. For example, in (5), we are using S instead of V_min(kappa,lambda). Another limitation of being conservative over ZFC is that in this theory, we cannot define R_n from R_{n+1}. However, (4) combined with (2) and (3) ensures that there are many cardinals satisfying R_n. If the axioms are extended to include R_1(kappa) ==> "kappa is regular", then the resulting theory is conservative over ZFC + (schema) "Ord is Mahlo", and analogously with other properties besides regularity. (In ZFC, "Ord is Mahlo" is equivalent to (schema) there is a regular kappa such that V_kappa is a Sigma_n elementary substructure of V.)
Dmytro Taranovsky
{"url":"http://www.cs.nyu.edu/pipermail/fom/2012-April/016440.html","timestamp":"2014-04-18T06:16:04Z","content_type":null,"content_length":"6182","record_id":"<urn:uuid:aa893d0a-a89d-43b9-ae38-8f61c7e1b72f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Otter and MACE on TPTP v2.3.0

Here are the results of Otter 3.1 and MACE 1.4 on TPTP v2.3.0. These are the versions of Otter and MACE that will be entered in the CASC-17 ATP System Competition. Otter (which searches for proofs) and MACE (which searches for models/counterexamples) are separate programs. Some of the TPTP problems are satisfiable, so the two programs were combined with a shell script (otter-mace) that runs Otter for up to 5 seconds, then (if no proof was found) runs MACE for up to 5 seconds, then (if no counterexample was found) runs Otter for the rest of the allowed time.

TPTP v2.3.0 has 4229 problems. Most have proofs (are unsatisfiable) and some have models. Otter-MACE found proofs for 1902 problems, and models for 194 problems, giving about a 49.5% success rate (given 5 minutes for each problem) over the whole TPTP. These figures were obtained in May 2000 on a (400 MHz PII) Linux box.

The table has entries of the following form.

Problem   Result  Reason  Seconds  Memory  Generated  Size
ALG002-1  PROOF   ---     1        223K    1851       10
ALG008-1  MODEL   ---     6        -       -          3
ALG001-1  fail    time    300      ??K     ??         -

Problem: the TPTP problem name
Result: PROOF, MODEL, or fail
Reason: reason for failure
Seconds: time used
Memory: memory used (applies only to Otter searches)
Generated: clauses generated (applies only to Otter searches)
Size: proof length or model size

Here is the table (gzipped plain text).

These activities are projects of the Mathematics and Computer Science Division of Argonne National Laboratory.
{"url":"http://www.mcs.anl.gov/research/projects/AR/otter/tptp230.html","timestamp":"2014-04-19T14:51:33Z","content_type":null,"content_length":"3619","record_id":"<urn:uuid:c194ddfb-e1af-4b0e-967d-a54a3a508401>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
partial derivative of function of dependent variables

You don't take partial derivatives of variables. You take partial derivatives of functions.

I don't see the difference.

A variable is a symbol that's used to represent a member of some set. A constant is a variable that always represents the same thing. A function should be thought of as a "rule" that associates exactly one member of a set (called the codomain) with each member of a set (called the domain). (This is not a definition of "function"; it's just an explanation of the concept. The actual definition is a bit tricky and not very relevant here. If you're curious, see this post. Definition 2 is probably the most useful one.) Most functions can be defined by specifying a relationship between variables. For example, the specification x-y=1 implicitly defines two functions ##f,g:\mathbb R\to\mathbb R## that can also be defined by f(t)=1+t for all ##t\in\mathbb R##, and g(t)=1-t for all ##t\in\mathbb R##. (Note that it never matters what variable is used in a "for all" statement.) These functions are differentiable. For all ##t\in\mathbb R##, we have f'(t)=1 and ##g'(t)=-1##. Note that this makes f' and g' constant functions, not to be confused with constants as defined above. The notation dy/dx doesn't refer to a derivative of y. It can't, because y isn't a function. The y in the numerator and the x in the denominator let us know that we're supposed to compute the derivative of the function that "takes x to y" and then plug x into the result (which is another function) to get the final result. If x-y=1, then the function that "takes x to y" is the one I called g.

There's no such thing as a partial derivative of a variable. Typically, a math book will say something like this: Let n be a positive integer. Let E be a subset of ##\mathbb R^n##. Let x be an interior point of ##E##. Let ##\{e_1,e_2,\dots,e_n\}## be the standard basis for ##\mathbb R^n##. Let ##k\in\{1,2,\dots,n\}## be arbitrary.
If the limit $$\lim_{t\to 0}\frac{f(x+te_k)-f(x)}{t}$$ exists, then its value is called the kth partial derivative of f at x, or the "partial derivative of f at x, with respect to the kth variable". There are many different notations for it, for example ##D_kf(x)##, ##\partial_kf(x)##, ##\partial f(x)/\partial x_k##, ##f^{(k)}(x)## and ##f_{,\,k}(x)##. Now, if n=2, we will usually write f(x,y) instead of f(x) (with ##x\in\mathbb R^2##) or ##f(x_1,x_2)##. Because x is traditionally put into the first variable "slot" of f, the notation ##\partial f(x,y)/\partial x## is an alternative to the five notations mentioned above (with k=1 and (x,y) replacing x).

Assuming for a function that all parameters are independent, then you can't say "z(x, y) = x + y. It is a function of two parameters.", because we don't know if x and y are independent of each other.

The thing you said that he can't say is "z(x, y) = x + y. It is a function of two parameters." Tac-tics was talking about the ##z:\mathbb R^2\to\mathbb R## defined by ##z(x,y)=x+y## for all ##x,y\in\mathbb R##. It never matters what variables are used in a "for all" statement, so we could have said "...defined by ##z(s,t)=s+t## for all ##s,t\in\mathbb R##." The definition of z doesn't in any way depend on some variables x and y, so there's nothing you can say about x and y that will change the function z. On the other hand, if you say that x, y, z are variables representing real numbers, and then say that z=x+y, then this (i.e. the string of text "z=x+y") is a constraint that prevents us from assigning arbitrary values to all three variables, and also ensures that if we assign values to any two of them, the value of the third is fixed. This means that the constraint implicitly defines at least three functions, and we can compute the partial derivatives of those. If you also specify that x=y, then this further reduces our ability to assign values to the three variables. Now if we assign a value to any of them, the values of the other two are fixed.
So the pair of constraints (z=x+y, x=y) defines at least six functions implicitly, and we can compute the derivatives of those.

So I don't understand why it's false that $$\frac{\partial}{\partial t}f(x_{i},x_{j},x_{k},...) = \frac{\partial}{\partial t}f(x_{i})=\frac{d}{dt}f(x_{i})$$

The partial derivative notation doesn't really make sense here. Since t isn't one of the variables in the "slots" of f, there's no way to interpret ##\partial f/\partial t## as ##D_kf## for some k.

As Tac-Tics suggested, I AM confused about function and variable.

That's OK. A lot of people are. I think the reason is that math books and math teachers don't explain the stuff that I said at the start of this post.

1. In an equation, if one side is a variable then the other side is also a variable.

I wouldn't say that. ##x^2=\pi## is an equation. Here x is a variable, because it represents a real number, and ##\pi## is a constant, because it represents a real number and it represents that same real number wherever it appears.

2. f(x[1],x[2],x[3]...) is a short notation which says f is equal to some expression involving x[1],x[2],x[3]... . More importantly, f(x[1],x[2],x[3]...) does not assume any relation between x[1],x[2],x[3]... . More than one function notation can be formed out of a single expression.

f is a function. (x[1],x[2],x[3],...) is a member of the domain of f. f(x[1],x[2],x[3],...) is a member of the codomain of f, called "the value of f at (x[1],x[2],x[3],...)". For example, if ##f:\mathbb R^2\to\mathbb R## is defined by ##f(x,y)=5x+y^2## for all ##x,y\in\mathbb R##, then f is a function, x and y are real numbers, (x,y) is an ordered pair of real numbers, i.e. a member of ##\mathbb R^2##, f(x,y) is a real number (but we only know which one if we know the values of x and y), f(7,3) is a real number (and we know that it's 44).
It is never OK to write f(x,y)=f(x), because the left-hand side only makes sense if the domain of f is a subset of ##\mathbb R^2## and the right-hand side only makes sense if the domain of f is a subset of ##\mathbb R##. Note that the definition of a function is always a "for all" statement, even if the words "for all" have been omitted. For example, the words "the function ##x^2##" are strictly speaking nonsense. The correct way to say it is "the function ##f:\mathbb R\to\mathbb R## defined by ##f(x)=x^2## for all ##x\in\mathbb R##". Even the phrase "the function ##f(x)=x^2##" is flawed in four (!) different ways:
1. Neither of the strings of text "f(x)=x^2" or "f(x)" represents a function. The function is denoted by f.
2. There's no specification of the domain.
3. There's no specification of the codomain.
4. The absence of the words "for all" hides the fact that x is a dummy variable (one that can be replaced by any other symbol without changing the meaning of the statement).
{"url":"http://www.physicsforums.com/showthread.php?p=4213827","timestamp":"2014-04-21T12:23:34Z","content_type":null,"content_length":"133423","record_id":"<urn:uuid:adc95645-78f6-495a-9687-9241b4ac0906>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Journey North Manatees
How to Map Satellite Telemetry Data

The Satellite Data Mapping Charts in this lesson will help students pinpoint the manatee's location on a map. Using the satellite data provided in each migration update, they should follow the instructions for columns A-J on the charts provided. With a transparency, they can draw on top of your map as they work to find the exact location. Step by step examples are also provided below.

Materials Needed

Example #1: Find the Manatee's Latitude

DATE     (A) Manatee's Latitude  (B)   (C)   (D)     (E)=(C)x(D)
1/23/02  25.700 N                25 N  .700  115 mm  80.5 mm

Column A: Record the manatee's exact latitude here.
Column B: Find the latitude line nearest the manatee, directly to the south, and record that number here. Put a transparency on top of your map. Draw a line along this latitude line.
Column C: The manatee is a fraction of a degree north of the line you just drew for B. As given in the satellite data, record the fraction as a decimal in this column.
Column D: Measure the scale of your map for latitude. Record the scale here in mm per degree latitude.
Column E: Find out how far north (in mm) the manatee is located from the latitude line you drew for (B). To do this, multiply the decimal in column C by the scale of your map (D) and record the product here. Now draw a line this distance north of the line you drew for (B). The manatee is located on this latitude line.

Example #2: Find the Manatee's Longitude

DATE     (F) Manatee's Longitude  (G)   (H)   (I)     (J)=(H)x(I)
1/23/02  81.300 W                 81 W  .300  164 mm  49.2 mm

Column F: Record the manatee's longitude here.
Column G: Find the longitude line nearest the manatee, directly to the east, and record that number here. Put the transparency back on your map.
Draw a line along this longitude line.
Column H: The manatee is a fraction of a degree west of the line you just drew for G. As given in the satellite data, record the fraction as a decimal in this column.
Column I: Measure the scale of your map for longitude. Record the scale here in mm per degree longitude.
Column J: Find out how far west (in mm) the manatee is located from the longitude line you drew for (G). To do this, multiply the decimal in column (H) by the scale of your map (I) and record the product here. Now draw a line this distance west of the line you drew for (G). The manatee is located on this longitude line, exactly where it intersects the latitude line.

Note to Teachers
You may need to alter these instructions, depending on the scale of the map you use. These instructions assume your map shows increments of 1 degree. If your map is different than this, follow the basic idea but adjust your numbers accordingly.

Copyright 2002 Journey North. All Rights Reserved. Please send all questions, comments, and suggestions to our feedback form.
{"url":"http://www.learner.org/jnorth/tm/manatee/MapSatelliteData.html","timestamp":"2014-04-17T06:48:22Z","content_type":null,"content_length":"10252","record_id":"<urn:uuid:c8a311a3-e8bc-42da-b423-ad4c13172ff5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
t-walk on the wild side

When I read in the abstract of the recent A General Purpose Sampling Algorithm for Continuous Distributions, published by Christen and Fox in Bayesian Analysis, that

We develop a new general purpose MCMC sampler for arbitrary continuous distributions that requires no tuning.

I am slightly bemused. The proposal of the authors is certainly interesting and widely applicable, but to cover arbitrary distributions in arbitrary dimensions with no tuning and great performances sounds too much like marketing on steroids! The 101 Theorem of MCMC methods is that, no matter how good your sampler is, there exists an exotic distribution out there whose only purpose is to make it fail.

The algorithm in A General Purpose Sampling Algorithm for Continuous Distributions is based on two dual and coupled chains which are used towards a double target $\pi(x)\pi(x^\prime)$. Given that only one of the two chains moves at each iteration, according to a random walk, there is a calibration parameter that definitely influences the performances of the method, if not the acceptance probability. This multiple-chain approach reminds me both of coupled schemes developed by Gareth Roberts in the late 1990′s, along with Laird Breyer, in the wake of the perfect sampling “revolution”, and of delayed rejection sampling, as proposed by Antonietta Mira in those years as well. However, there is no particular result in the paper showing an improvement in convergence time over more traditional samplers. (In fact, the random walk nature of the algorithm strongly suggests a lack of uniform ergodicity.) The paper only offers a comparison with an older optimal scaled random walk proposal of Roberts and Rosenthal (Statistical Science, 2001), rather than with the more recent and effective adaptive Metropolis-Hastings algorithm developed by the same authors.

Since the authors developed a complete set of computer packages, including one in R, I figure people will start to test the method to check for possible improvement over the existing solutions. If the t-walk is indeed superior sui generis, we should hear more about it in the near future…

3 Responses to “t-walk on the wild side”

1. [...] dive MH A new Metropolis-Hastings algorithm that I would call “universal” was posted by Somak Dutta yesterday on arXiv. contains a different Metropolis-Hastings algorithm [...]

2. Colin: The comparison with our off-the-shelf PMC sampler is shown in Darren Wraith’s experiment. I agree with you that the more basic proposals we have the better, since mixing proposals always makes things better.

3. Of course one is correct to be skeptical about a random-walk sampler that claims to be super-fast for arbitrary distributions and have no tuning. Fortunately we do not claim that for the t-walk — we are way too seasoned in running MCMC on big problems to be so mistaken. What we do claim is that the t-walk will do an adequate job for many routine problems faced by practicing statisticians, and significantly reduce the problem-specific work required, so reducing the human time required to complete an analysis. In the present age of exotic super-multi-hyper MCMC algorithms, plain old utility to the practitioner seems to be not enough for some people. We still believe it is a laudable goal. To believe that a method must demonstrate “a … result … showing an improvement in convergence time over more traditional samplers” is missing exactly this point. It is true that there are distributions for which the t-walk will perform poorly, and we mention some that we found in the paper. Nevertheless, the zero time that it takes to fire up a t-walk sampler makes it a great thing to try first. If it does not work for your problem, move on — it has cost you very little.

The adequate performance of the t-walk is demonstrated by the comparisons presented in the paper. You don’t need to take our word for it — you can download the t-walk code and run the tests.

On a technical note, I think the comparison to Roberts’ coupled schemes (that were big advances when presented) is naive in terms of important algorithmic properties. It is not an accident that the t-walk scales very differently with problem size. One is missing an(other) important property of a sampler to not see scaling as a primary issue.
{"url":"http://xianblog.wordpress.com/2010/03/12/t-walk-on-the-wild-side/","timestamp":"2014-04-18T00:30:11Z","content_type":null,"content_length":"40870","record_id":"<urn:uuid:0e296b50-5df2-4a0c-925f-f30b707a35cd>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate the Surface Area of a Rectangular Prism

Edited by Franky Hill, Zoe Volt, Maluniu, Jack Herrick and 35 others

A rectangular prism is a fancy name for a 6-sided object that is very familiar to everybody—the box. Think of the common brick, or a shoebox, and you know exactly what a rectangular prism is. This article will show you how to calculate the surface area of this shape.

Method 1 of 2: Area = 2ab + 2bc + 2ac

1. Understand what a rectangular prism is. When you look at the example below, you'll see there are 6 sides total. Each side is exactly the same as the side opposite it, so really, there are only 3 basic rectangles to deal with. If you simply find the area of each of the 3 rectangles, add them together, then multiply by 2, you'll have the total area.
□ Our example box is 4 inches wide (a) by 5 inches long (b) by 3 inches high (c).
2. Learn the formula. To calculate the surface area of a rectangular prism, use the following formula: 2ab + 2bc + 2ac
□ What does that mean? To put it in plain English, it means you multiply the width times the length, and then multiply that by 2. Then you multiply the length times the height, and multiply that result by 2. Next, you multiply the width times the height, and multiply that by 2 as well. Finally, you add all three results together to get your final answer. Let's take that one step at a time.
3. Find the area of the base face. The base face is the bottom side of the shape, shown in yellow. To find its area, multiply the length by the width. The first part of the formula is 2ab, so 2ab = 2*(4*5) = 2*(20) = 40.
4. Find the area of the long face. This is shown in purple. You find it by multiplying the length by the height. The middle part of the formula is 2bc, so 2bc = 2*(5*3) = 2*(15) = 30.
5. Finally, find the area of the short face. This is shown in green. The last part of the formula is 2ac, so 2ac = 2*(4*3) = 2*(12) = 24.
6. Now, add them up. 2ab + 2bc + 2ac = 40 + 30 + 24 = 94. The surface area of this rectangular prism is 94 square inches.

Method 2 of 2: Area = 2B + Ph

1. Learn the formula. To calculate the surface area using the perimeter of the base, we'll use the formula 2B + Ph. Here's what the letters mean:
□ B = Area of the base.
□ P = Perimeter of the base.
□ h = Height of the prism.
2. Use the same rectangular prism as Method 1 above.
3. Calculate the area of the base (B). The area of the base is B = a*b = 4*5 = 20.
4. Calculate the perimeter. The perimeter of the base face is found by adding the lengths of its sides together. If we look at it like a formula, that would be P = 2a + 2b. Using our example, we know the base is 4 inches wide by 5 inches long. Our perimeter is 2(4) + 2(5) = 8 + 10 = 18.
5. Put the numbers into the formula. In our example:
□ 2B + Ph = (2*20) + (18*3) = 40 + 54 = 94.

• Finding the surface area of a rectangular prism comes in handy in real life situations more often than you would think—cupboards, doors, rooms, etc., are often rectangular prisms, meaning you may have to calculate their surface area for home improvement projects.
• A rectangular prism is a type of cuboid, which is a geometry term for any solid figure with six faces, making it a type of convex polyhedron.
• Finding the surface area of a rectangular prism may seem difficult, but it's not once you get comfortable with the formula. Practice finding surface area with the 2B + Ph formula several times to hone your expertise.
{"url":"http://www.wikihow.com/Calculate-the-Surface-Area-of-a-Rectangular-Prism","timestamp":"2014-04-18T23:23:45Z","content_type":null,"content_length":"77992","record_id":"<urn:uuid:23c71396-ee84-4254-a7d9-92eaa1590233>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Answer to the Friday Puzzle…..

On Friday I set this little puzzle…..I have come up with a way of sorting numbers into one of three groups. Here are some examples of the numbers in my three groups….
Group One: 0, 3, 6, 8, 9
Group Two: 1, 4, 7, 11, 14
Group Three: 2, 5, 10, 12, 13
In which group should I place the numbers 15, 16 and 17? If you have not tried to solve it, have a go now. For everyone else the answer is after the break.

All of the numbers in Group 1 only have curves, all of those in Group 2 only have straight lines, and Group 3 have a mixture. So, 15 goes into Group 3, 16 into Group 3, and 17 into Group 2. Did you solve it? Any other answers?

I have produced an ebook containing 101 of the previous Friday Puzzles! It is called PUZZLED and is available for the Kindle (UK here and USA here) and on the iBookstore (UK here and in the USA here). You can try 101 of the puzzles for free here.

30 comments on “Answer to the Friday Puzzle…..”
1. 17 in Group 1?
2. Surely it depends on the font you’re using. That 7 looks as though it possesses a curve to me.
3. I don’t understand the answer, surely 17 goes into group 2?
4. 17 in group 2 surely.
□ It does, don’t know what the others are on or on about! Duh!!
5. As others have mentioned, this depends too much on fonts to be a good puzzle (for example, on my computer viewed through Google Reader, 7 has a curve, and this page loaded directly gives 1 a slight curve on the top left). On the other hand, a font that used a digital-clock style for numbers would have nothing but straight lines.
6. Friday it said: “A fun mathematical puzzle this week.” This is not mathematical. Anyway, I didn’t get it.
7. Yes, I thought that I was wrong, because it is a very strange answer! :)
8. Yup, exactly the answer I had.
9. ‘Friday it said: “A fun mathematical puzzle this week.” This is not mathematical.’ It’s called misdirection.
10. yes, yes, as others have stated: on my computer the 7 has a curve.
also on some computers the serif twiddles may have curves. not fair test. that’s why setting this puzzle in roman Numbers would be fairer. □ That’s all very well, but I don’t speak Italian 11. got it, using recursive non-derivative algebra in a leveraged geometric configuration, They all were in group 2 on my calculator. 12. I agree with -M-. I read this was a mathematical puzzle so I decided to dismiss any other patterns. 13. Yes solved it! 14. It bamboozled me 15. The question was misleading. You said “In which group” when you meant “groups”. Grrrr. □ Not many others picked up on this except you and I. I was annoyed too. □ I noticed it actually, but did not comment. Richard isn’t very good at questions……… on reflection he’s not very good at answers either. 16. I got it, as well as the fact that it had nowt to do with maths. 17. I got it after one of the comments questioned the “mathematical” aspect of the puzzle alerting us all to revert to the old think out of the box cliche. 18. I have defined a function f such that f(15)=0,3,6,8,9… ; f(16)=1,4,7,11,14… ; f(17)=2,5,10,12,13… . Unfortunately, this comment box is too small to write f in full. *This*, my friend, is a mathematical answer. 19. Blindingly obvious once it is pointed out. Embarrassing not to see it in a few seconds. Luckily I was still stuck on the two trains. □ Make sure you don’t make any unwarranted assumptions about ground and perpendiculars. 20. i saw a really different pattern emerge. Usually when people start counting, they start at number 1 but this puzzle started on zero. Zero being the first number in the sequence, goes in group1. Then the number One, being the second number in the sequence, goes in group2. The number two is third so it goes in group3, and so on. that pattern continues until the number 8. 8 and 9 are in the first group (sequential numbers) and the numbers 12 and 13 are also sequential numbers and are in group3. After this, it’s complicated for me to explain . 
but there’s a bit more to it. Since we are dealing with numbers, one can think about them being group for many different reasons. Nonetheless, I do agree with previous comments, the original solution to the grouping wasn’t really 21. Couldn’t get to grips with this but my 12 year old son spotted it in seconds – gutted! 22. I didn’t solve it, because I’m lazy. But after reading the answer it made perfect sense. It is mathematical, if you look at the common factors in group 1 and 2 you would then be able to solve the problem. In it’s simplest form 1+2=3. JMO 23. Assumptions!!! Did not make it explicit that 15,16,17 didn’t all go into the same group… Had me coming up with some crazy factorials 24. I got it. What I don’t get is why the font sould be such a problem. When you learn to write numbers at school, you learn a standar way to draw them. So in my opinion even written in “corsiva”, 1, 7 and 4 are still known to be written with straigt lines. I understand better the messages that argue that this is not a mathematical puzzle. But couldn’t we consider that curves and straight lines are geometric matters, and therefore part of mathematics ? 25. group vs groups…. WISEMAN!!!!!! *shakes fist*
patterns from complex numbers

April 28th 2011, 02:21 AM
i have been given to obtain solutions to z^n = i for n = 3, 4, 5 by using De Moivre's Theorem. then the teacher is saying you need to prove and generalize your results for z^n = a+bi, where |a+bi| = 1. i dont know what to do... any ideas would be much appreciated...

April 28th 2011, 06:47 AM
Note that $|a+bi|=1 \iff a+bi = e^{i(\theta+2\pi k)}, \theta \in \mathbb{R}, k \in \mathbb{Z}$. So if that is the case then $z^n = e^{i(\theta+2\pi k)} \iff ...$
Hint: use your exponential laws and there are n solutions!

April 28th 2011, 08:36 AM
hmm..yeah..but then how can i go further...basically.. z^n = a+bi, where |a+bi| = 1...this is what i need to prove...by using De Moivre's theorem... sorry if i bother you again..but i did not quite get how to solve..

April 28th 2011, 08:45 AM
Frankly, I do not understand what the original question is asking you to show. You see, we do not know what you have shown in previous problems. What do you think you are to show?

April 28th 2011, 10:52 AM
Perhaps we should start at the beginning. You had to obtain solutions for z^n = i for n = 3, 4, and 5 using De Moivre's theorem. What have you been able to do with this part?

April 28th 2011, 11:46 AM
OK..so basically...at the beginning of the task i had to obtain solutions by De Moivre's theorem for the equation z^n = 1...find a pattern...so i came up with the conjecture that works for it..it was quite easy....now...i am stuck in obtaining solutions to solve z^n = i for n = 3, 4, 5 by using De Moivre's theorem...it says...represent the solutions on the argand diagram and then generalize and prove your result for z^n = a+bi, where |a+bi| = 1...finally..what happens when |a+bi| ≠ 1..this is the whole task..but is quite ridiculous...the teacher is gonna mark it...and does not care much....and it will count for the final mark...dont really know how to do it...thanks that you are here..
April 28th 2011, 01:18 PM
Quote (the previous post, quoted in full):
(sighs) Actually, as of now you probably won't thank me that I'm here. We cannot help you with problems that count toward the final grade. Thread closed.
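For anyone wanting to check such work numerically: since i = e^{iπ/2}, De Moivre's theorem gives the n solutions of z^n = i as z_k = e^{i(π/2 + 2πk)/n} for k = 0, ..., n-1. A quick Python sketch (my own illustration, not code from the thread):

```python
import cmath

def roots_of_i(n):
    """The n solutions of z**n = i, via De Moivre's theorem.

    i = e^{i*pi/2}, so z_k = e^{i*(pi/2 + 2*pi*k)/n} for k = 0..n-1.
    """
    return [cmath.exp(1j * (cmath.pi / 2 + 2 * cmath.pi * k) / n)
            for k in range(n)]

for n in (3, 4, 5):
    for z in roots_of_i(n):
        assert abs(z**n - 1j) < 1e-9  # each root really satisfies z^n = i
```

Plotted on an Argand diagram, the n roots sit equally spaced around the unit circle, which is the pattern the assignment is after.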
Fully Abstract Compositional Semantics for Logic Programs
Results 1 - 10 of 44 citing documents:

1. (1994). Cited by 115 (26 self).
The paper is a general overview of an approach to the semantics of logic programs whose aim is finding notions of models which really capture the operational semantics, and are therefore useful for defining program equivalences and for semantics-based program analysis. The approach leads to the introduction of extended interpretations which are more expressive than Herbrand interpretations. The semantics in terms of extended interpretations can be obtained as a result of both an operational (top-down) and a fixpoint (bottom-up) construction. It can also be characterized from the model-theoretic viewpoint, by defining a set of extended models which contains standard Herbrand models. We discuss the original construction modeling computed answer substitutions, its compositional version and various semantics modeling more concrete observables. We then show how the approach can be applied to several extensions of positive logic programs. We finally consider some applications, mainly in the area of semantics-based program transformation and analysis.

2. In Proc. Twentieth Annual ACM Symp. on Principles of Programming Languages, 1993. Cited by 55 (10 self).
This paper describes a semantic basis for a compositional approach to the analysis of logic programs. A logic program is viewed as consisting of a set of modules, each module defining a subset of the program's predicates. Analyses are constructed by considering abstract interpretations of a compositional semantics. The abstract meaning of a module corresponds to its analysis and composition of abstract meanings corresponds to composition of analyses. Such an approach is essential for large program development so that altering one module does not require re-analysis of the entire program. A compositional analysis for ground dependencies is included to illustrate the approach. To the best of our knowledge this is the first account of a compositional framework for the analysis of (logic) programs. 1 Introduction: It is widely acknowledged that as the size of a program increases, it becomes impractical to maintain it as a single monolithic structure. Instead, the program has to be ...

3. In Proceedings of the 4th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR-97), number 1265 in LNCS, 1997. Cited by 36 (12 self).
The research on systems of logic programming with modules has followed two mainstreams, programming-in-the-large, where compositional operators are provided for combining separate and independent modules, and programming-in-the-small, which aims at enhancing logic programming with new logical connectives. In this paper, we present ...

4. (1991). Cited by 30 (4 self).
Concurrent constraint programming [Sar89, SR90, SRP91] is a simple and powerful model of computation based on the notions of store-as-constraint and process as information transducer. In this paper we describe a (domain-theoretic) semantic foundation for cc languages with ask, tell, parallel composition, hiding, recursion and angelic non-determinism ("parallel backtracking"). This class of languages includes the cc/Herbrand language [Sar89] and a simpler reworking [HS91] of CHIP. Generalizing previous work on determinate constraint programming [JPP89, SRP91], we describe the semantics for such a language based on modelling a process as a set of constraints, with parallel composition (conjunction) given by set intersection and or-parallel search (disjunction) given by set union. This is achieved by viewing processes as continuous linear closure operators on the Smyth powerdomain of the underlying constraint system. The model is shown to be fully abstract for the observation of finite appro...

5. Journal of Logic and Computation, 1995. Cited by 28 (2 self).
We consider the Constraint Logic Programming paradigm CLP(X), as defined by Jaffar and Lassez [29, 28]. CLP(X) integrates a generic computational mechanism based on constraints within the logic programming framework. The paradigm retains the semantic properties of pure logic programs, namely the existence of equivalent operational, model-theoretic and fixpoint semantics. We introduce a framework for defining various semantics, each corresponding to a specific observable property of CLP computations. Each semantics can be defined either operationally (i.e. top-down) or declaratively (i.e. bottom-up). The construction is based on a new notion of interpretation, on a natural extension of the standard notion of model and on the definition of various immediate consequences operators, whose least fixpoints on the lattice of interpretations are models corresponding to various observable properties. We first consider some semantics defined in [29] and their relations, in terms of correctne...

6. Proc. Joint Int. Conference and Symposium on Logic Programming, 1992. Cited by 27 (5 self).
We discuss a declarative characterization of inheritance in logic programming. Our approach is inspired both by existing literature on denotational models for inheritance and by earlier work on a compositional definition of the semantics of logic programming. We consider a general form of inheritance which is defined with an overriding semantics between inherited definitions and incorporates two different mechanisms known in the literature as static and dynamic inheritance. The result of our semantic reconstruction is an elegant framework which enables us to capture the compositional properties of inheritance and offers a uniform basis for the analysis of the different mechanisms we consider.

7. LPNMR 2007, LNCS (LNAI), 2007. Cited by 27 (9 self).
Practically all programming languages allow the programmer to split a program into several modules which brings along several advantages in software development. In this paper, we are interested in the area of answer-set programming where fully declarative and nonmonotonic languages are applied. In this context, obtaining a modular structure for programs is by no means straightforward since the output of an entire program cannot in general be composed from the output of its components. To better understand the effects of disjunctive information on modularity we restrict the scope of analysis to the case of disjunctive logic programs (DLPs) subject to stable-model semantics. We define the notion of a DLP-function, where a well-defined input/output interface is provided, and establish a novel module theorem which indicates the compositionality of stable-model semantics for DLP-functions. The module theorem extends the well-known splitting-set theorem and enables the decomposition of DLP-functions given their strongly connected components based on positive dependencies induced by rules. In this setting, it is also possible to split shared disjunctive rules among components using a generalized shifting technique. The concept of modular equivalence is introduced for the mutual comparison of DLP-functions using a generalization of a translation-based verification method.

8. In Proceedings of the North American Conference on Logic Programming, 1989. Cited by 26 (5 self).
We investigate properties of logic programs that permit refinements in their fixpoint evaluation and shed light on the choice of control strategy. A fundamental aspect of a bottom-up computation is that we must constantly check to see if the fixpoint has been reached. If the computation iteratively applies all rules, bottom-up, until the fixpoint is reached, this amounts to checking if any new facts were produced after each iteration. Such a check also enhances efficiency in that duplicate facts need not be re-used in subsequent iterations, if we use the Seminaive fixpoint evaluation strategy. However, the cost of this check is a significant component of the cost of bottom-up fixpoint evaluation, and for many programs the full check is unnecessary. We identify properties of programs that enable us to infer that a much simpler check (namely, whether any fact was produced in the previous iteration) suffices. While it is in general undecidable whether a given program has these properties, we develop techniques to test sufficient conditions, and we illustrate these techniques on some simple programs that have these properties. The significance of our results lies in the significantly larger class of programs for which bottom-up evaluation methods, enhanced with the optimizations that we propose, become competitive with standard (top-down) implementations of logic programs. This increased efficiency is achieved without compromising the completeness of the bottom-up approach; this is in contrast to the incompleteness that accompanies the depth-first search strategy that is central to most top-down implementations.

9. Theor. Comput. Sci., 1999. (No abstract available.)
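The seminaive fixpoint strategy described in the last abstract is easy to demonstrate on a toy example. A minimal sketch (my own illustration, not code from any of the papers above) of seminaive bottom-up evaluation of transitive closure:

```python
def seminaive_tc(edges):
    """Seminaive bottom-up evaluation of transitive closure.

    Each round joins only the facts derived in the previous round
    (the 'delta') against the base relation, so the fixpoint test is
    exactly the simple check the abstract mentions: did the last
    round produce any new fact?
    """
    total = set(edges)
    delta = set(edges)
    while delta:
        # derive new paths that use at least one fact from the last round
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - total   # keep only genuinely new facts
        total |= delta
    return total

paths = seminaive_tc({(1, 2), (2, 3), (3, 4)})
# fixpoint: {(1,2), (2,3), (3,4), (1,3), (2,4), (1,4)}
```

Because duplicate facts are never re-joined, each derivation is attempted once, which is the efficiency gain the entry describes.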
[Scipy-tickets] [SciPy] #1802: Infinite time taken by scipy hypergeom function for some particular values SciPy Trac scipy-tickets@scipy.... Tue Jan 1 10:28:32 CST 2013 #1802: Infinite time taken by scipy hypergeom function for some particular values Reporter: imsc | Owner: somebody Type: defect | Status: needs_review Priority: normal | Milestone: Unscheduled Component: Other | Version: 0.11.0 Keywords: | Changes (by josefpktd): * status: new => needs_review imsc, Thanks for the report. These problems are difficult to find because they show up only in some cases. Ticket URL: <http://projects.scipy.org/scipy/ticket/1802#comment:3> SciPy <http://www.scipy.org> SciPy is open-source software for mathematics, science, and engineering. More information about the Scipy-tickets mailing list
Communicating and trusting proofs: The case for broad spectrum proof certificates (2011)
by Dale Miller, LIX, École Polytechnique. Available from the author's website.

author = {Dale Miller and Lix École Polytechnique},
title = {Communicating and trusting proofs: The case for broad spectrum proof certificates. Available from author’s website},
year = {2011}

Abstract. Proofs, both formal and informal, are documents that are intended to circulate within societies of humans and machines distributed across time and space in order to provide trust. Such trust might lead one mathematician to accept a certain statement as true or it might help convince a consumer that a certain software system is secure. Using this general characterization of proofs, we examine a range of perspectives about proofs and their roles within mathematics and computer science that often appear contradictory. We then consider the possibility of defining a broad spectrum proof certificate format that is intended as a universal language for communicating formal proofs among computational logic systems. We identify four desiderata for such proof certificates: they must be (i) checkable by simple proof checkers, (ii) flexible enough that existing provers can conveniently produce such certificates from their internal evidence of proof, (iii) directly related to proof formalisms used within the structural proof theory literature, and (iv) permit certificates to elide some proof information with the expectation that a proof checker can reconstruct the missing information using bounded and structured proof search. We consider various consequences of these desiderata, including how they can mix computation and deduction and what they mean for the establishment of marketplaces and libraries of proofs.
In a companion paper we propose a specific framework for achieving all four of these desiderata.
With the Masuda Method, how many eggs does it take?
How many eggs does it usually take? I have like 20 charmanders now! Is there a way to increase the chance even more? I know its like 2089% or something like that. But i dont want a box full of charmanders. Please help.

2089%? Do you know what % means? It's like you would have even more shinies than you've ever met... like you met 2 Pokémon and had 400 shinies...
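Setting the garbled percentage aside: if the per-egg shiny probability under the Masuda Method is p, then the expected number of eggs is 1/p and the chance of at least one shiny within n eggs is 1 - (1 - p)^n. A quick sketch, assuming the commonly cited Generation V rate of 6/8192 (about 1/1365); the exact rate differs between games:

```python
def expected_eggs(p):
    # mean number of eggs until the first shiny (geometric distribution)
    return 1 / p

def chance_within(n, p):
    # probability of hatching at least one shiny within n eggs
    return 1 - (1 - p) ** n

p = 6 / 8192  # assumed Gen V Masuda Method rate, roughly 1/1365
print(round(expected_eggs(p)))               # about 1365 eggs on average
print(round(100 * chance_within(20, p), 1))  # ~1.5% after 20 eggs
```

So a box of 20 Charmander is, unfortunately, nowhere near enough eggs to expect a shiny.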
PQ tree

A PQ tree is a tree-based data structure that represents a family of permutations on a set of elements, discovered and named by Kellogg S. Booth and George S. Lueker in 1976. It is a rooted, labeled tree, in which each element is represented by one of the leaf nodes, and each non-leaf node is labelled P or Q. A P node has at least two children, and a Q node has at least three children.

A PQ tree represents its permutations via permissible reorderings of the children of its nodes. The children of a P node may be reordered in any way. The children of a Q node may be put in reverse order, but may not otherwise be reordered. A PQ tree represents all leaf node orderings that can be achieved by any sequence of these two operations. A PQ tree with many P and Q nodes can represent complicated subsets of the set of all possible orderings. However, not every set of orderings may be representable in this way; for instance, if an ordering is represented by a PQ tree, the reverse of the ordering must also be represented by the same tree.

PQ trees are used to solve problems where the goal is to find an ordering that satisfies various constraints. In these problems, constraints on the ordering are included one at a time, by modifying the PQ tree structure in such a way that it represents only orderings satisfying the constraint. Applications of PQ trees include creating a contig map from DNA fragments, testing a matrix for the consecutive ones property, recognizing interval graphs and determining whether a graph is planar.

Examples and notation

If all the leaves of a PQ tree are connected directly to a root P node then all possible orderings are allowed. If all the leaves are connected directly to a root Q node then only one order (and its reverse) is allowed.
If nodes a, b, c connect to a P node, which connects to a root P node, with all other leaf nodes connected directly to the root, then any ordering where a, b, c are contiguous is allowed.

Where graphical presentation is unavailable, PQ trees are often noted using nested parenthesized lists. Square parentheses represent a Q node and regular ones represent P nodes. Leaves are non-parentheses elements of the lists. The image on the left is represented in this notation by [1 (2 3 4) 5]. This PQ tree represents the following twelve permutations on the set {1, 2, 3, 4, 5}:

12345, 12435, 13245, 13425, 14235, 14325, 52341, 52431, 53241, 53421, 54231, 54321.

PC trees

The PC tree, developed by Wei-Kuan Shih and Wen-Lian Hsu, is a more recent generalization of the PQ tree. Like the PQ tree, it represents permutations by reorderings of nodes in a tree, with elements represented at the leaves of the tree. Unlike the PQ tree, the PC tree is unrooted. The nodes adjacent to any non-leaf node labeled P may be reordered arbitrarily as in the PQ tree, while the nodes adjacent to any non-leaf node labeled C have a fixed cyclic order and may only be reordered by reversing this order. Thus, a PC tree can only represent sets of orderings in which any circular permutation or reversal of an ordering in the set is also in the set. However, a PQ tree on n elements may be simulated by a PC tree on n + 1 elements, where the extra element serves to root the PC tree. The data structure operations required to perform a planarity testing algorithm on PC trees are somewhat simpler than the corresponding operations on PQ trees.
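The twelve permutations of [1 (2 3 4) 5] can be enumerated mechanically. A small Python sketch (my own illustration, representing internal nodes as tuples ('P', children) or ('Q', children)):

```python
from itertools import permutations, product

def orderings(node):
    """Yield every leaf ordering represented by a PQ tree node."""
    if isinstance(node, int):                 # leaf element
        yield (node,)
        return
    kind, children = node
    if kind == 'P':                           # any order of the children
        child_orders = permutations(children)
    else:                                     # 'Q': given order or its reverse
        child_orders = (tuple(children), tuple(reversed(children)))
    for order in child_orders:
        # concatenate one ordering from each child, in this child order
        for parts in product(*(list(orderings(c)) for c in order)):
            yield tuple(x for part in parts for x in part)

# The tree [1 (2 3 4) 5] from the text: a Q root containing a P node.
tree = ('Q', (1, ('P', (2, 3, 4)), 5))
perms = sorted(set(orderings(tree)))
print(len(perms))  # 12, matching the twelve permutations listed above
```

By contrast, a root P node over the same five leaves would yield all 120 orderings.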
Surface area
From Fiswiki

Surface area is the measure of how much exposed area a solid object has, expressed in square units. Mathematical description of the surface area is considerably more involved than the definition of arc length of a curve. For polyhedra (objects with flat polygonal faces) the surface area is the sum of the areas of its faces. Smooth surfaces, such as a sphere, are assigned surface area using their representation as parametric surfaces. This definition of the surface area is based on methods of infinitesimal calculus and involves partial derivatives and double integration.

[edit] Common formulas

Surface areas of common solids:

Cube: 6s², where s = side length
Rectangular prism: 2(ℓw + ℓh + wh), where ℓ = length, w = width, h = height
Sphere: 4πr², where r = radius of sphere
Spherical lune: 2r²θ, where r = radius of sphere, θ = dihedral angle
Closed cylinder: 2πr(r + h), where r = radius of the circular base, h = height of the cylinder
Lateral surface area of a cone: πrs, where s = √(r² + h²) is the slant height of the cone, r = radius of the circular base, h = height of the cone
Full surface area of a cone: πr(r + s), where s = √(r² + h²) is the slant height of the cone, r = radius of the circular base, h = height of the cone
Pyramid: B + ½PL, where B = area of base, P = perimeter of base, L = slant height

[edit] Also see

[edit] Reference:
• Wikipedia Surface Area [1]

Return to 'The Science of Snowmaking' or Freestyle Skiing
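The standard surface-area formulas for several of these solids are easy to sanity-check in code; a small Python sketch (function names are my own):

```python
from math import pi, sqrt

def cube(s):                 return 6 * s**2
def rect_prism(l, w, h):     return 2 * (l*w + l*h + w*h)
def sphere(r):               return 4 * pi * r**2
def closed_cylinder(r, h):   return 2 * pi * r * (r + h)

def full_cone(r, h):
    s = sqrt(r**2 + h**2)    # slant height
    return pi * r * (r + s)  # base disk plus lateral surface

print(cube(2))                # 24
print(round(sphere(1), 4))    # 12.5664, i.e. 4*pi
```

For example, a cone with r = 3 and h = 4 has slant height 5, so its full surface area is π·3·(3 + 5) = 24π.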
Division with remainder word problems
Name: _____________                                              © mathkinz
Find Answers:

Q.1) 16 toys are divided equally among 3 kids. How many will each receive and how many will be left?  Answer: _______
Q.2) 25 stamps are divided equally among 4 girls. How many stamps will be left after dividing?  Answer: _______
Q.3) Find the remainder after dividing 28 by 7.  Answer: _______
Q.4) Find the remainder after dividing 34 by 6.  Answer: _______
Q.5) A raffle ticket costs $4. If I have $30, how many tickets at most can I buy with $30, and how much money will be left?  Answer: _______
Q.6) An airplane costs 30 million dollars. If an airline has 320 million dollars, how many airplanes can be bought with $320 million?  Answer: _______
Q.7) After dividing 35 pencils equally among 6 kids, how many pencils are left?  Answer: _______
Q.8) There are 18 paintings for a house. If there are 4 rooms and each room is supposed to have the same number of paintings, how many paintings will be left?  Answer: _______
Q.9) After dividing 45 by 8, find the remainder.  Answer: _______
Q.10) 35 notebooks are divided equally in a class. If each kid got 4 notebooks and 3 are left, find the number of kids in the class.  Answer: _______
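Each answer above is just a quotient and a remainder, so the worksheet can be checked with Python's built-in divmod (the numbers come from the questions):

```python
# divmod(a, b) returns (quotient, remainder) in one step
print(divmod(16, 3))      # Q.1: each of the 3 kids gets 5 toys, 1 toy left
print(divmod(25, 4))      # Q.2: 1 stamp left over
print(divmod(34, 6))      # Q.4: remainder 4
print(divmod(30, 4))      # Q.5: 7 tickets, $2 left
print(divmod(320, 30))    # Q.6: 10 airplanes can be bought
print(divmod(35 - 3, 4))  # Q.10: 8 kids in the class
```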
imsubtract (Image Processing Toolbox User's Guide)

Subtract one image from another, or subtract a constant from an image.

Z = imsubtract(X,Y) subtracts each element in array Y from the corresponding element in array X and returns the difference in the corresponding element of the output array Z. X and Y are real, nonsparse numeric arrays of the same size and class, or Y is a double scalar. The array returned, Z, has the same size and class as X unless X is logical, in which case Z is double.

If X is an integer array, then elements of the output that exceed the range of the integer type are truncated, and fractional values are rounded.

Examples
• Subtract two uint8 arrays. Note that negative results are rounded to 0.
• Estimate and subtract the background of an image.
• Subtract a constant value from an image.

See Also: imabsdiff, imadd, imcomplement, imdivide, imlincomb, immultiply, ippl

© 1994-2005 The MathWorks, Inc.
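For readers without MATLAB, the uint8 arithmetic described above (negatives truncate to 0, values above 255 truncate to 255, fractions round) can be mimicked in plain Python. A rough sketch of the idea, not MathWorks code:

```python
def imsubtract_u8(x, y):
    """Element-wise x - y with uint8-style semantics: round any
    fractional result, then clamp it into the valid range [0, 255]."""
    def sub(a, b):
        return min(255, max(0, round(a - b)))
    if isinstance(y, (int, float)):               # Y is a scalar constant
        return [sub(a, y) for a in x]
    return [sub(a, b) for a, b in zip(x, y)]      # Y is another "image"

print(imsubtract_u8([255, 10, 75], [50, 50, 50]))  # [205, 0, 25]
print(imsubtract_u8([100, 200], 50.25))            # [50, 150] (rounded)
```

Note that Python's round sends exact halves to the nearest even integer, which can differ from MATLAB's rounding of .5 cases, so treat this only as an illustration of the truncation behavior.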
Nicholas Branca Nicholas Branca Co-Principal Investigator, Mathematics Renaissance K–12 (MRK-12) Co-Principal Investigator, Video Cases for Mathematics Professional Development • Ed.D. Mathematics, Teachers College, Columbia University • M.A. Mathematics, Teachers College, Columbia University • M.A.T. Mathematics, Harvard University • B.S. Mathematics, Iona College Nicholas Branca grew up in the South Bronx and taught junior and senior high school mathematics, served as a research assistant or associate at Columbia and Stanford Universities; and was on the faculties of Stanford and Pennsylvania State Universities. He was a Professor in the Mathematical Sciences Department at San Diego State University from 1976 until he passed away in 2008. During that time he received numerous grants to enhance the professional development of teachers of mathematics (the San Diego Mathematics Project, the Authentic Assessment Institute, and a Pre-College Teacher Development Grant funded by the National Science Foundation). In his work with Teacher Professional Development he served on various committees and/or boards of professional organizations, among them the Greater San Diego Mathematics Council, the California Mathematics Council, and the National Council of Teachers of Mathematics. He was a working group member of the NCTM Commission which drafted the "NCTM Professional Teaching Standards". Professor Branca served as the Executive Director of the California Mathematics Project. The California Mathematics Project is one of nine California Subject Matter Projects (CSMPs) funded by California state legislation. The CSMPs are a statewide network of subject-specific professional development programs for teachers. The primary mission of the CSMPs is to improve instruction in all disciplines at all grade levels throughout California. He was interested in working with teachers and teacher leaders and learning about the work of others involved in similar pursuits.
The fundamental properties of computing Physics works with fundamental properties such as mass, speed, acceleration, energy, and so on. Quantum mechanics has a well known trade-off between position and momentum: you can know where I am, or how fast I am going, but not both at the same time. Algorithms (and their implementations) also have fundamental properties. Running time and memory usage are the obvious ones. In practice, there is often a trade-off between memory usage and the running time: you can have a low memory usage, or a short running time, but not both. Michael Mitzenmacher reminded me this morning of another: correctness. On some difficult problems, you can get a low memory usage and a short running time if you accept an approximate solution. I believe there are other fundamental properties like latency. Consider problems where the volume of the solution and of the input is large: statistics, image processing, finding some subgraph or sublist, text compression, and so on. In such instances, the solution comes out as a stream. You can measure the delay between the input and the output. For example, a program that compresses text by first scanning the whole text might have high latency, even if the overall running time is not large. Similarly, we can give the illusion that a Web browser is faster by beginning the Web page rendering faster, even if the overall running time of the rendering is the same. As another example, I once wrote a paper on computing the running maximum/minimum of an array where latency was an issue. It would be interesting to come up with a listing of all the fundamental properties of computing. If correctness is a fundamental property then you should probably define it. I suspect this is harder than it sounds. Does it mean it passes a set of tests? Does it have a specific mathematical relation that it must satisfy? Do the results make the customer happy? They are not synonymous nor are they mutually exclusive.
Clearly there are degrees of correctness, but it is a more relativistic measure than resource usage. Comment by Geoff Wozniak — 13/1/2010 @ 11:31 @Geoff Absolutely. And it might be hard to define latency as well. But the mere fact that it is hard to define these quantities should not prevent us from thinking about them as being fundamental concepts (if not properties). Comment by Daniel Lemire — 13/1/2010 @ 13:17 @Daniel Certainly not and I didn’t mean to suggest otherwise. I do wonder, however, if a property is found to be very difficult to define whether or not it should even be considered a fundamental property. Or perhaps it means what the property is pertaining to — in this case algorithms — is ill-defined. Interesting questions to ponder. Comment by Geoff Wozniak — 13/1/2010 @ 13:44 Your question is entirely valid. However, the concept of the “running time” of an algorithm is also somewhat fuzzy. It has become the subject of an entire subfield of Computer Science called Complexity Theory. I think that it always comes down to (somewhat arbitrary) models and representations. We know that these models are useful because we implement algorithms in software and realize that the complexity of the algorithm is, indeed, closely related to the performance of the software. However, the match is not perfect. Maybe I exaggerate when I try to compare “running time” or “latency” with mass or momentum. I do not know yet. Comment by Daniel Lemire — 13/1/2010 @ 14:01 I have to say – the three measures that you’re identifying sound a great deal like the traditional engineering triangle: good, fast, and/or cheap. You can make it as good as you want (correctness) but only at the expense of either speed or cost (memory/cycles/etc). Comment by Chris Gray — 13/1/2010 @ 15:47 @Geoff: A standard way to give “correctness” a mathematically pure definition is via an approximation ratio: I can guarantee an answer within a factor of 2 of the optimal.
In the case of randomized algorithms, another standard approach is a probabilistic statement: I get the correct answer with high probability (or probability at least x for whatever x is convenient). So, I disagree with your assertion that correctness needs to be a “more relativistic measure” than running time. There are standard ways of examining it in the theoretical literature; one can certainly imagine many variations (and people do), but this is also true for measures like running time. I’ve found that students are initially startled by the idea that correctness can be just another performance measure, like memory usage and computation time. It’s a challenging and important philosophical idea, well worth introducing in this form to undergraduates. Comment by Michael Mitzenmacher — 13/1/2010 @ 15:58 Peter Denning runs a “Great Principles of computing” project. http://cs.gmu.edu/cne/pjd/GP/GP-site/welcome.html Comment by Alan Fekete — 13/1/2010 @ 17:33 If you want to deal with parallel algorithms, you might also consider work and cost in addition to time, where cost = time * nb processors and work = total amount of operations done over all processors. In the sequential world, the relationship between these three is straightforward (since nb processors = 1): time = work = cost. However, this is not so in the parallel world, where you only have: time <= work <= cost. And you also have a possible tradeoff: using fewer processors with a different algorithm may reduce the cost but may increase the time — or not, in which case it might be considered "optimal" relative to the sequential version (if the parallel version has the same cost as the sequential one with an improved time). Comment by Guy T. — 13/1/2010 @ 19:21 @Guy Excellent point. Yes, definitively. Comment by Daniel Lemire — 13/1/2010 @ 19:37 For these fundamental properties, isn’t it important to draw a distinction between “computing” and “human computing”? Speed vs. memory — that seems like a computer-only fundamental property.
But page rendering / latency seems more like a human perception issue. Same with correctness, to a certain extent. Comment by jeremy — 13/1/2010 @ 21:06 Interesting post & discussion. However, I don’t think that latency is as fundamental as running time, memory usage, and correctness. If latency is the delay between the input and the output, then it is also something that we should file under “running time”. Comment by Zeno Gantner — 14/1/2010 @ 4:40 There is a relation between latency and running time, but they are not the same thing. Consider the rendering of HTML pages. Some browsers begin the rendering faster than others, at the cost of a larger overall running time. Comment by Daniel Lemire — 14/1/2010 @ 8:20 I think correctness is most interesting when considered as an orthogonal dimension to other properties. Then latency is d(running time) / d(correctness), that is, the inverse of the rate at which correctness changes as a function of time. For a traditional algorithm this would be undefined (you jump from 0 to 1 instantaneously), but for a loading webpage it might be linear. Similarly, approximate algorithms often give a guaranteed correctness relative to the amount of memory available. At some quantity of memory and time the algorithm might be exact, but it can degrade better as memory declines. Thanks for prompting this question, it’s interesting to think about modelling algorithm performance as a multi-dimensional shape. Comment by Paul — 14/1/2010 @ 11:55 Shouldn’t there be a distinction between “algorithm level properties” and “algorithm implementation level properties”? Those are hardly the same. Comment by Jean-Lou Dupont — 14/1/2010 @ 14:09
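Daniel's browser-rendering example can be made concrete with a small sketch (mine, not from the post or its comments): two functions that do identical total work, where streaming the output makes the first result available after a fraction of that work.

```python
# Hypothetical sketch of latency vs. running time (function names are mine).
# Both functions do the same total work; only the delay until the
# first piece of output differs.

def render_batch(chunks):
    """Process everything, then emit: latency is roughly the total running time."""
    return [c.upper() for c in chunks]

def render_stream(chunks):
    """Emit each piece as soon as it is ready: latency is roughly one chunk's work."""
    for c in chunks:
        yield c.upper()

page = ["head", "body", "footer"]
first = next(render_stream(page))   # available after processing a single chunk
print(first)                        # -> HEAD
print(list(render_stream(page)) == render_batch(page))  # -> True (same overall output)
```

The generator version is the "begin rendering faster" browser: nothing about the total work changes, only when results become visible.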
• It's the only way to access our downloadable files; • You can use our search box tool; • Registered users see fewer Adverts; • You will receive our 'irregular' newsletters; • It's free. Unless specified otherwise in the individual descriptions MathSticks resources are licenced under a Creative Commons Licence. You are free to use; share; copy; distribute and transmit the work. Provided that you give mathsticks.com credit for the work and logos remain intact. You may not alter, transform, or build upon the work, nor may you use it in any form for commercial purposes.
Plus Advent Calendar Door #2: Who will come second? Will he rocket to the lead? (Most excellent tortoise suit by Katie Bradley.) Achilles and a tortoise are competing in a 100m sprint. Confident in his victory, Achilles lets the tortoise start 10m ahead of him. The race starts, Achilles zooms off and the tortoise starts bumbling along. When Achilles has reached the point A from where the tortoise started, it has crawled along by a small distance to point B. In a flash Achilles reaches B, but the tortoise is already at point C. When he reaches C, the tortoise is at D. When he's at D, the tortoise is at E. And so on. He's never going to catch up with the tortoise, so he has no chance of winning the race. Something's wrong here, but what? Let's assume that Achilles is ten times faster than the tortoise and that both are moving at constant speed. In the time it takes Achilles to travel the first 10m to point A, the tortoise, being ten times slower, has only moved by 1m to point B. By the time Achilles has travelled 1m to point B, the tortoise has crawled along by 0.1m to point C. And so on. After n such steps the tortoise has travelled
$$1 + \frac{1}{10} + \frac{1}{100} + \dots + \frac{1}{10^{n-1}} = \sum_{k=0}^{n-1}\frac{1}{10^{k}} \text{ metres.}$$
And this is where the flaw of the argument lies. The tortoise will never cover the 90m it has to run using steps like these, no matter how many of them it takes. In fact, the distance covered in this way will never exceed $\frac{10}{9}$ metres, because the geometric series
$$\sum_{k=0}^{\infty}\frac{1}{10^{k}}$$
converges to $\frac{1}{1-\frac{1}{10}} = \frac{10}{9}$. This problem is known as one of Zeno's paradoxes after the ancient Greek philosopher Zeno, who used paradoxes like this one to argue that motion is just an illusion. Find out more about Zeno's paradoxes and infinite series on Plus. Return to the Plus Advent Calendar Submitted by Anonymous on December 10, 2013. I had heard this problem before, but I never knew why it doesn't work. This is a very insightful and clever article.
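A quick numerical check of the bound (my own illustration, not part of the article): summing the tortoise's Zeno steps never gets past 10/9 of a metre.

```python
# The tortoise's Zeno steps are 1 m, 0.1 m, 0.01 m, ...
# Their running total approaches 10/9 metres and never comes close
# to the 90 m the tortoise would need to finish the race.
total = 0.0
step = 1.0
for _ in range(50):
    total += step
    step /= 10.0
print(total)  # roughly 1.111..., i.e. 10/9 metres
```

Fifty steps is already far more than double precision can distinguish from the limit; the point is that no number of steps changes the bound.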
Would You Like Some Calculus With Your Physics? It’s a nice demonstration of the oddity of the blogosphere that a libertarian political blog has become my go-to source for thoughtful blogging about physics education. Thoreau had two good posts yesterday at Unqualified Offerings, one on the problems created by breaking down incorrect intuition, and another on the lack of calculus in calculus-based physics texts: The ostensibly calculus-based introductory physics book by Knight is not really a calculus-based book. Sure, integrals and derivatives pop up here and there, but the vast majority of the problems can be solved without them, and calculus is hardly emphasized at all in most of the text and examples. The few problems that do use calculus are generally the hard ones near the end of the problem set, and with very little in the text to prepare them for these problems it’s hard to assign them. [...] This has been in the back of my mind for a while, but I was able to cope with it because, well, it’s just freshman stuff. But next year I’m supposed to teach the upper division classical mechanics course, and I’m realizing that my students will not have had a truly calculus-based freshman mechanics course, so all of the stuff that I’d like to do must instead be put off until I first redo mechanics (in abbreviated form, of course) with calculus. This does not make me happy. I’ve noticed much the same thing. In fact, one of my biggest reservations about the Matter and Interactions curriculum is that it has, if anything, even less calculus than the previous intro text we were using. That may be a little unfair, actually– it has calculus, but in many ways, it’s stealth calculus. The whole text is built around a very computational approach to physics, with lots of time spent on solution methods involving the iterative updating of physical properties over small steps in position or time.
These are essentially numerical integrations of the equations of motion, and the book does explicitly say that in several places. But there are essentially no problems involving applied calculus, and all the summary formulae are presented in update form, so I fear that the take-home message is that physics is really algebra-based. Thoreau’s comment about preparation for upper-level classes is a worry, as well. I’m going to be teaching quantum optics again in the fall (assuming I can cajole enough students into signing up, anyway), and one issue I’ll have to contend with is that a good number of the students will never have seen Maxwell’s equations in differential form. Which makes it a little difficult to get to the wave equation, and set up the necessary background information about the classical model of light as an EM wave. This is a tough problem, though. There are a lot of problems with trying to make the introductory courses more mathematical, starting with the preparation of our students, many of whom don’t have all that solid a grip on algebra. We list calculus as a co-requisite for intro physics, but the computer system used to handle course registration does not check or enforce prerequisites in any useful way, so we get students in the class who aren’t comfortable with derivatives, let alone differential equations. On some level, I sort of feel like we should make our students suck it up and deal with the math– after all, the second course in the physics major when I was an undergrad was E&M out of Purcell’s book, and it doesn’t get more mathematical than that. On the other hand, though, I got basically nothing out of that class, and had to re-learn E&M more or less from scratch my junior year. And I was crazy enough to go on to grad school– for the typical wannabe engineer, Purcell would be a slow agonizing death by vector calculus. 
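To be concrete about what the "update form" amounts to, here is a sketch of my own (not from the textbook) of a mass on a spring stepped forward in small time increments; the values of m, k, and dt are arbitrary.

```python
import math

# Iterative "update form" integration of a mass on a spring
# (Euler-Cromer style: update velocity from the force, then position).
m, k = 1.0, 4.0         # mass (kg) and spring constant (N/m), arbitrary values
dt = 0.001              # small time step (s)
x, v = 1.0, 0.0         # released from rest, stretched 1 m
for _ in range(10000):  # integrate out to t = 10 s
    F = -k * x          # Hooke's law
    v += (F / m) * dt   # velocity update
    x += v * dt         # position update

# The closed-form answer is x(t) = cos(omega * t) with omega = sqrt(k/m) = 2
print(x, math.cos(2.0 * 10.0))  # the two agree to a few parts in a thousand
```

This is exactly the numerical integration the book performs, and it is also why students can get through it without ever writing an integral sign.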
Anyway, if anyone knows a foolproof solution for these issues, leave a comment or send me an email, because I’d love to know what to do. 1. #1 Eric Lund May 8, 2009 [O]ne issue I’ll have to contend with is that a good number of the students will never have seen Maxwell’s equations in differential form. Which makes it a little difficult to get to the wave equation, and set up the necessary background information about the classical model of light as an EM wave. I know it is possible to get a wave equation out of the integral form of Maxwell’s equations. The professor whose freshman E&M class I TA’ed in grad school wrote up some notes for his class that show the derivation. (I don’t know if I still have my copy of those notes; I haven’t looked for them in a long time.) The problem is that the derivation takes five pages, as opposed to the five lines it takes to derive the wave equation from the differential form. I also took freshman E&M with Purcell as the textbook, and I second your thought that this is not a viable solution to the problem. In addition to the issues you mentioned, there is the not so small problem that Purcell uses CGS units, so your engineers would be forced to re-learn all of their E&M theory anyway. I still sometimes use Purcell as a reference for translating the CGS blatherings of theorists into SI quantities that correspond to the data (the inside back cover of my edition shows the translations), because even today the translation is not automatic for me. 2. 
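For reference, the short derivation from the differential form that Eric mentions runs roughly as follows (vacuum, no sources):

```latex
\nabla\cdot\mathbf{E}=0,\qquad
\nabla\cdot\mathbf{B}=0,\qquad
\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t},\qquad
\nabla\times\mathbf{B}=\mu_0\varepsilon_0\frac{\partial\mathbf{E}}{\partial t}

% Take the curl of Faraday's law and use
% curl(curl E) = grad(div E) - laplacian(E), with div E = 0:
\nabla\times(\nabla\times\mathbf{E})
  = -\nabla^2\mathbf{E}
  = -\frac{\partial}{\partial t}\left(\nabla\times\mathbf{B}\right)
  = -\mu_0\varepsilon_0\frac{\partial^2\mathbf{E}}{\partial t^2}

% which is the wave equation, with speed c = 1/sqrt(mu_0 * eps_0):
\nabla^2\mathbf{E}=\mu_0\varepsilon_0\frac{\partial^2\mathbf{E}}{\partial t^2}
```

The same few lines, written from the integral forms, are where the five pages of work come from.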
#2 FUG May 8, 2009 I see the differential form of equations all the time in class, and we use this book: http://www.amazon.com/Scientists-Engineers-Chapters-CengageNOW-2-Semester/dp/0495013129/ref= I have no perspective on what is too little or too much calculus to have, but I feel like I’m grasping the concepts with an appropriate amount of work, and I have had to use calculus to solve problems, so maybe it’s worthwhile (~ 2 hours per homework assignment, 1 assignment per week, plus a couple extra hours study time for test weeks). I don’t always use calculus, since derived equations are sometimes easier to work with, but usually at least once per assignment. 3. #3 Coriolis May 8, 2009 My perspective on this is both as a more recent student and a TA going back to it, but I don’t see it as that much of a problem (we use Halliday/Resnick/Walker). At least at my university, those classes are clearly meant for engineers – and there’s enough complaining with the algebra that is there already. Adding calculus would add more complications than most of the students can handle. Of course this does mean that the first upper division physics class, in whatever it is, comes as quite a shock to physics undergrads, but hey, that’s the beauty of physics education, isn’t it? The one thing we do here that does help somewhat I think is to have an “honors” section for the lower-level physics classes – usually taken by physics majors and interested engineers or other natural science majors. These are much more like the higher level physics classes, in that there is usually only one-two sections, a lot more student-teacher interaction, and of course harder problems and everything else. As far as I know though, they still use the same books and perhaps it would be better if they didn’t. 4. #4 Josh May 8, 2009 As a genetics major with aspirations of becoming a theoretical biologist, I think I feel your pain.
For example, almost all biology majors have a requirement of a year of calculus, either in the form of “short calculus” or what appears to be an incredibly condensed “calculus for biology” course, which wham bangs you through calculus and even some probability and diff eq. I myself took the 2 year “math for math/physics/engineering” sequence. Then, they let biologists get away with a touchy-feely version of physics—basically no math involved at all, just concepts. I get the feeling that this kind of dichotomy between “physics for real” and “physics for biologists” is common—and I personally find it insulting. Moreover, mathematics is heavily de-emphasized in most biology classes, even ones where mathematics is relevant. This leads to the occasional problem when a mathematical concept is critical to understanding a concept. For example, in biochemistry, it is crucial to understand Michaelis-Menten kinetics, which are traditionally derived via a simple ODE. Unfortunately, this ODE scares the hell out of most people. Similar problems arise in ecology, evolution, biophysics, etc. type courses. And of course, in my domain of theoretical population genetics, the state of affairs makes me pretty sad. I’m currently enrolled in a course (mostly for the fun of it, since I honestly have done so much extra-curricular studying that I know most of it), and the prof skirts over some of the deepest and most elegant derivations, like the probability of fixation and the transit time of a new mutant, as well as some things I think all students should see in the first place, like how to go from the discrete generation version to the continuous time version of some of the master equations. I can’t criticize him, however, because most students simply would not be prepared for it—they probably haven’t evaluated an integral in 3 or 4 years by the time they take this course! And oh man… don’t get me started on computational biology. 
The future of biology is by all means going to be heavily computational (bioinformatics, etc.) but those skills are even more de-emphasized than mathematical skills. No one should be able to graduate with any kind of biology major without ONE COURSE in programming at least—preferably in a powerful scripting language like Perl. 5. #5 Cherish May 8, 2009 I think the smartest thing I ever did was to stop taking any physics until I’d had a full year of calc under my belt. Then every class I took after that was concurrent with my higher level math classes. In fact, I took a lot more math than physics at first. My personal opinion is that this would be a good requirement for most people who want to be physics majors. It’s a lot easier to learn the math first and then the physics, and I think you get a heckuva lot more out of it. It helps the profs, too, since then they don’t have to go modifying things for the mathematically under-educated. 6. #6 Eric Lund May 8, 2009 @Josh: It sounds like your curriculum is designed for pre-meds, who in fairness are often a plurality if not a majority of majors in biology and related fields. There is a definite distinction between “pre-med” physics (which is there because med schools insist on it) and “real” physics. I feel your pain on that score; in grad school I TA’ed the pre-med mechanics course once, which I found to be once too often. I also agree that your curriculum does not serve well anybody wishing to go to grad school rather than med school. Talk with your advisor about it; he may confirm my speculation that you are not the target demographic. @Cherish: I have noticed that in a lot of places most engineering and physical science majors are not expected to take physics until their second term (independent of whether the school is on semesters or quarters). I think the idea is for students to be taking Calc 2 concurrently with Physics 1, allowing quick reinforcement of the math lessons in physics.
That made sense for me as a freshman: the math I didn’t understand in calculus I got when it appeared later that week in physics, and sometimes vice versa. Most of the people who do start physics in their first term have placed out of calculus (or at least the first semester thereof). 7. #7 Thoreau May 8, 2009 First, the one thing I’ll say in favor of the Matter and Interactions curriculum is that they at least try to offer something in place of the calculus that’s getting left out. They offer a more fundamental view of physics, and they try to introduce computation. Whatever the merits or demerits of that curriculum, it’s very different from the approach so many other books are taking where they make it simpler and simpler and simpler. I don’t have any easy answers here. I’m well aware that even in the curricula that de-emphasize calculus, a lot of students still aren’t getting it. I want to be responsive to that fact, and certainly one plausible response is to do fewer things but try to make sure they at least master those things. OTOH, a lot of them don’t even get that, and I fear that if we just keep responding to the ones who don’t get it then we’re going to spiral downward. The ostensible divide is that we have physics with calculus for the engineers and physics with algebra for the biologists. In reality, we have physics with a lot of algebra and a bit of calculus here and there for the engineers, and physics where we’re happy if they get the algebra right now and then for the biologists. Maybe that’s all we can expect. But then somebody can look at how many are STILL not getting it and propose to strip out even more math and focus on the remaining basics. So we do that, then some STILL aren’t getting it, so we repeat the process. The best answer I can come up with, the one that will never fly, is that we should draw a line somewhere (and we can debate where to draw it) and then just start flunking a lot of people. 
If the goal is a physics class that everyone can master then no level is low enough. But if we have some goal informed by a concern for the subject matter itself, not just the desire to see every student master whatever is offered, then you have to draw a line and start flunking a lot of people. That solution must obviously wait until I’ve gotten tenure. 8. #8 Thoreau May 8, 2009 Regarding pre-med physics: There’s even a difference between pre-med physics and biophysics. The MCAT, for whatever reason, includes a lot of “modern physics” and even relativity, as well as lots of standard mechanics. Why? I dunno. I guess they saw it in some standard physics book and decided to write a test on it. You can design a test with medically relevant questions at a low or high level, but either way it will look very different from the current MCAT physics section. It’s not just about the level, it’s about the topics as well. 9. #9 CCPhysicist May 8, 2009 I agree with both of Thoreau’s criticisms, enough that I should blog about them now that it is summer and I have some free time to get caught up on that sort of thing. IMHO, there is a lot to the point about an over emphasis on the counter-intuitive parts of physics. Doubly so when students can solve the relevant problem correctly but get the “trick” question wrong. (Example: Whether a slight net force is needed to travel at a constant velocity, a classic.) Since there are a lot of grad students who can get these wrong, perhaps the place to attack them is a bit later when you decide it really matters. As for the “with calculus” part, there is a lot of calculus in my second semester class. Quite a bit of it is conceptual (what is the charge enclosed, integral of dV is V, integral of dA is A) and thus hard. They really hate to integrate piecewise constant functions for some reason. The rest is chain rule and setting up “word problems” without necessarily having to do the integral. 
My first semester has calculus as a corequisite, and I think that is why many of the texts de-emphasize calculus until you get later into the book. We use chain rule when we get to oscillations and waves, and do integrals in thermo, but not much more than that. 10. #10 CCPhysicist May 8, 2009 Commenting @7 and @8: Simpler, or less broad? I prefer depth in certain areas to the full span of the mini-PhD that is generally offered in such courses. But I do know of one school where they have dropped all of thermo from the first semester class, and I don’t see any evidence that they have gone deeper into what remains by, say, solving the anharmonic oscillator problem where you keep the cubic term for a pendulum. I didn’t know the MCAT had relativity on it, but I know that our “trig based” class includes radioactivity and related topics because those are things that doctors actually use. They need optics and circuits a lot more than they need Gauss’ Law. 11. #11 D. C. Sessions May 8, 2009 OK, some bias to admit here: 1) My lower-division physics classes (and most of the rest) were almost 40 years ago, back when it was “Halliday and Resnick” w. no sign of Walker 2) I’m the parent of two physics majors from a school which requires calculus as a prerequisite for incoming freshmen — if you don’t have it, you take it as a deficiency makeup. I like that system. Feynman observed forty years ago that he was seeing the majority of his freshmen coming in with basic calculus out of the way already so that Cal Tech could get right into real physics, and it was a Good Thing even then. Thanks to missing the AP exams, I retook freshman calculus, but the main effect was raising my GPA and reinforcing bad habits. Were it up to me, calculus would be a prerequisite for all serious physics classes [1] and I’d make sure that it was always available in the summer — those who miss the incoming calculus exam could make up the deficit without missing a whole year. 
Then again, I’m studying for an advanced degree in Bad Codger Attitude. That, and making plans to go back for the physics PhD I put off in pursuit of a career. I don’t need to be wiping noses while I’m at it. [1] I’ll except the repeat of high-school physics for the english lit crowd who need a physical science class. I will not except engineers. The lot I have as NCG candidates are crippled enough as it is without watering down basic physics. 12. #12 meichenl May 8, 2009 I also used Purcell freshman year, and I don’t remember it being particularly mathematical. My impression was that more than any other book I’ve read, Purcell derives results from symmetry principles and physical intuition rather than computation. Those were valuable lessons for me. It’s a book on elementary physics that challenges the student to use a mature physicist’s thought process on simple problems. I don’t think the difficulties students have with it are mathematical. They’re more like “physics growing pains”. When Purcell does treat mathematical concepts, the intuition he gives is very much “physical”. For example, check out problem 2.16b, in which he shows the identity div(curl V) = 0 for any vector field V with one diagram and a few lines of text. Or take a look at Figure 2.21. Stokes’ theorem becomes obvious. He covers about the same ground as Griffiths (conceptually, while omitting some topics), but I don’t remember any “vector integration by parts”, or using Fourier analysis to solve boundary problems with Laplace’s equation. Further, chapter 5 on relativity is about as far removed from a mathematical treatment as is possible. It never mentions the electromagnetic field tensor, and relies mostly on pictures, intuition, and wordy arguments. It might help to supplement Purcell with Schey’s “Div, Grad, Curl, and All That”. Of course, none of this applies to students who are not already comfortable with calculus. 13. 
#13 Thoreau May 8, 2009 At most schools, 1 quarter or semester of calculus is a prerequisite for the physics sequence for engineers (and generally enrollment in the second calculus course is a corequisite for the physics intro course). Despite all that, we don’t actually use much calculus, and the books reinforce that bad practice. 14. #14 az May 8, 2009 The way my school handled it (and I thought most schools did this) was to have two-tiered math and physics. One physics track required calculus, the other required trigonometry. I would think the easiest way to make sure that the students in the calc-based course know calc is to really tighten the requirements on who can take that course: no lower than an A-/B+ in the first 2 semesters of the most rigorous calculus sequence offered (or maybe a 5 on BC calc AP if that’s equivalent). Alternately, you make students take a placement test – if they don’t demonstrate a decent understanding of calculus, they can either go back and take calc again, or take the trig-based class. If you don’t have the power to set your own requirements, or the heart to turn people away, you can give them the results of their test and a sober evaluation of what chances they have to actually succeed in the class. If you’ve done that, then nobody can complain when the class IS rigorously mathematical and they’re doing poorly. They’ve been properly warned. Of course, this only works if you have a two-tier system when it comes to introductory physics. I think changes need to be made to the non-calc class too. I’ve taken both sequences (about 15 years apart, the second time I took the algebra/trig-based one because I didn’t think I remembered enough calculus at that point). The algebra-trig version was really stupid, and strangely, much, much harder! Perhaps to some extent this is because I was old + rusty by the time I took it, but I think it was because they left out the necessary math. 
I guess some people are really good at concepts, but for me (and hopefully I’m not alone on this), I need to see the math to really understand the concepts. I usually need to derive an equation to really understand it, and in the algebra-based class, they’d just give you the equations as if they were pulled out of the air. I only vaguely remembered what I had actually learned in the rigorous physics sequence, but I certainly remembered that the material presented was cohesive and often elegant. The algebra version seemed like a mish-mash of random equations pulled out of the instructor’s ass. I’ve never understood why the hell med schools want their students to have taken physics. I doubt they ever use it, and all it does is create a VERY large group of students who need to take some sort of physics class, but don’t necessarily fit well into either track. 15. #15 Zen May 8, 2009 Historically, it is only recently that mathematics and physics were taught separately. If you want students to understand mathematics, the best way is practice and application, which we call physics. Before college, I had a trigonometry course that I did well in, but it wasn’t until I started doing physics (using sines, cosines and tangents in every problem) that I REALLY comprehended trig. Perhaps it is time we develop a blended curriculum. 16. #16 CCPhysicist May 8, 2009 Chad wonders: one issue I’ll have to contend with is that a good number of the students will never have seen Maxwell’s equations in differential form. You can do this in less than a day if they’ve had Calc III. Just put both forms up on the board along with the two theorems from Calc III that relate them to each other, and away you go. The only other thing you need is one identity from the “div grad curl” book and a quick review of ampere and gauss to explain what “div” and “curl” mean physically using fluid dynamics (sinks and paddlewheels) to make it concrete.
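In symbols, the one-day “dictionary” CCPhysicist describes is just the divergence theorem and Stokes’ theorem linking the two forms (a sketch; the magnetostatic case is shown for Ampère):

```latex
\oint_S \mathbf{E}\cdot d\mathbf{A}
  = \int_V (\nabla\cdot\mathbf{E})\,dV
  = \frac{Q_{\mathrm{enc}}}{\varepsilon_0}
  \quad\Longleftrightarrow\quad
  \nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}

\oint_C \mathbf{B}\cdot d\mathbf{l}
  = \int_S (\nabla\times\mathbf{B})\cdot d\mathbf{A}
  = \mu_0 I_{\mathrm{enc}}
  \quad\Longleftrightarrow\quad
  \nabla\times\mathbf{B} = \mu_0\,\mathbf{J}
```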
Chad says: We list calculus as a co-requisite for intro physics, but the computer system used to handle course registration does not check or enforce prerequisites in any useful way, so we get students in the class who aren’t comfortable with derivatives, let alone differential equations. I don’t get this at all, Chad. If calculus is a CO-requisite, no pre-req check in the world will guarantee that they’re familiar with a concept like a derivative. They won’t get to that for 3 weeks, at minimum. The first 3 weeks of calculus are spent on the concept of a limit, continuity, epsilon-delta proofs, etc etc. It is then another 2 weeks or more before they get to the applications section where velocity appears as a derivative. I’m fortunate to be in an environment where the people who teach calculus are just down the hall, so we talk about how to approach those first few days. I get them to say that this un-named expression in chapter 2 is actually the “derivative”, so at least we have the language under control from the first day or two rather than 5 weeks later. DCS says @11: I’m studying for an advanced degree in Bad Codger Attitude. I am so stealing that one. If you think what you describe is bad, just imagine algebra classes that spend a significant amount of time teaching them how to solve linear or quadratic equations on a TI-83 rather than how to manipulate symbols. I’m sorry, but I just don’t get it and no one can point me to an outcomes-based study that says that approach leads to greater success in calculus or physics. And I am going to blog about the “conceptual” issues as well as the topic of calculus and physics this weekend. 17. #17 Aki May 8, 2009 In my undergraduate career, we used Halliday, Resnick and Krane the first year. It seemed to get the job done. The biggest problem we had at the time was a disconnect between our calc. and physics classes.
When, as upper-division students, we were invited to comment on ways to bolster retention, we advocated a blended class for freshmen that would force physics and calc. to march in step and allow students to bridge the gap. They didn’t listen. Now, as a teacher myself, I have my students asking for more calculus and more connections between the physics and the calculus. I’m still trying to find a textbook that serves the need. 18. #18 Chad Orzel May 8, 2009 I don’t get this at all, Chad. If calculus is a CO-requisite, no pre-req check in the world will guarantee that they’re familiar with a concept like a derivative. They won’t get to that for 3 weeks, at minimum. The first 3 weeks of calculus are spent on the concept of a limit, continuity, epsilon-delta proofs, etc etc. It is then another 2 weeks or more before they get to the applications section where velocity appears as a derivative. Sorry– Calc II is a co-requisite. The part with integrals. They’re supposed to have taken the first term of calculus before taking physics, but that never gets enforced. As somebody mentioned above, the physics sequence doesn’t start until the second of our three academic terms– first-year engineering students take physics in the Winter term, after taking math in the Fall term. The idea is to have the students pick up the math they need first, and then take physics. A couple of people have also suggested a “blended” curriculum, covering both math and physics. We do run one section of an integrated math-physics course. It’s a year-long course, in three parts, covering the first three terms of calculus and the first two terms of physics, team-taught by a math professor and a physics professor. I’ve only ever taught the first term, which barely gets through Newton’s Laws. It’s an interesting approach, though, and I enjoyed teaching it. It’s been a long, long time since I had any formal math classes, and it was fascinating to see how different the approaches are.
It also confirmed my belief that I’m not cut out to be a mathematician. If nothing else, I’m incapable of writing complete sentences on the chalk board in a legible manner. 19. #19 Eofhan May 8, 2009 I was a Physics major 20 years ago. A bad one, but capable-enough to earn a BA. Even I understand that calculus is fundamental. What the hell happened? . . . 20. #20 Lab Lemming May 9, 2009 I think much of the problem may lie in the teaching of math, which results in your students not being able to recall or use what they were taught long ago. As a geologist, I took calculus-intensive courses, like geophysics and thermodynamics, interspersed with math-free courses, like paleontology. The result is that as an undergrad, I ended up learning – and forgetting – calculus three times. There has to be a better way to teach math such that scientists can easily pick up the skills that they need after a year or two of not using them. After all, that’s generally how life as a private sector scientist works. 21. #21 Jonathan Vos Post May 9, 2009 Having gone to a famously good high school and university, I had no idea how badly broken was the Physics/Math connection in American education until I was teaching. There was no spice of Calculus in the Intro Astronomy (Moons for Goons) that I taught (2 lecture sections of 50 students each), but merely elementary Algebra. That is, it was a “cool stuff on backs of envelopes” course. The Astronomy Lab was a Physics lab + computer lab of real Physics, heavily Mathematical, of phenomena central to Astronomy. But, again, no Calculus. Just Intermediate Algebra plus graphing and elementary Statistics. My wife is a Physics Professor at a private university where I’ve also taught Math. She’s been teaching Physical Science (even less Math) and Physics (with a lab that everybody likes). Again, no Calculus. A good case can be made that Biology is to Math in the 21st Century as Physics was to math in the 19th and 20th Centuries.
But that does not seem to be trickling down yet into the undergrad curriculum. I have a month to go as Student Teacher at Lincoln High School in L.A. before my credential lets me teach full-time High School Math. Two of my 4 courses, AP Statistics and AP Calculus, have a few Physics problems in the worksheets and textbooks. The Geometry and Algebra classes are nearly Physics-free. Finally, there are colleges and universities that offer both a B.A. and a B.S. in Economics. The difference? Calculus! I very strongly disagree with C.P. Snow’s “Two Cultures” hypothesis. But maybe there IS a fuzzy boundary at Calculus between Real Science and Flavor of Science in our schools. 22. #22 Bruce Sherwood May 9, 2009 I’m one of the authors of the Matter & Interactions curriculum. Occasionally at Carnegie Mellon a strong student would complain that “this is supposed to be a calculus-based course but we don’t use any calculus” (I’ve not heard this complaint from NCSU students). This always turned out to be a case of the student not realizing that integrals had anything to do with the sum of a large number of small quantities, or that a derivative had anything to do with a ratio of small quantities. What the student perceived as “calculus” was a large number of evaluation formulas for derivatives and integrals. We think our “stealth” version is in fact much closer to the true nature of calculus than is the standard subject (just as the Momentum Principle, dp = Fnet*dt, is what Newton used; F=ma is definitely not Newton’s second law of motion). We’re intrigued that Michael Oehrtmann, a math educator at Arizona State, is developing a calculus course for engineering and science students that emphasizes small, nonzero differences, as do we. His course would be a perfect complement to Matter & Interactions.
I would further argue that iterative calculations which show the time evolution character of the Momentum Principle (in the form dp = Fnet*dt) are very much in the spirit of differential equations, whereas F=ma looks like an algebraic relation (and usually is, since F is usually constant). The place where calculus per se really kicks in is the Matter & Interactions chapter on finding the electric field of distributed charges, which is fiercely all about calculus. But even there, in a chapter which has been praised by reviewers as being particularly good on how to go from a physical situation to an integrand, we have found that most students are not able to do an example on their own (say, find the electric field along the axis of a uniformly charged rod, some distance beyond one end), because in their “calculus” course the focus was on evaluating integrands, not setting them up (here again, setting up the integrand from a physical situation is one of Oehrtmann’s emphases). In the real world, absent Oehrtmann’s course, we find that it’s not possible to ask average students after two semesters of calculus and a semester of mechanics to set up an integral themselves. 23. #23 dr. dave May 9, 2009 teaching in a place where there are no physics, engineering, or chemistry majors, this is not a problem i deal with. i teach mostly historically-based physics for students with an interest in the liberal arts. does anyone know of a good FLUXIONS-based undergrad text? preferably not in Latin? :) 24. #24 Monado May 9, 2009 At university I was a bit bewildered by simple physics questions, because I didn’t know how to define the problem. For example, we’d get a question such as, “a rock of such a mass falls so far into a bucket. Calculate the changes of energy.” So I’d start adding them up. Of course it lost potential energy and gained kinetic energy–for a while. That part was simple. Then it created a sound and imparted some heat to the bucket. 
How to calculate the energy lost in producing a sound? How loud a sound? How much heat conversion? I was stumped. 25. #25 Chad Orzel May 9, 2009 I don’t know why I’m always faintly surprised when the authors of things I post about turn up here. It is available to everyone in the entire world, after all… Occasionally at Carnegie Mellon a strong student would complain that “this is supposed to be a calculus-based course but we don’t use any calculus” (I’ve not heard this complaint from NCSU students). This always turned out to be a case of the student not realizing that integrals had anything to do with the sum of a large number of small quantities, or that a derivative had anything to do with a ratio of small quantities. What the student perceived as “calculus” was a large number of evaluation formulas for derivatives and integrals. We think our “stealth” version is in fact much closer to the true nature of calculus than is the standard subject (just as the Momentum Principle, dp = Fnet*dt, is what Newton used; F=ma is definitely not Newton’s second law of motion). I basically agree with this, I think. Certainly, I like the fact that the curriculum starts with momentum, rather than kinematics– that’s one of the strongest points, for me. I wrote what I did above in large part because I saw some errors on the first mid-term (I’m just over halfway through our intro mechanics course now) that I never saw in previous versions of the class. They seemed to me to be the result of thinking about the problems in an even more algebraic manner than usual. This may be partly due to the somewhat unusual population in the course this term, but it was a striking difference. (I’m hesitant to discuss it in detail on the blog– if you’d like to know more, email me.) 
I do think that the finite step method is, in the end, one of the great strengths of the approach, in that it lends itself well to computational solutions, and allows discussion of problems that can’t be solved analytically. I’m not sure how much the students appreciate that, though… The place where calculus per se really kicks in is the Matter & Interactions chapter on finding the electric field of distributed charges, which is fiercely all about calculus. But even there, in a chapter which has been praised by reviewers as being particularly good on how to go from a physical situation to an integrand, we have found that most students are not able to do an example on their own (say, find the electric field along the axis of a uniformly charged rod, some distance beyond one end), because in their “calculus” course the focus was on evaluating integrands, not setting them up (here again, setting up the integrand from a physical situation is one of Oehrtmann’s emphases). In the real world, absent Oehrtmann’s course, we find that it’s not possible to ask average students after two semesters of calculus and a semester of mechanics to set up an integral themselves. That’s definitely a problem. I taught that part of the course last spring, and had the same issue. I got reasonably good results by writing a program to calculate the on-axis field due to a charged rod, and then asking them to modify the code to measure the field for off-axis points. That was an honors class, though, so I’m not sure the results will generalize. I know some of my colleagues were doing the same thing this term– I’ll have to ask them how it went. 26. #26 CCPhysicist May 9, 2009 This is to Bruce Sherwood: Does Michael Oehrtmann know that he is reinventing the old “infinitesimals” approach to calculus, and that there is a free textbook on the web? It predates calculators, let alone the computer-based text I taught out of back in the 70s, but it might be helpful. 
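Chad’s rod-field program is easy to imagine; here is a hypothetical sketch (not his actual code) that discretizes the rod into N elements and sums the Coulomb fields, so the result can be compared against the closed form kQ/(d(d+L)) for the on-axis point a distance d beyond one end.

```c
#include <math.h>

/* On-axis E field of a uniformly charged rod (total charge Q, length L)
   at a distance d beyond one end, by summing N point-charge contributions.
   As N grows, the sum approaches k*Q/(d*(d+L)). */
double rod_field(double Q, double L, double d, int N) {
    const double k = 8.99e9;          /* Coulomb constant, N*m^2/C^2 */
    double dq = Q / N, E = 0.0;
    for (int i = 0; i < N; ++i) {
        double x = (i + 0.5) * L / N; /* midpoint of element i on the rod */
        double r = d + (L - x);       /* distance from element to field point */
        E += k * dq / (r * r);
    }
    return E;
}
```

The obvious extension, as Chad suggests, is to move the field point off the axis, which turns the scalar sum into a vector sum of components.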
I couldn’t agree more concerning the field due to a rod problem. As I mentioned either here or in my blog or in a comment on Thoreau’s blog, setting up a problem where calculus might be involved is a major weakness in the calculus (or pre-calc or algebra) curriculum. It just doesn’t get done in math classes because the math profs just are not comfortable teaching the non-math subjects (be it physics or biology) that are needed as background to the application. That is where “linked” classes such as Chad mentions @18 can come into play. The content instructor can cover for the math instructor. And Chad, you might tell your math colleagues that there was once a textbook for your quarter system where basic calculus (both differential and integral) was taught like arithmetic in the first quarter – to lead into physics – with the abstraction of limits and so on deferred to the second quarter when inverse functions and other complications show up. No epsilons or deltas for the first three weeks. They didn’t leave out the abstraction, they deferred it. 27. #27 Bruce Sherwood May 10, 2009 Thanks for the tip about the Keisler book. I’ve passed this link on to Oehrtmann, though I’m guessing he knows about it. Many years ago I saw a textbook with this emphasis but lost track of it; maybe it was the Keisler book, though Keisler mentions a mathematician named Robinson who put infinitesimals on a firm footing, and it’s possible I saw a book by Robinson. 28. #28 Bruce Sherwood May 10, 2009 Having reflected some more on this very interesting discussion, some further thoughts: I’ve taught intro physics for 40 years, starting at Caltech using the Feynman Lectures on Physics for the textbook (which was a fabulous experience). 
It seems to me that there isn’t a lot of scope for standard calculus in intro mechanics, other than having students calculate positions of the center of mass of odd objects, or moments of inertia of various solids, which would be beyond most students other than in an honors course. What does matter is that prior or concurrent study of calculus is essential for giving the students enough math sophistication to deal with the concepts in mechanics, and to be able to follow some critical derivations. What kind of “calculus” problems did people have in mind for intro mechanics? The biggy isn’t mechanics but E&M, as has been partially noted in this discussion already. The Great Divide is whether one can use the differential forms of Maxwell’s equations or not. In the intro course (except possibly in an honors course at a very selective university) the answer is a resounding “not”. The Feynman Lectures on Physics and Purcell’s splendid E&M text were the biggest influences on Ruth Chabay and me in writing the Matter & Interactions textbook. We love Purcell. But it is rarely suitable for the intro E&M course, and a bit too low a level for the junior course, hence it doesn’t get used much. Minor point: Chad says, “…I’ll have to contend with is that a good number of the students will never have seen Maxwell’s equations in differential form. Which makes it a little difficult to get to the wave equation, and set up the necessary background information about the classical model of light as an EM wave.” As another person pointed out, you can get this with the integral equations. In fact, in Matter & Interactions we use only integral forms of Maxwell’s equations to get the full classical model of the interaction of light and matter, though we use a qualitative version of Purcell’s elegant Gauss’s Law argument to say that accelerated charges radiate. But that’s enough, because then the radiative electric field accelerates charges in matter, which re-radiate. 
Classically, light doesn’t bounce off the wall — you see new light from the accelerated charges in the wall, accelerated by the incoming light. 29. #29 Jonathan Vos Post May 10, 2009 (1) Chad’s delta experience in teaching is interesting, and he is honest about the problem that both textbook and student characteristics have changed. (2) Bruce Sherwood is right about Feynman’s Lecture Notes at Caltech. Note that everyone at Caltech (in my 1968-1973 era) came in with strong Calculus experience, which Apostol made rigorous and, with later courses in differential equations and complex analysis, raised to a level of mastery. Everyone takes Physics Lab. Note also that the Feynman book has 3 volumes, roughly kinematics of particles under mechanical and electrostatic forces; full-blown Electromagnetics; Quantum Mechanics. I understand that Caltech has tweaked their Physics curriculum, but don’t know details. (3) Metaphysically and computationally, maybe Wolfram (formerly a Caltech Physics professor) is right that Physics took a wrong turn at Newton, and algorithmics and automata theory is the way to go. Too soon to tell. (4) Caltech led the world in Astrophysics, and that trickled down to the undergrad curriculum. Now Kip Thorne’s decades of influence break new ground in computational geometrodynamics. This goes rather deeper than undergrad Calculus. (5) I am not writing anything about String Theory. 30. #30 CCPhysicist May 10, 2009 The “real calculus” problem that showed up in my intro (honors) physics class as an undergrad was the anharmonic oscillator (pendulum out to the cubic term in the expansion) and coupled linear pendula, but the calculus I usually have in mind (for my class) are the work integrals in thermo, although motion with velocity dependent drag forces would be nice. I second the comment about Purcell’s E+M text (I assume you mean “Berkeley Physics 2”). Fantastic way to learn the subject, and my undergrads still suffer its effects.
That is, until they get to the last week of Calc 3 and totally get Stokes Theorem. In our case, it doesn’t hurt that the guy teaching Calc 3 is just down the hall and knows how I teach E+M. Chad, I’d love hearing about the new problem that crept in. 31. #31 Bruce Sherwood May 11, 2009 But the work integrals in thermo are typically just used to derive results, they aren’t homework problems. Even the mechanics section of Matter & Interactions makes abundant use of traditional calculus for derivations. Examples: Factor the momentum vector into the product of magnitude and unit vector and differentiate using the product rule to get a term which represents the rate of change of the magnitude of the momentum and the rate of change of the direction of the momentum; use the Momentum Principle in the form dp/dt = Fnet (as vectors); find the analytical solution for dp/dt = Fnet in the case of the harmonic oscillator (solve the differential equation by the usual scheme of guessing a result and finding what parameters will make that result work in the differential equation); from the Momentum Principle derive the Energy Principle for a point particle starting from dE/dx = dpx/dt; find evaluation formulas for various kinds of potential energy by finding the quantity whose negative gradient is the force (this comes early enough that if calculus is concurrent students haven’t yet seen integrals but can find antiderivatives); etc. Even the case of motion with velocity dependent drag forces would most likely be a derivation shown to the students, not a homework problem for them to solve using calculus, no? I’m looking for realistic examples in mechanics where in a regular calculus-based course (not honors, not physics majors) one can imagine assigning a homework problem for which students would independently have to use calculus. I don’t think there are many. 
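The first derivation on Bruce’s list can be written out in a couple of lines: factor the momentum into magnitude times unit vector and apply the product rule, which cleanly separates changing speed from changing direction.

```latex
\vec{p} = |\vec{p}|\,\hat{p}
\quad\Longrightarrow\quad
\frac{d\vec{p}}{dt}
  = \underbrace{\frac{d|\vec{p}|}{dt}\,\hat{p}}_{\text{change of magnitude}}
  + \underbrace{|\vec{p}|\,\frac{d\hat{p}}{dt}}_{\text{change of direction}}
  = \vec{F}_{\mathrm{net}}
```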
And I don’t think there were any when I took calculus-based physics many decades ago, so I don’t think there’s been some major watering down. A colleague pointed out to me that there has been one change over time. Before most disciplines changed from requiring 3 semesters of physics to 2, it was sometimes the case that the 3rd semester was E&M, and students had already had calculus in 3D, so it was more feasible to consider using differential forms of Maxwell’s equations. Now, at most, students take calculus in 3D concurrently with E&M. 32. #32 GR May 12, 2009 It’s hard to think of anything in intro mechanics that requires calculus — maybe finding the maximum of a trajectory? I think, though, that even if students are just “shown” the derivation, they still should have a firm grasp of calculus. If only there was a way to make sure we all worked through the derivations ourselves… As a side note about E&M, I first saw the vector calculus version in my intro class, and even though I freely admit I had no idea what the heck was going on it helped later on. One of the things I love about the physics curriculum is how it keeps going back to old topics and adding on new awesomeness. I just graduated from NCSU (small world I guess…), and having the experience of interacting with both M&I and non-M&I people I feel like I should throw out some observations: 1) M&I and non-M&I students seem to live in different worlds. Students who’ve taken one curriculum seem to have a lot more difficulty with problems from the other one than should be expected. This is kind of disconcerting to me — it’s all the same physics, right? I don’t know if this is because of the type of problems, or the way the problems are asked… 2) One disadvantage of the M&I curriculum that’s been voiced to me by people I’ve tutored is that it introduces the whole machinery of atoms, etc… too quickly.
This left them confused as to what actually was going on, and when the various approximations were allowed to be used. 3) I used to think poor math skills were an issue in physics education but now I’m not so sure. I tutored a guy who was amazingly fast at algebraic manipulations, but he just couldn’t figure out what concept was being applied or which formula related to what concept. I’m pretty sure the only way to remedy this is with more “discovery-based” (or whatever the buzzword is) learning. With the precaution that too much exploration often leads to frustration for students because, let’s face it, most of us aren’t Newton. I’m not sure if it’s doing it 100% right (I just can’t quite agree that computer programs are better than “the real world”) but M&I should at least be applauded for bringing that type of learning more into the mainstream. 4) WebAssign and its web-based homework cousins are terrible. Terrible. Terrible. Terrible. Sure, they save time for teachers/TA’s when grading. But they encourage students to not write down problems and just go at it with a calculator, punching in numbers in some combination until they get a right answer*. The x-number-of-tries thing makes students focus solely on “getting the problem right” more than “understanding what’s going on”. And that’s not even touching on the rampant cheating… 5) I still think the best way to “get” physics is to work lots and lots and lots of problems. How many and what weighting they should be given in the grading scheme is something I should probably leave to professionals. My junior-level E&M professor had an interesting strategy. He would have us (on good old paper!) write down the problem, then write out a strategy for solving that problem, and then solve the problem. It was brutal, and I often spent 30+ hours a week on those damn problem sets, but taking the time to stop and really think about the physical situation instead of just jumping into the crank really helped my understanding. 
Anyways, just my 2 cents 33. #33 Frank Noschese May 13, 2009 The College Board has no problem finding ways to put calculus on the AP Physics C exams. Just check out this year’s questions on the Mechanics exam: Q1. Potential energy function U(x); Q2. Differential equation for SHM; Q3. Rope with mass slides off the edge of a table. I don’t understand why many colleges don’t give credit for kids who can score a 5 on this exam! Are these questions easier/harder/typical compared to college physics exams? 34. #34 truth is life May 19, 2009 Frank @33: Those were MUCH more difficult (upon simple inspection) than any of the problems I saw in physics I or II (and I’m a physics major!) Of course, I’m going to the University of Houston, not exactly a big physics school, and they do give you credit for the calculus-based course for getting a 5 on the test. The most ‘calculus’ I ever did was using integrals and derivatives to save memorizing the different velocity equations (v=at+v_i if a is constant, for example). There were a few problems in E&M where calculus helped some, but for the most part it was just memorizing a ton of algebraic formulas. Even in the first semester of modern we didn’t really do anything with calculus until the last third of the semester when we started covering the Schrödinger equation. Before that, when we were dealing with Einstein and Bohr/de Broglie/Thomson/etc., it was just algebra. Bruce Sherwood has it right when he says 1. Calculus is perceived as being just a bunch of memorized formulas (I know I thought this way pretty much up until this semester with my analysis course–and even then it’s hard to stop thinking of it this way) and 2. It’s really hard (especially for people thinking like this) to properly formulate the integrals. 35. #35 Fran May 25, 2009 I’m with Frank (comment 33) on the AP Physics C question. The college board frequently puts free-response questions on the AP exam that require setting up differential equations or integrals.
I would LOVE to know how those questions compare to those of “calculus-based” physics courses at Universities. Many of the comments here seem to imply that you would NOT find similar problems on the final exams there. Thanks for the discussion, everyone! I enjoyed reading it! 36. #36 Mike Heesen June 19, 2009 I really like calculus and the combination with physics is really awesome for me.
Help on a Turing Machine Question

March 22nd 2008, 03:42 PM #1
Can someone please help me with this question? Show that for each Turing Machine, M, there is a Turing Machine, M', which computes the same partial function as M, but which never moves left of the initially scanned square of any initial tape.

March 23rd 2008, 03:41 AM #2 Grand Panjandrum
It does depend on how the TM is defined. I will assume that the initial tape consists of a sequence of cells with a number of 1's, the rest of the tape blank, and the reading head over the leftmost non-blank cell. Suppose the reading head of TM M moves a maximum of X cells to the left of the initial position. Then consider a TM M' that moves the block of 1's X+1 cells to the right (show this need never move to the left of the initial position), then positions the reading head over the leftmost non-blank cell, and then executes M.
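The shifting step in the reply can be made concrete with a small (hypothetical) sketch — not a full Turing machine, just the tape manipulation it performs: copy the block k cells to the right, working from right to left so that each cell is read before it is overwritten, then blank the vacated cells. Nothing left of cell 0 is ever touched.

```c
#include <string.h>

/* Shift the contents of a one-way-infinite tape (modeled as a fixed
   buffer, blank = '_') right by k cells, never moving left of cell 0.
   Copying right-to-left reads each cell before it is overwritten. */
void shift_right(char *tape, int len, int k) {
    for (int i = len - 1; i >= k; --i)
        tape[i] = tape[i - k];
    memset(tape, '_', k);   /* the vacated cells become blank */
}
```

After the shift, M' repositions its head over the leftmost 1 and runs M verbatim; since M moves at most X cells left of its starting square and the input now begins X+1 cells in, M' never crosses cell 0.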
Coq Proof Assistant

There's a whole lot of abstract talk regarding how formal proofs might work, and why they are not infallible. To make things more concrete, this page describes a real system for formal proofing, called Coq. It's a system for writing formal proofs. You can use it for ordinary mathematics, but it also has special support for proofs about programming languages. The idea is that you develop the algorithm and its correctness proof. Then, you can extract the code from the proof to actually get something executable. (Currently supported are .) So basically, unless there are glaring bugs in Coq itself, you have a guarantee that the program is really correct. The best thing is that someone else can run your proof through Coq, and see if it checks, without having to inspect your proof "by hand".

This is just mathematically impossible. You cannot prove that a program is correct because our current programming languages are unquantifiable.

Of course it is possible to prove a program correct. What is not possible is to write a program that accepts all correct programs and rejects all incorrect ones. As said above, Coq relies on the programmer to provide the proof; all Coq does is to mechanically check the proof. (OK, it can do the boring parts of the proof automatically.)

What do you mean by "unquantifiable"? Here's a correctness proof for an absurdly simple function. Apologies to those whose sensibilities are offended by my choice of language. The original version of this omitted the unsigned in the declaration of the return value of fact.

unsigned int fact(unsigned int n) {
    unsigned int result = 1;
    unsigned int i;
    for (i = 1; i <= n; ++i)
        result *= i;
    return result;
}

Lemma. For any n, the body of the loop will be executed exactly n times, with i taking the values 1, 2, ..., n. Trivial. (Note that the increment of i cannot wrap around.)

Wrong: The increment of i CAN wrap around. If n == UINT_MAX, then i <= n is always true.
In this special case, i will wrap around again and again, and the loop will never end. How to correct this error is left as an exercise for the reader.

Theorem. For any n, the value returned from fact(n) is congruent modulo M to the factorial of n, where M-1 is the largest number represented by the unsigned int type. Proof: trivial, given the lemma and the fact that arithmetic operations on unsigned int are required by the standard to return values congruent modulo M to their "pure mathematical" values.

There are some things that are pretty "unquantifiable"; for instance, anything that depends on the amount of memory you have available. Even in those cases, though, it's possible to give guarantees of the form "the program will either terminate abnormally or give the right answer", if the program is carefully enough written. Of course, giving a formal correctness proof for any practical program would be a terribly arduous undertaking. But nothing in the design of (the better-specified) existing languages makes it impossible to write provably correct programs. Am I missing something?

-- The proofs are indeed trivial to an experienced programmer, but I don't see how they are distinct from the operational reasoning that every programmer does (or should do), because there are no formal semantics prescribed to the statements in this language. These are also proofs by induction on n, but that is not explicitly stated.

To be picky, there are programs that are both impossible to prove "correct" and impossible to find defects in. This is due to two reasons: the inherent incompleteness of any formal logic and the inability of a group of different people to agree on a single notion of "correct." I don't know if the incompleteness of logic is really that much of an issue in practice, but disagreements over what something should do are an everyday occurrence.
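One conventional fix (my own sketch, not from the page) is to run the loop downward so the counter can legitimately reach 0. The Python below models C's unsigned arithmetic explicitly with M = 2**32; the equivalent C loop would be `for (i = n; i != 0; --i) result *= i;`, which terminates even when n == UINT_MAX:

```python
M = 2 ** 32  # one more than UINT_MAX for a 32-bit unsigned int

def fact(n: int) -> int:
    """Factorial of n modulo M, mirroring the corrected C loop.

    Counting downward (i = n, n-1, ..., 1) terminates even when
    n == M - 1, because i eventually reaches 0 without wrapping.
    """
    result = 1
    i = n
    while i != 0:
        result = (result * i) % M  # C unsigned arithmetic wraps modulo M
        i -= 1
    return result

print(fact(5))   # 120
print(fact(34))  # 0: 34! is divisible by 2**32, so it wraps to zero
```

Note that the theorem above still holds for the corrected loop: multiplication modulo M is commutative, so the order of the factors doesn't matter.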
{"url":"http://c2.com/cgi/wiki?CoqProofAssistant","timestamp":"2014-04-17T23:19:29Z","content_type":null,"content_length":"5803","record_id":"<urn:uuid:5e38f393-a31b-45b8-b184-4a3a5deece1b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 1634: CALCULUS I, Summer 2006
Instructor: Dr. A. Boumenir
Location: Boyd 303; Monday 12:00-1:45, Tuesday and Thursday 11:00-1:45
Office: Boyd 321
Office Hours: Tuesday and Thursday 2:00 to 5:00, and also by appointment
Phone: 678-839-4131
Email: boumenir@westga.edu
Text: Calculus, Early Transcendentals, Volume 1 (5th edition) by Stewart.
Objective: This is the first course in the Calculus sequence. Students are assumed to be familiar with the main concepts and ideas in Algebra and Trigonometry, which can be found in chapter 1 of our text. We shall cover all sections in chapters 2-5 with an emphasis on Problem Solving.
Tests: There will be three in-class tests, 100 points each, given on Thursdays: Test 1 on 22nd June, Test 2 on 6th July, and Test 3 on 20th July.
Quizzes: A weekly quiz will be given on Thursdays and will mainly cover questions from the homework. Each quiz counts 20 points towards your final grade. The best 10 quizzes/homework scores are counted. Homework is due by 5:00pm on the due date.
Final Exam: The final exam is Thursday 27 July from 12:30 to 2:30, and counts 200 points towards your final grade.
Evaluation: Tests = 3 x 100 points, Quizzes/Homework = 200 points, Final = 200 points
Grading: 700-630: A, 629-560: B, 559-490: C, 489-420: D, below 420: F
W Deadline: June 28th is the last day to withdraw with a grade of W.
{"url":"http://www.westga.edu/~math/syllabi/syllabi/summer06/MATH1634.htm","timestamp":"2014-04-20T23:36:51Z","content_type":null,"content_length":"13245","record_id":"<urn:uuid:2ea7d647-e339-4cb4-aea4-ac21a9baaf6d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
A Theory of Diagnosis from First Principles
Results 1 - 10 of 658
- ACM Transactions on Computational Logic, 2002 "... Disjunctive Logic Programming (DLP) is an advanced formalism for knowledge representation and reasoning, which is very expressive in a precise mathematical sense: it allows to express every property of finite structures that is decidable in the complexity class Σ^P_2 (NP^NP). Thus, under widely believ ..." Cited by 320 (78 self)
Disjunctive Logic Programming (DLP) is an advanced formalism for knowledge representation and reasoning, which is very expressive in a precise mathematical sense: it allows to express every property of finite structures that is decidable in the complexity class Σ^P_2 (NP^NP). Thus, under widely believed assumptions, DLP is strictly more expressive than normal (disjunction-free) logic programming, whose expressiveness is limited to properties decidable in NP. Importantly, apart from enlarging the class of applications which can be encoded in the language, disjunction often allows for representing problems of lower complexity in a simpler and more natural fashion. This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to Δ^P_3-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of
- Artificial Intelligence, 1993 "...
This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesia ..." Cited by 298 (37 self)
This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.
Scholar, Canadian Institute for Advanced..., 1988 "... A fundamental problem in knowledge representation is how to revise knowledge when new, contradictory information is obtained. This paper formulates some desirable principles of knowledge revision, and investigates a new theory of knowledge revision that realizes these principles. This theory of revi ..." Cited by 240 (0 self)
A fundamental problem in knowledge representation is how to revise knowledge when new, contradictory information is obtained. This paper formulates some desirable principles of knowledge revision, and investigates a new theory of knowledge revision that realizes these principles.
This theory of revision can be explained at the knowledge level, in purely model-theoretic terms. A syntactic characterization of the proposed approach is also presented. We illustrate its application through examples and compare it with several other approaches. 1 Introduction At the core of very many AI applications built in the past decade is a knowledge base --- a system that maintains knowledge about the domain of interest. Knowledge bases need to be revised when new information is obtained. In many instances, this revision contradicts previous knowledge, so some previous beliefs must be abandoned in order to maintain consistency. As argued in [Ginsberg, 1986], such situations arise in diverse areas such...
- Journal of Logic Programming, 1994 "... In this paper, we review recent work aimed at the application of declarative logic programming to knowledge representation in artificial intelligence. We consider extensions of the language of definite logic programs by classical (strong) negation, disjunction, and some modal operators and sh ..." Cited by 224 (21 self)
In this paper, we review recent work aimed at the application of declarative logic programming to knowledge representation in artificial intelligence. We consider extensions of the language of definite logic programs by classical (strong) negation, disjunction, and some modal operators and show how each of the added features extends the representational power of the language.
, 1991 "... This paper concerns the empirical basis of causation, and addresses the following issues: 1. the clues that might prompt people to perceive causal relationships in uncontrolled observations. 2. the task of inferring causal models from these clues, and 3. whether the models inferred tell us anything ..." Cited by 208 (34 self)
This paper concerns the empirical basis of causation, and addresses the following issues: 1.
the clues that might prompt people to perceive causal relationships in uncontrolled observations. 2. the task of inferring causal models from these clues, and 3. whether the models inferred tell us anything useful about the causal mechanisms that underlie the observations. We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data the algorithm can uncover the direction of causal influences as defined above. Finally, we address the issue of non-temporal causation. 1 Introduction The study of causation is central to the understanding of hum...
, 1998 "... Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous fleets of robotic explorers. Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing th ..." Cited by 188 (16 self)
Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous fleets of robotic explorers. Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing these explorers with a form of computational intelligence that we call remote agents. In this paper we describe the Remote Agent, a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goal-directed closed-loop commanding, that takes a significant step toward enabling this future.
This architecture addresses the unique characteristics of the spacecraft domain that require highly reliable autonomous operations over long periods of time with tight deadlines, resource constraints, and concurrent activity among tightly coupled subsystems. The Remote Agent integrates constraint-based temporal planning and scheduling, robust multi-threaded execution, and model-based mode identification and reconfiguration. The demonstration of the integrated system as an on-board controller for Deep Space One, NASA's first New Millennium mission, is scheduled for a period of a week in late 1998. The development of the Remote Agent also provided the opportunity to reassess some of AI's conventional wisdom about the challenges of implementing embedded systems, tractable reasoning, and knowledge representation. We discuss these issues, and our often contrary experiences, throughout the paper.
- ARTIFICIAL INTELLIGENCE, 1992 "... We study the complexity of several recently proposed methods for updating or revising propositional knowledge bases. In particular, we derive complexity results for the following problem: given a knowledge base T, an update p, and a formula q, decide whether q is derivable from T p, the updated (or ..." Cited by 186 (12 self)
We study the complexity of several recently proposed methods for updating or revising propositional knowledge bases. In particular, we derive complexity results for the following problem: given a knowledge base T, an update p, and a formula q, decide whether q is derivable from T p, the updated (or revised) knowledge base. This problem amounts to evaluating the counterfactual p > q over T. Besides the general case, subcases are also considered, in particular where T is a conjunction of Horn clauses, or where the size of p is bounded by a constant.
, 1993 "... Abduction is an important form of nonmonotonic reasoning allowing one to find explanations for certain symptoms or manifestations.
When the application domain is described by a logical theory, we speak about logic-based abduction. Candidates for abductive explanations are usually subjected to minima ..." Cited by 163 (26 self)
Abduction is an important form of nonmonotonic reasoning allowing one to find explanations for certain symptoms or manifestations. When the application domain is described by a logical theory, we speak about logic-based abduction. Candidates for abductive explanations are usually subjected to minimality criteria such as subset-minimality, minimal cardinality, minimal weight, or minimality under prioritization of individual hypotheses. This paper presents a comprehensive complexity analysis of relevant decision and search problems related to abduction on propositional theories. Our results indicate that abduction is harder than deduction. In particular, we show that with the most basic forms of abduction the relevant decision problems are complete for complexity classes at the second level of the polynomial hierarchy, while the use of prioritization raises the complexity to the third level in certain cases.
- Proceedings of the Joint International Conference and Symposium on Logic Programming, 1996 "... An implementation of the well-founded and stable model semantics for range-restricted function-free normal programs is presented. It includes two modules: an algorithm for implementing the two semantics for ground programs and an algorithm for computing a grounded version of a range-restricted funct ..." Cited by 139 (16 self)
An implementation of the well-founded and stable model semantics for range-restricted function-free normal programs is presented. It includes two modules: an algorithm for implementing the two semantics for ground programs and an algorithm for computing a grounded version of a range-restricted function-free normal program.
The latter algorithm does not produce the whole set of ground instances of the program but a subset which is sufficient in the sense that no stable models are lost. The implementation of the stable model semantics for ground programs is based on bottom-up backtracking search. It works in linear space and employs a powerful pruning method based on an approximation technique for stable models which is closely related to the well-founded semantics. The implementation includes an efficient algorithm for computing the well-founded model of a ground program. The implementation has been tested extensively and compared with a state-of-the-art implementation of the stable mode...
- Artificial Intelligence, 1987 "... Reasoning about change is an important aspect of commonsense reasoning and planning. In this paper we describe an approach to reasoning about change for rich domains where it is not possible to anticipate all situations that might occur. The approach provides a solution to the frame problem, and to ..." Cited by 136 (7 self)
Reasoning about change is an important aspect of commonsense reasoning and planning. In this paper we describe an approach to reasoning about change for rich domains where it is not possible to anticipate all situations that might occur. The approach provides a solution to the frame problem, and to the related problem that it is not always reasonable to explicitly specify all of the consequences of actions. The approach involves keeping a single model of the world that is updated when actions are performed. The update procedure involves constructing the nearest world to the current one in which the consequences of the actions under consideration hold. The way we find the nearest world is to construct proofs of the negation of the explicit consequences of the expected action, and to remove a premise in each proof from the current world.
Computationally, this construction procedure appears to be tractable for worlds like our own where few things tend to change with each action, or where ...
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.170.9236","timestamp":"2014-04-20T14:58:10Z","content_type":null,"content_length":"40393","record_id":"<urn:uuid:2486c09e-1cec-4ebe-8898-d62d8903f8db>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Galilean Relativity and Tangential Acceleration

You were given help. Here are more details: I assume that v is the speed of the water relative to the shore (you should have said that). When John swims upstream his speed, relative to the bank, is c - v because he is going against the water. How long will it take him to swim a distance L upstream? When John swims downstream, his speed, relative to the bank, is c + v because he is carried along by the water. How long will it take him to swim a distance L downstream? Add those to find the time required to swim both legs.

Since Emma is swimming across the water, she needs to angle slightly upstream. Imagine drawing a line of length ct at an angle upstream, followed by a line of length vt straight down the stream back to the original horizontal. You get a right triangle with hypotenuse of length ct, one leg of length vt, and the other leg of length L, the distance she is swimming. By the Pythagorean theorem, (ct)^2 = (vt)^2 + L^2. Solve that for t, the time required to swim the length L across the current. Because she comes back across the current, you can just double that to determine the time necessary to swim both legs.

The second problem is pretty easy. If the speed were constant, the only acceleration would be toward the center of the circle; there would be no "tangential acceleration". Since there is a change in speed, the tangential acceleration is precisely the rate at which the speed changes.
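To see the two round-trip times side by side, here is a quick numerical check (the particular values of c, v, and L are my own picks, not from the problem):

```python
from math import sqrt

c = 2.0  # swimmer's speed relative to the water
v = 1.0  # current's speed relative to the shore (must be less than c)
L = 3.0  # one-way distance of each leg

# John: upstream at c - v, then downstream at c + v
t_john = L / (c - v) + L / (c + v)  # simplifies to 2*L*c / (c**2 - v**2)

# Emma: from (ct)^2 = (vt)^2 + L^2, her crossing speed is sqrt(c**2 - v**2),
# and she covers the width L twice (over and back)
t_emma = 2 * L / sqrt(c ** 2 - v ** 2)

print(t_john, t_emma)  # 4.0 and about 3.464
```

Symbolically the ratio is t_john / t_emma = c / sqrt(c^2 - v^2) > 1, so the along-stream round trip always takes longer whenever there is any current at all.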
{"url":"http://www.physicsforums.com/showthread.php?t=46343","timestamp":"2014-04-19T09:41:18Z","content_type":null,"content_length":"31511","record_id":"<urn:uuid:e2b86c31-aa3c-4d27-bbb5-b9708d05ea24>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
A Remarkable Collection of Babylonian Mathematical Texts

A Remarkable Collection of Babylonian Mathematical Texts: Manuscripts in the Schøyen Collection of Cuneiform Texts I (Sources and Studies in the History of Mathematics and Physical Sciences), Jöran Friberg, 2007, xx + 534 pp, 259 illustrations including 70 color plates, hardback $120, ISBN-13: 978-0-387-34543-7, Springer Science + Business Media, Spring Street, New York NY 10013.

Early in my quest to better understand the history of mathematics, I came to the conclusion that the European development of mathematics emerged more from the computational techniques of Ancient Mesopotamia than from the arithmetica-directed speculations of Greek philosophers. Granted, the intellectual exuberance of the Renaissance exulted in the accomplishments of Ancient Greece and passed this heritage of admiration down to us, but modern scholarship and research has matured and broadened our understanding of the development of mathematics. In past years, my quest for information on Babylonian mathematics was limited to a very few sources, particularly Neugebauer's and Sachs' Mathematical Cuneiform Texts (1945). However, now a wonderful new resource book on Babylonian mathematics has appeared, Jöran Friberg's A Remarkable Collection of Babylonian Mathematical Texts. Indeed, the Collection is "remarkable" in presenting and discussing 130 previously unknown (to a popular reading audience) Old Babylonian cuneiform texts. Friberg, an applied mathematician, a well-respected scholar in the field of Babylonian mathematics, and a teacher, has chosen the majority of his subject tablets from the Schøyen Collection of rare texts and documents. The book consists of twelve chapters and ten appendices. A wealth of fascinating information is offered to the reader. The style and form of the presentation is reader-friendly. This is a book that can be consulted by a wide reading audience.
Chapters are laid out in a systematic fashion, first presenting basic information, from the mechanics of numeration and Babylonian arithmetic to the construction and use of multiplication tables. Consideration is then given to Babylonian metrological systems and the types and forms of weight stones. The remaining chapters discuss the content of specific mathematical texts involving a variety of problems, from land measurement and the distribution of resources to purely geometric situations. Some surprising results appear, such as the Babylonian interest in mazes and maze problems, scribes' use of fine grids of construction lines to accomplish linear design motifs, and the clarity of inscribed geometric diagrams. A text from a circa 14th century BCE tablet describes a gaming piece with sides formed by 20 equilateral triangles (an icosahedral die?). The author's scholarly experience is evident in his identification of the caliber of the scribe and the work being performed, from novice practice problems to master inscriptions. All translations and comments are supported by drawings and diagrams. A striking set of color plates enriches the presentation, and the extensive sets of appendices reinforce and further extend the discussions of the text. One of my favorite problems asked a scribe to find the area enclosed between two equilateral triangles, one inscribed in the other. The scribe drew the diagram and then, to my chagrin, divided the desired area into a chain of three congruent trapezoids, for which he then (it seems certain that all scribes were male) computed the area. To me, it certainly appeared more efficient to compute the areas of the individual triangles and subtract one from the other.

To anyone interested in the history of Babylonian mathematics and mathematical communication, this is a marvelous resource. A copy should be in every university library, and this book should be referenced in all history of mathematics courses.
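In fact the two methods necessarily agree. A quick check (my own, assuming the inner triangle of side b is concentric with and similarly oriented to the outer one of side a) shows it: each of the three congruent trapezoids has parallel sides a and b and height equal to the difference of the inradii, a/(2√3) - b/(2√3), so

```latex
3 \cdot \frac{a+b}{2} \cdot \frac{a-b}{2\sqrt{3}}
  \;=\; \frac{3\,(a^2-b^2)}{4\sqrt{3}}
  \;=\; \frac{\sqrt{3}}{4}\,a^2 \;-\; \frac{\sqrt{3}}{4}\,b^2 ,
```

which is exactly the outer triangle's area minus the inner one's. The scribe's decomposition is the subtraction, carried out piecewise.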
{"url":"http://www.maa.org/publications/periodicals/convergence/review-of-ia-remarkable-collection-of-babylonian-mathematical-textsi","timestamp":"2014-04-18T18:44:17Z","content_type":null,"content_length":"99615","record_id":"<urn:uuid:dbe23b5c-9b5a-42d9-a251-47b2184a9551>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring Participation Inequality in Social Networks This entry was written by one of our members and submitted to our YouMoz section. The author's views below are entirely his or her own and may not reflect the views of Moz. Reaching into the worlds of economics and statistics, I'd like to share a way to measure the health of online communities: This all started sometime around 2001 -- I originally heard of the Gini coefficient freshman year in college during one of those massive lecture courses of Economics 101. From Wikipedia: The Gini coefficient is a measure of inequality of income distribution or inequality of wealth distribution. It is defined as a ratio with values between 0 and 1: 0 corresponds to perfect equality (e.g. everyone has the same income) and 1 corresponds to perfect inequality (e.g. one person has all the income, while everyone else has zero income). It bounced back in my brain one day a while back as I overheard someone lamenting the 90-9-1 rule of online participation: that 90% of your users will be "lurkers," those who read but don't contribute, 9% will contribute sporadically or only occasionally, and 1% of your entire user base will make up the bulk of the total participation in your community. Some people like to use the 90-9-1 rule to boo-hoo any attempt at building an online community, some like to do a little math and say "hey, 1% of my total user base is still a big number if they really do become outspoken evangelists" -- but everyone is always looking for a way to break the rule and encourage widespread participation. But how do we create a metric that allows us to track the ROI of our efforts to increase participation? We can build our own Gini-like metric .... WARNING: this is a long one but if you stick with me, I bet you're going to start thinking about measuring online communities in a different way. 
In most communities, I encourage point systems driven by participation -- leave a comment, get a point, write a blog, get a point -- sometimes certain activities are worth more points (be careful when doing this), and always, the community itself has an effect on the total score: for instance, write a defamatory comment, get negative points from other users and your total score drops. Another choice we often have to make is to decide whether or not to make the score visible to the community -- it almost always encourages competition between users, which in some communities is perfect and in others can lead to negative behaviors. Digg, for instance, used a visible participation score and it led to the top users wielding too much influence over the entire community -- which fostered a drop in the quality of the content. Regardless of how visible we make the score, we, as the community organizers, can use it in all manner of ways. In this example, we can use the score to compare the participation of users across the entire community to determine the distribution of participation and build a dynamic metric we can track over time -- just like economists use the Gini coefficient to measure income distribution. In statistics, what we're looking for is called statistical dispersion -- how far data elements fall from each other or a mean value. In our case, a perfectly distributed community is one where every member has the same number of participation points: the total community points divided by the number of members. The perfectly distributed community would look like: User 1: 500 points, User 2: 500 points, User 3: 500 points and so on... Everyone is participating equally. But we know that's not how it looks in real communities; we're much more likely to see: User 1: 0 points, User 2: 0 points, User 3: 5 points, User 4: 500 points... Participation is very unequally dispersed.
And we also know that as participation grows increasingly less equal, we see new entrants into the community drop off more quickly and even older members fade away -- as good community managers, we look out for this type of activity, but it would be extremely beneficial to have a dashboard of quantitative data to back up our qualitative assumptions. To solve this, in short terms, I start by running a calculation on each user to find the average deviation, also known as the absolute deviation, from the mean (or ideal mean) of the community. Once I know this, I take the coefficient of the variance, which is the average deviation divided by the mean, times 100%, which gives us the deviation as a percentage of the mean. Understand? Good, cause I just confused myself. Okay, I'll show my work! Let's start with a community:

Community 1
User 1: 50
User 2: 4
User 3: 6
User 4: 18
Total points: 78
Mean (or perfect score): 19.5 points

The average deviation of the group is 15.25. On average, each score is 15.25 units away from the mean. Taking the coefficient of variance, 15.25/19.5 x 100% = 78.21% -- which means, the average deviation is 78.21% of the mean -- or, the participation in this community is largely unequal. Unequal compared to what? I'm glad you asked! Let's look at another community:

Community 2
User 1: 8
User 2: 7
User 3: 9
User 4: 10
Total points: 34
Mean (or perfect score): 8.5 points

The average deviation of the group is 1. On average, each score is 1 point away from the mean. Taking the coefficient of variance, 1/8.5 x 100% = 11.76% -- which means, the average deviation is 11.76% of the mean -- or the participation in this community is more equally distributed than community 1.

How can we use this? Each period, we can track the change in our coefficient to see if the participation in the community has grown more or less equally distributed, and on what scale the change has occurred.
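The arithmetic above is easy to script; here's a minimal sketch (the function name is mine, and it assumes a nonempty list whose mean is nonzero):

```python
def participation_spread(points):
    """Average (absolute) deviation from the mean, as a percent of the mean.

    0% means perfectly equal participation; bigger numbers mean activity
    is concentrated in fewer members. Assumes a nonempty points list
    with a nonzero mean (i.e., at least one member has points).
    """
    mean = sum(points) / len(points)
    avg_deviation = sum(abs(p - mean) for p in points) / len(points)
    return avg_deviation / mean * 100

# The two communities from the worked examples:
print(round(participation_spread([50, 4, 6, 18]), 2))  # 78.21
print(round(participation_spread([8, 7, 9, 10]), 2))   # 11.76
```

Run per reporting period, a falling percentage means participation is spreading out across the membership; a rising one means it is concentrating.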
We shouldn't use this metric by itself, of course; it's also necessary to see the overall growth of participation -- by total number of points -- which we can also segment by our user types or buying segments that we've already constructed beforehand. Imagine now as you deploy an online community, you can track the distribution of participation from the very start -- and as you see more users register on the site and as you attempt to push more of them to contribute more often -- you now have a metric ready at your side to measure the effectiveness of each new campaign.

*I'm no statistician, and I built this model on my own. One reason I'm putting it out there is genuinely for information share, but I'd also love to kick start a conversation about measuring the equality of participation. Give it a thought.

6 Comments

• Bud-Caddell Hey everybody, I posted a follow up on my blog at http://www.passion2publish.com

• Kat French Great work! If I'd add anything, it would be to add a "probationary period" for new users before their data would be included in the metrics. Many "newbies" in any community will lurk for a while before participating--I think that's just human nature, and I don't think the "normal lurking period" represents what their "normal" participation levels would be. Sort of like keeping your goldfish in their little plastic bag and letting them acclimate, instead of just dumping them into the aquarium. Plus, a certain number of users register with no intentions of ever becoming active members of a community.

□ Bud-Caddell Great idea! I like the idea of the lurking period. How do you think we could measure how long that should be? Something like the average time between registering and first participation?

☆ Peter Newsome Another factor to consider is the type of membership.
There would be different (or at least varied) results if a community has both free and premium membership options. If you take into consideration the lurking period, it would also be interesting to measure the number of free members who upgrade to premium membership and how long it took them to do so. These types of statistics would be great in determining if a network should start to introduce a premium membership option. It would also help networks that already have premium options in marketing their services more effectively due to a better understanding of their subscriber trends. ☆ Kat French Yeah... I've managed a few phpbb forums over the years. You could take the date the account was registered, find their first post, and calculate an average "lurking period" from that. Come to think of it, most of the forums I've participated on over the years actually RECOMMEND that you read and lurk for a while before posting. As well as most of the articles that I've read about social media marketing--you always hear "read and get to know the forum for a while before jumping into posting." So yeah, there's bound to be a period that is essentially going to skew your stats. ○ Bud-Caddell I always read those policies "Read around before you post a question" and I know why they're in place -- so the same question is asked only once (or just less often), and the forum itself is more valuable -- but I've always thought there has to be a better way to do this rather than the stern warning. Maybe that's through community moderation, with some technological help, like how digg searches previous postings before you can submit. Either way, we should be encouraging our users to participate, always.
Inverse Variation - Problem 3

If two points are solutions to an inverse variation, then the product of the x and y values of the first point is equal to the product of the x and y values of the second point. This is known as the product rule for inverse variation: given two ordered pairs (x1, y1) and (x2, y2), x1·y1 = x2·y2. Plug the x and y values into the product rule and solve for the unknown value.

Here I'm given two points, but one of them has a variable, and I'm told they vary inversely and I have to solve for that variable. If the points (1/2, 4) and (x, 1/10) are solutions to an inverse variation, find x.

Okay, well, here is what I know about inverse variation. I know that two variables vary inversely if their product equals some constant -- the product of the x and y values. What that tells us is that we have what's called the product rule: if you multiply the x and y values from one ordered pair, the result is equal to the product of the other ordered pair's values. That's called the product rule for inverse variation.

So let's try it. We know that x1 and y1 are 1/2 and 4, so I'm going to multiply those, and that's going to be equal to the product of x and 1/10 from my second pair. All we have to do now is solve for x. Half of 4 is equal to 2. 2 is going to be equal to x divided by 10, so to solve for x what I want to do is multiply both sides by 10, and I get x = 20. There's my x value; it tells me that if I stuck 20 in there, I would get the same product between 1/2 and 4 as I would get between 20 and 1/10.

When you come to inverse variation, keep this really important formula in your brain. If you can remember that, then you can use your logic skills to derive this product rule. Good luck, guys -- you can do it with inverse variation.

Tags: inverse variation, constant of variation, product rule for inverse variation
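The worked example can be checked mechanically. This is just a sketch of the product rule x1·y1 = x2·y2, using Python's exact fractions (the function name is mine):

```python
from fractions import Fraction

def solve_unknown_x(x1, y1, y2):
    """Product rule for inverse variation: x1*y1 = x2*y2.
    Given one full pair (x1, y1) and the y-value of the other
    pair, solve for the unknown x2."""
    return (x1 * y1) / y2

# The points (1/2, 4) and (x, 1/10) from the problem above:
x = solve_unknown_x(Fraction(1, 2), Fraction(4), Fraction(1, 10))
print(x)  # 20 -- half of 4 is 2, and 2 divided by 1/10 is 20
```

Using `Fraction` instead of floats keeps the arithmetic exact, which matters when the given coordinates are fractions like 1/2 and 1/10.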
Newark, CA Prealgebra Tutor

Find a Newark, CA Prealgebra Tutor

...Algebra is my favorite branch of Math. I have taught more than 15 students in Algebra 2, including Honors, in various public and private schools like Harker, Logan, etc. I have 5 years of experience of teaching Calculus (including AP AB & BC) up to college level.
17 Subjects: including prealgebra, calculus, geometry, statistics

...When I was in high school and college, I found that most math and science textbooks are very abstract and nearly impossible to understand for someone who is learning the subject for the first time, so I had to find other means to really learn the concepts. Now that I am past this hurdle, I want ...
12 Subjects: including prealgebra, chemistry, calculus, physics

...I love to make learning fun and adore children. I enjoy tutoring study skills, time management, to-do lists, deadlines, procrastination, and planning a strategy for completing essays and research projects. During Stanford, I had experience tutoring classmates across different subjects, lead stu...
39 Subjects: including prealgebra, reading, English, writing

...I have taught business psychology in a European university. I tutor middle school and high school math students. I can also teach Chinese at all levels.
11 Subjects: including prealgebra, calculus, statistics, geometry

...But then I take them back to the beginning, find out what they missed learning, and correct that. Math is like building a brick wall; each layer relies on a solid foundation. If you didn't learn fractions or the multiplication table, you're never going to get through Algebra.
10 Subjects: including prealgebra, calculus, precalculus, algebra 1
Some questions about *Set Theory* by Frank R. Drake

Hello everyone, I am currently studying set theory on my own from the book Set Theory: An Introduction to Large Cardinals by Frank R. Drake, and I have a couple of serious doubts.

Drake first introduces the usual (not formal) definition (due to Tarski) of satisfaction on a given collection A, and then the usual definition of a model of some theory $S$ ($M \models S$). Then he gives (more or less) the following definition: Given a theory $S$ and a formula $\phi$, we write $S \vdash \phi$ iff for all structures $M$ we have $$M \models S \text{ iff } M \models \phi.$$ Of course this is not the usual definition of $S \vdash \phi$ (there exists a finite sequence of formulas, etc.), but (I think) the two are equivalent because of the Completeness Theorem (even for classes in the sense below). This is a minor point, but I feel it could be somewhat related to the major problems below.

After the presentation of some of the axiomatic development of ZF, Drake starts to discuss (now informally) some model theory and the notion of absoluteness between structures (again he uses the word "collection" when referring to the domain of the structures). Then he returns to formality and explains how we can define the notion of satisfaction within ZF, i.e. how we can formally write the statement $ZF \vdash x \models \phi$, for some $\phi$, where of course $x$ is a set. Then he says that the whole theory of absoluteness can easily be formalized within ZF, so that we have, e.g., $ZF \vdash x \models \phi \leftrightarrow y \models \phi$, if $x$ and $y$ are both transitive models of ZF and $\phi$ is a $\Delta_1^{ZF}$ formula. I am quite convinced about this.
Then problems arise: Drake defines the universe of the constructible sets, $L$, and, in order to prove consistency results, relies heavily on (for example) the following fact: $V \models \phi$ iff $L \models \phi$, if $\phi$ is a $\Delta_1^{ZF}$ formula. I've been thinking about this for days and I arrived at the conclusion that this is only shorthand for

$ZF \vdash \phi$ iff $ZF \vdash \phi^L$, if $\phi$ is a $\Delta_1^{ZF}$ formula,

i.e., from the Completeness Theorem and the definition of satisfaction, $ZF \vdash \phi \leftrightarrow \phi^L$, if $\phi$ is a $\Delta_1^{ZF}$ formula.

So, if you have had the kindness to read up to this point, my questions are the following:

a) Is my reasoning correct?

b) If my reasoning is correct, is $ZF \vdash \phi^M$ a "good" definition of $M \models \phi$ ($M$ a proper class)? "Good" meaning "consistent with Tarski's definition of satisfaction".

c) If my reasoning is correct, is there an easy way to show that the absoluteness properties remain the same even for a model which is a proper class?

d) How can we speak about such a thing as $V$? I am really uncomfortable about it.

2 Answers

Answer (Joel David Hamkins):

For your main question, you are right that there is a subtle issue with the claim that $$V\models\phi\iff L\models\phi\text{ for }\Delta_1^{ZF}\text{ assertions }\phi.\qquad\qquad (\star)$$ But your interpretation is not as strong as one can give here. First, supporting your worries, let's point out that we cannot expect to prove the theorem as a single assertion in ZF. This is just because if we happen to be living in a world where $\neg\text{Con}(ZF)$, then technically every formula is $\Delta_1^{ZF}$, because for a formula $\phi$ to be $\Delta_1^{ZF}$ means that it is provably equivalent to a $\Sigma_1$ assertion and also provably equivalent to a $\Pi_1$ assertion. In a world where ZF proves anything, every formula is $\Delta_1^{ZF}$.
But we can live in such a world where ZF is inconsistent but $V$ and $L$ satisfy different formulas. Thus, Drake cannot be claiming that $(\star)$ is a theorem of ZF.

Rather, on the positive side, Drake is likely claiming $(\star)$ as a meta-theoretic assertion, a theorem scheme. (And most uses of $\Delta_1^{ZF}$ amount to such theorem schemes or claims in the meta-theory.) Specifically, what $(\star)$ is asserting is that if in the meta-theory we happen to observe that $\phi$ has complexity $\Delta_1^{ZF}$, which means that in the meta-theory we have a proof in ZF that $\phi$ is equivalent to a $\Sigma_1$ assertion and also another proof that it is equivalent to a $\Pi_1$ assertion, then for this particular $\phi$ we also have a proof that $\phi\iff\phi^L$, or in other words, that $V\models\phi$ if and only if $L\models \phi$. This is easy, since the $\Sigma_1$ variation is upwards absolute and the $\Pi_1$ variation is downwards absolute.

Your interpretation of the claim is not about truth in $V$ for such assertions $\phi$, but merely about provability of $\phi$ in all models of ZF, and so it fails to apply in many accepted instances where the scheme version of the claim does apply. Thus, I would say that your proposal to interpret $M\models \phi$ by $\text{ZF}\vdash\phi^M$ is flawed.

If $M$ is any definable class, then for any particular $\phi$, we may run the Tarski definition of truth sufficiently far to define what it means for $M\models\phi$. Slightly more generally, for any meta-theoretic natural number $n$, we can write down the Tarskian truth definition for truth in $M$ of any given assertion of complexity $\Sigma_n$. What we can't do, and this is likely the source of your worries, is give ourselves a full account of truth-in-$M$ for all formulas. This is precisely forbidden by Tarski's theorem on the non-definability of truth.
But when you restrict the complexity of the assertions, as you do here to $\Delta_1$, then we have a fully robust $\Sigma_n$ theory of truth applicable to any class to which we can refer.

Finally, to assert that $V\models\phi$ is the same as to assert $\phi$. But it is sometimes convenient to have a notation for the class of "everything", when one is comparing this universe to inner models such as $L$.

Comments:

Thanks a lot. So is it correct to say that, if $\phi$ is a $\Delta_1^{ZF}$ formula, then $$V \models \phi \Leftrightarrow L \models \phi$$ implies for this particular $\phi$ $$ZF \vdash \phi \Leftrightarrow ZF \vdash \phi^L?$$ For, if this is not true, I cannot see how Drake can use the metatheoretic claim in order to prove finitistic consistency results (which, for example, are based on the fact that $ZF \vdash \phi^L$ for every axiom $\phi$ of ZF). – Archbishop Jul 4 '12 at 15:53

Yes, and even more: if $\phi$ is a particular formula known to be provably $\Delta_1$, then $ZF\vdash \phi\iff\phi^L$, and from this it follows easily that $ZF\vdash\phi\iff ZF\vdash\phi^L$. – Joel David Hamkins Jul 4 '12 at 16:02

Answer:

Joel has covered most of this topic very well, but let me first amplify one of the points he made. As he says, the proposal to interpret $M\models\phi$ as $\text{ZF}\vdash\phi^M$ is flawed. There is, however, a correct interpretation of $M\models\phi$ that looks rather similar to this proposal, namely just $\phi^M$ (i.e., delete "$\text{ZF}\vdash$" from the flawed proposal). In other words, truth of one sentence (or satisfaction of one formula) in a definable class is extremely easy to define. This subsumes the (currently) next-to-last sentence in Joel's answer, because $\phi^V$ is simply $\phi$.
As Joel also explains, one can define truth in proper classes for far more than a single formula, namely for the class of all $\Sigma_n$ formulas for any fixed $n$, but this is considerably more complicated than what I wrote above about a single $\phi$.

Finally, let me also comment on the line between the last two shaded boxes in the question, "i.e., from the Completeness Theorem and the definition of satisfaction". You seem to believe that "$T\vdash\alpha$ iff $T\vdash\beta$" says the same thing as "$T\vdash(\alpha\iff\beta)$". (You only state this belief for a particular $T$, $\alpha$, and $\beta$, so I apologize if I've incorrectly interpreted the generality of your belief.) In fact, these two do not say the same thing. The second is stronger than the first. The first only means (via the Completeness Theorem) that if all models of $T$ satisfy $\alpha$ then they all satisfy $\beta$ and vice versa; it would allow for $\alpha$ and $\beta$ to be true in entirely different collections of models of $T$, as long as each fails in at least one model. The second, in contrast, means that exactly the same models of $T$ satisfy $\alpha$ as satisfy $\beta$.
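To make that last distinction concrete, here is a standard example (my addition, not part of the answer) built on the independence of the Continuum Hypothesis; it assumes Con(ZF):

```latex
% Take T = ZF, \alpha = CH, \beta = \neg CH.
% By Goedel (1940) and Cohen (1963), assuming Con(ZF):
\[
  \mathrm{ZF} \nvdash \mathrm{CH}
  \qquad\text{and}\qquad
  \mathrm{ZF} \nvdash \neg\mathrm{CH},
\]
% so "T \vdash \alpha iff T \vdash \beta" holds vacuously
% (both sides are false).  Yet
\[
  \mathrm{ZF} \nvdash (\mathrm{CH} \iff \neg\mathrm{CH}),
\]
% since CH <-> not-CH is refutable by pure propositional logic:
% ZF proves its negation.  Indeed CH and \neg CH hold in entirely
% different (nonempty) collections of models of ZF, exactly the
% situation the first statement permits and the second forbids.
```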
Abel's Lemmas on Irreducibility

In today's blog, I present a few lemmas that Niels Abel published on irreducibility. These lemmas are used in Abel's proof of the insolvability of the quintic equation. The content in today's blog is taken from 100 Great Problems of Elementary Mathematics by Heinrich Dorrie.

Definition 1: expressible in rationals

"A number or equation is expressible in rationals if it is expressible using only addition, subtraction, multiplication, and division of integers."

Definition 2: rational function

A function is rational if its coefficients are all rational numbers.

Definition 3: degree of a polynomial

The degree of a polynomial is the highest power of the polynomial that has a nonzero coefficient.

Definition 4: reducible over rationals

A function with rational coefficients is said to be reducible over rationals if it can be divided into a product of polynomials of lower degree with rational coefficients.

Definition 5: free term or constant term of a polynomial

The free term or constant term of a polynomial is the term that is not bound to an unknown.

Lemma 1: For a monic polynomial, the free term is equal to ± the product of the roots.

(1) Let x^n + a[1]x^(n-1) + ... + a[n-1]x + a[n] be an nth degree monic polynomial.

(2) By the Fundamental Theorem of Algebra, we know that there are n roots r[1], ..., r[n] such that:

x^n + a[1]x^(n-1) + ... + a[n-1]x + a[n] = (x - r[1])*(x - r[2])*...*(x - r[n])

(3) Now, of the terms in the expansion of the product in step #2, the only one that does not include x is (-r[1])*(-r[2])*...*(-r[n]) = (-1)^n * r[1]*r[2]*...*r[n], so the free term a[n] is ± the product of the roots.

Lemma 2: Abel's Lemma

The equation x^p = C, where p is a prime number and C is a rational number that is not the pth power of a rational number, is irreducible over rationals.

(1) Assume that x^p = C is reducible over rationals.

(2) Then there exist f(x), g(x) such that x^p - C = f(x)g(x), where f(x) and g(x) are rational polynomials of lower degree.

(3) We know that the roots of x^p - C = 0 are:

r, rα, rα^2, ..., rα^(p-1)

where r is one of the roots and α is a pth root of unity, since:

(rα^i)^p = (r^p)(α^i)^p = (r^p)(α^p)^i = (r^p)(1)^i = r^p = C

(4) Let A and B be the free terms of f(x) and g(x).

(5) Since a free term is ± the product of a polynomial's roots (see Lemma 1 above), we know that:

A*B = ±(r)*(rα)*(rα^2)*...*(rα^(p-1)) = ±C

(6) We can see that the product of the roots is ±C = ±r^p:

(a) If p = 2, then α = -1 and the product of the roots is r*(-r) = -r^2 = -C.

(b) If p is a prime ≥ 3, then using the summation formula 1 + 2 + ... + (p-1) = (1/2)(p)(p-1), the product is:

r^p * α^[(1/2)(p)(p-1)]

(c) Since p - 1 is even, we have:

r^p * α^[(1/2)(p)(p-1)] = r^p * (α^p)^[(1/2)(p-1)] = r^p * (1)^[(1/2)(p-1)] = r^p = C

(7) Likewise, since the roots of f(x) and g(x) are among the rα^i, there exist integers μ, M, ν, N such that:

A = ±r^μ * α^M
B = ±r^ν * α^N

(8) Since r appears p times in the product in step #5, we know that:

μ + ν = p

(9) We further know that gcd(μ,ν) = 1:

(a) Assume that gcd(μ,ν) = d, where d is greater than 1.

(b) Then μ = md and ν = nd for some integers m, n.

(c) Then p = md + nd = d(m + n).

(d) But p is prime and 1 < d < p, so this is impossible, and gcd(μ,ν) = 1.

(10) Using Bezout's Identity, we know that there exist integers h, k such that:

μh + νk = 1

(11) Now let's define a rational number K = A^h * B^k.

(12) So that:

K = A^h * B^k = ±r^(hμ) * α^(hM) * r^(kν) * α^(kN) = ±r^(hμ + kν) * α^(hM + kN) = ±r * α^(hM + kN)

(13) But then:

K^p = (±r * α^(hM + kN))^p = ±r^p * (α^p)^(hM + kN) = ±r^p = ±C

(14) But this is impossible: K (or -K) is a rational number whose pth power is C, and we selected C to be a rational number that is not the pth power of a rational number.

Theorem 3: Abel's Irreducibility Theorem

Let f(x) be irreducible over rationals. If one root of the equation f(x) = 0 is also a root of the rational equation F(x) = 0, then:

All the roots of f(x) = 0 are roots of F(x) = 0, and F(x) can be divided by f(x) without a remainder.

(1) Using Euclid's Greatest Common Divisor Algorithm for Polynomials, there exist rational polynomials V(x), v(x) such that:

V(x)F(x) + v(x)f(x) = g(x)

where g(x) is the greatest common divisor of F(x) and f(x).

(2) If F(x) and f(x) have no common divisor, then g(x) is a constant. That is, g(x) = g[0] is the free term.
(3) If f(x) is irreducible and a root r of f(x) = 0 is also a root of F(x) = 0, then f(x) and F(x) have a common divisor g(x) of at least the first degree, since (x - r) divides both.

(4) Since f(x) is irreducible, f(x)/g(x) must equal a constant, so that:

f(x) = g(x)*f[1](x) = g(x)*f1[0]

where f1[0] is a constant.

(5) Then:

F(x) = F[1](x)*g(x) = F[1](x)*f(x)*(1/f1[0])

(6) Thus, F(x) is divisible by f(x) and vanishes at every zero point of f(x).

Corollary 3.1: If a root of an equation f(x) = 0 which is irreducible over rationals is also a root of an equation F(x) = 0 with rational coefficients and of lower degree than f(x), then all the coefficients of F(x) are equal to zero.

(1) Assume that at least one coefficient of F(x) is not zero.

(2) Then F(x) is a nonzero polynomial.

(3) Since there is a root shared by both, and since f(x) is irreducible over rationals, we can use Theorem 3 above to conclude that every root of f(x) = 0 is a root of F(x) = 0 and that f(x) divides F(x).

(4) But F(x) has a lower degree than f(x), so this is impossible.

(5) So we reject our assumption in step #1 and conclude that all coefficients of F(x) must be zero.

Corollary 3.2: If f(x) is irreducible over rationals, then there is no other irreducible equation over rationals that has a common root with f(x) = 0.

(1) Let f(x), g(x) be functions irreducible over rationals.

(2) Assume that f(x) = 0 and g(x) = 0 have a common root.

(3) Then we can use Theorem 3 above to conclude that f(x) divides g(x) and g(x) divides f(x).

(4) But then f(x) and g(x) have the same degree and differ at most by a constant factor, so essentially f(x) = g(x).

2 comments:

david.foley said...

In Abel's lemma step 6 you use an a, b, and c step to show C = r^p. From step 3 we know that r is by definition a solution of x^p = C, so that r^p = C. It was brought into existence to have the very property that when raised to the pth power it will equal C.

david.foley said...

In Abel's irreducibility theorem step 3 it is asserted that there exists a common divisor of at least the first degree, but is that so obvious without some explanation? One way to show this is to assume there is no common divisor and then inspect the equation of step 1 evaluated at r. The RHS is g0 while the LHS is 0. Since we know that g0 is nonzero, our assumption must be wrong and a common divisor must exist.
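As a quick numerical sanity check on Lemma 1 above (for a monic polynomial, the free term is ± the product of the roots), here is a small sketch in Python; the helper is my own, not from Dorrie:

```python
from functools import reduce
from operator import mul

def poly_from_roots(roots):
    """Expand the monic polynomial (x - r1)(x - r2)...(x - rn).
    Coefficients are returned highest power first."""
    coeffs = [1.0]
    for r in roots:
        longer = coeffs + [0.0]        # multiply the current polynomial by x
        for i, c in enumerate(coeffs):
            longer[i + 1] -= r * c     # then subtract r * (current polynomial)
        coeffs = longer
    return coeffs

roots = [2.0, 3.0, 5.0]
coeffs = poly_from_roots(roots)        # x^3 - 10x^2 + 31x - 30
free_term = coeffs[-1]
product = reduce(mul, roots)           # 2 * 3 * 5 = 30.0
# Lemma 1: free term = (-1)^n * (product of roots); here n = 3 is odd
print(free_term, (-1) ** len(roots) * product)  # -30.0 -30.0
```

The sign alternates with the degree n, which is exactly the ± in the lemma as stated above.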