Browse by Keyword: "longitude"

azimuth: Determines azimuth & distance of the second point (B) as seen from the first point (A), given latitude, longitude & elevation of two points on the Earth
cconv: A Coordinate Conversion node module
cities1000: lat/lon, names of cities with over 1000 people
coordinator: Converts coordinates (e.g. lat/long to MGRS)
exiflocation: Extract lat long data and get map location URLs from EXIF data in images.
geos-major: Lightning fast longitude and latitude lookup for country and state codes
hubot-time-me: Ask hubot location-based questions about users, such as their local time.
i64: URL safe Base64 Integer Strings (BIS) and conversion tools. Supports both fast conversions for regular integers and large integer strings. Assists with compression as fewer base 64 digits are needed to represent larger integers than base 10 digits. Unlike RFC-3548 Base 64 encodings, readability of BIS is improved for small integers by using an alphabet that extends base-converter
latlong: create a kdtree from lat/long and return the closest points
mt-coordtransform: Coordinate transformations between latitude/longitude WGS84 and OSGB36.
mt-geo: Geodesy representation conversion functions.
mt-latlon: Latitude/longitude spherical geodesy formulae and scripts.
node-vincenty: Calculates the distance in meters between two latitude and longitude coordinates.
ospoint: Converts Ordnance Survey grid references into Longitude and Latitude
placename: find a normalized place name and lat/lon from a free-form location query
point-to-city: Simple module to get the city name from a point (lat,lon). Based on Yahoo! Place Finder (requires key from Yahoo!)
sgeo: Spherical coordinate library
simplegeoloc: SimpleGeoLoc simple items with geolocation, near items function
timezoner: Node.js client library for accessing Google Time Zone API.
tzwhere: Determine timezone from lat/long
{"url":"https://www.npmjs.org/browse/keyword/longitude","timestamp":"2014-04-17T00:08:30Z","content_type":null,"content_length":"8846","record_id":"<urn:uuid:ff3235e6-276b-4a5b-aa29-89858f928995>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Sampling: Weighing Costs versus Precision in Providing Taxpayer Guidance

I. Introduction

In the months preceding elections in the United States it is difficult to avoid statistical sampling, as polling projections are everywhere. Only a sample is used to make these projections because it would take too much time and be too expensive to determine how every voter will vote. [1] Statistical sampling has many other uses as well, including serving as evidence in a trial [2] or being used to estimate how much a taxpayer owes the government on their tax return. [3] As with elections, to determine the exact result for a tax return, every item in the population would need to be investigated. As a population gets larger, this gets more time consuming and more expensive, especially when the information is collected by experts, lawyers, and accountants. Furthermore, each additional item collected will not result in a proportionate change in the precision of the estimate, because the precision of an estimate varies inversely with the square root of the sample size. [4]

II. Statistical Sampling in Trials and Usage by the IRS

Statistical sampling has not always been used as evidence in trials or in preparing tax returns. Early courts were skeptical of statistical sampling estimates and did not admit them. [5] However, modern courts have begun to accept sampled evidence in a wide variety of contexts, including mass torts cases. [6] Similarly, the Internal Revenue Service ("IRS") utilized statistical sampling in performing tax audits as early as 1964 [7], but it has taken the IRS more time to provide taxpayers guidance for using statistical sampling in preparing tax returns. [8] Evidence that the IRS is allowing increased use of statistical sampling by taxpayers is that, in 2004, the IRS provided guidance on using statistical sampling to substantiate meal and entertainment expenses that are excepted from the 50% disallowance rule under Code section 274(n) [9], and, in 2007, provided guidance for calculating qualified production activities for the Domestic Manufacturing Deduction. [10]

The IRS has yet to provide similar guidance for calculating the Research and Development Tax Credit ("R&D Tax Credit") [11]; however, recent court decisions have allowed taxpayers to use estimates in calculating their R&D Tax Credits. In Union Carbide and Subsidiaries v. Commissioner, the United States Tax Court accepted estimates based on extrapolations "as a close approximation of all of the qualified research activities." [12] Similarly, the Fifth Circuit in U.S. v. McFerrin held that employee testimony and estimates may be used to substantiate qualified research expenditures, over arguments by the IRS to the contrary. [13] As the IRS has not yet provided guidance to taxpayers for using statistical sampling in calculating the R&D Tax Credit [14], and may do so in the future [15], the R&D Tax Credit provides a good context for the examples that follow.

A major benefit for a taxpayer, or for a party in a trial, who uses statistical sampling is the cost that can be saved by using a sample rather than the entire population. [16] This is especially true when the population is large and when amounts are being calculated by expert witnesses, lawyers, or accountants. There are additional benefits as well.
For example, additional valuable information can be gained by using the resources available to determine a carefully drawn smaller sample or to collect more information on each item in the sample. [17] There may also be drawbacks to using an entire large population: someone recording results for the entire population may get tired or bored enough to start recording information incorrectly. [18]

Even if the cost to calculate a tax deduction or credit approached the value of that deduction or credit, benefits are induced by the presence of the deductions and credits. [19] For example, as the name suggests, the R&D Tax Credit was added to encourage research and development in the United States, and, as part of the American Jobs Creation Act of 2004, the Domestic Manufacturing Deduction was added to encourage increasing the quality of manufacturing and jobs in the United States. [20] A study of the effectiveness of the R&D Credit has shown a positive impact on R&D activity, and "[t]here is significant evidence that nations and states that adopt an R&D tax credit will experience an increase in R&D investments." [21] If the incentive to participate in these activities were cheaper and easier to calculate, it follows that more people would consider using it.

III. Precision vs. Costs to Increase Precision

The R&D Tax Credit is a difficult credit to calculate because it requires intrusive examinations to determine how much of the cost of a particular research project qualifies as a research expense for the credit. [22] For example, qualified research expenses include qualified wages paid to engineers. [23] It may not be difficult to determine how much a company paid its engineers by looking at payroll detail, but it is more difficult to determine how much of an engineer's wages qualify as a research expense. This is because qualified research expenses, as defined within I.R.C. § 41, which outlines how the R&D Tax Credit is calculated, do not include all wages. [24] Even twenty-minute phone conversations with each engineer to determine which wages qualify will add up quickly when you take into account that the engineers could be conducting research instead, along with the costs paid to those conducting the interviews. When a company does extensive research and development and has multiple locations with multiple engineers, it adds up even faster.

The precision of an estimate calculated from a sample varies inversely with the square root of the sample size. [25] Therefore, in the example above, if ten engineers were originally interviewed, in order to double the precision the taxpayer would be required to interview forty engineers. [26] Similarly, to increase the precision of the sample by a factor of ten would require interviewing one thousand engineers, one hundred times the original sample. [27] Adding more numbers to this example: if a sample of ten determines that the mean percentage of time engineers spend doing qualified research is 60%, and you can be 95% sure that the mean of the population falls between 40% and 80%, then to be 95% sure that this amount is between 50% and 70%, one would have to sample forty engineers. [28] To further increase precision so that you can be 95% sure that the percentage is between 59% and 61% would require interviewing four thousand engineers. [29]
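The square-root relationship is easy to check numerically. Below is a minimal sketch (illustrative only; the standard deviation is backed out of the hypothetical ten-engineer, plus-or-minus-20-point example above):

```python
import math

def margin_95(sigma: float, n: int) -> float:
    """Approximate 95% margin of error for a sample mean: 1.96 * sigma / sqrt(n)."""
    return 1.96 * sigma / math.sqrt(n)

# Hypothetical standard deviation chosen so that n = 10 engineers gives the
# +/- 20-point interval (60% mean, 40%..80%) used in the example above.
sigma = 20 * math.sqrt(10) / 1.96

for n in (10, 40, 1000, 4000):
    print(n, round(margin_95(sigma, n), 2))
# 10 -> 20.0 (40%..80%), 40 -> 10.0 (50%..70%),
# 1000 -> 2.0 (ten times the precision of n = 10), 4000 -> 1.0 (59%..61%)
```

Each doubling of precision costs a quadrupling of the sample, which is why the interview bill grows so quickly.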
Although the "longstanding" [30] rule developed in Cohan v. Commissioner is that absolute certainty is not required and that close approximations are acceptable when calculating deductions [31], it would be difficult to argue that such a wide range would be acceptable, especially when it is possible to calculate a more precise number. Using a sample to claim a deduction or credit of $60 that a taxpayer is only 95% sure falls between $40 and $80 does not appear to be a close approximation. However, if it were determined with 95% certainty that the deduction or credit was between $59 and $61, it would be much easier to argue that $60 is a close approximation. Yet if it costs the taxpayer $1 to determine with 95% confidence that the deduction or credit is between $40 and $80, and $400 to determine with 95% confidence that it is between $59 and $61, it is not worth it for the taxpayer to calculate the deduction or credit at all if such a high degree of precision is required.

IV. A Compromise is Needed to Make Statistical Sampling Effective

When there is a range of how much tax liability exists, the IRS will always want the taxpayer to pay more and the taxpayer will always want to pay less. When precision is not very high, this difference may be large. Consider if, instead of the tens of dollars used in the example above, a credit of tens of millions were being calculated. In continuing to provide guidance on the extent to which statistical sampling is acceptable, the IRS should take into account how much can be saved by using statistical sampling. While the IRS has a legitimate concern about requiring precise tax returns, it should realize that the money saved could go elsewhere. Even if a deduction or credit fails to net as much revenue for the government, the presence of the deductions and credits encourages other activities for the benefit of the United States. [32] If it costs the taxpayer more to collect the information needed to calculate a potential benefit, the taxpayer may not participate in the potentially beneficial activity at all. [33]

V. Conclusion

Statistical sampling allows for substantial savings when making conclusions about populations. At the same time, there comes a point when asking for increased precision may cost more than the precision is worth. Tax deductions and credits may be difficult to calculate, but rather than render them worthless to taxpayers or get rid of them completely, statistical sampling should be encouraged when exact calculations would otherwise be too burdensome.

[1] Robert M. Lawless, Jennifer K. Robbenault, & Thomas S. Ulen, Empirical Methods in Law (forthcoming 2010) (manuscript at 188, released to students).
[2] Id. at 208.
[3] Rev. Proc. 2004-29, 2004-20 I.R.B. 918.
[4] Hans Zeisel & David H. Kaye, Sampling, in Prove It with Figures 108-109 (1997).
[5] Lawless, Robbenault, & Ulen, supra note 1, at 208.
[6] Id.
[7] Rev. Proc. 64-4, 1964-1 C.B. 644.
[8] Will Yancey, Sampling for Income Tax and Customs, http://www.willyancey.com/sampling-income-tax.html#cases (last visited Oct. 11, 2009).
[9] Rev. Proc. 2004-29, 2004-20 I.R.B. 918.
[10] Rev. Proc. 2007-35, 2007-23 I.R.B. 1349.
[11] Yancey, supra note 8.
[12] Union Carbide Corp. and Subsidiaries v. Comm'r, 97 T.C.M. (CCH) 1207, 110 (2009).
[13] U.S. v. McFerrin, 570 F.3d 672, 679 (5th Cir. 2009).
[14] Yancey, supra note 8.
[15] Mary Batcher, Statistical Sampling in Tax Filings: New Confirmation from the IRS, Tax Executive (2004), available at http://www.thefreelibrary.com/_/print/PrintArticle.aspx?id=143304208.
[16] Lawless, Robbenault, & Ulen, supra note 1, at 208.
[17] Id. at 191-192.
[18] Mary Batcher, Statistical Sampling: A Potential Win for Business Taxpayers, Tax Executive (2001), http://www.thefreelibrary.com/
[19] Ross Gitell & Edinaldo Tebaldi, Are Research and Development Tax Credits Effective? The Economic Impacts of a R&D Tax Credit in New Hampshire, Public Finance and Management (2008), http://
[20] American Jobs Creation Act of 2004, Pub. L. No. 108-357, 118 Stat. 1418.
[21] Gitell & Tebaldi, supra note 19.
[22] Batcher, supra note 15.
[23] I.R.C. § 41(b)(2)(A)(i) (2008).
[24] Id.
[25] Zeisel & Kaye, supra note 4.
[26] Id.
[27] Id.
[28] Id.
[29] Id.
[30] U.S. v. McFerrin, 570 F.3d at 679.
[31] Cohan v. Comm'r, 39 F.2d 540, 544 (2d Cir. 1930).
[32] Gitell & Tebaldi, supra note 19.
[33] Batcher, supra note 18.
{"url":"http://www.law.illinois.edu/bljournal/post/2009/10/12/Statistical-Sampling-Weighing-Costs-versus-Precision-in-Providing-Taxpayer-Guidance.aspx","timestamp":"2014-04-18T10:35:53Z","content_type":null,"content_length":"57771","record_id":"<urn:uuid:17c3049d-7ba5-4892-b36a-f9da1e7ec6b5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Large Numbers (Page 1 of 5)

The invention of the computer opened up new perspectives into most of the sciences. Unfortunately, computers are machines that have their own language, and that language is different from the ones we use in our day-to-day lives. But have no fear: you can learn this language pretty easily with a little effort, and after completing this step, the barriers in many fields of science can fall one by one.

For example, take a look at large numbers. Today we can execute calculations that exceed by far the dreams of classic mathematicians such as Euler. And all of this is done in just fractions of a second (or even less) by a machine that occupies a small amount of space under your desk.

But guess what? Despite the fact that the arrival of this age gave us some capabilities that seemed improbable years ago, the fall of the former barrier created new limitations. In our case this occurred with the creation of the language that helps us communicate with the computer. For example, one of the best languages, C++, has data types implemented within it, but these are of limited length and precision. Therefore, we encounter the following dilemma: what can we do when we need to make calculations that require more precision? What can we do when a calculation produces a larger value than the maximum amount that can be put into an existing data type?

The answer lies in OOP (Object Oriented Programming). Thus, if you skipped some of the classes while OOP was taught, you won't fully understand this multi-part article series. However, if this is the case, don't hesitate to expand your knowledge. And remember, as always, that Google is your best friend. Keep in mind that the STL will also be mandatory if you want to comprehend the code segments of the article. Either way, the theoretical sections of this article should be of great help to anybody.

Using this method we are able to create our own data type: a specific type that knows nothing about a maximum value limit or precision; the "ideal" data type, if you will. Most current computers are 64-bit, so a built-in signed integer can represent, in the best case, values up to 2^63 - 1 = 9,223,372,036,854,775,807 (we assume a signed number, so only half of the 2^64 range is available; one bit is needed for the sign). That's quite a large number, but what if we need more? Obviously we need to create a type ourselves. During this series I am going to discuss the problems that we will face in creating such a class and the possibilities for solving them.

The article series will comprise three parts. We'll start with the standard communication with the user of the class and the classic add and subtract methods in part one. The second part will discuss multiplication, namely the Karatsuba method. And the saga will end with division and other eventual extensions. I'm going to present the class already created, so you can see not just the problem but the solution as well. The pure C++ code will be attached at the end of every part.

As for the practical use of such a class in real life, I can say the following: in mathematics, cosmology, and even cryptography, mankind's knowledge has evolved exponentially due to the existence of these kinds of libraries and classes that can make precise and efficient calculations of arbitrarily complex arithmetic forms.
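To make the representation concrete before the C++ parts begin, here is a minimal sketch of the digit-vector idea in Python (Python's own ints are already arbitrary precision, so this is purely illustrative of the scheme the series builds in C++):

```python
class BigNum:
    """Toy arbitrary-precision unsigned integer: base-10 digits, least significant first."""

    def __init__(self, s: str):
        self.digits = [int(c) for c in reversed(s)]

    def __add__(self, other: "BigNum") -> "BigNum":
        result = BigNum("0")
        result.digits = []
        carry = 0
        for i in range(max(len(self.digits), len(other.digits))):
            a = self.digits[i] if i < len(self.digits) else 0
            b = other.digits[i] if i < len(other.digits) else 0
            carry, digit = divmod(a + b + carry, 10)  # schoolbook addition with carry
            result.digits.append(digit)
        if carry:
            result.digits.append(carry)
        return result

    def __str__(self) -> str:
        return "".join(str(d) for d in reversed(self.digits))

# A sum that overflows a signed 64-bit integer:
print(BigNum("9223372036854775807") + BigNum("1"))  # 9223372036854775808
```

A production version would store several digits per machine word (e.g. base 10^9) for speed, which is the usual design choice in C++ implementations.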
{"url":"http://www.devarticles.com/c/a/Cplusplus/Large-Numbers/","timestamp":"2014-04-19T09:24:49Z","content_type":null,"content_length":"48141","record_id":"<urn:uuid:3b52dc72-8d93-4df5-b74e-9592ebcb0d3d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Particle projected at an angle to the horizontal.

February 6th 2013, 04:19 PM

I'm sorry that this is a bit of a rambling question. I'm having difficulty understanding how to assign the signs in these problems. The rest of the math I can do, and I'm finding it really frustrating that I'm not understanding the signs on the vectors. I can get to the solutions but I feel like I'm fudging it, and I'd really appreciate it if someone would look over my work and clarify the sign thing for me. Thank you.

The first part of the question, which I can do, is: 'A particle is projected from point O with a speed u at an angle of $\alpha$ above the horizontal and moves freely under gravity. When the particle has moved a horizontal distance x, its height above O is y. Show that:

$y = x\tan\alpha - \dfrac{gx^2}{2u^2\cos^2\alpha}$'

I was able to do this, and I looked it up and found it's an expression of the equation of trajectory. It's the second part of the question that I'm having difficulty with, which is:

'A girl throws a ball from point A at the top of a cliff. The point A is 8 m above a horizontal beach. The ball is projected with a speed of $7\ ms^{-1}$ at an angle of elevation of 45 degrees. By modelling the ball as a particle moving freely under gravity, find the horizontal distance of the ball from A when the ball is 1 m above the beach.'

I can solve this by substituting into the equation given in the first part if I take the displacement y to be -7 m. I also had u as -7, but since this was squared, the sign makes no difference.

$-7 = x\tan(45) - \dfrac{9.8x^2}{(2)(-7)^2\cos^2(45)}$ gives the correct answer.

I also tried solving the same problem without using the equation from the first part, but instead using the vertical motion to find the time and then the horizontal motion to find the distance. Using $s = ut + \dfrac{1}{2}at^2$ for the vertical motion, and taking g (the acceleration a) as positive since it acts downward, I get the correct time if I take u, $7\sin 45$, as negative and the displacement, 7 m, as positive.

$7 = -7\sin(45)\,t + \dfrac{1}{2}(9.8)t^2$ gives the correct time.

One of the things that's confusing me is the displacement: in one equation it's negative, in the other it's positive. The main thing is that it took me forever to get the signs right to get the correct answer. I had the equations correct, but without the right signs it was impossible to get the correct answer. My concern is that I'm not really understanding the signs on the vectors, and I would really appreciate some help understanding how to assign the signs. Thank you.

February 6th 2013, 04:43 PM

Re: Particle projected at an angle to the horizontal.

The problem is that you haven't specifically set up a "coordinate system". That is, you have not stated exactly where x = 0, y = 0 will be and which directions are positive.

"'A girl throws a ball from point A at the top of a cliff. The point A is 8 m above a horizontal beach. The ball is projected with a speed of $7\ ms^{-1}$ at an angle of elevation of 45 degrees. By modelling the ball as a particle moving freely under gravity, find the horizontal distance of the ball from A when the ball is 1 m above the beach. I can solve this by substituting into the equation given in the first part if I take the displacement y to be -7 m'"

Do you mean -8 m? There is no "7" distance mentioned here. If so, then you are taking y = 0 to be at the "horizontal beach" and, though it strikes me as strange, y increasing downward. In this case "1 m above the beach" would be y = -1.
What the signs are is pretty much your choice. It depends upon your choice of a coordinate system.

February 6th 2013, 05:47 PM

Re: Particle projected at an angle to the horizontal.

Thank you for your reply. Yes, I think my difficulty is exactly as you have stated: not setting up a 'coordinate system' and defining where x = 0 and y = 0 and which directions are positive. I have tried very hard to do this and am finding it very confusing. I know that when solving these problems the directions of all the vectors involved must be consistent, and it is this that I'm having difficulty with. In this case, if I take the horizontal through A as being y = 0, then when the ball is 1 m above the beach, which is 8 m below A, that's a displacement of 7 m downwards, so s, the displacement, is -7. Is that correct? Substituting this into the given equation gives me the correct solution, so I assume it is. However, I had to work backwards from the answer to figure this out, and so I don't know if I've got the signs on other vectors incorrect instead. What's not helping me figure it out is that when I work the problem the other way, in order to get the correct solution for the time I have to take the displacement as positive 7 m, as I have shown in my first post. That leaves me at a loss to figure out whether I've got the sign on the displacement wrong or whether the sign on one or more of the other vectors is incorrect. I have spent a very long time looking at this problem, and the fact that the difficulty seems only to do with getting the correct signs on the vectors is driving me crazy. Thank you for your help.

February 7th 2013, 04:44 AM

Re: Particle projected at an angle to the horizontal.

Hello HallsofIvy, I understand it now, thank you. In this equation, $y = x\tan\alpha - \dfrac{gx^2}{2u^2\cos^2\alpha}$, I know that y is the height above y = 0, and that in the second part of the question 1 m above the beach is 7 m below y = 0, so the vertical displacement is -7. Also, with this equation, $7 = -7\sin(45)\,t + \dfrac{1}{2}(9.8)t^2$, I have taken down as positive, so now the displacement is 7 m and the initial velocity is negative. That all makes sense now; sorry for being so dense. Thank you for your time.
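For anyone who wants to check that the two sign conventions really agree, here is a quick numerical sketch (illustrative only; u = 7 m/s, g = 9.8 m/s^2, launch angle 45 degrees, with A as the origin):

```python
import math

u, g, angle = 7.0, 9.8, math.radians(45)

# Method 1: trajectory equation with up as positive, so y = -7 m at the target:
#   y = x*tan(a) - g*x^2 / (2*u^2*cos(a)^2)  ->  A*x^2 + B*x + C = 0
A = g / (2 * u**2 * math.cos(angle)**2)   # 0.2
B = -math.tan(angle)                       # -1
C = -7.0                                   # y: 7 m below the launch point
x1 = (-B + math.sqrt(B**2 - 4 * A * C)) / (2 * A)

# Method 2: vertical motion with down as positive (7 = -u*sin(a)*t + 0.5*g*t^2),
# then horizontal motion x = u*cos(a)*t:
a2, b2, c2 = 0.5 * g, -u * math.sin(angle), -7.0
t = (-b2 + math.sqrt(b2**2 - 4 * a2 * c2)) / (2 * a2)
x2 = u * math.cos(angle) * t

print(round(x1, 2), round(x2, 2))  # both ~8.92 m: the sign conventions agree
```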
{"url":"http://mathhelpforum.com/math-topics/212695-particle-projected-angle-horizontal-print.html","timestamp":"2014-04-18T22:07:08Z","content_type":null,"content_length":"11770","record_id":"<urn:uuid:7f81417c-feba-4a7c-8585-5703be0263bc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Woburn Algebra 2 Tutor

Find a Woburn Algebra 2 Tutor

...Calculus is the study of rates of change, and has numerous and varied applications from business, to physics, to medicine. The complexity of the topics involved, however, requires that your grasp of mathematical concepts and function properties is strong. I have helped numerous students master both the foundations and the specific skills taught in a variety of calculus courses.
23 Subjects: including algebra 2, physics, calculus, statistics

...Looking forward to hearing from you! Cheers, Susie. I have played violin since I was 5 years old. I was trained with the Suzuki Method and completed all levels of Suzuki by age 10.
11 Subjects: including algebra 2, Spanish, accounting, ESL/ESOL

I am a motivated tutor who strives to make learning easy and fun for everyone. My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed, not only in their current class but in the future as well.
16 Subjects: including algebra 2, French, elementary math, algebra 1

...I am the father of 3 teens, and have been a soccer coach, youth group leader, and scouting leader. I am also an engineering and business professional with BS and MS degrees. I tutor Algebra, Geometry, Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry, Calculus, and Physics.
15 Subjects: including algebra 2, calculus, physics, statistics

...I am fluent in Mandarin and Cantonese. I took Chinese classes and obtained fairly good grades throughout elementary school and high school. For example, in the National College Entrance Exam, I obtained a score in Chinese above 98% of all students in the Guangdong province.
16 Subjects: including algebra 2, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/woburn_algebra_2_tutors.php","timestamp":"2014-04-19T20:13:12Z","content_type":null,"content_length":"23813","record_id":"<urn:uuid:ebf63bfb-26c8-4b57-9fe4-4efb84544e07>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial derivative

January 26th 2011, 06:30 AM

Having a bit of a problem with this partial derivative. I know how to do the problem if the x and y weren't in front, as you could apply the product rule and then take the partial derivative; however, I'm having problems with what to do here. The problem: for

$f(x,y)=x\cos(x)\cosh(y)+y\sin(x)\sinh(y),$

determine $\dfrac{\partial f}{\partial x}$, $\dfrac{\partial f}{\partial y}$, $\dfrac{\partial^2 f}{\partial x^2}$, $\dfrac{\partial^2 f}{\partial y^2}$ and $\dfrac{\partial^2 f}{\partial x\,\partial y}$.

Many thanks in advance.

Re: Partial derivative

This problem may have arisen from a course in DEs, but it's really a calculus question. I'll do the first-order derivatives; see if you can finish with all the second-order derivatives. We have

$f(x,y)=x\cos(x)\cosh(y)+y\sin(x)\sinh(y).$

Then,

$\dfrac{\partial f}{\partial x}=(1\cdot \cos(x)-x\sin(x))\cosh(y)+y\cos(x)\sinh(y),$

and

$\dfrac{\partial f}{\partial y}=x\cos(x)\sinh(y)+\sin(x)(1\cdot\sinh(y)+y\cosh(y)).$

So you can see that I just used the usual product rule from Calc I to differentiate the products.

Re: Partial derivative

(quoting the solution above) Fantastic, I can see exactly where I went wrong, thanks very much. Just going out and will work on the second derivatives later; thanks once more.

Re: Partial derivative

You're welcome. Let me know if you have any more difficulties.

Re: Partial derivative

Applied it to other questions and I'm getting them spot on now. However, there is a second part where it says: without further explicit differentiation, determine

$\dfrac{\partial^4 f}{\partial x^4}-\dfrac{\partial^4 f}{\partial y^4}.$

I've calculated the second partial derivatives, etc.; just wondering if there is any trick to it, as it states "without any further explicit differentiation". Any help will be most appreciated.

Re: Partial derivative

Is this the same f as in the OP? If you've calculated $f_{xx}$ and $f_{yy}$ you'll notice that $f_{xx} + f_{yy} = 0$. So how does that help with $f_{xxxx} - f_{yyyy} = 0$?
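A quick symbolic check of the hint in that last reply (an illustrative sketch using sympy, not part of the original thread; f is the function from the thread):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x*sp.cos(x)*sp.cosh(y) + y*sp.sin(x)*sp.sinh(y)

# f is harmonic: f_xx + f_yy = 0
print(sp.simplify(sp.diff(f, x, 2) + sp.diff(f, y, 2)))   # 0

# Since mixed partials commute, f_xxxx - f_yyyy factors as
# (d_xx - d_yy)(d_xx + d_yy) f, which is 0 with no further differentiation:
print(sp.simplify(sp.diff(f, x, 4) - sp.diff(f, y, 4)))   # 0
```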
{"url":"http://mathhelpforum.com/calculus/169400-partial-derivative.html","timestamp":"2014-04-18T10:28:11Z","content_type":null,"content_length":"59263","record_id":"<urn:uuid:7cf3e11c-2d46-4ab4-8986-6c26cd8af370>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
Inequalities trick

Re: Inequalities trick [#permalink] 11 Mar 2011, 18:57 (VeritasPrepKarishma)

vjsharma25 wrote: "I understand the concept but not the starting point of the graph. How do you decide whether the graph is a sine or a cosine waveform? Meaning, the graph starts from the +ve Y-axis for four values and from the -ve Y-axis for three values. What if the equation you mentioned is (x+2)(x-1)(x-7) < 0; will the last two ranges be excluded, or will the graph also change?"

Ok, look at this inequality: (x+2)(x-1)(x-7) < 0. Can I say the left hand side expression will always be positive for values greater than 7? (x+2) will be positive, (x-1) will be positive and (x-7) will also be positive... so in the rightmost region, i.e. x > 7, all three factors are positive. The expression will be positive when x > 7, negative when 1 < x < 7, positive when -2 < x < 1 and negative when x < -2. We need the region where the expression is less than 0, i.e. negative. So either 1 < x < 7 or x < -2.

Now let me add another factor: (x+8)(x+2)(x-1)(x-7). Can I still say that the entire expression is positive in the rightmost region, i.e. x > 7, because each one of the four factors is positive? Yes.

So basically, your rightmost region is always positive. You go from there and assign + and - signs to the regions. Your starting point is the rightmost region.

Note: Make sure that the factors are of the form (ax - b), not (b - ax)... e.g. (x+2)(x-1)(7-x) < 0. Convert this to (x+2)(x-1)(x-7) > 0 (multiply both sides by '-1'). Now solve in the usual way: assign '+' to the rightmost region and then alternate with '-'. Since you are looking for positive values of the expression, every region where you put a '+' will be a region where the expression is greater than 0.

Re: Inequalities trick [#permalink] 11 Mar 2011, 20:48 (vjsharma25)

VeritasPrepKarishma wrote: "So basically, your rightmost region is always positive. You go from there and assign + and - signs to the regions. Your starting point is the rightmost region."

I think this is the key point, which has cleared up my doubt about this method. Thanks Karishma for showing patience in resolving this for me.

Re: Inequalities trick [#permalink] 26 May 2011, 09:42 (chethanjs)

mrinal2100 wrote: "If the = sign is included with <, then <= will be there in the solution. For example, for (x+2)(x-1)(x-7)(x-4) <= 0 the solution will be -2 <= x <= 1 or 4 <= x <= 7. In the case when factors are divided, only the numerator factors will carry the = sign: for (x+2)(x-1)/(x-4)(x-7) <= 0 the solution will be -2 <= x <= 1 or 4 < x < 7. We can't make it 4 <= x <= 7, as that would make the solution infinite. Correct me if I am wrong."

Can you please tell me why the solution gets infinite for 4 <= x <= 7?
Re: Inequalities trick [#permalink] 26 May 2011, 19:55 (fluke, Math Forum Moderator)

(x-4)(x-7) is in the denominator. Making x = 4 or 7 would make the denominator 0 and the entire function undefined. Thus, the range of x can't include either 4 or 7; 4 <= x <= 7 would be wrong. 4 < x < 7 is correct because now we have removed the "=" sign.

Re: Inequalities trick [#permalink] 02 Jun 2011, 10:38

(quoting vjsharma25's question above about the starting sign of the graph) If the equation is (x+2)(x-1)(x-7) < 0, then the solution includes 1 < x < 7, because in this range the outcome of the expression is negative, which is what is required.

Re: Inequalities trick [#permalink] 07 Aug 2011, 06:24 (Asher)

gurpreetsingh wrote: "ulm wrote: 'In addition: if we have something like (x-a)^2(x-b), we don't need to change the sign when we jump over "a".' Yes, even powers won't contribute to the inequality sign. But be wary of the root value x = a."

This way of solving inequalities actually makes it so much easier. Thanks gurpreetsingh and Karishma. However, I am confused about how to solve inequalities such as (x-a)^2(x-b), and also ones with root values. Could someone please explain?

Re: Inequalities trick [#permalink] 08 Aug 2011, 10:59 (VeritasPrepKarishma)

When you have (x-a)^2(x-b) < 0, the squared term is ignored because it is always positive and hence doesn't affect the sign of the entire left side. For the left hand side to be negative, i.e. < 0, (x - b) should be negative, i.e. x - b < 0 or x < b. Similarly, for (x-a)^2(x-b) > 0, x > b.

As for roots, you have to keep in mind that given $\sqrt{x}$, x cannot be negative. $\sqrt{x} < 10$ implies $0 < \sqrt{x} < 10$. Squaring, $0 < x < 100$. Root questions are specific. You have to be careful.
If you have a particular question in mind, send it.

Re: Inequalities trick [#permalink] 08 Aug 2011, 21:22 (Asher)

Thanks Karishma for the explanation. Hope you wouldn't mind clarifying a few more doubts. Firstly, in the above case, since x > b, could we say that everything will be positive? Would the graph look something like this: positive .. b .. positive .. a .. positive?

On the other hand, if (x-a)^2(x-b) < 0 and x < b, then (x-a)^2 would be positive, and for (x-b), if x < b the left side would be negative. Would the graph look something like this: negative .. b .. positive .. a .. positive? Am I right? If it is not too much trouble, could you please show the graphical representation?

Problems with $\sqrt{x}$... this is all I could find (googled, actually):
1. $\sqrt{-x+4} \leq \sqrt{x}$
2. $x^{\sqrt{x}} \leq (\sqrt{x})^x$
{P.S.: I tried to insert the graphical representation that I came up with, but I am a bit technically challenged in this area, it seems.}

Re: Inequalities trick [#permalink] 09 Aug 2011, 02:28 (VeritasPrepKarishma)

So when you have (x-a)^2(x-b) < 0, you ignore x = a and just plot x = b. It is positive in the rightmost region and negative on the left. So the graph looks like this: negative ... b ... positive.

Squared terms are ignored. You do not put them in the graph. They are always positive, so they do not change the sign of the expression. Take (x-4)^2(x-9)(x+11) < 0. We do not plot x = 4 here, only x = -11 and x = 9. We start with the rightmost section as positive, so it looks something like this: positive ... -11 ... negative ... 9 ... positive. Since we need the region where the expression is negative, we get -11 < x < 9. Basically, the squared term is like a positive number in that it doesn't affect the sign of the expression.

I would be happy to solve inequalities questions related to roots, but please put them in a separate post and PM the link to me. That way, everybody can try them.
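A worked sketch of problem 1 above, applying the domain-first rule described in this thread (an illustrative addition, not a post from the thread): for $\sqrt{-x+4} \leq \sqrt{x}$, both sides must be defined, so $4 - x \geq 0$ and $x \geq 0$, i.e. $0 \leq x \leq 4$. Both sides are then non-negative, so squaring preserves the inequality: $4 - x \leq x$, i.e. $x \geq 2$. Combining with the domain gives $2 \leq x \leq 4$.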
Re: Inequalities trick [#permalink] 09 Aug 2011, 07:16 (Asher)

(quoting Karishma's reply above) Thanks a ton Karishma, I really appreciate it. I will PM you after posting the roots questions.

Re: Inequalities trick [#permalink] 10 Aug 2011, 06:00 (sushantarora)

Hey, can you please tell me the solution for this question: a car dealership sells only sports cars and luxury cars and has at least some of each type of car in stock at all times. If exactly 1/7 of the sports cars and 1/2 of the luxury cars have sunroofs, and there are exactly 42 cars on the lot, what is the smallest number of cars that could have sunroofs? (Answer: 11)

Re: Inequalities trick [#permalink] 10 Aug 2011, 16:01 (krishp84)

WoW - this is a cool thread with so many things on inequalities... I have compiled it together with some of my own ideas... It should help.

CORE CONCEPT
@gurpreetsingh - Suppose you have the inequality f(x) = (x-a)(x-b)(x-c)(x-d) < 0. Arrange the NUMBERS in ascending order from left to right, a < b < c < d, and draw the curve starting from + on the right. Now, if f(x) < 0, consider the portions of the curve having "-" inside; if f(x) > 0, consider the portions having "+"; the combined solution will be the final solution.

So for f(x) < 0 the answer is: (a < x < b), (c < x < d); and for f(x) > 0 the answer is: (x < a), (b < x < c), (d < x).

If f(x) has three factors then the graph will have - + - +.
If f(x) has four factors then the graph will have + - + - +.

If you cannot figure out how and why, just remember it. Try to see that the function will have number of roots = number of factors, and each time the graph will touch the x axis.
For the highest root d, if x > d then the whole f(x) > 0, and after every interval between the roots the sign changes alternately.

Make sure that the factors are of the form (ax - b), not (b - ax). Example: (x+2)(x-1)(7-x) < 0. Convert this to (x+2)(x-1)(x-7) > 0 (multiply both sides by '-1').

Variation - ODD/EVEN POWERS
@ulm/Karishma - if we have even powers like (x-a)^2(x-b), we don't need to change the sign when we jump over "a". This is the same as (x-b). We can ignore squares BUT SHOULD consider ODD powers. Examples:
2.a (x-a)^3(x-b) < 0 is the same as (x-a)(x-b) < 0.
2.b (x-a)(x-b)/[(x-c)(x-d)] < 0, i.e. (x-a)(x-b)(x-c)^(-1)(x-d)^(-1) < 0, is the same as (x-a)(x-b)(x-c)(x-d) < 0.

Variation - <= in FRACTIONS
@mrinal2100 - if the = sign is included with <, then <= will appear in the solution. For (x+2)(x-1)(x-7)(x-4) <= 0 the solution will be -2 <= x <= 1 or 4 <= x <= 7. BUT if it is a fraction, the denominator factors in the solution will not carry the = sign. For (x+2)(x-1)/[(x-4)(x-7)] <= 0 the solution will be -2 <= x <= 1 or 4 < x < 7. We can't allow x = 4 or x = 7, since either would make the denominator 0 and the expression undefined.

Variation - ROOTS
@Karishma - as for roots, you have to keep in mind that given $\sqrt{x}$, x cannot be negative. $\sqrt{x} < 10$ implies $0 < \sqrt{x} < 10$. Squaring, $0 < x < 100$. Root questions are specific; you have to be careful. Refer - inequalities-and-roots-118619.html#p959939

Some more useful tips for ROOTS... I am too lazy to consolidate. @gmat1220 - once an algebra teacher told me "signs alternate between the roots." I said whatever, and now I know why. I will save this for future reference...

Please add anything that you feel will help. Anyone want to add ABSOLUTE VALUES? That would be a value add to this post.

Labor cost for typing this post >= Labor cost for pushing the Kudos button

Re: Inequalities trick [#permalink] 11 Aug 2011, 19:33 (Asher)

(quoting sushantarora's car question above) Interesting question, but I guess it would be better to post it as a new topic. This would ensure that more people see it and answer it. But then again, that's just my suggestion.

Re: Inequalities trick [#permalink] 11 Aug 2011, 21:57 (VeritasPrepKarishma)

Please put questions in new posts. Put it in the same post only if it is totally related or a variation of the question that we are discussing.

Now for the solution: There are 42 cars on the lot.
1/7 of sports cars and 1/2 of luxury cars have sunroofs. This means that 1/7 of the number of sports cars and 1/2 of the number of luxury cars should be integers (you cannot have 1.5 cars with sunroofs, right?). We want to minimize the sunroofs. Since 1/2 of luxury cars have sunroofs and only 1/7 of sports cars have them, it is good to have fewer luxury cars and more sports cars. Best would be to have all sports cars, but the question says there are some of each kind at all times. So let's say there are 2 luxury cars (since 1/2 of them should be an integer value). But 1/7 of 40 (the rest of the cars are sports cars) is not an integer. Let's instead look for a multiple of 7 less than 42. The largest multiple of 7 less than 42 is 35, so we could have 35 sports cars. But then 1/2 of 7 (since 42 - 35 = 7 are luxury cars) is not an integer. The next smaller multiple of 7 is 28. This works: 1/2 of 14 (since 42 - 28 = 14 are luxury cars) is 7. So we can have 14 luxury cars and 28 sports cars, which is the maximum number of sports cars we can have. 1/7 of 28 sports cars = 4 cars have sunroofs; 1/2 of 14 luxury cars = 7 cars have sunroofs. So at least 11 cars will have sunroofs.

Re: Inequalities trick [#permalink] 19 Jan 2012, 10:46 (Retired Moderator)

Is there any case in which we should start with the positive sign in the diagram? IMO, no. Thanks!

Re: Inequalities trick [#permalink] 06 May 2012, 23:15 (Edvento)

gurpreetsingh wrote: "I learnt this trick while I was in school and yesterday, while solving one question, I recalled it. It's good if you use it 1-2 times to get used to it. Suppose you have the inequality f(x) = (x-a)(x-b)(x-c)(x-d) < 0. Just arrange them in order as shown in the picture and draw the curve starting from + on the right. Now if f(x) < 0, consider the portions of the curve having "-" inside, and if f(x) > 0, consider the portions having "+"; the combined solution will be the final solution. Don't forget to arrange them in ascending order from left to right: a < b < c < d. So for f(x) < 0 the answer is (a < x < b), (c < x < d), and for f(x) > 0 the answer is (x < a), (b < x < c), (d < x). If f(x) has three factors then the graph will have - + - +. If f(x) has four factors then the graph will have + - + - +. If you cannot figure out how and why, just remember it. Try to see that the function will have number of roots = number of factors and each time the graph will touch the x axis. For the highest root d, if x > d then the whole f(x) > 0, and after every interval between the roots the sign changes alternately."
Hi gurpreetsingh, this is really great!! By the way, for further reference, this method is called the 'wavy curve method'.

Re: Inequalities trick [#permalink] 22 Jul 2012, 02:03 (Stiv)

VeritasPrepKarishma wrote: "mrinal2100: Kudos to you for excellent thinking!"

Correct me if I'm wrong. If the lower part of the expression $\frac{(x+2)(x-1)}{(x-4)(x-7)}$ had $4\leq x \leq 7$, then the lower part would be equal to zero, thus making it impossible to calculate the whole expression.

Re: Inequalities trick [#permalink] 23 Jul 2012, 02:13 (VeritasPrepKarishma)

x cannot be equal to 4 or 7, because if x = 4 or x = 7, the denominator will be 0 and the expression will not be defined.

Re: Inequalities trick [#permalink] 25 Jul 2012, 02:05 (pavanpuneet)

Hi Karishma, just for my reference: say the expression was (x+2)(x-1)/[(x-4)(x-7)] and the question was for what values of x this expression is > 0. Then the roots will be -2, 1, 4, 7, and by placing them on the number line and making the extreme right positive... ----(-2)----(1)----(4)----(7)---- we get x > 7, 1 < x < 4 and x < -2. Please confirm. However, if it were >= 0, then x > 7, 1 <= x < 4 and x <= -2, given that the denominator cannot be zero. Please confirm.

Re: Inequalities trick [#permalink] 25 Jul 2012, 21:21 (VeritasPrepKarishma)

Yes, you are right in both cases. Also, if you want to verify that the range you have got is correct, just plug in some values to see. Put x = 0, the expression is negative. Put x = 2, the expression is positive.
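The wavy-curve bookkeeping described throughout this thread is mechanical enough to automate. Here is a small illustrative sketch (not a post from the thread) that builds the sign chart for a product of linear factors, ignoring even powers as discussed above:

```python
def sign_chart(roots_with_powers):
    """roots_with_powers: list of (root, power) for factors (x - root)**power.
    Returns [(interval, sign), ...] left to right, using the wavy-curve rule:
    the rightmost region is '+', and the sign flips only at roots of odd power."""
    pts = sorted(set(r for r, _ in roots_with_powers))
    odd = {r for r, p in roots_with_powers if p % 2 == 1}
    # Walk from the rightmost region leftward, starting with '+'.
    signs = ['+']
    for r in reversed(pts):
        signs.append(signs[-1] if r not in odd else ('-' if signs[-1] == '+' else '+'))
    signs.reverse()
    bounds = ['-inf'] + [str(r) for r in pts] + ['+inf']
    return [(f"({bounds[i]}, {bounds[i+1]})", signs[i]) for i in range(len(signs))]

# Karishma's example: (x-4)^2 (x-9) (x+11) < 0.
# The negative region is (-11, 9), split at x = 4 where the expression is 0.
for interval, s in sign_chart([(4, 2), (9, 1), (-11, 1)]):
    print(interval, s)
```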
{"url":"http://gmatclub.com/forum/inequalities-trick-91482-20.html","timestamp":"2014-04-20T18:27:02Z","content_type":null,"content_length":"219788","record_id":"<urn:uuid:afef2492-5a47-4fdf-b089-803836cd6339>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Electron. J. Diff. Eqns., Vol. 2005(2005), No. 81, pp. 1-17.

Schouten tensor equations in conformal geometry with prescribed boundary metric

Oliver C. Schnürer

Abstract:
We deform the metric conformally on a manifold with boundary. This induces a deformation of the Schouten tensor. We fix the metric at the boundary and realize a prescribed value for the product of the eigenvalues of the Schouten tensor in the interior, provided that there exists a subsolution. This problem reduces to a Monge-Ampère equation with gradient terms. The main issue is to obtain a priori estimates for the second derivatives near the boundary.

Submitted March 15, 2004. Published July 15, 2005.
Math Subject Classifications: 53A30, 35J25, 58J32.
Key Words: Schouten tensor; fully nonlinear equation; conformal geometry; Dirichlet boundary value problem.

Show me the PDF file (301K), TEX file, and other files for this article.

Oliver C. Schnürer
Freie Universität Berlin
Arnimallee 2-6, 14195 Berlin, Germany
email: Oliver.Schnuerer@math.fu-berlin.de
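Background note (standard definitions, not taken from the paper): on an n-dimensional Riemannian manifold (M, g), the Schouten tensor is

$$A_g = \frac{1}{n-2}\Big(\operatorname{Ric}_g - \frac{R_g}{2(n-1)}\, g\Big),$$

where $\operatorname{Ric}_g$ is the Ricci curvature and $R_g$ the scalar curvature. Prescribing the product of the eigenvalues of $A_g$ is the determinant ($\sigma_n$) case of the $\sigma_k$-Yamabe family of problems, which is why the resulting equation is of Monge-Ampère type.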
{"url":"http://ejde.math.txstate.edu/Volumes/2005/81/abstr.html","timestamp":"2014-04-16T13:24:11Z","content_type":null,"content_length":"1756","record_id":"<urn:uuid:e5df760d-1b16-48a3-8173-1e23cba76141>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Volatility Regimes: Part 2

May 5, 2013 | From Guinness to GARCH | Adam Duncan, January 2013
Also available on R-bloggers.com

Strategy Implications

In this part of the volatility regimes analysis, we'll use the regime identification framework established in Part 1 to draw conclusions about which strategies work best in each regime. That should prove useful to us and goes a long way toward answering the question, "What strategies should I be pursuing right now?"

The first thing we need to do is assemble some strategies. We'll use rules-based strategies as our candidates. There are many such strategies available; almost every investment bank has a whole suite of these products available for users to access. The strategies run the gamut from carry to momentum to relative value. People who design hedge fund replication strategies have boiled down the money-making process into basically 5 broad categories:

1. Momentum
2. Carry
3. Value
4. Volatility
5. Multi-strat / Alpha

Rules-based strategies have been designed to pursue each of these, some more complicated than others, but all designed to operate within a pre-defined set of rules and procedures. This is good because it eliminates concerns about style drift and other considerations which might confound our conclusions about how various strategies perform in different states of the world. Additionally, the investible world is usually broken down into asset classes that look something like:

1. Equities
2. Fixed Income
3. Commodities
4. FX
5. Emerging Markets

Some strategies operate across all asset classes simultaneously. With 5 strategy types and 5 asset classes, we are well armed to put together a collection of strategies that we can test across the possible states of the world as defined by our volatility regime framework. I have familiarity with rules-based strategies designed by JP Morgan and Credit Suisse. Other banks have extensive collections as well. I'll limit my choices to those I am familiar with. All of this data is available via Bloomberg (tickers are listed). Here are the strategies (including the asset class) that we'll examine:

• Momentum - Equity - AIJPMEUU (USD)
• Momentum - FX - AIJPMF1U (USD)
• Momentum - Commodity (Energy) - AIJPMCEU (USD)
• Momentum - Fixed Income (short dated) - AIJPMMUU
• Carry - FX (G10) - AIJPCF1U
• Carry - Fixed Income (2yr) - AIJPCB1U
• Carry - Commodity - AIJPCC1U
• Carry - All - GCSCS2UE
• Volatility - Equity (Imp vs. Realized) - AIJPSV1U
• Volatility - FX - CSVILEUS (long only)
• Volatility - Equities (CS RVIX) - CSEARVIX
• Value - Emerging Markets (bonds) - EMFXSEUS
• Value - Equities (CS HOLT RAII) - RAIIHRVU
• Value - Commodities (CS GAINS) - CSGADLSE

That's a reasonably comprehensive basket of strategy types and asset classes that should allow us to draw some insight. You can always backtest your own favorite strategy and analyze it across the various regimes. But for now, these readily available strategies should suffice to give us direction.

I've imported data starting in 1994. For almost all of the indices, this time period encapsulates all the available data. Note, however, that our first volatility information from Part 1 doesn't start until 28-Dec-1998, so the analysis will have an effective start date of 28-Dec-1998. Some of the time series are disappointingly short. We may have to disqualify them on the basis of not crossing enough regimes. But for now, we'll leave everything in and plow ahead.
Regime snippets shorter than 20 days are discarded in the analysis. Anything in the tables that shows up as NA or NaN is something that was just too short to provide meaningful insight. Returns and standard deviations are all annualized. The first thing we'll do is base all the indices to 100 so we can see the proper evolution of each index. Each index will grow from 100 at a rate driven by its daily log returns. Then, we'll examine the returns of each index during each of the identified volatility regimes. From there, we can decide which strategies do best and when. Here are the unconditional returns for each strategy (decreasing order by Information Ratio):

         ann.return ann.stddev info.ratio  strat.type
CSEARVIX   0.110052    0.05735    1.91881  VIX Vol RV
AIJPCC1U   0.064988    0.03595    1.80772  Commodity Carry
CSGADLSE   0.162870    0.15399    1.05770  Commodities Value
RAIIHRVU   0.081325    0.07776    1.04586  Equities Value
AIJPMMUU   0.047916    0.06010    0.79732  Fixed Inc Momentum
AIJPCB1U   0.055611    0.07073    0.78628  Fixed Inc Carry
GCSCS2UE   0.068753    0.10001    0.68748  All Carry
AIJPSV1U   0.022259    0.06344    0.35088  Imp/Real Volatility
EMFXSEUS   0.018915    0.05748    0.32906  EM Bonds Value
AIJPCF1U   0.030464    0.11183    0.27241  FX Carry
AIJPMF1U   0.012083    0.10461    0.11550  FX Momentum
AIJPMEUU  -0.006767    0.16454   -0.04113  Equity Momentum
AIJPMCEU  -0.045902    0.28178   -0.16290  Commodity Momentum
CSVILEUS  -0.014686    0.06444   -0.22789  Long Only FX Vol

Ok. Let's get down to the hard work of calculating all the return snippets for each of our possible states. Recall that there are 9 possible states, though our sample only has 8 of the possible 9 present. Recall also that we have 14 different indices to run across all of the possible states. Great care has to be taken to calculate, track, and assemble the return episodes by state and strategy. After some difficult calculation and assembly, we can have a look at the information ratios by strategy, by regime. (Recall that we had no observations for "FL" in our sample. The "FM" regime also did not have any episodes longer than our 20 day minimum constraint. Annualizing returns from very short episodes is misleading. So, those 2 entries in our table are blank.)

Regime Prescriptions

Let's focus in on the current regime, "SL", and see what the data tells us about the different strategies. First, we might want to ask some questions about the current regime, like: When did it start? How long do regimes like this usually last? What does the model say about expectations for how long this regime will last? The current regime ("Steep/Low" or "S/L") began on October 11, 2012. As of January 11th, the time the data set was last updated, the current regime was 67 days old. The plot here shows the number of "SL" occurrences in the sample by length of each occurrence. So, we have seen 1 day "SL" regimes 23 times (those are excluded in the broader analysis). Similarly, we have seen an "SL" regime of 81 days length exactly 1 time in the sample. The current regime is getting a bit long in the tooth with respect to most occurrences observed in the data. Indeed, we have only seen 1 longer "SL" regime (the 81 day episode just mentioned). Recall the Total Length of Stay estimates from section 2.3 in Part 1. The total number of days we expect to spend in "SL" over the next year was estimated at 43 days. We have already exceeded that estimate by 24 days as of January 11th and, as of today, have exceeded even the longest "SL" regime observed in the sample.
The upper limit of the 97.5% quantile for the SL total length of stay estimate is 66 days. Here are the information ratios for each of the strategies, conditional on being in an "SL" regime (full sample, 1998-present):

   ticker     metric  strategy.type
1  AIJPSV1U   1.0704  Imp/Real Volatility
2  AIJPCC1U   0.9854  Commodity Carry
3  RAIIHRVU   0.6190  Equities Value
4  GCSCS2UE   0.5380  All Carry
5  AIJPCF1U   0.5341  FX Carry
6  AIJPMMUU   0.0943  Fixed Inc Momentum
7  EMFXSEUS   0.0048  EM Bonds Value
8  AIJPMEUU  -0.0548  Equity Momentum
9  AIJPMCEU  -0.1307  Commodity Momentum
10 AIJPMF1U  -0.3470  FX Momentum
11 AIJPCB1U  -0.7766  Fixed Inc Carry
12 CSVILEUS       NA  Long Only FX Vol
13 CSEARVIX       NA  VIX Vol RV
14 CSGADLSE       NA  Commodities Value

We can highlight the performance of the best performing "SL" strategy, Equity Implied vs. Realized Vol, only during "SL" regime episodes, as shown here. It's also clear that Implied vs. Realized Vol does well in other regimes besides "SL". Instead of ranking strategies by regime, we might also want to rank regimes by strategy. Here is such a list. That is, for each strategy, how do the regimes stack up in order of Information Ratio. Now is also a good time to compare the unconditional performance of a given strategy to the array of conditional performances by regime. For example, Imp/Realized Vol has an "SL" conditional information ratio of 1.07, an "NL" IR of 1.12, and an "SH" IR of 2.13. Compare that to an unconditional (i.e. "always on") IR of .35. It's not a panacea, however. For example, Commodity Carry as a strategy is not improved by conditioning on volatility regime. Here is a view of the strategy performances shown 2 ways: by regime and by strategy.

Recall the 1-month transition probability matrix (shown again here) we estimated in part 1. The highest 1 month transition probabilities are (in order): "NL", "SL", and "NM". Meaning, those are the 3 most likely transitions from the current state. We are most likely to either 1) enter "NL", 2) stay where we are, or 3) enter "NM". If we transition to "NL", what will the top strategies be and how do they differ from the current best strategies?

      FH    FM    NH    NL    NM    SH    SL    SM
FH 0.165 0.002 0.368 0.016 0.082 0.210 0.016 0.141
FM 0.066 0.010 0.157 0.098 0.259 0.121 0.072 0.218
NH 0.055 0.003 0.306 0.035 0.137 0.228 0.031 0.205
NL 0.004 0.004 0.028 0.358 0.174 0.038 0.266 0.128
NM 0.015 0.008 0.094 0.138 0.288 0.106 0.103 0.247
SH 0.033 0.004 0.230 0.058 0.182 0.204 0.049 0.240
SL 0.003 0.004 0.028 0.361 0.161 0.038 0.279 0.126
SM 0.016 0.007 0.120 0.115 0.256 0.134 0.091 0.261

A transition to "NL" suggests the following strategies:

   ticker     metric  strategy.type
1  AIJPSV1U   1.1193  Imp/Real Volatility
2  AIJPMMUU   0.9568  Fixed Inc Momentum
3  AIJPCC1U   0.7648  Commodity Carry
4  RAIIHRVU   0.7219  Equities Value
5  AIJPMF1U   0.6084  FX Momentum
6  AIJPCF1U   0.4046  FX Carry
7  EMFXSEUS   0.3365  EM Bonds Value
8  AIJPMEUU   0.1674  Equity Momentum
9  AIJPMCEU   0.1511  Commodity Momentum
10 GCSCS2UE   0.0856  All Carry
11 AIJPCB1U  -1.1578  Fixed Inc Carry
12 CSVILEUS       NA  Long Only FX Vol
13 CSEARVIX       NA  VIX Vol RV
14 CSGADLSE       NA  Commodities Value

Those results suggest that our carry strategies will remain OK, but we'll want to add exposure to fixed income momentum, as well as equity value strategies and perhaps FX momentum.
If we transition from "SL" into "NM", then…

   ticker     metric  strategy.type
1  CSEARVIX   1.0724  VIX Vol RV
2  AIJPCC1U   0.7495  Commodity Carry
3  CSVILEUS   0.6205  Long Only FX Vol
4  AIJPCB1U   0.5590  Fixed Inc Carry
5  AIJPMF1U   0.4160  FX Momentum
6  RAIIHRVU   0.3255  Equities Value
7  GCSCS2UE   0.2927  All Carry
8  AIJPMMUU   0.2511  Fixed Inc Momentum
9  AIJPMCEU   0.2472  Commodity Momentum
10 AIJPCF1U   0.1448  FX Carry
11 AIJPSV1U   0.0534  Imp/Real Volatility
12 AIJPMEUU   0.0339  Equity Momentum
13 EMFXSEUS  -0.5480  EM Bonds Value
14 CSGADLSE       NA  Commodities Value

…we'll favor VIX vol relative value, FX momentum, and perhaps long only vol strategies. Note that "NM" is sort of a middling state where many strategies do reasonably well, with a few standouts. Note the shift in some carry strategies on this transition. This is a situation in which vols rise, led by the front end of the term structure; that pushes the level and term structure measures back up into higher buckets, hence the "NM" designation. When this happens, carry strategies usually get hurt, and the results seem to bear this out. Given the current "SL" prescriptions, one would probably want to hedge the possibility of entering the "NM" regime.

Whether or not you think volatility term structure and level are good conditioning information for strategy selection, the framework should nonetheless prove valuable. You could add additional conditioning factors to the model and derive a different set of prescriptions. For some strategies, particularly vol based strategies, this model improves performance quite nicely. We might improve our strategy selections by adding other asset class volatilities, or perhaps creating a composite volatility measure made up of many asset class vols and term structures.
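As a concrete companion to the conditional tables above, here is a minimal sketch of the regime-conditional statistics. The original analysis was done in R; this is a Python equivalent under stated assumptions: `prices` is a DataFrame of daily strategy index levels, `regimes` is a Series of daily labels ("SL", "NM", ...) on the same dates, and the function name is mine, not part of the original post.

```python
import numpy as np
import pandas as pd

def regime_info_ratios(prices, regimes, label, min_days=20, ppy=252):
    """Annualized return, stddev and IR per strategy, using only days
    inside `label` regime episodes of at least `min_days` length."""
    rets = np.log(prices).diff().dropna()      # daily log returns
    mask = regimes.reindex(rets.index).eq(label)

    episode = (mask != mask.shift()).cumsum()  # id contiguous runs
    run_len = mask.groupby(episode).transform("size")
    keep = mask & (run_len >= min_days)        # drop short snippets

    r = rets.loc[keep]
    ann_ret = r.mean() * ppy
    ann_std = r.std() * np.sqrt(ppy)
    out = pd.DataFrame({"ann.return": ann_ret,
                        "ann.stddev": ann_std,
                        "info.ratio": ann_ret / ann_std})
    return out.sort_values("info.ratio", ascending=False)

# e.g. regime_info_ratios(prices, regimes, "SL") reproduces the shape of
# the "SL" table above; loop over the labels to fill the full regime grid.
```

Next up: Part 3: Estimating a regime switching model.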
{"url":"http://www.r-bloggers.com/volatility-regimes-part-2/","timestamp":"2014-04-21T12:31:48Z","content_type":null,"content_length":"60953","record_id":"<urn:uuid:ba46895f-861b-417c-b6ee-274102300185>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Software Simplifies Stability Analysis

Stability can be difficult to achieve in microwave circuits with gain (nonlinear behavior), such as amplifiers and oscillators. Amplifier designers, for example, have long dreaded the appearance of oscillations in a carefully considered circuit. When that circuit is in monolithic-microwave-integrated-circuit (MMIC) form, a "fix" requires another foundry run. But help in achieving microwave circuit stability has arrived, by way of the stability analysis (STAN) software developed by AMCAD Engineering (www.amcad-engineering.fr) and sold by Maury Microwave Corp. (www.maurymw.com). The tool, which works with a commercial computer-aided-engineering (CAE) simulation platform—such as the Advanced Design System (ADS) from Agilent Technologies (www.agilent.com) or Microwave Office from AWR (www.awrcorp.com)—is based on a stability analysis technique that can locate and characterize the unwanted oscillations in power amplifiers (PAs) that are evidence of instability.

The stability analysis tool helps RF/microwave engineers develop networks that improve circuit stability without sacrificing the performance goals of the original circuit. Modeling tools like STAN are supported by device characterization capabilities, as provided by AMCAD and Maury Microwave (www.maurymw.com) with their load-pull test systems.

Device stability or instability is often expressed in terms of the Rollett stability factor, K, or the Linvill stability factor, μ. Graphically, they are often shown in terms of the distance from the center of a Smith chart to points on an input or output stability circle, representing input or output networks where instabilities may lie. Spurious oscillations arise from feedback loops, and can start with device bias (the linear or small-signal stability of the circuit) or the dynamic power applied to the circuit (the nonlinear or large-signal stability). It is in a designer's best interest to understand the nature of these oscillations early in the design process.

Various methods are available to analyze the small- and large-signal stability of a microwave circuit. Some of these approaches have been incorporated in commercial CAE simulators for use with simple linear two-port networks, and are not really suitable for multiple-device networks such as amplifiers.^1 Some small-signal analyses have been presented in the literature,^2,3 but these are difficult to apply to circuits with a large number of active devices. While the stability envelope approach^4 has been implemented in a commercial CAE simulator, it can be difficult to determine the origin of an instability and its start-up frequency. Commercial CAE simulators typically lack the capability to perform large-signal steady-state stability analysis. A number of approaches have been developed for microwave circuits,^5-10 but they either lack reliability or are too complex for practical use with current processing hardware. For example, the nonlinear normalized determinant function (NDF) technique^6 is time consuming to implement and requires access to the intrinsic nodes of the models for transistors used in an amplifier undergoing analysis. Simplified versions of the NDF approach are easier to apply but may not detect all cases of possible oscillation in a circuit design with multiple active devices.
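For reference, the two-port stability factors named above are easy to compute from S-parameters. The sketch below is purely illustrative (the S-parameter values are made up, and the code has no connection to the STAN tool itself); K > 1 together with |Δ| < 1, or equivalently μ > 1, indicates unconditional stability.

```python
import numpy as np

def stability_factors(s11, s12, s21, s22):
    """Standard two-port stability factors computed from S-parameters."""
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11)**2 - abs(s22)**2 + abs(delta)**2) / (2 * abs(s12 * s21))
    mu = (1 - abs(s11)**2) / (abs(s22 - delta * np.conj(s11)) + abs(s12 * s21))
    return k, mu

# hypothetical single-frequency values for a gain stage
k, mu = stability_factors(0.6 - 0.3j, 0.05 + 0.02j, 3.2 + 1.1j, 0.4 - 0.2j)
print(f"K = {k:.3f}, mu = {mu:.3f}")   # both > 1 here: unconditionally stable
```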
The stability analysis approach proposed by AMCAD is based on a pole-zero identification technique previously developed.^11 The method can be applied to DC, small-signal, and large-signal stability analyses using simulations obtained in commercial CAE tools. The AMCAD approach calculates a single-input, single-output (SISO) transfer function for a circuit of interest linearized about a given steady state. A simulated frequency response of the linearized circuit, H(jω), is fitted to a rational polynomial transfer function, Ĥ(s), by means of frequency-domain identification algorithms. If no poles on the right-half plane (RHP) are found in Ĥ(s) for the circuit being analyzed, it is considered stable. Figure 1 shows how a circuit is analyzed. A sinusoidal small-signal current, i_in, is connected in parallel at a particular circuit node as the excitation source. The voltage, v_out, at this node is taken as the output of the system or circuit. The frequency, f_s, is then swept and the frequency response is calculated as the impedance seen by the current probe at this particular circuit node:

H(jω) = Z(jω) = v_out(jω)/i_in(jω)

where H(jω) corresponds to a linearization of the circuit about a DC bias point obtained by sweeping f_s in a linear AC simulation. System identification techniques are then applied to H(jω) to obtain Ĥ(s). The linearized circuit is stable if no poles of Ĥ(s) lie on the RHP.

1. This diagram represents a circuit for stability analysis, with a small-signal sinusoidal current source connected at a particular node.

For large-signal stability analysis, the ratio v_out(jω)/i_in(jω) represents the first term, H_0(jω), of the harmonic series that comprises the linear time-variant transfer function resulting from linearizing the system under large-signal conditions. Term H_0(jω) can be found by sweeping f_s as part of a harmonic-balance (HB), mixer-like simulation in which the large-signal input drive serves as the mixer's local oscillator (LO) and the small-signal current source at f_s is the RF signal to the mixer. The same identification techniques applied for the small-signal case are used to find Ĥ_0(s). The poles of Ĥ_0(s) are directly related to the Floquet exponents of the linearized system, and thus provide the stability information for the large-signal steady state under study. The resulting pole-zero plots associated with Ĥ_0(s) provide useful information about the critical frequencies at which a spurious oscillation can take place.

When working with the STAN tool, the first step is to connect a small current source to a node of the circuit under analysis. For simple circuits with a clear feedback structure, such as a single-stage amplifier or oscillator, any node should do. But for a multistage amplifier, one node per stage should be used. A linear or nonlinear simulation is then performed with a commercial CAE simulator of choice to obtain the frequency response of the circuit, H(jω). AMCAD offers simulation templates for both ADS from Agilent Technologies (Fig. 2) and Microwave Office from AWR. Different templates are used depending upon the type of analysis, such as an AC simulation for linear stability analysis or an HB simulation for nonlinear stability analysis. Both types of simulations use a similar methodology and both display H(jω) on a graph. This frequency response is then exported in a text file to be identified in the STAN tool.
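To make the identification step concrete, here is a deliberately simplified sketch of the same idea: a Levy-style linearized least-squares fit of a rational transfer function to sampled frequency-response data, followed by an RHP-pole check. Everything here (the function name, model orders, and synthetic pole locations) is invented for illustration; it is not STAN's actual algorithm, which among other things automates the choice of model order.

```python
import numpy as np

def fit_and_check(w, H, n_num, n_den):
    """Fit H(jw) ~ N(s)/D(s) with D(s) = 1 + a1*s + ... + an*s^n, then
    report the fitted poles and whether any lie in the right-half plane."""
    s = 1j * w
    # unknowns [b0..b_num, a1..a_den]; equations: N(s) - H*(D(s) - 1) = H
    A = np.hstack([np.vander(s, n_num + 1, increasing=True),
                   -H[:, None] * np.vander(s, n_den + 1, increasing=True)[:, 1:]])
    A_ri = np.vstack([A.real, A.imag])            # solve over the reals
    b_ri = np.concatenate([H.real, H.imag])
    coef, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    den = np.concatenate([[1.0], coef[n_num + 1:]])   # 1 + a1*s + ...
    poles = np.roots(den[::-1])                   # np.roots wants high->low order
    return poles, bool(np.any(poles.real > 0))

# synthetic response with a known unstable pair at 0.2 +/- 3j and a pole at -1
w = np.linspace(0.1, 10.0, 400)
true_poles = np.array([0.2 + 3j, 0.2 - 3j, -1.0])
H = 1.0 / np.prod(np.subtract.outer(1j * w, true_poles), axis=1)
poles, unstable = fit_and_check(w, H, n_num=0, n_den=3)
print(np.round(np.sort_complex(poles), 3), "unstable:", unstable)
```

On this noise-free example the fit recovers the planted poles and flags the 0.2 ± 3j pair, i.e. a potential oscillation near ω = 3.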
2. Stability analysis templates, such as this nonlinear template for ADS 2011, are used to implement the AMCAD stability analysis capability with commercial CAE simulators.

The second step of the stability analysis approach is to identify this frequency response H(jω) to find the transfer function, Ĥ(s), and the associated poles and zeros of that response. The STAN tool makes this possible by starting from a text file exported from a CAE simulator of choice to make the identification and analyze the results. The quality of this data fit is normally judged by visual inspection, since the order of the transfer function is a priori unknown; normally an iterative process is used to find the order of the model, which can be quite cumbersome. A specially designed automated algorithm in the STAN tool^12 helps speed and simplify this process, enabling the use of pole-zero identification for multivariable large-signal stability analyses that are otherwise impractical with manual quality assessment. The STAN tool features a straightforward graphical user interface (GUI) to further simplify the process (Fig. 3).

3. This is an example of the graphical user interface used with the STAN tool.

The STAN tool can perform analysis at several nodes (Fig. 4) as well as swept multiparameter analysis (Fig. 5), with simulation templates available for different conditions. The results make it possible to determine the kind of oscillation mode and its location in the circuit.^13 As an example of these results, the pole-zero graph of Fig. 6 shows the evolution of the poles versus input power for a simple amplifier with two transistors in parallel. It shows the input drive level at which oscillation starts in this circuit design.

4. This diagram shows how a multinode stability analysis would be performed on an RF/microwave amplifier.

5. Parametric analysis of an amplifier under study is performed by simulating different load conditions.

6. This plot of poles versus input power shows oscillation starting with +13 dBm input power.

A more complex example is a three-stage LDMOS distributed power amplifier for a software-defined-radio (SDR) application.^12 A prototype is stable with a 50-Ω load under nominal operating conditions, but the design had to be evaluated under a wide range of conditions (including with highly reflective loads). Large-signal analyses were carried out while varying input power, frequency, and the load impedance. Figure 7 shows areas on the Smith chart leading to instability—results which have been experimentally validated.

7. The green and red represent stable and unstable regions in the Γ_L plane for input power of +17.1 dBm at input frequency of 500 MHz.

The STAN tool can also be used for Monte Carlo stability analysis, as was demonstrated with an L-band FET amplifier at 1.2 GHz.^14 The prototype had exhibited low-frequency instability associated with the bias circuitry. This instability can be corrected with the inclusion of a gate-bias resistor. The STAN tool was used to determine the value of the resistor for adequate stability margin. Figure 8 shows pole-zero identification results for two different values of stabilization resistance for the L-band amplifier; some instability existed for 44 Ω, while the 70-Ω resistance yielded stable operation for all conditions. By applying Monte Carlo analysis techniques, the STAN tool is suitable for systematic stabilization of multitransistor circuits.^15
8. These pole-zero maps of the FET amplifier were obtained for the two values of stabilization resistance shown.

AMCAD Engineering, Batiment Galileo, 20 rue Atlantis, 87068 Limoges, France; +33 (0) 555-040-531, FAX: +33 (0) 555-040-531, www.amcad-engineering.com.

References

1. J.M. Rollett, "Stability and Power-Gain Invariants of Linear Two Ports," IRE Transactions on Circuit Theory, Vol. 9, No. 1, March 1962, pp. 29-32.
2. W. Strubble and A. Platzker, "A Rigorous Yet Simple Method For Determining Stability of Linear N-Port Networks," 15th Gallium Arsenide Integrated Circuits (GaAs IC) Symposium Technical Digest, October 1993, pp. 251-254.
3. M. Ohtomo, "Stability Analysis and Numerical Simulation of Multidevice Amplifiers," IEEE Transactions on Microwave Theory and Techniques, Vol. 41, No. 6/7, June/July 1993, pp. 983-991.
4. T. Narhi and M. Valtonen, "Stability Envelope: New Tool for Generalised Stability Analysis," IEEE MTT-S International Microwave Symposium Digest, Vol. 2, June 1997, pp. 623-626.
5. V. Rizzoli and A. Lipparini, "General Stability Analysis of Periodic Steady-State Regimes in Nonlinear Microwave Circuits," IEEE Transactions on Microwave Theory and Techniques, Vol. 33, No. 1, January 1985, pp. 30-37.
6. S. Mons, et al., "A Unified Approach for the Linear and Nonlinear Stability Analysis of Microwave Circuits Using Commercially Available Tools," IEEE Transactions on Microwave Theory and Techniques, Vol. 47, No. 12, December 1999, pp. 2403-2409.
7. A. Suarez, et al., "Nonlinear Stability Analysis of Microwave Circuits Using Commercial Software," IEEE Electronics Letters, Vol. 34, No. 13, June 1998, pp. 1333-1335.
8. P. Bolcato, et al., "Efficient Algorithm for Steady State Stability Analysis of Large Analog/RF Circuits," IEEE MTT-S International Microwave Symposium Digest, Vol. 1, May 2001, pp. 451-454.
9. G. Leuzzi and F. Di Paolo, "Bifurcation Synthesis by Means of Harmonic Balance and Conversion Matrix," Proceedings of Gallium Arsenide Applications Symposium, October 2003, pp. 521-524.
10. M. Mochizuki, et al., "Nonlinear Analysis of f0/2 Loop Oscillation of High Power Amplifiers," IEEE MTT-S International Microwave Symposium Digest, Vol. 2, May 1995, pp. 709-712.
11. J. Jugo, J. Portilla, A. Anakabe, A. Suarez, and J.M. Collantes, "Closed-loop stability analysis of microwave amplifiers," IEEE Electronics Letters, Vol. 37, February 2001, pp. 226-228.
12. A. Anakabe, et al., "Automatic Pole-Zero Identification for Multivariable Large-Signal Stability Analysis of RF and Microwave Circuits," Proceedings of the 40th European Microwave Conference, September 2010, pp. 477-480.
13. A. Anakabe, J.M. Collantes, J. Portilla, J. Jugo, S. Mons, A. Mallet, and L. Lapierre, "Analysis of Odd-Mode Parametric Oscillations in HBT Multi-Stage Power Amplifiers," European Microwave Week, 11th GaAs Symposium, October 2003, pp. 533-536.
14. J.M. Collantes, N. Otegi, A. Anakabe, N. Ayllon, A. Mallet, and G. Soubercaze-Pun, "Monte-Carlo Stability Analysis of Microwave Amplifiers," 12th Annual IEEE Wireless and Microwave Technology Conference (WAMICON), April 2011.
15. N. Ayllon, J.M. Collantes, A. Anakabe, I. Lizarraga, G. Soubercaze-Pun, and S. Forestier, "Systematic Approach to the Stabilization of Multitransistor Circuits," IEEE Transactions on Microwave Theory and Techniques, Vol. 59, No. 8, August 2011, pp. 2073-2082.

Collaborating In Quest Of Stability
Maury Microwave Corp.
(Ontario, CA) and AMCAD Engineering (Limoges, France) began collaborating in 2010 to blend their knowledge and experience in measurement and modeling device characterization. Maury Microwave and AMCAD—along with their partner, Agilent Technologies—provide numerous measurement solutions. These include active, passive, and hybrid nonlinear harmonic load-pull measurement capabilities, ranging from a few MHz through 110 GHz using Maury’s automated impedance tuners and IVCAD software. The vector-receiver load-pull capability offered by Maury/AMCAD (sometimes referred to as real-time load-pull capability) is achieved by using low-loss couplers between the tuner and a device under test (DUT), then measuring the calibrated a- and b-waves from the DUT in real time for each impedance and drive power presented to the DUT. A variety of parameters, including intermodulation parameters and vector parameters such as AM-to-PM distortion, can be calculated from these waves. Harmonic load-pull measurements can be accomplished by using mechanical tuners (multiple single-frequency tuners joined together with a multiplexer or cascaded serially, or multi-frequency single-box harmonic tuners), active tuning chains (consisting of a magnitude and phase controllable signal generator and amplifier), or a combination of both.
{"url":"http://mwrf.com/print/software/software-simplifies-stability-analysis","timestamp":"2014-04-19T10:12:38Z","content_type":null,"content_length":"32376","record_id":"<urn:uuid:d6ad729a-9db1-4b95-bb8a-70bbb64d8540>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Go4Expert - Re: Sudoku | 21 Oct 2009
@xpt: even I agree, but my answer was based on the question posted by shabbir, as per the competition rules (added 20th Jun 2009): "After all we are Human and if the poster does any mistake in question there would be no winner."
{"url":"http://www.go4expert.com/printthread.php?t=19845","timestamp":"2014-04-19T12:24:28Z","content_type":null,"content_length":"9132","record_id":"<urn:uuid:575cdab4-ffe2-457f-9677-80b85e91e4aa>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Isidore Nabi on the Tendencies of Motion

In 1672 the First International Conference on the Trajectories of Bodies was convened in order to organize a concerted systems approach to the problem of motion. This was made necessary on the one hand by the widespread observation that objects move, and on the other by the currency of extravagant claims being made on the basis of an abstracted extrapolation of the motion of a single apple. Practical applications related to our peacekeeping mission were also a consideration. The organizing committee realized that a unified interdisciplinary approach was required in which the collection of data must be looked at over as wide a geographic transect as possible, ancillary information must be taken without prejudice on all the measurable properties of the objects, multiple regression and principal factor analysis applied to the results, and the nature of motion then assigned to its diverse causes, as observation and analysis dictated. It was further agreed that where alternative models fit the same data, both were to be included in the equation by the delta method of conciliatory approximation: let M1 = F1(X1, X2, X3, ...) and M2 = F2(X1, X2, X3, ...) be alternative models of the motion of a body as functions of the variables Xi (parametric variables of state, such as the location, velocity, mass, color, texture, DNA content, esterase polymorphism, temperature, or smell of the body) that fit the data more or less equally well. Then (M1, M2) = d F1(X1, X2, X3, ...) + (1-d) F2(X1, X2, X3, ...) is the conciliated systems model. The value of d is arbitrary and is usually assigned in the same ratio as the academic rank or prestige of its proponents. Similarly, when dichotomous decisions arose, such as whether to include only moving objects or to also allow those at rest in the regression, both of the alternative modes were followed and then combined by delta conciliation. A total of 100,023 objects were examined, measured, and used in the statistical analysis. From these we calculated 100 main effects, 49,500 pairwise interaction terms, 50,000 three-way, and 410 four-way interaction coefficients, leaving 13 degrees of freedom for error variance. The data and coefficients have been deposited in the British Museum and may be published someday. Sample data are shown in Tables 1-1984. Some of the objects were Imperial Military Artifacts (IMAs), such as cannonballs. Since their tendencies of motion were similar to those of non-IMAs and were independent of the nature of the target (the variance caused by schools, hospitals, and villages all had insignificant F values), this circumstance need not concern us further. The IMAs were relevant only in that their extensive use in noncooperative regions (NCRs) provided data points that otherwise would have required Hazardous Information Retrieval (HIR), and that their inclusion in the study prevented Un-Financed Operations (UFOs). The motion of objects is extremely complex, subject to large numbers of influences. Therefore further study and renewal of the grant are necessary. But several results can be reported already, with the usual qualifications. 1. More than 90 percent of the objects examined were at rest during the period of observation. The proportion increased with size and, in the larger size classes, decreased with temperature above ambient at a rate that increased with latitude. 2.
Of the moving objects, the proportion moving down varied with size, temperature, wind velocity, slope of substrate if the object was on a substrate, time of day, and latitude. These accounted for 58 percent of the variance. In addition, submodels were validated for special circumstances and incorporated by the delta method in the universal equation: a. Drowning men moved upward 3/7 of the time, and downward 4/7. b. Apples did indeed drop. A stochastic model showed that the probability of apple drop increases through the summer and increases with the concentration of glucose. c. Plants tend to move upward very slowly by growth most of the time, and downward rapidly occasionally. The net result is a mean tendency downward of about .001 percent +/- 4 percent. d. London is sinking. e. A stochastic model for the motion of objects at Wyndam Wood (mostly birds, at the .01 level) shows that these are in fact in a steady state except in late autumn, with upward motion exactly balancing downward motion in probability except on a set of measure zero. However, there was extreme local heterogeneity with upward motion predominating more the closer the observer approached, with a significant distance x observer interaction term. 3. Bodies at rest remain at rest with a probability of 0.96 per hour, and objects in motion tend to continue in motion with a probability of 0.06. 4. For celestial bodies, the direction of movement is influenced by proximity to other bodies, the strength of the interaction varying as the distance to the -1.5+/-.8 power. 5. A plot of velocity against time for moving objects shows a decidedly non-linear relation with very great variation. A slope of 32ft/sec/sec is passed through briefly, usually 1-18seconds after initiation of movement, but there is a marked deceleration prior to stopping, especially in birds. 6. For 95 percent +/- .06 percent of all actions, there is a corresponding reaction at an angle of 175 +/- 6 degrees from the first, and usually within 3 percent of the same magnitude. 7. On the whole, there is a slight tendency for objects to move down. 8. A general regression of motion was computed. Space considerations preclude its publication. 9. In order to check the validity of our model, a computer simulation program was developed as follows: the vector for velocity of motion V was set equal to the multiple regression expression for all combinations of maximum and minimum estimates of the regression coefficients. Since we had a total of 100,010 such parameters, there were 2 to the power of 100,010 combinations to be tested, or about 10^30,000. For each of these, the error terms were generated from a normal random variable generator subroutine (NRVGS). Finally, a statistical analysis of the simulated motions was tested for consistency with the model. Computations are being performed by the brothers of the monastic orders of Heteroscedastics and Cartesians, each working an abacus and linked in the appropriate parallel and serial circuits by their abbots. We have already scanned 10^5 combinations, and these are consistent with the model. Acknowledgement. This work was supported by the East India Company. The Dialectical Biologist (Levins and Lewontin)
{"url":"http://danny.oz.au/danny/quotes/isidore_nabi.html","timestamp":"2014-04-19T23:08:12Z","content_type":null,"content_length":"7888","record_id":"<urn:uuid:1261d305-e597-43d0-8bb2-e23b5f00c56b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
The Riemann Hypothesis for High School Students

Hi All, I would like to present what I believe to be a simple way to convey the essence of the Riemann Hypothesis to High School students. I hope you like it, and reply with suggestions for further improvements.

Note for teachers: the rationale behind the graphs lies in the geometric meaning of complex numbers, and in the equivalence of the zeros of the Riemann Zeta function with the zeros of the Dirichlet Eta function (more details at the bottom). The required level of math literacy is the following:
- you are familiar with natural logarithms [tex]\ln[/tex]
- you are familiar with angles measured in radians ([tex]\pi \Leftrightarrow 180[/tex]°)
- you are familiar with the meaning of fractional powers, such as [tex]\sqrt{n}=n^{\frac{1}{2}} \;\;\; \sqrt[3]{n}=n^{\frac{1}{3}} \;\;\; \sqrt[5]{n^3}=n^{\frac{3}{5}}[/tex], etc.

The explanation goes as follows (refer to Figure_1.pdf):
• choose whatever positive value you wish for a proportionality factor, which we will call [tex]t[/tex] (t=38 in the example of Figure_1)
• imagine you find yourself in an open field, and draw two reference lines at 90° to each other, such as the X and Y axes of cartesian coordinates, for example with the X axis pointing parallel to the northern direction as identified by an ideally accurate compass you have with you
• walk 1 km along the X axis, and stop
• identify a direction at an angle [tex]\theta_2=-t\ln2+\pi[/tex] wrt the direction pointed to by the compass, walk a distance [tex]1/\sqrt{2}[/tex] km, and stop
• identify a direction at an angle [tex]\theta_3=-t\ln3[/tex] wrt the direction pointed to by the compass, walk a distance [tex]1/\sqrt{3}[/tex] km, and stop
• identify a direction at an angle [tex]\theta_4=-t\ln4+\pi[/tex] wrt the direction pointed to by the compass, walk a distance [tex]1/\sqrt{4}[/tex] km, and stop
• and so on ... for segment [tex]n[/tex], walking a distance [tex]1/\sqrt{n}[/tex] km along the direction at an angle [tex]\theta_n=-t\ln n[/tex] (adding [tex]\pi[/tex] when [tex]n[/tex] is even)
• eventually, you will find yourself getting closer and closer to the "point of convergence", identified with a cross in the graph at the bottom of Figure_1
• it is interesting to remark that you will find yourself approaching said "point of convergence" by following a very simply structured crisscrossing path (for simplicity, only segments from n=293 to n=313 are shown). This is actually the result of having to add [tex]\pi[/tex] every other segment. In fact, when [tex]n[/tex] becomes sufficiently large, [tex]\theta_{n+1}[/tex] will differ only slightly from [tex]\theta_n[/tex] (because of the logarithm), and because one of the two will need to be turned around by 180° (the segment corresponding to even [tex]n[/tex]), the angle between two consecutive segments will eventually become an acute angle, shrinking down more and more as [tex]n[/tex] grows larger and larger. Can you see why said acute angle is now easy to calculate as [tex]\delta_{n+1}=t \ln \frac{n+1}{n}[/tex] ?

What are the zeros of the Riemann Zeta Function? Said zeros are those particular values of [tex]t[/tex] that will bring you back where you started from, that is: the point X=0, Y=0 (see examples in Fig. 2 and 3).

What does the Riemann Hypothesis state?
that you may have chances of finding values of [tex]t[/tex] bringing you back where you started from if and only if the operation you carry out at the denominator for calculating the length of each segment is exactly the square root; no other root will ever work (examples: [tex]\sqrt[3]{n}[/tex] or [tex]\sqrt[4]{n}[/tex] or [tex]\sqrt[9]{n}[/tex] etc. will not work, and will never, ever allow you to go back where you started from). In other words: if we write the length of segment [tex]n[/tex] as

[tex] \frac{1}{n^{\sigma}} \;\;\; with \;\;\; 0 < \sigma < 1 [/tex]

the only hope we will ever have of finding values of [tex]t[/tex] eventually bringing us back where we started from is that [tex]\sigma = \frac{1}{2}[/tex]

Note for teachers: each of the segments making up the paths depicted in the attached figures actually corresponds to one of the terms of the following alternating sign infinite sum (the Dirichlet Eta function)

[tex] \eta(s) = \sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^s} = 1-\frac{1}{2^s}+\frac{1}{3^s}-\frac{1}{4^s}+-\ldots [/tex]

where [tex] s = \sigma + i t[/tex]

each term is therefore a complex number, which can be represented by a vector, whose polar representation is

[tex](-1)^{n-1}\frac{1}{n^{\sigma}} \; e^{-it \ln n}[/tex]

If we wish to be strictly rigorous, the equivalent definition given above for the zeros of the Riemann Zeta function is in reality referring to zeros of the Dirichlet Eta function. But of course, in the interior of the critical strip the nontrivial zeros of the Riemann Zeta function coincide with the zeros of the Dirichlet Eta function, so that said equivalent definition is indeed a rigorous and correct definition.
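Note for teachers (continued): a quick numerical check of this picture is easy to run; the sketch below is my own addition, not part of the original construction. It sums the first N segments at the height of the first known zero, t ≈ 14.134725, and compares σ = 1/2 against σ = 1/3:

```python
import numpy as np

def eta_partial(sigma, t, N=200_000):
    """Partial sum of the Dirichlet eta series at s = sigma + i*t,
    i.e. the walker's position after N segments."""
    n = np.arange(1, N + 1, dtype=float)
    signs = np.where(n % 2 == 1, 1.0, -1.0)   # the alternating 180-degree flips
    return np.sum(signs * n**(-sigma) * np.exp(-1j * t * np.log(n)))

t = 14.134725   # imaginary part of the first nontrivial zeta zero
print(abs(eta_partial(0.5, t)))    # close to 0 (only truncation error remains)
print(abs(eta_partial(1/3, t)))    # clearly away from 0: no return to the start
```

With σ = 1/2 the walker ends up essentially back at the origin, while with σ = 1/3 (at the same t) the endpoint stays well away from it.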
{"url":"http://www.physicsforums.com/showthread.php?t=319758","timestamp":"2014-04-18T21:29:05Z","content_type":null,"content_length":"45457","record_id":"<urn:uuid:4ddf128d-de6a-4f9a-88d0-e1513d563373>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Study Guide/Linear motion Kinematics is the description of motion. The motion of a point particle is fully described using three terms - position, velocity, and acceleration. For real objects (which are not mathematical points), translational kinematics describes the motion of an object's center of mass through space, while angular kinematics describes how an object rotates about its centre of mass. In this section, we focus only on translational kinematics. Position, displacement, velocity, and acceleration are defined as follows. A vector is a quantity that has both magnitude and direction, typically written as a column of scalars. That is, a number that has a direction assigned to it. In physics, a vector often describes the motion of an object. For example, Warty the Woodchuck goes 10 meters towards a hole in the ground. We can divide vectors into parts called "components", of which the vector is a sum. For example, a two-dimensional vector is divided into x and y components. $\Delta \vec{x}\equiv\vec x_f- \vec x_i\,$ Displacement answers the question, "Has the object moved?" Note the $\equiv$ symbol. This symbol is a sort of "super equals" symbol, indicating that not only does $\vec x_f- \vec x_i$ EQUAL the displacement $\Delta\vec{x}$, but more importantly displacement is OPERATIONALLY DEFINED by $\vec x_f- \vec x_i$. We say that $\vec x_f- \vec x_i$ operationally defines displacement, because $\vec x_f- \vec x_i$ gives a step by step procedure for determining displacement. Namely ... 1. Measure where the object is initially. 2. Measure where the object is at some later time. 3. Determine the difference between these two position values. Be sure to note that DISPLACEMENT is NOT the same as DISTANCE travelled. For example, imagine travelling one time along the circumference of a circle. If you end where you started, your displacement is zero, even though you have clearly travelled some distance. In fact, displacement is an average distance travelled. On your trip along the circle, your north and south motion averaged out, as did your east and west motion. Clearly we are losing some important information. The key to regaining this information is to use smaller displacement intervals. For example, instead of calculating your displacement for your trip along the circle in one large step, consider dividing the circle into 16 equal segments. Calculate the distance you travelled along each of these segments, and then add all your results together. Now your total travelled distance is not zero, but something approximating the circumference of the circle. Is your approximation good enough? Ultimately, that depends on the level of accuracy you need in a particular application, but luckily you can always use finer resolution. For example, we could break your trip into 32 equal segments for a better approximation. Returning to your trip around the circle, you know the true distance is simply the circumference of the circle. The problem is that we often face a practical limitation for determining the true distance travelled. (The travelled path may have too many twists and turns, for example.) Luckily, we can always determine displacement, and by carefully choosing small enough displacement steps, we can use displacement to obtain a pretty good approximation for the true distance travelled. (The mathematics of calculus provides a formal methodology for estimating a "true value" through the use of successively better approximations.) 
In the rest of this discussion, I will replace $\Delta$ with $\delta$ to indicate that small enough displacement steps have been used to provide a good enough approximation for the true distance travelled.

$\vec v_{av}\equiv \frac{\vec{x_f}-\vec{x_i}}{t_f-t_i}\equiv \frac{\Delta\vec{x}}{\Delta t}$

[Δ, delta, upper-case Greek D, is a prefix conventionally used to denote a difference.]

Velocity answers the question "Is the object moving now, and if so - how quickly?"

Once again we have an operational definition: we are told what steps to follow to calculate velocity. Note that this is a definition for average velocity. The displacement Δx is the vector sum of the smaller displacements which it contains, and some of these may subtract out. By contrast, the distance travelled is the scalar sum of the smaller distances, all of which are non-negative (they are the magnitudes of the displacements). Thus the distance travelled can be larger than the magnitude of the displacement, as in the example of travel on a circle, above. Consequently, the average velocity may be small (or zero, or negative) while the speed is positive. If we are careful to use very small displacement steps, so that they come pretty close to approximating the true distance travelled, then we can write the definition for INSTANTANEOUS velocity as

$\vec v_{inst}\equiv \frac{\delta\vec{x}}{\delta t}$

[δ is the lower-case delta.]

Or with the idea of limits from calculus, we have ...

$\vec v_{inst}\equiv \frac{d \vec x}{dt}$

[d, like Δ and δ, is merely a prefix; however, its use definitely specifies that this is a sufficiently small difference so that the error--due to stepping (instead of smoothly changing) the quantity--becomes negligible.]

$\vec a_{av}\equiv \frac{\vec{v_f}-\vec{v_i}}{t_f-t_i}\equiv \frac{\Delta\vec{v}}{\Delta t}$

Acceleration answers the question "Is the object's velocity changing, and if so - how quickly?"

Once again we have an operational definition. We are told what steps to follow to calculate acceleration. Again, also note that technically we have a definition for AVERAGE acceleration. As for displacement, if we are careful to use a series of small velocity changes, then we can write the definition for INSTANTANEOUS acceleration as

$\vec a_{inst}\equiv \frac{\delta\vec{v}}{\delta t}$

Or with the help of calculus, we have ...

$\vec a_{inst}\equiv \frac{d \vec v}{dt} = \frac{d^2\vec x}{dt^2}$

Notice that the definitions given above for displacement, velocity, and acceleration included little arrows over many of the terms. The little arrow reminds us that direction is an important part of displacement, velocity, and acceleration. These quantities are VECTORS. By convention, the little arrow always points right when placed over a letter. So for example, $\vec v$ just reminds us that velocity is a vector, and does NOT imply that this particular velocity is rightward.

Why do we need vectors? As a simple example, consider velocity. It is not enough to know how fast one is moving. We also need to know which direction we are moving. Less trivially, consider how many different ways an object could be experiencing an acceleration (a change in its velocity). Ultimately there are three distinct ways an object could accelerate:
1. The object could be speeding up.
2. The object could be slowing down.
3. The object could be traveling at constant speed, while changing its direction of motion.
(More general accelerations are simply combinations of 1 and 3 or 2 and 3).
Importantly, a change in the direction of motion is just as much an acceleration as is speeding up or slowing down. In classical mechanics, no direction is associated with time (you cannot point to next Tuesday). So the definition of $\vec a_{av}$ tells us that acceleration will point wherever the CHANGE in velocity $\Delta\vec{v}$ points.

Understanding that the direction of $\Delta \vec{v}$ determines the direction of $\vec a$ leads to three non-mathematical but very powerful rules of thumb:
1. If the velocity and acceleration of an object point in the same direction, the object's speed is increasing.
2. If the velocity and acceleration of an object point in opposite directions, the object's speed is decreasing.
3. If the velocity and acceleration of an object are perpendicular to each other, the object's initial speed stays constant (in that initial direction), while the speed of the object in the direction of the acceleration increases--think of a bullet fired horizontally in a vertical gravitational field. Since velocity in the one direction remains constant, and the velocity in the other direction increases, the overall velocity (absolute velocity) also increases.
(Again, more general motion is simply a combination of 1 and 3 or 2 and 3.)

Using these three simple rules will dramatically help your intuition of what is happening in a particular problem. In fact, much of the first semester of college physics is simply the application of these three rules in different formats.

Equations of motion: Constant acceleration

A particle is said to move with constant acceleration if its velocity changes by equal amounts in equal intervals of time, no matter how small the intervals may be:

$\frac{d \vec a}{dt} = \vec{0}$

Since acceleration is a vector, constant acceleration means that both direction and magnitude of this vector don't change during the motion. This means that average and instantaneous acceleration are equal. We can use that to derive an equation for velocity as a function of time by integrating the constant acceleration.

$\boldsymbol{v}(t)=\boldsymbol{v}(0)+\int\limits_{0}^{t}\boldsymbol{a}\ dt$

Giving the following equation for velocity as a function of time:

$\boldsymbol{v}(t)=\boldsymbol{v}(0)+\boldsymbol{a}t$

To derive the equation for position we simply integrate the equation for velocity.

$\boldsymbol{x}(t)=\boldsymbol{x}(0)+\int\limits_{0}^{t}\boldsymbol{v}(t)\ dt$

Integrating again gives the equation for position:

$\boldsymbol{x}(t)=\boldsymbol{x}(0)+\boldsymbol{v}(0)t+\frac{\boldsymbol{a}t^2}{2}$

The following are the 'Equations of Motion'. They are simple and obvious equations if you think over them for a while.

Equations of Motion:
- Position as a function of time: $\vec{x}=\vec{x}_0 + \vec{v}_0 t+\frac{\vec{a}t^2}{2}$
- Velocity as a function of time: $\vec v = \vec v_0 + \vec a t$

The following equations can be derived from the two equations above by combining them and eliminating variables.
- Eliminating time (very useful, see the section on Energy): $v^2 = v_0^2 + 2\vec{a}\cdot(\vec{x}-\vec{x}_0)$
- Eliminating acceleration: $\vec{x}=\vec{x}_0+\frac{(\vec{v}_0+\vec{v})t}{2}$

Key to Symbols:
- $\vec{v}$ : velocity at time t
- $\vec{v_0}$ : initial velocity
- $\vec{a}$ : acceleration (constant)
- $t$ : time taken during the motion
- $\vec{x}$ : position at time t
- $\vec{x_0}$ : initial position

Acceleration in One Dimension
Acceleration in Two Dimensions
(Needs content)
Acceleration in Three Dimensions
(Needs content)

Force in motion

We need force and motion in our lives: many moving things have a force acting on them, and without a force an object's motion does not change.

What does force in motion mean?
Force means strength and power. Motion means movement. That's why we need forces and motions in our life. We need calculations when we want to know how fast things go or travel, and for other questions involving force and motion.

How do we calculate?
If you want to calculate the average speed, distance travelled, or time taken, you need to use this formula and remember it:

${speed} = \frac{distance}{time\ taken}$

This is an easy formula to use: you can find the distance travelled, the time taken, or the average speed, but you need to know at least 2 of the values to find the remaining one.

What is velocity?
It is not just the speed which is important when you go on a journey; the direction matters as well. When we want to talk about direction as well as speed, we use the word velocity. The equation for velocity uses the same quantities (distance travelled, average speed, and time taken), so it takes the same form.

The two racers travel at high speed while racing each other and must keep their direction stable: at such speed, even a slight loss of directional control could injure a cyclist.

When a car is speeding up we say that it is accelerating; when it slows down we say it is decelerating.

How do we calculate it?
This athletic person is running, and while he is running the scientist could know if he was wasting his energy if they want by the stop watch and looking at his momentum. Measuring accelerationEdit Take a slope, a trolley, some tapes and a stop watch, then put the tapes on the slope and take the trolley on the slope, and the stopwatch in your hand, as soon as you release the trolley, start timing the trolley at how fast it will move, when the trolley stops at the end then stop the timing. After wards, after seeing the timing , record it, then you let the slope a little bit high, and you will see, how little by little it will decelerate. Who is Newton, and what did he observe?Edit He was an English physicist, mathematician, astronomer, alchemist, and natural philosopher, he has three laws for physics, and they are: 1. Newton's First Law (also known as the Law of Inertia) states that an object at rest tends to stay at rest and that an object in uniform motion tends to stay in uniform motion unless acted upon by a net external force. 2. Newton's Second Law states that an applied force, F, on an object equals the time rate of change of its momentum, p. the acceleration of an object is directly proportional to the magnitude of the net force acting on the object and inversely proportional to its mass. In the MKS system of measurement, mass is given in kilograms, acceleration in metres per second squared, and force in newtons (named in his honour). 3. Newton's Third Law states that for every action there is an equal and opposite reaction. Distance travelled=D Force= F Initial velocity=U Final velocity=V Change in velocity=∆V Gravity= G Now here we are going to learn how to calculate the force, mass and acceleration. When we want to calculate the force, and we have the mass and acceleration, how are we going to calculate it? force= mass multiply acceleration This is considered in the second law of Newton. What is mass? Sometimes it is defined as the amount of matter in a body. But in Newton’s second law is defined as a numerical measure of inertia. What is inertia? The tendency of a body to maintain is state of rest or uniform motion unless acted upon by an external force. Who is Hooke, and what is his law? (July 18, 1635 – March 3, 1703.)He was an English polymath who played an important role in the scientific revolution, through both experimental and theoretical work. His law was about the spring limit, if you have stretched the spring beyond its limit, it will then change permanently, and will not return to its original place. The rubber band doesn’t obey Hooke’s law. This spring is not beyond its limit, so when you remove the weight, then it will automatically return to its place. Please add {{alphabetical}} only to book title pages. Last modified on 1 April 2014, at 22:58
{"url":"https://en.m.wikibooks.org/wiki/Physics_Study_Guide/Linear_motion","timestamp":"2014-04-20T18:31:58Z","content_type":null,"content_length":"54439","record_id":"<urn:uuid:ed461e1b-6bf3-4449-a84b-983dbf876b30>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
Middleboro Geometry Tutor Find a Middleboro Geometry Tutor ...I am also skilled at coaching for test preparation such as SAT, ACT, GRE, large parts of the MCAT, and other exams. I would focus on both learning the material and the best strategies for achieving success, given your particular situation. I'll make good use of your time with me. 47 Subjects: including geometry, chemistry, reading, calculus I have a great deal of experience in the engineering field. I offer tutoring services in Mathematics, Mechanical Engineering, Microsoft Applications and Business. My tutoring approach can be best described as mentoring where I focus on building ability to learn rather than focussing on learning one concept at a time. 21 Subjects: including geometry, calculus, algebra 1, algebra 2 ...I've never taught chemistry in school though I am licensed to do so, but use it commonly as an important component of biology. To a very large degree, every organism is an insanely complex web of chemical systems. I am a science teacher rather than a math teacher, but one of my hobbies is building sailboats and kayaks. 15 Subjects: including geometry, English, chemistry, biology ...As an undergraduate I did a lot of writing, and was published in the school's journal. I read abundantly on my own, and enjoy helping students to develop as writers. I do well (99th percentile) on standardized tests in both math and English. 29 Subjects: including geometry, reading, English, literature My name is Lauren and I am graduate from Saint Michael's College in Vermont. My focus during my four years was mathematics and education where I also participated in the varsity lacrosse program. My entire life I have worked with children both in coaching camps and working in classrooms. 12 Subjects: including geometry, calculus, algebra 1, algebra 2
{"url":"http://www.purplemath.com/middleboro_geometry_tutors.php","timestamp":"2014-04-17T07:18:17Z","content_type":null,"content_length":"23978","record_id":"<urn:uuid:b856d135-e877-47b6-b881-22f4cc4aa30b>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
quantum theory

Quantum theory is the theoretical basis of modern physics that explains the nature and behavior of matter and energy on the atomic and subatomic level. In 1900, physicist Max Planck presented his quantum theory to the German Physical Society. Planck had sought to discover the reason that radiation from a glowing body changes in color from red, to orange, and, finally, to blue as its temperature rises. He found that by making the assumption that energy existed in individual units in the same way that matter does, rather than just as a constant electromagnetic wave - as had been formerly assumed - and was therefore quantifiable, he could find the answer to his question. The existence of these units became the first assumption of quantum theory.

Planck wrote a mathematical equation involving a figure to represent these individual units of energy, which he called quanta. The equation explained the phenomenon very well; Planck found that at certain discrete temperature levels (exact multiples of a basic minimum value), energy from a glowing body will occupy different areas of the color spectrum. Planck assumed there was a theory yet to emerge from the discovery of quanta, but, in fact, their very existence implied a completely new and fundamental understanding of the laws of nature. Planck won the Nobel Prize in Physics for his theory in 1918, but developments by various scientists over a thirty-year period all contributed to the modern understanding of quantum theory.

The Development of Quantum Theory
• In 1900, Planck made the assumption that energy was made of individual units, or quanta.
• In 1905, Albert Einstein theorized that not just the energy, but the radiation itself was quantized in the same manner.
• In 1924, Louis de Broglie proposed that there is no fundamental difference in the makeup and behavior of energy and matter; on the atomic and subatomic level either may behave as if made of either particles or waves. This theory became known as the principle of wave-particle duality: elementary particles of both energy and matter behave, depending on the conditions, like either particles or waves.
• In 1927, Werner Heisenberg proposed that precise, simultaneous measurement of two complementary values - such as the position and momentum of a subatomic particle - is impossible. Contrary to the principles of classical physics, their simultaneous measurement is inescapably flawed; the more precisely one value is measured, the more flawed will be the measurement of the other value. This theory became known as the uncertainty principle, which prompted Albert Einstein's famous comment, "God does not play dice."

The Copenhagen Interpretation and the Many-Worlds Theory

The two major interpretations of quantum theory's implications for the nature of reality are the Copenhagen interpretation and the many-worlds theory. Niels Bohr proposed the Copenhagen interpretation of quantum theory, which asserts that a particle is whatever it is measured to be (for example, a wave or a particle), but that it cannot be assumed to have specific properties, or even to exist, until it is measured. In short, Bohr was saying that objective reality does not exist. This translates to a principle called superposition, which claims that while we do not know what the state of any object is, it is actually in all possible states simultaneously, as long as we don't look to check. To illustrate this theory, we can use the famous and somewhat cruel analogy of Schrödinger's Cat.
First, we have a living cat and place it in a thick lead box. At this stage, there is no question that the cat is alive. We then throw in a vial of cyanide and seal the box. We do not know if the cat is alive or if it has broken the cyanide capsule and died. Since we do not know, the cat is both dead and alive, according to quantum law - in a superposition of states. It is only when we break open the box and see what condition the cat is in that the superposition is lost, and the cat must be either alive or dead.

The second interpretation of quantum theory is the many-worlds (or multiverse) theory. It holds that as soon as a potential exists for any object to be in any state, the universe of that object transmutes into a series of parallel universes equal to the number of possible states in which the object can exist, with each universe containing a unique single possible state of that object. Furthermore, there is a mechanism for interaction between these universes that somehow permits all states to be accessible in some way and for all possible states to be affected in some manner. Stephen Hawking and the late Richard Feynman are among the scientists who have expressed a preference for the many-worlds theory.

Quantum Theory's Influence

Although scientists throughout the past century have balked at the implications of quantum theory - Planck and Einstein among them - the theory's principles have repeatedly been supported by experimentation, even when the scientists were trying to disprove them. Quantum theory and Einstein's theory of relativity form the basis for modern physics. The principles of quantum physics are being applied in an increasing number of areas, including quantum optics, quantum chemistry, quantum computing, and quantum cryptography.

This was last updated in June 2006.
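For reference, Planck's relation (not written out in the article above) gives the energy of a single quantum in terms of the radiation's frequency:

$$E = h\nu, \qquad h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s},$$

so the "exact multiples of a basic minimum value" mentioned above are the discrete energy levels $E_n = n h\nu$.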
{"url":"http://whatis.techtarget.com/definition/quantum-theory","timestamp":"2014-04-18T10:44:36Z","content_type":null,"content_length":"65621","record_id":"<urn:uuid:5125807e-dccf-4113-896c-993986e2d0c7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
by members of Uniwersytet Slaski Katowice, Poland (Econophysics, University of Silesia)

These are publications listed in RePEc written by members of the above institution who are registered with the RePEc Author Service. Thus this compiles the works of all those currently affiliated with this institution, not those affiliated at the time of publication. List of registered members | Register yourself. This page is updated in the first days of each month.

Working papers

Undated material is listed at the end.

1. Edward W. Piotrowski & Jan Sladkowski, 2008. "A model of subjective supply-demand: the maximum Boltzmann/Shannon entropy solution," Papers 0811.2084, arXiv.org.
{"url":"http://ideas.repec.org/d/epslapl.html","timestamp":"2014-04-18T11:50:44Z","content_type":null,"content_length":"32724","record_id":"<urn:uuid:edcdbf20-f1cd-4d59-b5ce-d6a614b465ee>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Is a random subset of the real numbers non-measurable? Is the set of measurable sets measurable?

One might say, "a random subset of $\mathbb{R}$ is not Lebesgue measurable" without really thinking about it. But if we unpack the standard definitions of all those terms (and work in ZFC), it's not so clear.

Let $\Sigma \subset 2^\mathbb{R}$ be the sigma-algebra of all Lebesgue measurable sets. Give $2^\mathbb{R}$ the product measure. (It's a product of continuum many copies of the two-point set.) We want to say that $\Sigma$ is a null set in $2^\mathbb{R}$...but is $\Sigma$ even measurable?

Laci Babai posed this question casually several years ago, and no one present knew how to go about it, but it might be easy for a set theorist.

Also, a related question: Think of $2^\mathbb{R}$ as a vector space over the field with two elements and $\Sigma$ as a subspace. (Addition is xor, that is, symmetric set difference.) What is $\dim\left(2^\mathbb{R}/\Sigma\right)$?

It's not hard to see that $\dim\left(2^\mathbb{R}/\Sigma\right)$ is at least countable, so if $\Sigma$ were measurable, it would be a null set. But that's as far as I made it.

Allow me to be the wiseguy that brings it up: in a model of ZF without the axiom of choice, in which all sets of real numbers are measurable, the answer is: Yes, $\Sigma$ is measurable and the dimension of this vector space is $1$. :-) – Asaf Karagila Jul 16 '12 at 20:30

Relevant: terrytao.wordpress.com/2008/10/14/… – Qiaochu Yuan Jul 16 '12 at 20:44

@Asaf: No, in that case the dimension of the vector space is $0$. – Ricky Demer Jul 16 '12 at 20:56

Right, thanks Ricky. I keep confusing those finite numbers! :-P – Asaf Karagila Jul 16 '12 at 21:02

...because the only subset of $\Sigma$ which is measurable with respect to the product sigma-algebra is the empty set, and the only superset of $\Sigma$ which is measurable in this sense is the whole of $2^\mathbb{R}$. This is because measurability of a subset of $2^\mathbb{R}$ can only depend on sampling at a countable set of points in $\mathbb{R}$. Determining whether a set $S$ is in $\Sigma$ requires sampling it at uncountably many points. – George Lowther Jul 16 '12 at 21:17

4 Answers

The answer to your second question (assuming the axiom of choice, to dodge Asaf's comment) is that $2^{\mathbb R}/\Sigma$ has dimension $2^{\mathfrak c}$, where $\mathfrak c=2^{\aleph_0}$ is the cardinality of the continuum. The main ingredient of the proof is a partition of $[0,1]$ into $\mathfrak c$ subsets, each of which intersects every uncountable closed subset of $[0,1]$.

To get such a partition, first note that there are only $\mathfrak c$ closed subsets of $[0,1]$, so you can list them in a sequence of length (the initial ordinal of cardinality) $\mathfrak c$ in such a way that each closed set is listed $\mathfrak c$ times. Second, recall that every uncountable closed subset of $[0,1]$ has cardinality $\mathfrak c$. Finally, do a transfinite inductive construction of $\mathfrak c$ sets in $\mathfrak c$ steps as follows: At any step, if the closed set at that position in your list is $C$ and if this is its $\alpha$-th occurrence in the list, then put an element of $C$ into the $\alpha$-th of the sets under construction, being careful to use an element of $C$ that hasn't already been put into one of the sets under construction.
You can be this careful, because fewer than $\mathfrak c$ points have been put into any of your sets in the fewer than $\mathfrak c$ preceding stages, while $C$ has $\mathfrak c$ points to choose from. At the end, if some points in $[0,1]$ remain unassigned to any of the sets under construction, put them into some of these sets arbitrarily, to get a partition of $[0,1]$.

Once you have this partition, notice that every piece has outer measure 1, because otherwise it would be disjoint from some closed set that has positive measure and is therefore uncountable. This implies that, among the $2^{\mathfrak c}$ sets that you can form as unions of your partition's pieces, only $\varnothing$ and $[0,1]$ can be measurable. In particular, no finite, nonempty, symmetric difference of these pieces is measurable. That is, they represent linearly independent elements of $2^{\mathbb R}/\Sigma$.

Nice argument. Thanks for the answer! – Gene S. Kopp Jul 16 '12 at 23:47

$\Sigma$ is clearly not a measurable set in the product sigma-algebra; moreover it is so non-measurable that every measurable set containing it is the whole set (and any measurable set contained in it is trivial).

Proof: Consider the set of sets of sets of real numbers consisting of all sets of sets of real numbers $S$ where whether a set of real numbers $T$ is contained in $S$ is determined by $T \cap U$ for some countable set of real numbers $U$. That is, if $T' \cap U=T \cap U$, then $T' \in S$ if and only if $T \in S$. Less formally, this set of sets of sets of real numbers consists of all properties that can be checked by only looking at countably many points.

This is clearly a sigma-algebra. It contains the product sigma-algebra because it contains sets of the form $\{T\in 2^{\mathbb R}| x\in T\}$ for each real number $x$. And it clearly does not contain the set of measurable sets, nor any proper superset of the set of measurable sets, nor any nontrivial set contained in the set of measurable sets, because one can add or remove arbitrary countable sets from a measurable/non-measurable set and preserve its measurability/non-measurability, and both measurable and non-measurable sets exist.

+1. Ah, of course. The product sigma-algebra contains only sets $\mathcal{S}$ of sets of real numbers in which membership $S \in \mathcal{S}$ is determined by membership $s \in S$ for countably many real numbers $s$. Thanks! – Gene S. Kopp Jul 16 '12 at 23:46

The problem of choosing subsets at random has been studied in a rather different context in mathematical economics. Suppose we choose a subset of $[0,1]$ by independently throwing a fair coin for each number. Heuristically, such a set should have measure $1/2$. For what we do is randomly choose an indicator function with pointwise expectation $1/2$. By some intuitive appeal to a law of large numbers, the sample realizations should have the same expectation. This kind of reasoning is widely used in economics. A large population is modeled by a continuum, and even when each person faces individual uncertainty, there should be no aggregate uncertainty.

For the reason given by Will Sawin, the naive approach doesn't work quite well. For Lebesgue measure, some intuition comes from Lusin's theorem, to the effect that every measurable function is continuous on a "large" subset. Continuity is a condition to the effect that the value at a point is closely related to the value at nearby points.
If you choose independently at each value, you wouldn't expect to get a function continuous on a large set. The general tradeoff between independence and measurable sample realizations is strongly expressed in the following result of Yeneng Sun:

Proposition: Let $(I,\mathcal{I},\mu)$ and $(X,\mathcal{X},\nu)$ be probability spaces with (complete) product probability space $(I\times X,\mathcal{I}\otimes\mathcal{X},\mu\otimes\nu)$ and $f$ be a jointly measurable function from $I\times X$ to $\mathbb{R}$ such that for $\mu\otimes\mu$-almost all $(i,j)$ the functions $f(i,\cdot)$ and $f(j,\cdot)$ are independent. Then for $\mu$-almost all $i$, the function $f(i,\cdot)$ is constant.

Note that the independence condition in this result is quite weak. Sun calls it almost sure pairwise independence. But an important discovery by Sun was that if joint measurability and almost sure pairwise independence were compatible, one could obtain an exact law of large numbers for a continuum of random variables by an application of Fubini's theorem. In particular, such a law of large numbers holds for extensions of the product spaces that allow for the conclusion of Fubini's theorem to hold and still allow for nontrivial (a.s. pairwise) independent processes. He called such extensions rich Fubini extensions and gave one example of such a product space: the Loeb product of two hyperfinite Loeb spaces. So one can get natural random sets for some spaces. The reference is: The exact law of large numbers via Fubini extension and characterization of insurable risks (2006).

A systematic study of rich Fubini extensions was done by Konrad Podczeck in the paper On existence of rich Fubini extensions (2010), in which he has essentially shown that one can choose random subsets of a probability space if and only if the probability space has the following property, which he called super-atomlessness (and which is known under a number of other names): For any subset $A$ with positive measure, the measure algebra of the trace on $A$ does not coincide with the measure algebra of a countably generated space.

Lebesgue measure on the unit interval does not satisfy this condition, but there exist extensions of Lebesgue measure that are super-atomless.

Conclusion: One cannot obtain random Lebesgue measurable sets in a sensible way by choosing elements independently, but one can choose random sets this way for a suitable extension of Lebesgue measure.

See this amazing paper: Fremlin, David H.; Talagrand, Michel. A decomposition theorem for additive set-functions, with applications to Pettis integrals and ergodic means. Math. Z. 168 (1979), no. 2, 117–142.
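A compact restatement of the obstruction used in the sigma-algebra answer above: every set $\mathcal{S}$ in the product $\sigma$-algebra on $2^{\mathbb{R}}$ is determined by countably many coordinates, i.e.

$$\exists\, U \subseteq \mathbb{R} \ \text{countable}: \quad T \cap U = T' \cap U \;\Longrightarrow\; \big(T \in \mathcal{S} \iff T' \in \mathcal{S}\big).$$

Since adding or removing countably many points never changes whether a set of reals is Lebesgue measurable, no countable $U$ can decide membership in $\Sigma$, so $\Sigma$ cannot belong to the product $\sigma$-algebra.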
{"url":"http://mathoverflow.net/questions/102386/is-a-random-subset-of-the-real-numbers-non-measurable-is-the-set-of-measurable/102388","timestamp":"2014-04-20T01:07:13Z","content_type":null,"content_length":"78712","record_id":"<urn:uuid:62c83ab9-7406-4121-9708-8556c8cf4dec>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
16.1. Calculating the shortest-path tree for an area (RFC 1583, Connected: An Internet Encyclopedia)

This calculation yields the set of intra-area routes associated with an area (called hereafter Area A). A router calculates the shortest-path tree using itself as the root.[21] The formation of the shortest path tree is done here in two stages. In the first stage, only links between routers and transit networks are considered. Using the Dijkstra algorithm, a tree is formed from this subset of the link state database. In the second stage, leaves are added to the tree by considering the links to stub networks.

The procedure will be explained using the graph terminology that was introduced in Section 2. The area's link state database is represented as a directed graph. The graph's vertices are routers, transit networks and stub networks. The first stage of the procedure concerns only the transit vertices (routers and transit networks) and their connecting links. Throughout the shortest path calculation, the following data is also associated with each transit vertex:

Vertex (node) ID
A 32-bit number uniquely identifying the vertex. For router vertices this is the router's OSPF Router ID. For network vertices, this is the IP address of the network's Designated Router.

A link state advertisement
Each transit vertex has an associated link state advertisement. For router vertices, this is a router links advertisement. For transit networks, this is a network links advertisement (which is actually originated by the network's Designated Router). In any case, the advertisement's Link State ID is always equal to the above Vertex ID.

List of next hops
The list of next hops for the current set of shortest paths from the root to this vertex. There can be multiple shortest paths due to the equal-cost multipath capability. Each next hop indicates the outgoing router interface to use when forwarding traffic to the destination. On multi-access networks, the next hop also includes the IP address of the next router (if any) in the path towards the destination.

Distance from root
The link state cost of the current set of shortest paths from the root to the vertex. The link state cost of a path is calculated as the sum of the costs of the path's constituent links (as advertised in router links and network links advertisements). One path is said to be "shorter" than another if it has a smaller link state cost.

The first stage of the procedure (i.e., the Dijkstra algorithm) can now be summarized as follows. At each iteration of the algorithm, there is a list of candidate vertices. Paths from the root to these vertices have been found, but not necessarily the shortest ones. However, the paths to the candidate vertex that is closest to the root are guaranteed to be shortest; this vertex is added to the shortest-path tree, removed from the candidate list, and its adjacent vertices are examined for possible addition to/modification of the candidate list. The algorithm then iterates again. It terminates when the candidate list becomes empty. The following steps describe the algorithm in detail. Remember that we are computing the shortest path tree for Area A.
All references to link state database lookup below are from Area A's database. 1. Initialize the algorithm's data structures. Clear the list of candidate vertices. Initialize the shortest-path tree to only the root (which is the router doing the calculation). Set Area A's TransitCapability to FALSE. 2. Call the vertex just added to the tree vertex V. Examine the link state advertisement associated with vertex V. This is a lookup in the Area A's link state database based on the Vertex ID. If this is a router links advertisement, and bit V of the router links advertisement (see Section A.4.2) is set, set Area A's TransitCapability to TRUE. In any case, each link described by the advertisement gives the cost to an adjacent vertex. For each described link, (say it joins vertex V to vertex W): 1. If this is a link to a stub network, examine the next link in V's advertisement. Links to stub networks will be considered in the second stage of the shortest path calculation. 2. Otherwise, W is a transit vertex (router or transit network). Look up the vertex W's link state advertisement (router links or network links) in Area A's link state database. If the advertisement does not exist, or its LS age is equal to MaxAge, or it does not have a link back to vertex V, examine the next link in V's advertisement.[22] 3. If vertex W is already on the shortest-path tree, examine the next link in the advertisement. 4. Calculate the link state cost D of the resulting path from the root to vertex W. D is equal to the sum of the link state cost of the (already calculated) shortest path to vertex V and the advertised cost of the link between vertices V and W. If D is: ☆ Greater than the value that already appears for vertex W on the candidate list, then examine the next link. ☆ Equal to the value that appears for vertex W on the candidate list, calculate the set of next hops that result from using the advertised link. Input to this calculation is the destination (W), and its parent (V). This calculation is shown in Section 16.1.1. This set of hops should be added to the next hop values that appear for W on the candidate list. ☆ Less than the value that appears for vertex W on the candidate list, or if W does not yet appear on the candidate list, then set the entry for W on the candidate list to indicate a distance of D from the root. Also calculate the list of next hops that result from using the advertised link, setting the next hop values for W accordingly. The next hop calculation is described in Section 16.1.1; it takes as input the destination (W) and its parent (V). 3. If at this step the candidate list is empty, the shortest- path tree (of transit vertices) has been completely built and this stage of the procedure terminates. Otherwise, choose the vertex belonging to the candidate list that is closest to the root, and add it to the shortest-path tree (removing it from the candidate list in the process). Note that when there is a choice of vertices closest to the root, network vertices must be chosen before router vertices in order to necessarily find all equal-cost paths. This is consistent with the tie-breakers that were introduced in the modified Dijkstra algorithm used by OSPF's Multicast routing extensions (MOSPF). 4. Possibly modify the routing table. For those routing table entries modified, the associated area will be set to Area A, the path type will be set to intra-area, and the cost will be set to the newly discovered shortest path's calculated distance. 
If the newly added vertex is an area border router (call it ABR), a routing table entry is added whose destination type is "area border router". The Options field found in the associated router links advertisement is copied into the routing table entry's Optional capabilities field. If in addition ABR is the endpoint of one of the calculating router's configured virtual links that uses Area A as its Transit area: the virtual link is declared up, the IP address of the virtual interface is set to the IP address of the outgoing interface calculated above for ABR, and the virtual neighbor's IP address is set to the ABR interface address (contained in ABR's router links advertisement) that points back to the root of the shortest-path tree; equivalently, this is the interface that points back to ABR's parent vertex on the shortest-path tree (similar to the calculation in Section 16.1.1). If the newly added vertex is an AS boundary router, the routing table entry of type "AS boundary router" for the destination is located. Since routers can belong to more than one area, it is possible that several sets of intra- area paths exist to the AS boundary router, each set using a different area. However, the AS boundary router's routing table entry must indicate a set of paths which utilize a single area. The area leading to the routing table entry is selected as follows: The area providing the shortest path is always chosen; if more than one area provides paths with the same minimum cost, the area with the largest OSPF Area ID (when considered as an unsigned 32-bit integer) is chosen. Note that whenever an AS boundary router's routing table entry is added/modified, the Options found in the associated router links advertisement is copied into the routing table entry's Optional capabilities field. If the newly added vertex is a transit network, the routing table entry for the network is located. The entry's Destination ID is the IP network number, which can be obtained by masking the Vertex ID (Link State ID) with its associated subnet mask (found in the body of the associated network links advertisement). If the routing table entry already exists (i.e., there is already an intra-area route to the destination installed in the routing table), multiple vertices have mapped to the same IP network. For example, this can occur when a new Designated Router is being established. In this case, the current routing table entry should be overwritten if and only if the newly found path is just as short and the current routing table entry's Link State Origin has a smaller Link State ID than the newly added vertex' link state advertisement. If there is no routing table entry for the network (the usual case), a routing table entry for the IP network should be added. The routing table entry's Link State Origin should be set to the newly added vertex' link state advertisement. 5. Iterate the algorithm by returning to Step 2. The stub networks are added to the tree in the procedure's second stage. In this stage, all router vertices are again examined. Those that have been determined to be unreachable in the above first phase are discarded. For each reachable router vertex (call it V), the associated router links advertisement is found in the link state database. Each stub network link appearing in the advertisement is then examined, and the following steps are executed: 1. Calculate the distance D of stub network from the root. 
D is equal to the distance from the root to the router vertex (calculated in stage 1), plus the stub network link's advertised cost. Compare this distance to the current best cost to the stub network. This is done by looking up the stub network's current routing table entry. If the calculated distance D is larger, go on to examine the next stub network link in the advertisement.

2. If this step is reached, the stub network's routing table entry must be updated. Calculate the set of next hops that would result from using the stub network link. This calculation is shown in Section 16.1.1; input to this calculation is the destination (the stub network) and the parent vertex (the router vertex). If the distance D is the same as the current routing table cost, simply add this set of next hops to the routing table entry's list of next hops. In this case, the routing table already has a Link State Origin. If this Link State Origin is a router links advertisement whose Link State ID is smaller than V's Router ID, reset the Link State Origin to V's router links advertisement.

Otherwise D is smaller than the routing table cost. Overwrite the current routing table entry by setting the routing table entry's cost to D, and by setting the entry's list of next hops to the newly calculated set. Set the routing table entry's Link State Origin to V's router links advertisement. Then go on to examine the next stub network link.

For all routing table entries added/modified in the second stage, the associated area will be set to Area A and the path type will be set to intra-area.

When the list of reachable router links is exhausted, the second stage is completed. At this time, all intra-area routes associated with Area A have been determined.

The specification does not require that the above two stage method be used to calculate the shortest path tree. However, if another algorithm is used, an identical tree must be produced. For this reason, it is important to note that links between transit vertices must be bidirectional in order to be included in the above tree. It should also be mentioned that more efficient algorithms exist for calculating the tree; for example, the incremental SPF algorithm described in [BBN].
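The two-stage method above maps directly onto a textbook Dijkstra implementation. The sketch below only illustrates the control flow and is not RFC-conformant code: it abstracts links as plain cost maps and omits OSPF specifics such as LSA lookup and validation, the network-before-router tie-breaking rule, next-hop calculation, and equal-cost multipath bookkeeping.

```python
import heapq

def spf(root, transit_links, stub_links):
    """Two-stage shortest-path calculation in the spirit of Section 16.1.

    transit_links: {vertex: [(neighbor_vertex, cost), ...]}  (routers/transit nets)
    stub_links:    {vertex: [(stub_network, cost), ...]}
    """
    # Stage 1: Dijkstra over transit vertices only.
    dist = {root: 0}
    candidates = [(0, root)]          # the candidate list, kept as a min-heap
    tree = set()                      # vertices already on the shortest-path tree
    while candidates:
        d, v = heapq.heappop(candidates)
        if v in tree:
            continue
        tree.add(v)                   # closest candidate joins the tree
        for w, cost in transit_links.get(v, []):
            nd = d + cost
            if nd < dist.get(w, float("inf")):
                dist[w] = nd          # shorter path found: (re)queue w
                heapq.heappush(candidates, (nd, w))

    # Stage 2: attach stub networks as leaves of the tree.
    stub_dist = {}
    for v in tree:
        for net, cost in stub_links.get(v, []):
            d = dist[v] + cost
            if d < stub_dist.get(net, float("inf")):
                stub_dist[net] = d
    return dist, stub_dist
```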
{"url":"http://www.cotse.com/CIE/RFC/1583/87.htm","timestamp":"2014-04-16T16:25:10Z","content_type":null,"content_length":"46362","record_id":"<urn:uuid:806458bc-3808-45fc-aa8d-473338809684>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
There really is no difference between men and women's math abilities

There's a longstanding myth of a gender gap between boys' and girls' math performance, suggesting some basic biological difference in how the two genders approach math. It's deeply controversial and widely discredited. And now, a new study has completely debunked it.

Until now, there was maybe a sliver of statistical data to support the existence of this gender gap — nothing remotely convincing, mind you, but just enough that the idea couldn't be entirely dismissed out of hand. While most who studied the issue pointed to cultural or social reasons why girls might lag behind boys in math performance, there was still room for biological theories to be entertained.

The best-known of these is the "greater male variability hypothesis", which basically says ability among males varies more widely than that of females, which means you'll see more males at the extreme ends of the spectrum, good and bad. Then-Harvard president Larry Summers infamously put forward this idea back in 2005 as a way to explain the lack of great female mathematicians, and this was one of about a dozen different factors that ultimately cost him his job.

Now, researchers Janet Mertz of the University of Wisconsin-Madison and Jonathan Kane of the University of Wisconsin-Whitewater have performed the most comprehensive exploration yet of math performance. They took in data from 86 different countries, many of which had not previously kept reliable records of math performance, and so their addition allowed for much stronger cross-cultural analysis.

So what did they find? First, in many countries, there's no gender gap at all, both at the average and very high levels of performance. Some countries, including the United States, do show a gender gap, but that gap has decreased substantially over the last few decades, and some test scores suggest American girls have already caught up to their male counterparts.

The researchers looked at one measure of young people with extremely high math abilities - namely, those who scored a 700 or higher on the math section of the SAT before the age of 13. In 1970, boys in this category outnumbered girls 13 to 1, while today the ratio is just 3 to 1 and still falling. Similarly, while just 5% of math Ph.D.s in the United States in the 1960s were given to women, today that figure stands at 30%.

All of these findings argue strongly that the apparent gender gaps are really just disparities in education and cultural expectations, not evidence of some deeper biological mechanism. If there really is a "math gene" or something like it that boys have and girls don't, we simply wouldn't see such vast changes over time or indeed in different countries, many of which show no gender gap at all.

And what about the greater male variability hypothesis? Well, there's a bit of evidence to support this - provided you blatantly cherry-pick certain countries. Kane and Mertz compared the variability of male and female math scores in different countries and found that the variability ratio in Taiwan is 1.31, meaning boys there do have substantially more variability than girls. However, the ratio in Morocco is 1.00, meaning there is absolutely no difference in the genders' variability. You can go even further by looking at Tunisia, which has a ratio of 0.91, which means it's actually the girls there who show greater variability.
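For the curious, a "variability ratio" of this kind is simply the spread of one group's scores divided by the spread of the other's. Here is a toy illustration in Python; the scores are invented, not data from the Kane and Mertz study, and the study's precise definition (variances vs. standard deviations) may differ.

```python
import statistics

# Invented example scores -- not data from the study.
boys  = [48, 52, 61, 39, 55, 70, 45, 58]
girls = [50, 53, 57, 47, 55, 60, 49, 54]

ratio = statistics.variance(boys) / statistics.variance(girls)
print(f"variance ratio (male/female): {ratio:.2f}")
# A ratio near 1.0 (as in Morocco) means both groups are about equally
# spread out; above 1.0 (Taiwan) the boys' scores vary more, and below
# 1.0 (Tunisia) the girls' scores vary more.
```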
For this hypothesis to be correct, it would have to hold true for all countries — the fact that the ratios vary so much means it's just the result of different cultural factors, or it could simply be random statistical noise. Mertz and Kane were also able to debunk a couple other hypotheses about math performance, specifically the "single-gender classroom hypothesis" and "Muslim culture hypothesis", both of which were argued for by Freakonomics author Steven Levitt. The idea here is that the gender inequity found in many Muslim countries actually benefits girls, perhaps because they are generally educated in gender-separated classrooms and that helps somehow. It's an interesting, counter-intuitive idea, but it also appears to be completely wrong. The authors say that, upon close examination of the data, girls in these single-gender classrooms still scored quite poorly. The boys in these countries, such as Bahrain and Oman, had scored even worse, but Kane suggests that's because many attend religious schools with little emphasis on mathematics. Also, low-performing girls are often pressured to drop out of school and so don't appear in the statistics, which falsely inflates the girls' overall performance. The point, says Kane, is that these differing scores don't point to benefits of gender-separated classrooms or speak to features of Muslim culture as a whole - rather, they're due to social factors in play in a few countries, and the single-gender classrooms are just a confounding variable. Indeed, Mertz and Kane were able to demonstrate pretty much the exact opposite of those hypotheses: as a general rule, high gender equality doesn't just remove the gender gap, it also improves test scores overall. In particular, countries where women have high participation in the labor force, and command salaries comparable to those of their male counterparts, generally have the highest math scores overall. The researchers comment on this finding: Kane: "We found that boys — as well as girls — tend to do better in math when raised in countries where females have better equality, and that's new and important. It makes sense that when women are well-educated and earn a good income, the math scores of their children of both genders benefit." Mertz: "Many folks believe gender equity is a win-lose zero-sum game: If females are given more, males end up with less. Our results indicate that, at least for math achievement, gender equity is a win-win situation." As for how to close the gap even further and generally increase math scores, Mertz says the study argues strongly against the proposal to create single-gender classrooms. Instead, the researchers point to fairly common sense solutions: increase the number of math teachers in middle and high schools, decrease the number of children currently living in poverty, and take greater steps to reduce gender inequity. Those may all seem fairly straightforward, but that's pretty much exactly the point - this isn't about tricking our brains or creating some perfect conditions to unlock children's hidden mathematical aptitude. As Mertz explains, this is all about culture, not biology: "None of our findings suggest that an innate biological difference between the sexes is the primary reason for a gender gap in math performance at any level. Rather, these major international studies strongly suggest that the math-gender gap, where it occurs, is due to sociocultural factors that differ among countries, and that these factors can be changed." 
Read the original paper at the American Mathematical Society. Top image: D Sharon Pruitt/Pink Sherbet Photography, via Flickr.
{"url":"http://io9.com/5867401/there-really-is-no-difference-between-men-and-womens-math-abilities?tag=debunkery","timestamp":"2014-04-17T21:45:29Z","content_type":null,"content_length":"91791","record_id":"<urn:uuid:aa094887-0dfe-41fc-a235-c399c19ccf2b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: August 2009

Re: Lisp Macros in Mathematica (Re: If Scheme is so good)

• To: mathgroup at smc.vnet.net
• Subject: [mg102545] Re: Lisp Macros in Mathematica (Re: If Scheme is so good)
• From: David Bailey <dave at removedbailey.co.uk>
• Date: Thu, 13 Aug 2009 03:20:59 -0400 (EDT)

David Bakin wrote:
> It is very easy to make what Lisp calls "special forms". You use HoldAll or
> related attributes to create your own "special forms", then manipulate the
> arguments in their full form (aka S-expressions), evaluating things when you
> wish.
> As far as I am aware you do not get reader syntax like quotes and
> quasiquotes that make Lisp macros easy to write. But depending on what
> you're trying to do you may not need them. With Mathematica you get full
> pattern matching on arguments (not just destructuring) and rule-based
> programming, and everything else that Mathematica provides. You may find
> these features are even more effective than quoting and quasiquoting in
> writing macros.
> BTW, Mathematica does not need a separate defmacro call that basically means
> "define this function such that you don't evaluate any arguments, but when I
> return the result, you evaluate it". That is because 1) To get the first
> part you add the HoldAll attribute to your function name, and 2) Mathematica
> automatically evaluates the result returned by any/all functions, until
> there's nothing more to evaluate (which is an important difference between
> the Lisp REPL and the Mathematica REPL).

If I understand what you mean, then surely this is missing the point. Let's take a specific example. Suppose you have a vector of data representing conditions in a reaction vessel, and you define functions such as:

pH[vec_] := vec[[17]]

This is a function, and could be used to access the relevant component of a vector, but not to change that value. However, if you code in that style, you will sacrifice a lot of performance for the sake of clarity.

One way to solve that problem is to define another operator for use in defining functions instead of SetDelayed (:=). This lets you make substitutions on the held form of the RHS before the DownValue is created - so you code pH[vec] but execute vec[[17]]. Alternatively, you could use $Pre or $PreRead, but since these do not work on code loaded by Get or Needs, I don't find this approach general enough.

As I said before, I think a built-in macro mechanism would be a valuable addition to Mathematica.

David Bailey
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Aug/msg00373.html","timestamp":"2014-04-18T03:08:15Z","content_type":null,"content_length":"27692","record_id":"<urn:uuid:c584a4d5-d5b8-492e-a5f9-2714ec5c4172>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by Corin on Friday, August 24, 2007 at 2:16pm.

How would I go about finding the derivative of f(x) = ln(x)(7x-2)^3?

first = ln(x)
second = (7x-2)^3

Would I do first times the derivative of the second + second times the derivative of the first? If so, what is the derivative of ln x? I'm confused. Thanks for your help.

• Calculus - Count Iblis, Friday, August 24, 2007 at 2:38pm

"Would I do first times the derivative of the second + second times the derivative of the first?"

That's right. But whenever you are not sure about such a rule you should derive it yourself from first principles. Otherwise you are just going to use a rule that you don't understand. The derivative of ln(x) is 1/x.

• Calculus - ~christina~, Friday, August 24, 2007 at 3:02pm

f'(ln x) = 1/x. If I remember correctly you're going to have to derive the second one again (chain rule):

u = 7x-2, du = 7
d/dx u^3 = 3u^2 du

and plugging in the found values... 3(7x-2)^2 (7) = 21(7x-2)^2 --- (2nd part)

For the first part it should just be d/dx ln(x) = 1/x if I'm not incorrect (my text uses 2 functions instead of using ln so I'm not 100% sure), so putting it together assuming my thinking is correct, since the 2nd was already differentiated, by the product rule if I remember correctly:

f'(x) = (1/x) + (21(7x-2)^2)

• Calculus - ~christina~, Friday, August 24, 2007 at 3:43pm

I forgot an important part: the product rule (derivative of the first * second function + first * derivative of the second). Doing this again, correcting that error:

f(x) = ln(x)(7x-2)^3

Product rule first, then the chain rule for (7x-2)^3:

(1/x)(7x-2)^3 + (ln x) 3(7x-2)^2

Chain rule for the 2nd part: u = 7x-2, du = 7. You could replace the internal expression 7x-2 with u or not, but if you do, plug the values of u and du back in:

(1/x)(7x-2)^3 + (ln x) 3(7x-2)^2 (7) = (1/x)(7x-2)^3 + 21(ln x)(7x-2)^2

(I didn't go and simplify though)
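For the record, the clean computation (matching the corrected reply above) is

$$f(x) = \ln(x)\,(7x-2)^3,$$
$$f'(x) = \frac{1}{x}(7x-2)^3 + \ln(x)\cdot 3(7x-2)^2\cdot 7 = \frac{(7x-2)^3}{x} + 21\,\ln(x)\,(7x-2)^2.$$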
{"url":"http://www.jiskha.com/display.cgi?id=1187979396","timestamp":"2014-04-20T22:47:56Z","content_type":null,"content_length":"10852","record_id":"<urn:uuid:2ba80115-5ddc-4178-9832-44cf2e534660>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparison between 15 different cities…

September 6, 2009. By the R user...

The other day I was thinking about how expensive it will be to live in London (as I'm going to live in London for almost one year), and also thinking about how expensive are the cities where some of my best friends live, and I got a report by UBS called "Prices and Earnings 2009". It's all about life costs and salaries in 100 cities around the world, so I started 'playing' with some of the data from 15 cities in that report and I decided to use R. I used some regressions and bar plots, and I thought it would be interesting to use real examples to make it even more interesting. If anybody needs the code to generate the graphics I posted here, just let me know, ok? Here is what I got...

Total expenditure in goods and services vs Salary per hour

In this first graph, we can think about the salary as a function of the total expenditure in goods and services: the salary tends to increase when the expenditure in goods and services increases, i.e. the more you spend in goods and services in your city, the more you have to earn. An easy way to think about this graphic is that the 'fair' salary for each point on the x axis (expenditure) should be the red line (a linear regression using these points). The cities above the red line are the ones that earn more money than they should, so the goods and services there are cheaper than they should be; on the other side, the ones below the red line earn less money than they should, given what they spend in goods and services in that city. An important thing to say is that the size of the bubbles in the graphics is the GDP for each city. Then, for example, Mexico City (my city) is expensive, but on the other side, London would be 'fair', talking about the expenditure in goods and services. Any comment about the other cities? (Click the graphic to enlarge.)

How much will I spend in food in London?

After getting the last graphic, I was wondering how much I would spend on food in London, so I did the same with the data available. It was the cost of a weighted basket of goods with 39 foodstuffs; to be more accurate, the monthly expenditure of an average Western family. Here is what I got: (Click the image to enlarge.)

It's almost the same as the last graphic, but here we have the net hourly pay in USD as a function of food prices, which, according to a linear regression, should increase as food prices do. Again, the bubbles on the line would be the cities with 'fair' food prices according to their salary. Here, for example, Mexico City is expensive when we think about food, given the net hourly income we have here, but, for example, London or Berlin would be cheap, because they earn more than they should given food prices in those cities, while Stockholm has 'fair' prices when we think about food. What do you think about the others?...

What about apartment rents?

After thinking about food and expenditure in goods and services, I also wanted to think about apartment rents. The data I got was the average cost of housing (excluding extremes) per month, which an apartment seeker would expect to pay on the free market at the time of the survey. The figures given are merely tentative values for average rent prices (monthly gross rents) for a majority of local households.
Here's the interesting graphic that I got: (Click the image to enlarge.)

Here we (again) can think about the salary as a function of apartment rents. The ones above the line are the 'cheap' ones (given their net hourly pay), the ones below are the 'expensive' ones, and the ones on the red line are the ones with a 'fair' price. Mexico City seems to be expensive, while London seems to have 'fair' prices. Any comment about New York?

What about going out for dinner?

I think this is such an important topic, whether you're going out for dinner with friends or going out with a special person... I think it's important to know which cities are expensive to go out for dinner in, don't you think? The data I used here refers to the price of an evening meal (three-course menu with starter, main course and dessert, without drinks) including service, in a good restaurant. Here is the graphic: (Click the image to enlarge.)

Here it's the same idea as in the last 3 graphics: we can think about the salary as a function of restaurant prices, and again (of course), we can see that when restaurant prices increase, we should expect the net hourly pay to increase as well. We can see that going out for dinner in cities like Mexico City, London or Buenos Aires is expensive, but, on the other side, someone who lives in New York, Montreal, Berlin or Toronto would find it cheaper, given how much they earn per hour in their cities.

Public transport, how much for a single ride?...

I also found some info about public transport, but I decided to focus on bus, tram and metro. What I did is a bar plot to compare prices in the 15 different cities I chose. Here's what I got: (Click the image to enlarge.)

At least talking about public transport, Mexico City is the cheapest, but what about London or Stockholm?...

An interesting graphic I found...

After analyzing the previous graphics, I started thinking about some other things to analyze. The first one was the relationship between the working hours per year and the net hourly pay in each city, and I found something really interesting: (Click the graphic to enlarge.)

Isn't it amazing?!!! What this graphic says is that the less we earn per hour, the more we work! I was surprised by this. Here we have the salary as a function of the working hours per year, and the ones below the red line are the cities that work more than what they earn, while the ones above the red line work less than what they earn. All kinds of comments accepted...

Have you ever thought about the working time required to buy something?...

Well, I found data about the working time required to buy an iPod and a Big Mac. Pretty sad in some cases. Here are the graphics... (Click the graphic to enlarge.)

We can see how different and hard it is to get an iPod nano in places like Warsaw, Bogotá, Mexico City or Buenos Aires, but what about London, Los Angeles or New York? Isn't it frustrating? On the other side, I got the same graphic for a Big Mac:
Have a great day :) for the author, please follow the link and comment on his blog: The power of R daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/comparison-between-15-different-cities-2/","timestamp":"2014-04-18T16:06:28Z","content_type":null,"content_length":"56438","record_id":"<urn:uuid:5acc6812-98da-4952-b724-52994f8cf645>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
January 9th 2007, 09:25 AM

Thanks for all the help on the previous subject; now I've got one on distributions.

In an examination it is known that the distribution of marks is N(52, 24).

a) What proportion of the marks will exceed 55?
b) What proportion of the marks will be less than 45?
c) If a grade A is awarded for the top 5% of marks, what is the minimum mark required in order to achieve a grade A?
d) The bottom 20% of the marks was graded as an F. What was the range of these marks?

Sorry for all the questions; I've had hundreds of these and I'm stuck on a few of them :(

January 12th 2007, 02:22 PM

You probably have a table of values for the standard normal distribution N(0,1). Now if X ~ N(52, 24) you can normalize via

Z = (X - 52)/sqrt(24)

(taking the 24 to be the variance), where Z is standard normal N(0,1). Now you look up the corresponding value in the table... Using this you should be able to solve a-d.
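As a worked example for part (a), assuming N(52, 24) means mean 52 and variance 24 (so standard deviation $\sqrt{24} \approx 4.90$):

$$P(X > 55) = P\!\left(Z > \frac{55 - 52}{\sqrt{24}}\right) = P(Z > 0.61) = 1 - \Phi(0.61) \approx 1 - 0.729 = 0.271,$$

so roughly 27% of marks exceed 55. The other parts follow the same pattern, with (c) and (d) run in reverse: read the $z$-value from the table first, then convert back with $x = 52 + z\sqrt{24}$.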
{"url":"http://mathhelpforum.com/statistics/9744-distribution-print.html","timestamp":"2014-04-21T06:48:02Z","content_type":null,"content_length":"5493","record_id":"<urn:uuid:71f12586-da36-4810-814b-5b12fc757dcd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there any way we can get the length of a number in binary, i.e. the number of binary digits needed for a given decimal value?

The number of binary digits is

int(log_base2(decimal)) + 1

which can be calculated by

int(log(decimal)/log(2)) + 1

There is some danger of under-representing exact powers of 2, e.g. log_base2(4) = 2, but possibly some calculation error would lead to 1.9999999999999999, which would then be rounded down to 1.

The simplest option to be safe is to add an extra char just in case, i.e. use

int(log(decimal)/log(2)) + 2

as the space to reserve.
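A quick way to see the hazard, shown here in Python for brevity since it has an exact bit_length() to compare against (the same floating-point issue applies to C++'s log()):

```python
import math

for n in [4, 1 << 52, (1 << 52) - 1]:
    estimated = int(math.log(n) / math.log(2)) + 1  # digit count via floating-point logs
    exact = n.bit_length()                          # exact digit count
    print(n, estimated, exact)
# If rounding error ever pushes log(n)/log(2) just below an integer,
# 'estimated' comes out one short -- hence the '+ 2' safety margin above.
```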
{"url":"http://www.cplusplus.com/forum/general/112457/","timestamp":"2014-04-18T13:31:46Z","content_type":null,"content_length":"10275","record_id":"<urn:uuid:67ba2983-f990-4bd2-9740-333e685039bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2005

Re: Re: how to have a blind factorization of a polynomial?

• To: mathgroup at smc.vnet.net
• Subject: [mg54158] Re: [mg54123] Re: how to have a blind factorization of a polynomial?
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Sat, 12 Feb 2005 01:57:07 -0500 (EST)
• References: <cud789$2t3$1@smc.vnet.net> <cuf453$gfb$1@smc.vnet.net> <200502110833.DAA09170@smc.vnet.net>

foice wrote:
>On Thu, 10 Feb 2005 07:57:23 +0000 (UTC), "Jens-Peer Kuska"
><kuska at informatik.uni-leipzig.de> wrote:
>>Sqrt[x + y] /. a_ + x :> x*(a/x + 1)
>Thanks for your help. It probably is a viable method, but it is a little less than you would expect from a world-wide-famous software like Mathematica.
>In this way you can choose the form of the output, but you can also make mistakes, typing something like
>Sqrt[x^3 + y^2] /. a_ + x :> x^3*(a/x + 1)
>and Mathematica will not prevent you from the error, giving an output in that form.
>Is Mathematica so feature-poor as to allow only a "requested form" factorization?
>I can't believe it.
>Why isn't there something like Collect, but that can also use negative powers to collect?
>i.e. Collect[x+y,x,Blind] giving x(1+y/x)
>Or better, at least for my task, a Mathematica command converting a function f(x,y) into a
>function f(x,y/x) (if possible)
>Everything I wrote is always In My Humble Opinion (IMHO);
>I probably wrote it in a hurry, so excuse me if I was brusque.

This is not at all well specified. I don't think you can reasonably expect Mathematica to figure out what you have not clearly requested. (Then there is the question of whether this, once specified, should be built in or programmed by the user. I would guess the latter, since it looks to be fairly obscure.)

What sort of input do you generally have in mind? Laurent polynomials possibly inside radicals? Something more general? What would you like done with your example Sqrt[x^3 + y^2]? Would you like Sqrt[x^2*(x + y^2/x^2)]? That's not hard to achieve.

PullOutPowers[expr_,x_] /; !SymbolQ[Head[expr]] ||
  !MemberQ[Attributes[Evaluate[Head[expr]]],NumericFunction] := expr

PullOutPowers[expr_,x_] := Module[
  {vars=Complement[Variables[expr],{x}], e2, head=Head[expr]},
  e2 = expr /. Thread[vars->x*vars];
  If[head===Plus || head===Times,
    Factor[e2] /. Thread[vars->vars/x],
    Map[Factor,e2] /. Thread[vars->vars/x]]
  ]

InputForm[PullOutPowers[Sqrt[x^3 + y^2], x]]

Out[5]//InputForm= Sqrt[x^2*(x + y^2/x^2)]

Given that you apparently want to consider variables as weighted with respect to one another, you might also try working with Series expansions, or some of the manipulation tactics presented in the section "Examples of polynomial manipulation".

Daniel Lichtblau
Wolfram Research
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Feb/msg00326.html","timestamp":"2014-04-17T00:58:26Z","content_type":null,"content_length":"37563","record_id":"<urn:uuid:f0a8c731-a917-4753-a349-62121431dc9d>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Networks Laboratory

Computer Networks Laboratory is working in four directions within Research Projects and Top-Down Education Course-ware Development Projects:

1. K. Naik. Automatic hardware synthesis of multimedia synchronizers from high-level specifications. IEICE Trans. on Information and Systems, E79-D(6):160--168, July 1996.

In this paper, we show that by suitably selecting a notation to construct synchronization requirement specifications (SRS) for multimedia presentation we can express the timing characteristics at an abstract level, verify the specification, and obtain a hardware implementation through a sequence of transformations of the specification. First, we introduce the notion of a well-formed SRS and its hardware model. Second, we model an SRS as a timed Petri net and interpret the transitions of the net as hardware signals. To obtain logic functions from the SRS, we simplify the net and obtain a signal transition graph satisfying the unique state coding property. Finally, we show how to obtain a logic-level design of synchronizers.

2. V. Varshavsky, V. Marakhovsky, and V. Smolensky. Design of self-timed non-autonomous devices specified by the model of finite automaton. IEEE Design and Test of Computers, Vol. 12(1):14--23, 1995.

A procedure for designing a self-timed device defined by the model of a finite automaton is suggested. In accordance with the chosen standard automaton implementation structure, one derives from the automaton transition/output graph the Signal Graph Specification, which is then processed by the formal synthesis procedure for self-timed implementation. The design procedure is illustrated by two examples: a Stack Memory and a Counter with Constant Acknowledge Delay.

3. V. Varshavsky, V. Marakhovsky, and R. Lashevsky. Self-timed data transmission in massively parallel computing systems (in press). Integrated Computer Aided Engineering, 1996.

Local processes in MPCS can be coordinated by an asynchronous system of global synchronization via handshake, using current sensors for detection of the completion of transient processes. The known current sensors are considered; the best one is analyzed and its shortcomings are revealed. A current sensor with a wide range of measured current is suggested. The principles of control in circuits with current sensors are developed. Self-timed data exchange between local processes of the system is discussed. Transmission/reception circuits with single-wire bit handshake are demonstrated. Their transmission rate is no worse than that of double-wire circuits. They allow one to transmit combinations of an $n$-bit code over $n+2$ communication lines.

4. P. Termsinsuwan, Zixue CHENG, and N. Shiratori. A new approach to ADT specification support based on reuse of similar ADT by the application of case-base reasoning. International Journal: Information and Software Technology, Vol. 38(09):555--568, 1996.

In this paper, we first propose an ADT specification model which defines the data carrier part of an ADT as a combination of constructors of four patterns. Based on the specification model, a similarity definition is given. Furthermore, we have designed the support system. Finally, in order to evaluate the effectiveness of the system, we experimented with data types of service and protocol in the data communication field. From the evaluation, the effectiveness of the system is confirmed.

5. Pairoj TERMSINSUWAN, Zixue CHENG, and Norio SHIRATORI. ASSE: A support environment for ADT specification based on reuse of similar ADT.
Journal of Information Processing Society of Japan, 1996.

No abstract.

1. K. Naik and Z. Cheng. Agents and coordinators: A scheme for distributed implementation of the disabling operator in LOTOS. In Cheeha Kim, editor, 10th International Conference on Information Networking, pages 177--184, January 1996.

In this paper, we propose a scheme for distributed implementation of the disabling operator in LOTOS. Our main contribution lies in transforming the operational semantics of the disabling operator to a distributed implementation. The distributed implementation is specified as a hierarchy of finite-state machines (FSM) which communicate among themselves by message passing. We define an agent process for each behavior expression not containing any disabling operator and a coordinator process for each instance of the disabling operator. After selecting an event, an agent process requests a coordinator process whether it can execute the event. The coordinator processes collectively keep track of the status of all the agent processes and accordingly allow or disallow an agent to execute the event. A three-way interaction mechanism between an agent and a coordinator and between two coordinators is defined. Detailed behaviors of the agent and coordinator processes are specified as communicating FSMs, which are easy to implement.

2. V. Varshavsky and V. Marakhovsky. Asynchronous control device design by net model behavior simulation. In Lecture Notes in Computer Science 1091. Application and Theory of Petri Nets 1996. Proceedings of the 17th International Conference, volume 1091, pages 497--515, Osaka, Japan, June 1996. Springer.

We discuss the problem of designing asynchronous control devices for discrete event coordination specified by a Petri net model. The design is based on the compilation of standard circuit modules corresponding to PN fragments into a net modeling PN behavior and on the semantic interpretation of the modeling circuit. The impossibility of asynchronous implementation of the indivisible operation of marking change at the circuit level leads to the necessity of modeling PN with modified rules of marking change. Modifications of the known modules, a number of new module types, the rules of the module connections, and the procedures of minimization are given that considerably improve the quality of the obtained solutions in terms of both speed and area. The design ``reefs'' are fixed. The minimization procedures are usually associated with a change of marking change rules, producing the problems of providing the equivalence of the initial and modified PNs.

3. V. Varshavsky and V. Marakhovsky. Hardware support of discrete event coordination (accepted). In The 3rd Workshop on Discrete Event Systems (WODES'96), Edinburgh, UK, August 1996.

An approach to designing asynchronous systems that coordinate concurrent discrete events of an arbitrary physical nature is discussed. To build asynchronous control circuits that coordinate process interaction, the method of direct translation of interaction specifications using Petri nets is being developed. Modifications of the known modules and a number of new modules modeling fragments of Petri nets are given. These modules considerably improve the quality of the obtained solutions. The design ``reefs'' are fixed. The minimization procedures are associated with a change of marking change rules.

4. V. Varshavsky, V. Marakhovsky, and T.A. Chu. Logical timing (global synchronization of asynchronous arrays).
In The First Aizu International Symposium, Parallel Algorithm/Architecture Synthesis, pages 130--138, Aizu-Wakamatsu, Japan, 1995. The University of Aizu, IEEE CS Press.
The problem of global synchronization is solved for asynchronous processor arrays and multiprocessor systems with an arbitrary interconnection graph. Global synchronization of asynchronous systems is treated as a homomorphic mapping of an asynchronous system behavior in logical time onto the behavior of the corresponding synchronous system with a common clock functioning in physical time. The solution is based on decomposing the system into the processor stratum and the synchro-stratum; the latter plays the role of a global asynchronous clock. For the case of a synchronous system with two-phase master-slave synchronization, a simple implementation of the synchro-stratum for the corresponding asynchronous system is proposed. It is shown that, depending on the local behavior of the processors, the synchro-stratum is able to perform two types of global synchronization: parallel synchronization and synchronization that uses a system of

5. V. Varshavsky, V. Marakhovsky, and R. Lashevsky. Asynchronous interaction in massively parallel computing systems. In Proceedings of the IEEE First International Conference on Algorithms and Architectures for Parallel Processing, volume 2, pages 481--492, Brisbane, Australia, April 1995. IEEE, IEEE CS Press.
The problems that arise when designing massively parallel computer systems are discussed. The transition from globally synchronized operation of such systems to globally asynchronous behavior resolves most of them. The problems of asynchronous interaction of local processes with a system of global coordination based on handshake are considered, as well as the problems of self-timed data transmission between processes. If the system modules that realize local processes are not asynchronous and are implemented in CMOS technology, then, to detect the moments of completion of the transient processes in them, the idea of current indication is used. A circuit of a current sensor is suggested with a wide range of permissible changes of the measured current and with admissible characteristics. Two ways of organizing the interaction between circuits with current sensors are developed. The principles of self-timed data exchange between local processes of the system and data transmission by means of a dual-rail code and a binary code with handshake for every bit are considered. The possibility of organizing a single-wire bit handshake is demonstrated, and its self-timed implementation is developed with a transmission rate no worse than that of a double-wire bit handshake.

6. A. Yakovlev, V. Varshavsky, V. Marakhovsky, and A. Semenov. Designing an asynchronous pipeline token ring interface. In Proc. of the 2nd Working Conference on Asynchronous Design Methodologies, pages 32--41, London, May 1995. IEEE, IEEE Comp. Society Press.
We describe the design of a speed-independent communication channel based on a pipeline token-ring architecture. We believe that this approach can help reduce some negative "analogue" effects inherent in asynchronous buses (including on-chip ones) by means of using only "point-to-point" interconnections. We briefly outline the major ideas of the channel's organization and protocol, and our syntax-driven implementation of the channel protocol controller. The protocol has recently been verified for deadlock-freedom and fairness.

7. V. Varshavsky, V. Marakhovsky, and R. Lashevsky.
Critical view on the current sensor application for self-timing in VLSI systems. In Proceedings of the ASP-DAC'95/CHDL'95/VLSI'95, pages 743--750, Chiba, Japan, August 1995. IEEE, ACM, IEEE Press.
To solve the problem of global synchronization in massively parallel VLSI systems, it is necessary to organize asynchronous interaction between system blocks. The possibility of applying current sensors for detecting the end of signal transitions, in order to construct asynchronous blocks in CMOS technology, is discussed. For the known current sensors, their design principles and characteristics are analyzed. Two ways of organizing the interaction between circuits with current sensors are suggested. Stubborn problems of using the known current sensors, which appear due to the imperfection of their characteristics, are formulated. A current sensor is suggested that removes most of these problems but is capable of working only with a particular class of circuits. However, simulation results indicated that using even such sensors is not efficient enough.

8. V. I. Varshavsky and V. B. Marakhovsky. Asynchronous event control design using simulating circuits. In Joint Symposium on Systems, Artificial Intelligence, Neural Networks, Discrete Event Systems and Autonomous Decentralized Systems, pages 249--256, Toyama, Japan, November 1995. The Society of Instrument and Control Engineers (SICE).
An approach to designing asynchronous systems that coordinate concurrent discrete events of an arbitrary physical nature is discussed. To build asynchronous control circuits that coordinate process interaction, the method of direct translation of interaction specifications using Petri nets is being developed. Modifications of the known modules and a number of new modules modeling fragments of Petri nets are given. These modules considerably improve the quality of the obtained solutions. The design ``reefs'' are fixed. The minimization procedures are associated with a change of marking change rules.

9. V. Varshavsky, V. Marakhovsky, and T. Chu. Asynchronous timing of arrays with synchronous prototype. In Proc. of the Second International Conference on Massively Parallel Computing Systems, pages 47--54, Ischia, Italy, May 1996. Euromicro, IEEE Press.
The problem of global synchronization for asynchronous cellular automata arrays is considered. Global synchronization of an asynchronous system is treated as a homomorphic mapping of its behavior in logical time onto the behavior of the corresponding synchronous prototype system that functions in physical time. Here we develop the idea of decomposing an asynchronous array into the automata stratum (close to the synchronous prototype array, with cells modified to organize the timing handshake) and the synchronization stratum, which functions as a distributed asynchronous clock. We consider various disciplines of prototype timing and the corresponding synchro-stratum implementations.

10. V. Varshavsky, V. Marakhovsky, and M. Tsukisaka. Data-controlled delays in the asynchronous design. In IEEE International Symposium on Circuits and Systems, ISCAS'96, Atlanta, USA, May 1996. IEEE, IEEE Press.
Asynchronous design technique has an approach of using padding delays to produce signals of transient process completion. In order to increase the efficiency of this approach, we suggest using data-controlled incorporated delays in cases where the variations of transient process durations are determined by the sets of input signal values.
The control over the value of an incorporated delay is illustrated by an example of asynchronous adder design. The results of PSPICE simulation confirm the efficiency of this approach.

11. Zixue Cheng, Tongjun Huang, and Norio Shiratori. A distributed algorithm for implementation of first-order multiparty interactions. In 1996 International Conference on Parallel and Distributed Systems (ICPADS-96), June 1996.
First-order multiparty interactions, a generalization of multiparty synchronization, are a powerful communication mechanism that allows a set of processes to enroll into different roles of an interaction and to execute the interaction in a synchronous way, and guarantees that conflicting interactions are executed exclusively. It is not a trivial problem to implement first-order multiparty interactions in a network environment. The implementation has to maintain the synchronous and exclusive properties mentioned above, be fair, and make progress. Recently, Joung et al. gave a definition of first-order multiparty interactions and presented a distributed algorithm with message complexity $O(|P|^2)$ as a solution to the problem (Joung et al.), where $|P|$ is the size of the set of processes which may potentially enroll into some roles of the interaction. To our knowledge, Joung's algorithm is so far the only one for first-order multiparty interactions. In Joung's algorithm, a coordinator is devised for each process, and each coordinator tries to capture other processes for establishing a quorum. Joung's algorithm is not a fully distributed one for the following two reasons: (1) every coordinator has to know and keep global information about the potential enrollment relation between processes and roles, and (2) the local computation time complexity is unbalanced. In this paper, we propose a new distributed algorithm for implementing first-order multiparty interactions based on a new implementation model, in which a role manager is devised for each role. Our algorithm solves the problem with $O(|R|^2 \cdot |P|)$ messages per interaction as opposed to the $|P|^2$ messages required in (Joung et al.), where $|R|$ is the number of roles of the interaction and $|P|$ is the number of processes. Our algorithm is more efficient than that in (Joung et al.) when $|R|^2 < |P|$. Furthermore, our algorithm is closer to a fully distributed one than that in (Joung et al.) in the sense that (1) every process (role manager) only knows its adjacent role managers (processes), and (2) the time complexity of local computation in each process and role manager is equal.

12. Zixue Cheng and David S. L. Wei. Efficient distributed ranking and sorting schemes for a coterie. In The 1996 International Symposium on Parallel Architecture, Algorithms and Networks (ISPAN'96), June 1996.
We consider the problems of distributed ranking and sorting on a coterie, a communication structure which has proven to be a good candidate as the underlying interconnection network for distributed processing. Ranking and sorting problems are harder than the consensus problem, a vital and well-studied problem in distributed processing, in that the latter computes only one function (e.g. summation), while the former actually performs $n$ functions, as ranking has to rank the key in each of the $n$ sites. The currently best known decentralized consensus protocols on a coterie use $O(n\sqrt{n})$ messages and require two rounds of message exchange.
In this paper we show that both ranking and sorting can be done on a coterie with the same message complexity although the problems we investigate are much harder. We first present a two-round ranking algorithm which requires only $O(n\sqrt{n})$ messages. Then using this ranking algorithm, we obtain a sorting algorithm which also uses only $O(n\sqrt{n})$ messages, but requires two more rounds of message exchange. Our schemes are optimal in the sense that the lower bound of messages needed for achieving a consensus is $\Omega(n\sqrt{n})$.
{"url":"http://web-ext.u-aizu.ac.jp/official/researchact/annual-review/1995/sections/sectionstar2_1_6.html","timestamp":"2014-04-16T04:11:24Z","content_type":null,"content_length":"26262","record_id":"<urn:uuid:a1032196-579f-48e2-ab8b-b2a2353f596d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
Coordinate Geometry

Re: Coordinate Geometry

Hi ganesh. We have that the intersection of the first two lines is (5,5). We can check that that point lies on the third line.
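The thread doesn't reproduce the three line equations, so here is a sketch of the same check with hypothetical lines, chosen so that the first two do meet at (5,5):

    # Concurrency check for three lines a*x + b*y = c (hypothetical coefficients).
    import numpy as np

    lines = [
        (1.0,  1.0, 10.0),   # x + y = 10
        (1.0, -1.0,  0.0),   # x - y = 0
        (2.0,  3.0, 25.0),   # 2x + 3y = 25
    ]

    # Intersect the first two lines.
    A = np.array([lines[0][:2], lines[1][:2]])
    c = np.array([lines[0][2], lines[1][2]])
    point = np.linalg.solve(A, c)            # -> array([5., 5.])

    # All three lines are concurrent iff the point satisfies the third equation.
    a3, b3, c3 = lines[2]
    print(point, np.isclose(a3 * point[0] + b3 * point[1], c3))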
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=96032","timestamp":"2014-04-17T06:49:02Z","content_type":null,"content_length":"10320","record_id":"<urn:uuid:9bf6af80-f5bb-4922-8a0f-d18a5a29338d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Effectiveness of the distinguished theta characteristic in characteristic 2

Let $k$ be an algebraically closed field of characteristic 2. Let $C$ be a (smooth projective connected) curve over $k$. Can there exist a rational function on $C$ whose differential is holomorphic but nonzero? (Note that every perfect square has zero differential.)

Alternate formulation: at the end of his paper "Theta characteristics of an algebraic curve", Mumford points out that the canonical sheaf on $C$ has a distinguished square root $\mathcal{L}$: for any rational function $f$ on $C$ which is not a perfect square, the divisor $(df)$ is even and the class of $\frac{1}{2}(df)$ does not depend on $f$. The question is then whether $\mathcal{L}$ can admit a nonzero section. For example, it is an entertaining exercise to check (from a Weierstrass model) that this can never occur for ordinary elliptic curves.

ag.algebraic-geometry algebraic-curves divisors characteristic-2

Using duality between trace and differential via the residue pairing (Ch. II, sec. 13 of Serre's book on class fields), to make such an $f$ (in any characteristic) seems to amount to making a separable branched cover $f:X \rightarrow \mathbf{P}^1$ whose relative different ideal on $X$ is contained in $O_X(-2\infty)$. This is a condition solely on the ramification profile over $\infty$, so for a sufficiently ramified dvr extension $R$ of $\widehat{O}_{\infty}$ find a monic equation defining this finite extension and globalize it. Hard to see the genus here (or to control hyperellipticity, etc.) – user36938 Aug 18 '13 at 9:39

2 Answers

Let $F:X\to X$ be the relative Frobenius morphism. It is a finite morphism of degree $2$, and we have an exact sequence
$$0\to \mathcal{O}_X\to F_*\mathcal{O}_X \to L \to 0$$
for some invertible sheaf $L$ on $X$. The standard formula for the canonical class of a finite morphism tells us that $\omega_X = F^*(\omega_X\otimes L^{-1}) = \omega_X^{\otimes 2}\otimes L^{\otimes -2}$. This implies that $L = \theta$ is a theta characteristic on $X$. I believe this is the theta characteristic discussed by Mumford. Now apply cohomology to the first exact sequence to get that $H^1(X,\theta) \cong H^0(X,\theta)^*$ is the cokernel of the Frobenius map $H^1(X,\mathcal{O}_X) \to H^1(X,\mathcal{O}_X)$. So, $\theta$ is effective if and only if the curve is not ordinary.

This answer needs some editing because $F$ is not an endomorphism of $X$, but rather is a $k$-morphism from $X$ to its Frobenius twist $X'$. So in the exact sequence displayed one has $O_{X'} \rightarrow F_{\ast}(O_X)$ and one really has to use that $F^{\ast}(\omega_{X'}) \simeq \omega_X^{\otimes 2}$ (by a small calculation, not a tautology). Likewise one sees that it is not $L^{\otimes -2}$ that arises, but rather $F^{\ast}(L)^{-1}$, and $L$ lives on $X'$, not $X$. So to get $L^{\otimes 2}$ is not quite "right". I'm a bit confused here, so didn't edit it myself; perhaps user37622 can fix it. – user36938 Aug 19 '13 at 1:06

Hi, Kiran, welcome to MO. Supersingular elliptic curves in characteristic two have exact holomorphic differentials. On $y^2+y=x^3+ax+b$, $dx$ is holomorphic. In general, there are such things if and only if the curve (or its Jacobian) is not ordinary. This is discussed in §3 of my paper with Stöhr, "A formula for the Cartier operator on plane algebraic curves", Crelle 377 (1987), 49--64.
Dear @Felipe: You can create blank space with html instead of mathjax, e.g. by using the blank character '&nbsp;'. – Ricardo Andrade Aug 18 '13 at 13:14

@RicardoAndrade Apparently it is SO policy to remove greetings from questions and answers programmatically. The software might catch your solution but not mine. meta.mathoverflow.net/questions/410/… – Felipe Voloch Aug 18 '13 at 13:17

Following up, one other observation from your paper I find useful: all of the noncanonical theta characteristics are effective. – kedlaya Aug 18 '13 at 17:32
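To spell out the supersingular example from the second answer (a worked check; the only input is the standard formula for the invariant differential of a Weierstrass curve):

$$ E:\; y^2 + y = x^3 + ax + b, \qquad \operatorname{char} k = 2. $$

Differentiating the equation gives $2y\,dy + dy = (3x^2 + a)\,dx$, i.e. $dy = (x^2 + a)\,dx$ in characteristic 2. The invariant differential of a Weierstrass curve $y^2 + a_1 xy + a_3 y = x^3 + \dots$ is $\omega = dx/(2y + a_1 x + a_3)$; here $a_1 = 0$ and $a_3 = 1$, so $\omega = dx$. Thus $f = x$ is a rational function whose differential $df = dx$ is holomorphic and nowhere vanishing, as claimed.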
{"url":"http://mathoverflow.net/questions/139698/effectiveness-of-the-distinguished-theta-characteristic-in-characteristic-2","timestamp":"2014-04-20T08:48:22Z","content_type":null,"content_length":"62286","record_id":"<urn:uuid:e080bb10-0d8b-438f-869c-c9fb7398e923>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Music::Tension::PlompLevelt - Plomp-Levelt consonance curve calculations

SYNOPSIS

  Beta interface! Will likely change without notice!

  use Music::Tension::PlompLevelt;
  my $tension = Music::Tension::PlompLevelt->new;
  $tension->frequencies(440, 880);
  $tension->pitches(69, 81);
  $tension->vertical([qw/60 64 67/]);

DESCRIPTION

Plomp-Levelt consonance curve calculations based on work by William Sethares and others (see "SEE ALSO" for links). None of this will make sense without some grounding in music theory and the referenced works. Parsing music into a form suitable for use by this module and practical uses of the results are left as an exercise to the reader. Consult the eg/ directory of this module's distribution for example code.

               Fundamental  <------------- Overtones ------------->
  Harmonic        1       2       3       4       5       6       7
  odds            o               o               o               o
  evens                   e               e               e
  ly pitch        c,      c       g       c'      e'      g'      bes'
  MIDI number     36      48      55      60      64      67      70
  frequency       65.41   130.82  196.23  261.64  327.05  392.46  457.87
  equal temp.     65.41   130.81  196.00  261.63  329.63  392.00  466.16
  error           0       -0.01   -0.23   -0.01   +2.58   -0.46   +8.29

The calculations use some number of harmonics, depending on the amplitude profile used, or the frequency information supplied. Finding details on the harmonics for a particular instrument may require consulting a book, or performing spectral analysis on recordings of a particular instrument (e.g. via Audacity), or fiddling around with a synthesizer, and likely making simplifying assumptions about what gets fed into this module.

Other music writers indicate that the partials should be ignored, for example Harry Partch: "Long experience... convinces me that it is preferable to ignore partials as a source of musical materials. The ear is not impressed by partials as such. The faculty--the prime faculty--of the ear is the perception of small-numbered intervals, 2/1, 3/2, 4/3, etc. and the ear cares not a whit whether these intervals are in or out of the overtone series." (Genesis of a Music, 1947). (However, note that this declamation predates the work by Sethares and others.)

On the plus side, this method does rate an augmented triad as more dissonant than a diminished triad (though that test was with distortions from equal temperament), which agrees with a study mentioned over in Music::Tension::Cope that the Cope method finds the opposite of. See also "Harmony Perception: Harmoniousness is more than the sum of interval consonance" by Norman Cook (2009), though that method should probably be in a different module than this one.

CAVEATS

Any method may croak if something is awry with the input. Methods are inherited from the parent class, Music::Tension. Unlike Music::Tension::Cope, this module is very sensitive to the register of the pitches involved, so input pitches should ideally be MIDI note numbers in the proper register. Or instead use frequencies via the methods that accept those (especially to avoid the distortions of equal temperament tuning).

The tension number depends heavily on the equation (and the constants to said equation), and should not be considered comparable to any other tension modules in this distribution, and only to other tension values from this module if the same harmonics were used in all calculations. Also, the tension numbers could very easily change between releases of this module.

METHODS

new

Constructor. Accepts various optional parameters.

  my $tension = Music::Tension::PlompLevelt->new(
    amplitudes => {
      made_up_numbers => [ 42, 42, ...
      ],
    },
    default_amp_profile => 'made_up_numbers',
    normalize_amps      => 1,
    reference_frequency => 442,
  );

* amplitudes specifies a hash reference that should contain named amplitude sets, with an array reference of amplitude values for each harmonic.

* default_amp_profile sets which amplitude profile to use by default. Available options pre-coded into the module include:

    pianowire-medium  (the default)

  These all have amplitude values for six harmonics.

* normalize_amps, if true, normalizes the amplitude values such that they sum up to one.

* reference_frequency sets the MIDI reference frequency, by default 440 (Hz). Used by the pitch2freq conversion called by the pitches and vertical methods.

frequencies

Method that accepts two frequencies, or two array references containing the harmonics and amplitudes of such. Returns tension as a number.

  # default harmonics will be filled in
  $tension->frequencies(440, 880);

  # custom harmonics
  $tension->frequencies(
    [ { amp => 1,    freq => 440 }, { amp => 0.5, freq => 880 }, ... ],
    [ { amp => 0.88, freq => 880 }, ... ],
  );

The harmonics need not be the same in number, nor use the same frequencies or amplitudes. This allows comparison of different frequencies bearing different harmonic profiles.

The resulting tension numbers are not normalized to anything; making them range from zero to one can be done with something like:

  use List::Util qw/max/;

  my @results;
  for my $f ( 440 .. 880 ) {
    push @results, [ $f, $tension->frequencies( 440, $f ) ];
  }
  my $max = max map $_->[1], @results;
  for my $r (@results) {
    printf "%.1f %.3f\n", $r->[0], $r->[1] / $max;
  }

See the eg/ directory under this module's distribution for example code containing the above.

pitches

Accepts two integers (ideally MIDI note numbers) and converts those to frequencies via pitch2freq (which does the MIDI number to frequency conversion equation) and then calls frequencies with those values. Use frequencies with the proper Hz if a non-equal-temperament tuning is involved. Returns tension as a number.

vertical

Given a pitch set (an array reference of integer pitch numbers that are ideally MIDI numbers), converts those pitches to frequencies via pitch2freq, then calls frequencies for the first pitch compared in turn with each subsequent pitch in the set. Returns tension as a number.

AUTHOR

Jeremy Mates, <jmates@cpan.org>

COPYRIGHT

Copyright (C) 2012-2013 by Jeremy Mates

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.16 or, at your option, any later version of Perl 5 you may have available.
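For readers without Perl, the dissonance model behind these methods looks roughly like the following sketch. It uses the constants from Sethares' published dissonance-measure routine; the module's exact constants and amplitude weighting may differ, and the overall scale is arbitrary anyway.

    import math

    def pair_dissonance(f1, f2, a1, a2):
        """Plomp-Levelt dissonance of two partials (Sethares-style model)."""
        f_lo, f_hi = min(f1, f2), max(f1, f2)
        s = 0.24 / (0.0207 * f_lo + 18.96)  # scale the curve to the lower partial
        d = f_hi - f_lo
        return a1 * a2 * (math.exp(-3.51 * s * d) - math.exp(-5.75 * s * d))

    def dissonance(partials_a, partials_b):
        """Total dissonance between two tones given as [(freq, amp), ...]."""
        return sum(pair_dissonance(fa, fb, aa, ab)
                   for fa, aa in partials_a
                   for fb, ab in partials_b)

    # Two tones with six harmonics each, amplitudes 1/n (an assumed profile).
    tone = lambda f0: [(f0 * n, 1.0 / n) for n in range(1, 7)]
    print(dissonance(tone(440), tone(880)))  # octave: relatively consonant
    print(dissonance(tone(440), tone(466)))  # semitone: noticeably more dissonant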
{"url":"http://search.cpan.org/dist/Music-Tension/lib/Music/Tension/PlompLevelt.pm","timestamp":"2014-04-20T14:14:39Z","content_type":null,"content_length":"21535","record_id":"<urn:uuid:a4572d8e-3668-40e4-8891-2f43822d2cda>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Circumference of the Base Circle of an Open Cylinder

The page that this assignment is printed on is to be folded to form an open cylinder with a 1 cm overlap.

1. Measure the length and width of this page to the nearest millimetre. What will be the circumference of the base circle of the open cylinder? Use this measurement to find the radius of the base circle of the open cylinder. (I measured this - Length: 296 mm; Width: 210 mm.)

I am aware that this question may seem overly simple to you, but I cannot seem to understand what I have to do. I keep calculating the incorrect answer. Any help, if you can give it, is appreciated. Thank you in advance!

Reply:

I would have guessed that the circumference would be 286 mm (the 296 mm length minus the 1 cm overlap). From there, you would use the formula for the circumference of a circle, $C = 2\pi r$, to find the radius of the base of this open cylinder:

\begin{aligned}
C &= 2\pi r \\
286 &= 2\pi r \\
r &\approx 45.52\,mm \: \text{or} \: 4.552\,cm
\end{aligned}
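The same arithmetic in a couple of lines of Python, for anyone who wants to check it:

    import math

    circumference = 296.0 - 10.0           # page length minus the 1 cm overlap, mm
    radius = circumference / (2 * math.pi)
    print(round(radius, 2))                # 45.52 (mm)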
{"url":"http://mathhelpforum.com/geometry/153233-circumference-base-circle-open-cylinder.html","timestamp":"2014-04-21T00:46:02Z","content_type":null,"content_length":"33122","record_id":"<urn:uuid:8e7d2acf-94bb-4480-b47a-bc51a065f86d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Pentagon-Hexagon-Decagon Identity

Suppose we inscribe a regular pentagon, a regular decagon, and a regular hexagon in circles of the same radius. If we denote the respective edge lengths of these polygons by $P$, $D$ and $H$, then these lengths satisfy the identity

$$ P^2 = H^2 + D^2 $$

This means that the edges of a pentagon, decagon and hexagon of identical radii can fit together to form a right triangle!

Euclid stated this beautiful but mysterious identity as Proposition 10 of Book XIII of the Elements. This is the last book of the Elements, the one which deals with properties of the Platonic solids. He used Proposition 10 as part of his construction of the regular icosahedron in Proposition 13. This has led some historians to suggest that the pentagon-decagon-hexagon identity was first discovered in the course of research on the icosahedron. The idea is this. If you hold an icosahedron so that one vertex is on top and one is on bottom, you'll see that its vertices are arranged in 4 horizontal layers. From top to bottom, these are:

• 1 vertex on top
• 5 vertices forming a pentagon: the "upper pentagon"
• 5 vertices forming a pentagon: the "lower pentagon"
• 1 vertex on bottom

Pick a vertex from the upper pentagon: call this $A$. Pick a vertex as close as possible from the lower pentagon: call this $B$. $A$ is not directly above $B$. Drop a vertical line down from $A$ until it hits the horizontal plane on which $B$ lies. Call the resulting point $C$.

It is easy to check that $ABC$ is a right triangle. If we apply the Pythagorean theorem to this triangle we get the equation

$$ AB^2 = AC^2 + BC^2 $$

But to see this, we need to check that:

• the length $AB$ equals the edge of a pentagon inscribed in a circle;
• the length $AC$ equals the edge of a hexagon inscribed in a circle;
• the length $BC$ equals the edge of a decagon inscribed in a circle.

Different circles, but of the same radius! What's this radius? The five vertices of the lower pentagon lie on the circle shown in blue. This circle has the right radius. Using this idea, it's easy to see that the length $AB$ equals the edge of a pentagon inscribed in a circle. It's also easy to see that $BC$ equals the edge of a decagon inscribed in a circle of the same radius. The hard part is showing that $AC$ equals the edge of a hexagon inscribed in a circle of the same radius… or in other words, the radius of that circle! (The hexagon appears to be a red herring: a regular hexagon's edge is just the radius of its circumscribed circle.)

To prove this, it suffices to show the following marvelous fact: the distance between the "upper pentagon" and the "lower pentagon" equals the radius of the circle containing the vertices of the upper pentagon!

Can you prove this? In Ian Mueller's book Philosophy of Mathematics and Deductive Structure in Euclid's Elements, he suggested various ideas the Greeks could have had about this. Today's image shows one. Let's look at it again:

The trick is to construct a new right triangle $AB'C'$. Here $B'$ is the top vertex, and $C'$ is where a line going straight down from $B'$ hits the plane containing the upper pentagon. Remember, we're trying to show the distance between the upper pentagon and lower pentagon equals the radius of the circle containing the vertices of the upper pentagon. But that's equivalent to showing that $AC'$ is congruent to $AC$. To do this, it suffices to show that the right triangles $ABC$ and $AB'C'$ are congruent! Can you do it?

In the references to Mueller's book, he says the historians Dijksterhuis (in 1929) and Neuenschwander (in 1975) claimed this is "intuitively evident".
He also notes that Eva Sachs, in her book Die Fünf Platonischen Körper, suggested that an accurately drawn figure could let someone guess that the distance between the two pentagons equals the radius of either one. But that's not a proof. You can see a proof due to Greg Egan here:

• Pentagon-hexagon-decagon identity: Proof using the icosahedron, nLab.

Egan put some other proofs of the pentagon-hexagon-decagon identity here:

• Pentagon-hexagon-decagon identity, nLab.

Also see:

• John Baez, This Week's Finds in Mathematical Physics, Week 283, and discussion on the n-Category Café.
• Eva Sachs, Die Fünf Platonischen Körper, zur Geschichte der Mathematik und der Elementenlehre Platons und der Pythagoreer, Berlin, Weidmann, 1917, pp. 102–104.
• Ian Mueller, Philosophy of Mathematics and Deductive Structure in Euclid's Elements, MIT Press, Cambridge, Massachusetts, 1981, pp. 257–258 and references therein.

Of course, we can also prove the pentagon-hexagon-decagon identity using algebra. Start with a unit circle. If we inscribe a regular hexagon in it, then clearly
$$ H = 1 $$
So we just need to compute $P$ and $D$. If we think of the unit circle as living in the complex plane, then the solutions of
$$ z^5 = 1 $$
are the corners of a regular pentagon. So let's solve this equation. We've got
$$ 0 = z^5 - 1 = (z - 1)(z^4 + z^3 + z^2 + z + 1) $$
so ignoring the dull solution $z = 1$, we must solve
$$ z^4 + z^3 + z^2 + z + 1 = 0 $$
This says that the center of mass of the pentagon's corners lies right in the middle of the pentagon. Now, quartic equations can always be solved using radicals, but it's a lot of work. Luckily, we can solve this one by repeatedly using the quadratic equation! And that's why the Greeks could construct the regular pentagon using a ruler and compass. The trick is to rewrite our equation like this:
$$ z^2 + z + 1 + z^{-1} + z^{-2} = 0 $$
and then like this:
$$ (z + z^{-1})^2 + (z + z^{-1}) - 1 = 0 $$
If we write
$$ z + z^{-1} = x $$
our equation becomes
$$ x^2 + x - 1 = 0 $$
Solving this, we get two solutions. One of them is the golden ratio
$$ x = \phi = \frac{\sqrt{5} - 1}{2} \approx 0.6180339\dots $$
Next we need to solve
$$ z + z^{-1} = \phi $$
This is another quadratic equation:
$$ z^2 - \phi z + 1 = 0 $$
with two conjugate solutions, one being
$$ z = \frac{\phi + \sqrt{\phi^2 - 4}}{2} $$
This is a fifth root of unity in the first quadrant of the complex plane, so we know
$$ z = \exp(2 \pi i/5) = \cos(2\pi/5) + i \sin(2\pi/5) $$
So, we're getting
$$ \cos(2\pi/5) = \phi/2 $$
A fact we should have learned in high school, but probably never did! Now we're ready to compute $P$, the length of the side of a pentagon inscribed in the unit circle:
$$ P^2 = |1 - z|^2 = (1 - \cos(2\pi/5))^2 + \sin^2(2\pi/5) = 2 - 2\cos(2\pi/5) = 2 - \phi $$
Next let's compute $D$, the length of the side of a decagon inscribed in the unit circle! We can mimic the last stage of the above calculation, but with an angle half as big:
$$ D^2 = 2 - 2\cos(\pi/5) $$
To go further, we can use the half-angle formula:
$$ \cos(\pi/5) = \sqrt{\frac{1 + \cos(2\pi/5)}{2}} = \sqrt{\frac{1}{2} + \frac{\phi}{4}} $$
This gives
$$ D^2 = 2 - \sqrt{2 + \phi} $$
But we can simplify this a bit more. As any lover of the golden ratio should know,
$$ 2 + \phi \approx 2.6180339\dots $$
is the square of
$$ 1 + \phi \approx 1.6180339\dots $$
So we really have
$$ D^2 = 1 - \phi $$
And now we're done!
We see that the pentagon-hexagon-decagon identity simply says:
$$ 2 - \phi = 1 + (1 - \phi) $$

Visual Insight is a place to share striking images that help explain advanced topics in mathematics. I'm always looking for truly beautiful images, so if you know about one, please drop a comment here and let me know!

Here's another proof using the icosahedron:

Take an icosahedron whose edge lengths are 2. Consider a plane going through the center of the icosahedron, and cutting the edges it crosses through in half (so e.g. the plane midway between the two planes drawn in your diagram). This cuts the icosahedron in a regular decagon with edge lengths 1. So the distance from the center of the icosahedron to an edge is $1/D$. If we take a plane cutting off a corner of the icosahedron, so that it intersects the adjacent faces at the midpoints, then this cuts off a regular pentagon with edge lengths 1, so the distance from this vertex to a midpoint of an edge is $1/P$. But we have a right triangle with vertices at the center of the icosahedron, a vertex, and an adjacent edge midpoint. The altitudes of this right triangle are therefore $1/P$, $1/H$ and $1/D$ from the above discussion (one altitude is half an edge, one is the radius of the midscribed sphere going through the midpoints of the edges, and one is the radius of the circumcircle of the corner pentagon). This is equivalent to $P^2=H^2+D^2$ by some simple geometry.

• Cool—I'll have to think about that a bit! By the way, simple LaTeX works in comments on this blog; just put dollar signs around math expressions as usual. I tested this out by adding dollar signs to your comment, just for fun.

□ Cool, thanks. I managed to modify the drawing to try to illustrate the argument. Here's a link to the image:

☆ Here's the reciprocal Pythagorean theorem I'm using: http://www.maa.org/sites/default/files/Nelsen2009-316026.pdf

• That's ingenious! I especially like the use of the Reciprocal Pythagorean Theorem (which I've taken to calling the Dual Pythagorean Theorem), the most unjustly neglected result in elementary geometry. That theorem is the reason we can find, say, the overall spatial frequency of a wave from individual spatial frequencies measured along coordinate axes, or the overall gradient of a planar patch of land from its gradient in the north-south and east-west directions, as square roots of sums of squares.

□ I was going to bring this to your attention, because I learned the Dual Pythagorean Theorem from your Orthogonal trilogy, I guess the first volume.

• Incidentally, from this perspective, one obtains right triangles associated to the tetrahedron and octahedron as well. For the tetrahedron, one has a square, hexagon, and triangle. For the octahedron, one has two hexagons and a square. So these are relatively boring!

□ Actually, I just realized there are also right triangles associated to the cube and dodecahedron. The cube gives the same as its dual, the octahedron (triangle, square, hexagon). For the dodecahedron, one gets a triangle, a decagon, and a decagon-star, i.e. a 10-pointed star that wraps 3 times around.

□ It's incredibly cool that this can be generalised! I suppose you get the "large" polygon (analogous to the decagon in the icosahedral case) by taking a plane that passes through the centre of the polyhedron and the midpoints of two edges. When the polyhedron has equilateral triangles as faces, you can pick any two edges of the same face, and the polygon will then have edges whose length is half the polyhedron's edge length.
This gives a decagon for an icosahedron, a square for a tetrahedron, and a hexagon for an octahedron. But when the polyhedron's faces aren't triangular, what's the rule? For example, for a cube you could choose midpoints of two adjoining edges, which would give you a hexagon from the slicing plane, or two opposite edges, which would give you a square. But in both cases, the edges of the polygon are no longer half the length of the polyhedron's edge. Is that OK — do we just accept that as the nature of the result?

☆ @Ian Agol: Yes, in each of these cases, one has 3 regular polygons which share an edge which connects midpoints of adjacent edges of the regular polygon face of the polyhedron. There is the polygon on the plane going through the center, the plane cutting off a corner, and the plane of the face. Then there is a right triangle cutting the three planes in its altitudes (with vertices on a vertex, an edge midpoint, and the center of the polyhedron), to which one applies the dual Pythagorean theorem.

• @Ian Agol: Thanks! I've put a picture of all 5 cases here:

The construction is: choose a vertex $V$ of the polyhedron, and edges $E_1$ and $E_2$ that are incident on $V$. Let $M_1$ and $M_2$ be the midpoints of those edges. We then have three regular polygons with $M_1 M_2$ as one of their edges:

The green polygon, whose centre is $C$, the centre of the polyhedron.

The yellow polygon, whose vertices are all the midpoints of all the edges that are incident on $V$, so it lies in a plane that truncates that vertex.

The blue polygon, whose centre is $V$ and which lies in the plane of the face of the polyhedron containing $V$, $E_1$ and $E_2$.

The red triangle is a right triangle whose vertices are $V$, $C$ and $M_1$, with two perpendicular sides whose lengths are equal to the radii of the green and blue polygons, and whose altitude with the hypotenuse as base is the radius of the yellow polygon.

• @Ian Agol: I've taken the liberty of writing up your proof here:

If there's anything incorrect, or anything you wish to improve, this page is editable.

□ Great – I'd been planning to do that myself, but not relishing the prospect—not because this proof isn't cool, but because I don't understand it yet, and I'm busy doing other stuff. It's good to have lots of information about this identity in one place. When I get around to understanding this new proof, I may blog about it here!

□ Cool, I'll have a look! I also posted something on Google+ in order to advertise it (I used your image with credit). I'll add a link to your write-up.

□ I added a question on mathoverflow, whether there are other right triangles whose edges are the lengths of edges of regular polygrams inscribed in a unit circle. Note that one other example comes by Galois conjugation, and may be interpreted in the same way from Kepler-Poinsot polyhedra. http://mathoverflow.net/q/153761/1345

• Thanks for all the new information, Ian. This just keeps getting better! I made an image of the Kepler-Poinsot polyhedra:

• Says 2 plus phi = 2.618…

• Could you be a bit less laconic in your question? I'll try to read your mind. There are two numbers called the golden ratio: "little phi" φ ≈ 0.6180339… and its reciprocal "big Phi": Φ ≈ 1.6180339… In this article I'm only using φ, which I defined to be (-1 + √5)/2.
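As a quick numeric sanity check on the identity and on the closed forms derived in the post (a standalone sketch; the only input is that a regular n-gon inscribed in a unit circle has edge 2·sin(π/n)):

    import math

    P = 2 * math.sin(math.pi / 5)    # pentagon edge
    H = 2 * math.sin(math.pi / 6)    # hexagon edge (equals the radius, i.e. 1)
    D = 2 * math.sin(math.pi / 10)   # decagon edge

    phi = (math.sqrt(5) - 1) / 2     # "little phi", positive root of x^2 + x - 1 = 0

    assert math.isclose(P**2, H**2 + D**2)   # the identity P^2 = H^2 + D^2
    assert math.isclose(P**2, 2 - phi)       # pentagon: P^2 = 2 - phi
    assert math.isclose(D**2, 1 - phi)       # decagon:  D^2 = 1 - phi
    assert math.isclose(math.cos(2 * math.pi / 5), phi / 2)

    print(P, H, D)                   # 1.17557..., 1.0, 0.61803...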
{"url":"http://blogs.ams.org/visualinsight/2014/01/01/pentagon-hexagon-decagon-identity/","timestamp":"2014-04-19T09:34:56Z","content_type":null,"content_length":"55754","record_id":"<urn:uuid:47ae1801-5daf-4a77-ae45-23ca437104ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Hypergeometric Distribution (serious)

An instructor who taught two sections of engineering statistics last term, the first with 20 students and the second with 30, decided to assign a term project. After all projects had been turned in, the instructor randomly ordered them before grading. Consider the first 15 graded projects.

a. What is the probability that exactly 10 of these are from the second section?
b. What is the probability that at least 10 of these are from the second section?
c. What is the probability that at least 10 of these are from the same section?
d. What are the mean value and standard deviation of the number of projects among these 15 that are from the second section?
e. What are the mean value and standard deviation of the number of projects not among these 15 that are from the second section?

How do I set this up?
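One way to set this up: if X is the number of second-section projects among the 15 graded first, then X is hypergeometric with a population of 50 projects, of which 30 are "successes", and 15 draws. A sketch using SciPy:

    from scipy.stats import hypergeom

    M, n, N = 50, 30, 15        # population, second-section projects, sample size
    rv = hypergeom(M, n, N)

    p_a = rv.pmf(10)            # a. exactly 10 from the second section
    p_b = rv.sf(9)              # b. at least 10: P(X >= 10) = 1 - P(X <= 9)

    # c. at least 10 from the SAME section: X >= 10 (second section)
    #    or 15 - X >= 10, i.e. X <= 5 (first section); disjoint events.
    p_c = rv.sf(9) + rv.cdf(5)

    mean, sd = rv.mean(), rv.std()   # d. mean = 15*30/50 = 9
    # e. the second-section projects NOT among the 15 number 30 - X,
    #    so that count has mean 30 - 9 = 21 and the same standard deviation.

    print(p_a, p_b, p_c, mean, sd)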
{"url":"http://mathhelpforum.com/statistics/108791-hypergeometric-distrubution-serious.html","timestamp":"2014-04-16T22:01:20Z","content_type":null,"content_length":"29790","record_id":"<urn:uuid:3bfd936e-d499-4a20-8c65-9363e7a5a7a4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Hölder exponent of power function

Can someone give me a reference where I can see a proof that a power function of some exponent between 0 and 1 is Hölder continuous with the same exponent in some compact set? I have seen a trick to prove it in the case of exponent $\frac{1}{2}$, but I would like to see how to prove it in the general case.

real-analysis

Could you make a more precise statement? – Pietro Majer Feb 19 '12 at 21:17

If you just mean why $f(t):=|t|^\alpha$ with $\alpha\in (0,1)$ is $\alpha$-Hölder, the reason is that $f$ is concave, hence sub-additive, hence it is a modulus of continuity of itself (check en.wikipedia.org/wiki/Modulus_of_continuity) – Pietro Majer Feb 19 '12 at 21:18

It seems the e-mail notification is not working, so I have returned here only now. Thanks for the clue. Actually, after reading it I was thinking again about an elementary proof, and indeed the key is the subadditivity of the function. Using it, the required result is no more difficult to prove than $||x|-|y|| \leq |x-y|$ from the triangle inequality. – António Caetano Feb 24 '12 at 0:52
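Spelling out the two-line argument from the comments (a sketch; the only ingredient is that $t \mapsto t^\alpha$ is concave on $[0,\infty)$ with value $0$ at $0$, hence subadditive):

$$ x^\alpha = \big((x-y) + y\big)^\alpha \le (x-y)^\alpha + y^\alpha \qquad (x \ge y \ge 0), $$

so $x^\alpha - y^\alpha \le (x-y)^\alpha = |x-y|^\alpha$, which is $\alpha$-Hölder continuity with constant $1$, obtained from subadditivity exactly the way $\big||x|-|y|\big| \le |x-y|$ follows from the triangle inequality.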
{"url":"http://mathoverflow.net/questions/88963/holder-exponent-of-power-function","timestamp":"2014-04-20T18:40:01Z","content_type":null,"content_length":"49247","record_id":"<urn:uuid:a127f49a-5146-40ff-b8de-17acedcccc56>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Rotational Motion

I guess we could start with why it isn't a or c. Angular momentum is changed when torque is applied. If you apply a force through the axis of rotation, then r is 0. If you apply a force through a moment arm parallel to the axis of rotation, then the torque is also zero, since

τ = F × r
|τ| = F r sin σ

Since F and r are parallel, the angle is 0, and therefore the torque is 0. If you applied the force tangentially, then σ would be 90° and sin σ would be 1.
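A quick numeric illustration of the three cases, with |τ| = F·r·sin σ and made-up numbers:

    import math

    def torque(F, r, sigma_deg):
        """Torque magnitude for force F, lever arm r, angle sigma (degrees)."""
        return F * r * math.sin(math.radians(sigma_deg))

    F = 10.0                      # newtons (hypothetical value)
    print(torque(F, 0.0, 90.0))   # force through the axis: r = 0   -> 0.0
    print(torque(F, 0.5, 0.0))    # force parallel to r: sin 0 = 0  -> 0.0
    print(torque(F, 0.5, 90.0))   # tangential force: sin 90 = 1    -> 5.0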
{"url":"http://www.physicsforums.com/showthread.php?t=545911","timestamp":"2014-04-17T15:43:42Z","content_type":null,"content_length":"28512","record_id":"<urn:uuid:f19d35e9-4ffd-44ea-a21b-271b2bb76333>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Cut out the two dimensional shape and fold it to make a three dimensional pyramid with a square base.

Cut out the two dimensional shape and fold it to make a three dimensional rectangular prism.

Cut out the two dimensional shape and fold it to make a three dimensional "pyramid" with a triangular base (a tetrahedron).

Cut out the two dimensional shape and fold it to make a three dimensional triangular prism.

"Place a TRIANGLE at 2,4." Place the colorful pictures (included) in this simple (5x5) grid by following the correct co-ordinates. This grid game is great for developing early map skills and shape recognition, and for practicing following directions.

"Place a TRIANGLE at B,4." Place the colorful pictures (included) in this simple (5x5) grid by following the correct co-ordinates. This grid game is great for developing early map skills and shape recognition, and for practicing following directions.

Graphic chart to help students study volumes and areas of geometric shapes. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3

Match the pictures of the circle, triangle, square, rectangle and hexagon. Common Core Math: K.G.2

Match the pictures to the words for circle, triangle, square, rectangle, and hexagon. Common Core Math: K.G.2

Match the pictures of the sphere, cone, cube and cylinder. Common Core Math: K.G.2

Match the pictures to the words for sphere, cube, cone and cylinder. Common Core Math: K.G.2

A simple game to review the names of various polygons. Print several copies and have students race in pairs.

This math mini office contains information on determining perimeter and area in metric and standard measurement for a variety of geometric shapes. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3

This math mini office contains information on determining perimeter and area in metric and standard measurement for a variety of geometric shapes. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3

Colorful patterns in rectangles, triangles, and circles. Print out several pages of each set. Perfect for forming patterns in small groups or learning centers. Laminate for longer use.

Poster shows and describes square, rhombus, rectangle, parallelogram, trapezoid, and irregular quadrilateral. Common Core: Geometry: 5.g.3, 5.g.4, 5g3, 5g4

Poster showing four different triangles according to their angles, with two activities to test the concept. Common Core: Geometry K.3.1, 1.G.1, 2.G.1, 3.G.1

Eight colorful math posters that help teach the concepts of area, perimeter and dimensional figures. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3

Three math posters that help teach the geometric concepts of flip, slide, and turn.

All 20 of our shape posters in one easy download: quadrilaterals (square, rectangle, rhombus, parallelogram), triangles (equilateral, isosceles, scalene, right, obtuse, acute), curved shapes (circle, oval, crescent), other polygons (pentagon, hexagon, octagon); one per page, each with a picture and a definition. Common Core: Geometry K.3.1, 1.G.1, 2.G.1, 3.G.1

Curved shapes: circle, oval, crescent; one per page, each with a picture and a definition.

Polygon shapes: pentagon, hexagon, octagon; one per page, each with a picture and a definition.

Brief introductions to basic shapes: quadrilateral shapes, triangles, and curved shapes; one set per page, each with a picture and a definition.

Quadrilateral shapes: square, rectangle, rhombus, parallelogram; one per page, each with a picture and a definition.
Triangle shapes: equilateral, isosceles, scalene, right, obtuse, acute; one per page, each with a picture and a definition.

Six pages with answer sheet. Common Core: Geometry 6.G.A1, 5.G.B3, 4.MD.3
{"url":"http://www.abcteach.com/directory/subjects-math-geometry-655-3-2","timestamp":"2014-04-19T17:34:33Z","content_type":null,"content_length":"147290","record_id":"<urn:uuid:18f07963-175f-42fe-b66b-365acae24582>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
rlft3.f90 inverse transform - Numerical Recipes Forum

Re: rlft3

Hi Dave,

Thanks for your detailed reply. I've attached my code for you to look at; here is my code:

program fftcheck
implicit none
integer, parameter :: Nv=8
complex,dimension(Nv,Nv) :: speq
complex,dimension(-Nv/2:Nv/2,-Nv/2:Nv/2,-Nv/2:Nv/2) :: data0
complex,dimension(Nv/2,Nv,Nv) :: data
double precision :: ran1,f1,f2
integer :: idum,i,j,k,indc,jc,kc

call system_clock(idum)

open(unit=220,file='input_speq.dat',status='unknown')
open(unit=221,file='outpu_real.dat',status='unknown')
open(unit=222,file='real_inver.dat',status='unknown')
open(unit=223,file='freq_nyqst.dat',status='unknown')

do i=-Nv/2,Nv/2
do j=-Nv/2,Nv/2
do k=-Nv/2,Nv/2

! Symmetry due to the realness of data in real-space
do i=-Nv/2,Nv/2
do j=-Nv/2,Nv/2
do k=-Nv/2,Nv/2

! Store the workspace arrays for fft
do i=1,Nv/2
do j=1,Nv
do k=1,Nv

do j=1,Nv
do k=1,Nv

! We try to retain symmetry here
data(1,1,1)=cmplx(real(data(1,1,1)),0.0)

! global average field

do i=1,Nv/2
do j=1,Nv
do k=1,Nv
write(220,'(2f16.5,3I8)') real(data(i,j,k)),aimag(data(i,j,k)),i-1,jc,kc

do j=1,Nv
do k=1,Nv
write(223,'(2f16.5,2I8)') real(speq(j,k)),aimag(speq(j,k)),jc,kc

! Inverse FFT ing the data
call rlft3(data,speq,Nv,Nv,Nv,-1)

do j=1,Nv
do k=1,Nv
write(223,'(2f16.5,2I8)') real(speq(j,k)),aimag(speq(j,k)),jc,kc

! FFT
do i=1,Nv/2
do j=1,Nv
do k=1,Nv
write(221,'(2f16.8,3I8)') real(data(i,j,k)),aimag(data(i,j,k)),i-1,jc,kc

call rlft3(data,speq,Nv,Nv,Nv,1)

do i=1,Nv/2
do j=1,Nv
do k=1,Nv
write(222,'(2f16.8,3I8)') real(data(i,j,k)),aimag(data(i,j,k)),i-1,jc,kc

end program fftcheck

function indc(i,N)
implicit none
integer:: indc,i,N
if(i<=N/2) then
end function indc

Thanks a lot for your help
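For comparison, the same forward/inverse round-trip check can be done with NumPy's real 3-D FFT (a sketch; NumPy's rfftn output format replaces rlft3's packed data/speq layout):

    import numpy as np

    Nv = 8
    rng = np.random.default_rng(0)
    data = rng.random((Nv, Nv, Nv))            # real data on an Nv**3 grid

    spec = np.fft.rfftn(data)                  # forward real-to-complex 3-D FFT
    back = np.fft.irfftn(spec, s=data.shape)   # inverse transform

    # The round trip should reproduce the input to floating-point accuracy.
    print(np.max(np.abs(back - data)))         # ~1e-16

    # Hermitian symmetry of a real field's spectrum, F(-k) = conj(F(k)),
    # which is what the "symmetry due to the realness" loops above enforce.
    full = np.fft.fftn(data)
    i, j, k = 1, 2, 3
    print(np.isclose(full[-i % Nv, -j % Nv, -k % Nv], np.conj(full[i, j, k])))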
{"url":"http://www.nr.com/forum/showthread.php?t=1815","timestamp":"2014-04-18T15:38:37Z","content_type":null,"content_length":"57707","record_id":"<urn:uuid:f914e25d-6d43-444c-be91-6de7f6cbfef6>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Durham, PA Calculus Tutor

Find a Durham, PA Calculus Tutor

I am a motivated worker that recently graduated from the University of Delaware with a degree in mechanical engineering. I am willing to tutor any high school or college student who struggles or would like additional help in math or science. I have spent many hours as a volunteer tutor in high school and was a part-time calculus tutor in college.
9 Subjects: including calculus, physics, geometry, algebra 1

...My interests have included studying the consequences of dynamic heterogeneity for optimizing therapy and developing a video tutorial course to help interdisciplinary scientists model biological systems mathematically. I received my BS in Physics from Harvey Mudd College, where I earned a 990 on ...
13 Subjects: including calculus, reading, writing, physics

...With dedication, every student succeeds, so don't despair! Learning new disciplines keeps me very aware of the struggles all students face. Beyond academics, I spend my time backpacking, kayaking, weightlifting, jogging, bicycling, metalworking, woodworking, and building a wilderness home of my own design.
14 Subjects: including calculus, physics, geometry, ASVAB

...I differentiated the program as necessary for individual student needs. I passed the certification exam (PECT) for elementary education. I am a senior at Muhlenberg College, and I am currently entering my third college season on the track team.
26 Subjects: including calculus, reading, statistics, algebra 1

...It really makes me happy to find a way to help a student learn the material they are struggling with. Learning should not be confined to school, it should be a part of our everyday lives. Having experienced so many different environments in my life, I have learned not only a large amount of inf...
43 Subjects: including calculus, English, chemistry, geometry
{"url":"http://www.purplemath.com/Durham_PA_Calculus_tutors.php","timestamp":"2014-04-19T02:29:57Z","content_type":null,"content_length":"24024","record_id":"<urn:uuid:14371c4b-444f-48a9-b1f3-ffcae1f65e0a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Using the Earth as an Energy Source for Earth Day | Science Blogs | WIRED

By Rhett Allain, 04.22.12, 12:37 pm

Note: I forgot that today was Earth Day. Instead of creating a new post, I will repost (recycle) one of my much older posts.

Everyone knows that very soon, we will all have our own Mr. Fusion devices in our cars and homes. But until that time, is there any other source of energy we could exploit? What about the rotation of the Earth? The basic idea is to somehow use the rotational energy of the Earth to power things we like. Things like iPhones and coffee pots. How much energy could we get out of this, and what would we lose?

First, what would we lose? If we use the rotational energy of the Earth, it would spin at a slower rate. I will start off assuming we make the day 1 second longer. Currently, the Earth takes 23.9345 hours to rotate. It takes 24 hours for the Sun to be back in the same location. This is the difference between the sidereal day and the synodic day. I am concerned with the rotational rate of the Earth on its axis, so I need to use the sidereal day. Check the Wikipedia link for a great diagram that shows the difference between the two days. This leads to an angular speed of:

ω₁ = 2π / T = 2π / (23.9345 hr) ≈ 7.292 × 10⁻⁵ rad/s

Now, suppose I want to increase the length of the sidereal day by 1 second. This would give a new angular speed of:

ω₂ = 2π / (T + 1 s)

How much energy would this produce (assuming it could all be turned into useful energy)? The energy of motion for an object rotating is:

E = (1/2) I ω²

where I is the moment of inertia about the axis you are rotating around and ω is the angular speed. The moment of inertia is sort of like the "rotational mass". An object with a higher moment of inertia is more difficult to change in its rotational motion. In this case, we are dealing with a spherically-shaped object (the Earth is mostly spherical). The moment of inertia for this could be approximated by assuming it is a uniform sphere, but it isn't one: the density at the center of the Earth is much greater than at the surface. So, for this calculation, I will use someone else's determination of the moment of inertia of the Earth - here is Wolfram Research's value, about 8.0 × 10³⁷ kg·m².

From this, I can calculate the change in energy from slowing the Earth down:

ΔE = (1/2) I (ω₂² − ω₁²)

Putting in the values from above, I get a change in energy of -5.0968 × 10²⁴ Joules. This is how much energy the Earth loses, so we could use this for other stuff. Is this enough energy?

Energy Usage

Let me just look at the energy usage by the U.S.A., because they would be the ones to harness the rotational energy of the Earth (but really, because I found the data for US energy usage first).

http://tonto.eia.doe.gov/ask/electricity_faqs.asp#home_consumption - this site has the data that I started with. It has a spreadsheet with average monthly usage for residential, commercial and industrial:

• Residential: There were 122,471,071 consumers that used an average of 920 kilowatt-hours per month.
• Commercial: There were 17,172,499 users that used an average of 6,307 kilowatt-hours per month.
• Industrial: There were 759,604 users that used an average of 110,946 kilowatt-hours per month.
• NOTE: by "users" I mean companies or places or whatever the spreadsheet meant.

A kilowatt is a unit of power, the rate at which energy is used. A kilowatt-hour is a unit of energy. Since 1 watt is a joule per second, 1 kilowatt-hour is 3.6 × 10⁶ Joules. The monthly US usage would be 1.0989 × 10¹⁸ Joules per month. So, how many months and years would this energy last for the US, assuming a steady energy usage?
Actually, I will also assume that only 50% of the rotational energy of the Earth goes to useful stuff (like Nintendos) and the rest is wasted. This would mean the amount of useful energy would be 2.5484 × 10²⁴ Joules. At 1.0989 × 10¹⁸ Joules per month, that would last about 2.3 × 10⁶ months, or roughly 190,000 years. This is a long time. So the length of the sidereal day would only increase by 1 second over this whole time period (that way we wouldn't have to store all this energy, but rather generate it as we use it).

Now for the details

How exactly do I propose that this rotational energy be harnessed? I will leave that as an exercise for the reader - but I will give a hint: magnets and wire (I have already said too much). As a bonus, this method produces no greenhouse gases.
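The whole chain of numbers in a few lines (the moment of inertia is the approximate published figure, since the post's exact value isn't shown; everything else is as given above):

    import math

    I = 8.0e37                       # Earth's moment of inertia, kg*m^2 (approx.)
    T = 23.9345 * 3600.0             # sidereal day, s
    w1 = 2 * math.pi / T             # current angular speed, rad/s
    w2 = 2 * math.pi / (T + 1.0)     # angular speed with a 1 s longer day

    dE = 0.5 * I * (w1**2 - w2**2)   # energy released by the slow-down, J
    print(f"energy released: {dE:.2e} J")        # ~5e24 J

    monthly_use = 1.0989e18          # US monthly electricity use, J
    months = 0.5 * dE / monthly_use  # keep only 50% as useful energy
    print(f"lasts ~{months:.2e} months, about {months / 12:.0f} years")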
{"url":"http://www.wired.com/2012/04/using-the-earth-as-an-energy-source-for-earth-day/","timestamp":"2014-04-19T07:36:45Z","content_type":null,"content_length":"105848","record_id":"<urn:uuid:daeb41cc-9131-4e67-964e-0c6e8bcbc98f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: Roundtable
Topic: Fractions, concept and calculations

Subject: Fractions, concept and calculations
Author: lanius
Date: Sep 30 2004

How do you teach conceptual understanding of fractions? What tools have you found to be effective? Likewise, how do you teach computational fluency with fractions, and what tools are effective?

I'd like to relate a story. Formerly, in Texas, we had the Texas Assessment of Academic Skills (TAAS), where students had to pass what was basically an 8th grade level test in order to graduate. One of the questions always on the test required them to order a series of fractions from least to greatest -- 2/3, 5/6, 3/4, 2/5, etc. I was teaching a test-prep class to seniors who needed only to pass the math test in order to graduate, having failed it several times. I wrote the problem of ordering fractions on the board. The students had some ideas of getting common denominators, etc. to work the problem, but I asked them to talk about the problem a bit to see what they understood about fractional parts. To try to get at what they understood, I wrote 6/7 and 7/6 on the board and asked them to put those in order. And the students couldn't.

So what does that mean? I felt like they had no clue about what these numbers meant. So I thought: why spend time teaching students to add/subtract/multiply/divide fractions (which also was tested) when the numbers held no meaning for them? How could this happen? These students were plenty bright. I'm sure they'd seen lots of pies cut up all through math classes. Why didn't they get it? Where was the disconnect in the understanding?
{"url":"http://mathforum.org/mathtools/discuss.html?context=dtype&do=r&msg=15412","timestamp":"2014-04-16T13:18:47Z","content_type":null,"content_length":"16623","record_id":"<urn:uuid:4e1d4f7f-4682-4c66-b0b2-d5655d06e1d7>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Attleboro Science Tutor
Find an Attleboro Science Tutor
...I also have 5 years of experience in Spanish. Having this diverse background allows me to implement a large variety of strategies in my interactions with students, and methods of teaching. I will not only focus on simply teaching the material at hand, but will also attempt to improve the studen...
22 Subjects: including philosophy, reading, Spanish, calculus
...I am willing to work any day during the week, including weekends. I teach theatre out of Exploration Schools Inc. Additionally I have extensive experience with theatrical makeup and special
23 Subjects: including nutrition, Latin, study skills, elementary math
...I offer a flexible work schedule that includes nights and/or weekends to anyone living within a 15-mile radius of North Attleboro, MA. I have a background in business management and have owned and operated a small home care service business for over 15 years in southeastern Massachusetts and nearb...
40 Subjects: including psychology, sociology, reading, English
...The courses I've taught and tutored required differential equations, so I have experience working with them in a teaching context. In addition to undergraduate level linear algebra, I studied linear algebra extensively in the context of quantum mechanics in graduate school. I continue to use undergraduate level linear algebra in my physics research.
16 Subjects: including biology, geometry, physics, calculus
...My personal goal is for you to appreciate how clever you really are. I teach using models from the real world to allow you to see how scientists and mathematicians account for relationships and functions using different tools. We can make this learning experience successful and fun.
11 Subjects: including physics, physical science, calculus, geometry
Nearby Cities With Science Tutor
Attleboro Falls Science Tutors Central Falls Science Tutors Cumberland, RI Science Tutors East Providence Science Tutors Easton, MA Science Tutors Franklin, MA Science Tutors Mansfield, MA Science Tutors North Attleboro Science Tutors Norton, MA Science Tutors Pawtucket Science Tutors Plainville, MA Science Tutors Providence, RI Science Tutors Rehoboth, MA Science Tutors Taunton, MA Science Tutors Woonsocket, RI Science Tutors
{"url":"http://www.purplemath.com/Attleboro_Science_tutors.php","timestamp":"2014-04-19T17:29:54Z","content_type":null,"content_length":"23910","record_id":"<urn:uuid:561d8256-0090-42b8-bad0-3270d6ad1383>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
How to get out of collision when using the posteriori (discrete) method
I am learning basic collision detection, using the posteriori (discrete) method. Assume the simplest case of 2 circles in 2D, same mass, same size, and assume elastic collision and assume they are moving on the x-axis and are moving towards each other. Now advance the simulation one time step. Assume the circles are now in collision, where one circle has entered another. This is found by checking that the distance between their centers is smaller than 2*r (where r is the radius). Now the speeds are adjusted according to the standard equation, the simulation is advanced one time step, and the positions are adjusted. For this case, the speeds will flip directions and the circles will start moving away from each other. The problem is that if the simulation time step is too small or the objects are moving too slowly, it is possible that the 2 circles will remain in collision by the next step because they have not moved out of each other completely yet. Therefore, in the next time step, the circles are found again to be in collision, and the speeds are adjusted again, but now they will flip backwards, and hence the circles will begin to move back into each other. On the next time step, collision is detected again, and the speeds adjusted, and the circles will now move away from each other. This process repeats, and the circles will remain in collision, unable to completely leave each other. I am sure this is a known issue with the posteriori method. What is the best way to resolve this scenario?
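A standard remedy is to combine positional correction (separate the circles before responding) with a check that the circles are actually approaching before flipping velocities. Here is a minimal 1-D Python sketch of that idea; the function name and structure are illustrative, not from any particular engine:

    def resolve_collision(p1, p2, v1, v2, r):
        """Equal-mass elastic response for two circles moving on the x-axis.

        Separates the overlapping circles first, so the same contact is not
        detected again on the next step, then swaps velocities (the 1-D
        equal-mass elastic result) only if the circles are approaching.
        """
        dx = p2 - p1
        overlap = 2 * r - abs(dx)
        if overlap <= 0:
            return p1, p2, v1, v2            # not actually in contact
        n = 1.0 if dx >= 0 else -1.0         # unit normal from circle 1 to 2
        p1 -= 0.5 * overlap * n              # positional correction: push the
        p2 += 0.5 * overlap * n              # circles apart by half the overlap each
        if (v2 - v1) * n < 0:                # approaching along the normal?
            v1, v2 = v2, v1                  # swap speeds for equal masses
        return p1, p2, v1, v2

Either fix alone (the separation, or the approach test) breaks the oscillation described above; together they also handle slow or stacked contacts more robustly.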
{"url":"http://www.dreamincode.net/forums/topic/293060-how-to-get-out-of-collision-when-using-posteriori-discrete-method/page__pid__1708521__st__0","timestamp":"2014-04-25T01:41:35Z","content_type":null,"content_length":"83580","record_id":"<urn:uuid:74aa8f96-f38a-4eda-afee-b80f7628eaea>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
One-sided versus two-sided randomness - Computational Complexity, 2000
"... We study how the nondeterminism versus determinism problem and the time versus space problem are related to the problem of derandomization. In particular, we show two ways of derandomizing the complexity class AM under uniform assumptions, which was only known previously under non-uniform assumption ..."
Cited by 15 (0 self)
We study how the nondeterminism versus determinism problem and the time versus space problem are related to the problem of derandomization. In particular, we show two ways of derandomizing the complexity class AM under uniform assumptions, which was only known previously under non-uniform assumptions [13, 14]. First, we prove that either AM = NP or it appears to any nondeterministic polynomial time adversary that NP is contained in deterministic subexponential time infinitely often. This implies that to any nondeterministic polynomial time adversary, the graph non-isomorphism problem appears to have subexponential-size proofs infinitely often, the first nontrivial derandomization of this problem without any assumption. Next, we show that either all of BPP = P, AM = NP, and PH P hold, or for any t(n) = 2 n), DTIME(t(n)) ⊆ DSPACE(t^ε(n)) infinitely often for any constant ε > 0. Similar tradeoffs also hold for a whole range of parameters. This improves previous results [17, 5] ...
- Proceedings of Random99, LNCS 1671, 1999
"... A hitting-set generator is a deterministic algorithm which generates a set of strings that intersects every dense set recognizable by a small circuit. A polynomial time hitting-set generator readily implies RP = P. Andreev et al. (ICALP'96, and JACM 1998) showed that a polynomial-time hitting- ..."
Cited by 12 (1 self)
A hitting-set generator is a deterministic algorithm which generates a set of strings that intersects every dense set recognizable by a small circuit. A polynomial time hitting-set generator readily implies RP = P. Andreev et al. (ICALP'96, and JACM 1998) showed that a polynomial-time hitting-set generator in fact implies the much stronger conclusion BPP = P.
- In Proceedings of the Seventeenth Annual IEEE Conference on Computational Complexity, 2002
"... We prove (mostly tight) space lower bounds for "streaming" (or "on-line") computations of four fundamental combinatorial objects: error-correcting codes, universal hash functions, extractors, and dispersers. Streaming computations for these objects are motivated algorithmically by massive data set ..."
Cited by 7 (2 self)
We prove (mostly tight) space lower bounds for "streaming" (or "on-line") computations of four fundamental combinatorial objects: error-correcting codes, universal hash functions, extractors, and dispersers. Streaming computations for these objects are motivated algorithmically by massive data set applications and complexity-theoretically by pseudorandomness and derandomization for space-bounded probabilistic algorithms.
- In Proceedings of the Thirtieth International Symposium on Mathematical Foundations of Computer Science, 2006
"... We use derandomization to show that sequences of positive pspace-dimension – in fact, even positive Δ^p_k-dimension for suitable k – have, for many purposes, the full power of random oracles. For example, we show that, if S is any binary sequence whose Δ^p_3-dimension is positive, then BPP ⊆ P^S and, ..."
Cited by 4 (0 self)
We use derandomization to show that sequences of positive pspace-dimension – in fact, even positive Δ^p_k-dimension for suitable k – have, for many purposes, the full power of random oracles. For example, we show that, if S is any binary sequence whose Δ^p_3-dimension is positive, then BPP ⊆ P^S and, moreover, every BPP promise problem is P^S-separable. We prove analogous results at higher levels of the polynomial-time hierarchy. The dimension-almost-class of a complexity class C, denoted by dimalmost-C, is the class consisting of all problems A such that A ∈ C^S for all but a Hausdorff dimension 0 set of oracles S. Our results yield several characterizations of complexity classes, such as BPP = dimalmost-P, Promise-BPP = dimalmost-P-Sep, and AM = dimalmost-NP, that refine previously known results on almost-classes. 1
, 2003
"... This paper introduces nondeterministic space-bounded Kolmogorov complexity, and we show that it has some nice properties not shared by some other resource-bounded notions of K-complexity. ..."
Cited by 2 (0 self)
This paper introduces nondeterministic space-bounded Kolmogorov complexity, and we show that it has some nice properties not shared by some other resource-bounded notions of K-complexity.
- Proceedings of Random99, LNCS 1671, 2000
"... A hitting-set generator is a deterministic algorithm which generates a set of strings that intersects every dense set recognizable by a small circuit. A polynomial time hitting-set generator readily implies RP = P. Andreev et al. (ICALP'96, and JACM 1998) showed that a polynomial-time hitting- ..."
A hitting-set generator is a deterministic algorithm which generates a set of strings that intersects every dense set recognizable by a small circuit. A polynomial time hitting-set generator readily implies RP = P. Andreev et al. (ICALP'96, and JACM 1998) showed that a polynomial-time hitting-set generator in fact implies the much stronger conclusion BPP = P. We simplify and improve their (and later) constructions. Keywords: Derandomization, RP, BPP, one-sided error versus two-sided error. A preliminary version of this work has appeared in the proceedings of Random99.
- In Proceedings of the Sixteenth Annual IEEE Conference on Computational Complexity, 2001
"... Most of the hypotheses of full derandomization fall into two sets of equivalent statements: Those equivalent to the existence of efficient pseudorandom generators and those equivalent to approximating the accepting probability of a circuit. We give the first relativized world where these sets of equival ..."
Most of the hypotheses of full derandomization fall into two sets of equivalent statements: Those equivalent to the existence of efficient pseudorandom generators and those equivalent to approximating the accepting probability of a circuit. We give the first relativized world where these sets of equivalent statements are not equivalent to each other.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=112656","timestamp":"2014-04-18T05:55:19Z","content_type":null,"content_length":"30086","record_id":"<urn:uuid:56a8b3ce-4aa1-4cd4-a4d7-0ab7d9069d14>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating Equivalent GIC (simple interest)
February 1st 2011, 10:35 AM #1
Feb 2011
Calculating Equivalent GIC (simple interest)
Rob has $1,000 to invest for 120 days and is considering two options. Option 1: He can invest the money in a 120-day GIC paying simple interest of 4.48%. Option 2: He can invest the money in a 60-day GIC paying simple interest of 4.50% and then re-invest the maturity value into another 60-day GIC. What would the interest rate on the second 60-day GIC have to be for both options to be equivalent? So far I have got: Option 1: Option 2: I know I should use the formula R = I/(Pt); I'm just not sure what numbers to plug in to find the equivalent rate. Can anyone show me how to solve this? The answer is not in the back of the book.
Rob has $1,000 to invest for 120 days and is considering two options. Option 1: He can invest the money in a 120-day GIC paying simple interest of 4.48%. Option 2: He can invest the money in a 60-day GIC paying simple interest of 4.50% and then re-invest the maturity value into another 60-day GIC. What would the interest rate on the second 60-day GIC have to be for both options to be equivalent? So far I have got: Option 1: Option 2: I know I should use the formula R = I/(Pt); I'm just not sure what numbers to plug in to find the equivalent rate. Can anyone show me how to solve this? The answer is not in the back of the book.
What is GIC? If you are going to use acronyms, at least tell us what they mean. Your formulas are written wrong, which was causing me to get odd answers.
$\displaystyle P\left(1+[i\cdot n]\right)\neq P(1+i)^n$
$\displaystyle 1000\cdot (1+.0448)\cdot\frac{120}{365}=343.50$
$\displaystyle 1000\cdot\left(1+\left[.0448\cdot\frac{120}{365}\right]\right)=1014.73$
$\displaystyle 1007.39\cdot\left(1+\left[i\cdot\frac{60}{365}\right]\right)=1014.73$
Last edited by dwsmith; February 1st 2011 at 12:07 PM.
R (interest rate) is what I'm trying to figure out; I'm not sure what numbers to put in to find the interest rate. How do I solve for I?
To get the interest rate:
$\displaystyle \frac{1}{1007.39}\left[1007.39\cdot\left(1+\left[i\cdot\frac{60}{365}\right]\right)\right]=1014.73\cdot\frac{1}{1007.39}$
$\displaystyle 1+\left[i\cdot\frac{60}{365}\right]-1=\frac{1014.73}{1007.39}-1$
$\displaystyle \left[i\cdot\frac{60}{365}\right]=\frac{1014.73}{1007.39}-1\Rightarrow\cdots$
I'm not sure how to use that to calculate what the interest rate should be for the second one. How do I solve for i?
The formula I learned was R (rate) = I/(Pt), so R = 7.34/(1007.39 x (60/365)), giving R = .0443. 4.43%?
Awesome, it worked.. thank you so much!! I went and plugged that into your formula: 1007.39 (1 + [.0443 x (60/365)]) = 1014.726, rounded to 1014.73.
I was stuck on a similar question; I think I got it after countless hours! lol So here's my solution. Your first calculation of 4.5% on a 60-day GIC gives $1007.39. Subtract that from the maturity value of the 120-day GIC, $1014.73, which gives a difference of $7.34. This means that for your second 60-day calculation you want the rate (as a percentage) that takes $1007.39 up to $1014.73. Then you plug into the formula you had, R = I/(Pt): R = $7.34 (the interest) divided by $1007.39 (the principal for the second 60 days) x 60/365 (the time in years). That gives R = 7.34 / 165.60 = 0.044324111, which times 100% equals 4.432%. That would be the interest rate on the 2nd 60-day GIC for the options to be equivalent... Hope that is correct, and hope it may help... feel free to correct and give feedback...
:O) I tried...
Last edited by abegail; November 25th 2012 at 03:56 PM. Reason: typing error
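For anyone who wants to check the arithmetic, here is a short Python sketch of the whole comparison; it simply mechanizes the simple-interest formula S = P(1 + r*t) used in the thread:

    P = 1000.00
    target = P * (1 + 0.0448 * 120 / 365)       # Option 1: 120-day GIC at 4.48%
    after_first = P * (1 + 0.0450 * 60 / 365)   # Option 2, first 60-day GIC at 4.50%

    # Solve after_first * (1 + r * 60/365) = target for r:
    r = (target / after_first - 1) * 365 / 60
    print("Maturity to match: $%.2f" % target)       # $1014.73
    print("After first GIC:   $%.2f" % after_first)  # $1007.40 (the thread rounds to $1007.39)
    print("Required rate:     %.3f%%" % (100 * r))   # about 4.43%; the thread's 4.432%
                                                     # comes from rounding to cents first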
{"url":"http://mathhelpforum.com/business-math/169919-calculating-equivalent-gic-simple-interest.html","timestamp":"2014-04-19T05:20:24Z","content_type":null,"content_length":"72397","record_id":"<urn:uuid:259857dd-3714-4286-8874-817c2b9200b6>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving Functions are Onto
Next: Mathematical Induction Up: NOTES ON BASIC PROOFS Previous: Proving Functions One-to-One
Another problem associated with functions is to show that they are onto. This comes down to showing that for every element in the codomain there exists an element in the domain which maps to it. Again, the method used to establish this property depends on how the function is given and its properties. Typically this property is much harder to establish than showing a function is one-to-one. It is really an existence proof. For functions given by formulas we proceed along the following lines. Let y be any element of the codomain and x an element of the domain. We solve the equation y = f(x) for x. This gives us a possible candidate for a domain element. We prove it is a suitable domain element by substituting this value into the function. As an example, let's prove that f: R -> R given by f(x) = 5x+2 is onto, where R denotes the real numbers. We let y be a typical element of the codomain and set up the equation y = f(x). Then, y = 5x+2 and solving for x we get x = (y-2)/5. Since y is a real number, then (y-2)/5 is a real number and f((y-2)/5) = 5(y-2)/5 + 2 = y. It seems that this was too long-winded an argument, but care does need to be taken. For example, suppose we tried to show the function f: R -> R given by f(x) = x^2 is onto, where R denotes the real numbers. Let's go through the same type of proof. We let y be a typical element of the codomain and set up the equation y = f(x). Then, y = x^2 and solving for x we get x = sqrt(y). However, x is not a real number for all choices of y - take y to be -2. So -2 is not in the range of the function and hence this is not an onto function. Of course, it is simpler to show f is not onto by a counter example.
Next: Mathematical Induction Up: NOTES ON BASIC PROOFS Previous: Proving Functions One-to-One
Peter Williams Wed Aug 21 23:10:39 PDT 1996
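As a quick numerical sanity check of the first argument (an illustrative snippet; the sample y values are arbitrary), the candidate x = (y-2)/5 really does map back to y under f(x) = 5x+2:

    import math

    f = lambda x: 5 * x + 2
    for y in [-7.5, 0.0, 2.0, 13.31]:
        x = (y - 2) / 5               # candidate preimage from solving y = f(x)
        assert math.isclose(f(x), y)  # f maps the candidate back to y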
{"url":"http://www.math.csusb.edu/notes/proofs/bpf/node5.html","timestamp":"2014-04-16T10:58:19Z","content_type":null,"content_length":"4796","record_id":"<urn:uuid:68ba52ec-b8b8-414d-8eb9-b76d4face244>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry and Grid
Next: Results Up: COMPUTATIONS OF UNSTEADY MULTISTAGE Previous: Algorithm
The 2-1/2 stage compressor geometry used in this study models the midspan geometry of an experiment by Dring (AGARD, 1989). The experimental configuration consists of an inlet guide vane followed by two rotor/stator pairs. There are 44 airfoils in each of the rows, leading to a 1:1 ratio of airfoils down the compressor. As it would be prohibitively expensive to compute the flow through the entire 220-airfoil system, the flow has been computed only through one passage, and periodicity has been used to model the other 43 passages. The axial gaps between airfoil rows in the experimental configuration are approximately 50% of the average axial chord. In this study the flow through the compressor has been computed with the same midspan airfoil geometry, but with varying axial gaps. In Gundy-Burlet, et al. (1989, 1990), a parabolic arc inlet guide vane was used because the actual vane geometry was unavailable. The vane geometry has recently become available and is used in this calculation. The first and second stages of the compressor are similar, except that the first-stage rotor is closed 3 degrees from axial relative to the second-stage rotor. This reduces the angle of attack of the first-stage rotor. The airfoil sections are all defined by NACA 65-series airfoils imposed on a circular-arc mean camber line. The average axial chord is 4 inches. A zonal grid system is used to discretize the flowfield within the 2-1/2 stage compressor. Figure 1 shows the zonal grid system used for the 20% gap case. In Fig. 1, every other point in the grid has been plotted for clarity. There are two grids associated with each airfoil. An inner, body-centered "O" grid is used to resolve the flow near the airfoil. The thin-layer Navier-Stokes equations are solved on the inner grids. The grid points of the inner grids are clustered near the airfoil to resolve the viscous terms. The Euler equations are solved on the outer sheared cartesian "H" grids. The rotor and stator grids are allowed to slip past each other to simulate the relative motion between rotor and stator airfoils. In addition to the two grids used for each airfoil, there is also an inlet and an exit grid, thus yielding a total of 12 grids. In order to generate inner grids that are wholly contained by the outer grids and yet are not distorted, it was necessary to overlap the rotor and stator outer grids in the gap regions for the 20% axial gap case. This can be seen in the 20% axial gap grid shown in Fig. 1. This required a modification to the grid generator and algorithm, and permits study of turbomachines with small axial gaps. A coarse grid configuration has been used to validate workstation results. The inner grids are dimensioned. Fine grids are used to obtain detailed data regarding the steady and unsteady flow structure in the compressor. The inner grids are dimensioned.
Next: Results Up: COMPUTATIONS OF UNSTEADY MULTISTAGE Previous: Algorithm
Karen L. Gundy-Burlet Wed Apr 9 12:58:06 PDT 1997
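The wall-clustering mentioned above is typically produced with a one-sided stretching function. The following Python sketch shows one common choice, a hyperbolic-tangent stretching law; it is illustrative only and not necessarily the law used for the grids described here:

    import numpy as np

    def tanh_clustered(n, beta=2.5):
        """Return n points in [0, 1] clustered toward 0 (the airfoil surface).

        beta > 0 sets the clustering strength: larger beta packs more points
        near the wall, resolving the boundary layer for the viscous terms.
        """
        s = np.linspace(0.0, 1.0, n)                  # uniform parameter
        return 1.0 + np.tanh(beta * (s - 1.0)) / np.tanh(beta)

    print(np.round(tanh_clustered(11), 4))            # spacing grows away from 0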
{"url":"http://ti.arc.nasa.gov/m/profile/gundy/fla/node6.html","timestamp":"2014-04-18T03:26:53Z","content_type":null,"content_length":"6485","record_id":"<urn:uuid:ccb9408a-5afa-4aba-be81-20a4ed32cc0d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Alpha, NJ Math Tutor
Find an Alpha, NJ Math Tutor
...Before teaching high school I taught physics, astronomy, and geology at the college level and during graduate school I tutored college students in all of these subjects. For the past ten years, I have spent my summers teaching at summer camps for gifted students. Please contact me to learn more...
7 Subjects: including algebra 2, physical science, geology, algebra 1
...As a university level instructor, and one who teaches international students, I am constantly incorporating study skills into my classes. Not only do students need to know the content of the class, they also need to know how to take the class. Therefore, I identify which type of learner each student is, and tailor the lessons to meet each student's individual needs.
29 Subjects: including algebra 1, prealgebra, Spanish, English
...My favorite subjects are chemistry, physics, and any math subject, but I would be willing to step outside my comfort zone as I am exposed to many math/science subjects as a chemical engineer. I have found in my studies that to be good at anything, practice is necessary. I find running through example problems to be the most effective way of learning a subject.
10 Subjects: including calculus, geometry, algebra 2, precalculus
...I have helped hundreds of students prepare for the Math and Verbal portions of the SAT and the ACT and I am more than capable to help a dedicated student prepare for the ASVAB. The SSAT standardized test for entry into specialized schools requires quantitative and verbal skills that students are...
12 Subjects: including prealgebra, algebra 1, algebra 2, calculus
...Along with tutoring, I can be of great assistance to anyone who has questions about ROTC, the service academies, alternate commissioning programs, or enlistment. I have a passion for learning and a desire to instill that in others. I look forward to helping students achieve their goals! I hold the rank of shodan (1st degree black belt) in Shito-ryu karate.
16 Subjects: including calculus, physics, algebra 1, algebra 2
Related Alpha, NJ Tutors
Alpha, NJ Accounting Tutors Alpha, NJ ACT Tutors Alpha, NJ Algebra Tutors Alpha, NJ Algebra 2 Tutors Alpha, NJ Calculus Tutors Alpha, NJ Geometry Tutors Alpha, NJ Math Tutors Alpha, NJ Prealgebra Tutors Alpha, NJ Precalculus Tutors Alpha, NJ SAT Tutors Alpha, NJ SAT Math Tutors Alpha, NJ Science Tutors Alpha, NJ Statistics Tutors Alpha, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Asbury, NJ Math Tutors Broadway, NJ Math Tutors Durham, PA Math Tutors Glendon, PA Math Tutors Kintnersville Math Tutors Little York, NJ Math Tutors Milford, NJ Math Tutors Phillipsburg, NJ Math Tutors Riegelsville Math Tutors Springtown, PA Math Tutors Stewartsville, NJ Math Tutors Stockertown Math Tutors Tatamy Math Tutors Upper Black Eddy Math Tutors West Easton, PA Math Tutors
{"url":"http://www.purplemath.com/alpha_nj_math_tutors.php","timestamp":"2014-04-18T03:51:48Z","content_type":null,"content_length":"23968","record_id":"<urn:uuid:ee1a0fbb-733c-4ead-aa23-4e473a725e3d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
More Uses for auto
August 10, 2011
This post continues last week's discussion of auto by showing some more contexts in which it makes programs easier to read.
There are many contexts in which we write expressions without caring about the details of those expressions' types, typically because we use them as parts of other expressions. For example, if a, b, and c are the coefficients of a quadratic equation, the equation's roots are
(-b + sqrt(b * b - 4 * a * c)) / (2 * a)
(-b - sqrt(b * b - 4 * a * c)) / (2 * a)
Suppose we want to print those roots. We might write something like this:
cout << "Roots: "
<< (-b + sqrt(b * b - 4 * a * c)) / (2 * a)
<< " and "
<< (-b - sqrt(b * b - 4 * a * c)) / (2 * a)
<< endl;
However, doing so clearly repeats a lot of code. We can use auto to make it easier to factor this code:
auto d = sqrt(b * b - 4 * a * c), e = 2 * a;
cout << "Roots: " << (-b + d) / e << " and " << (-b - d) / e << endl;
I can think of no reason for a programmer who wants to refactor code in this way to have to figure out what types to give d and e. In fact, I think that it is less clear to define d and e this way:
double d = sqrt(b * b - 4 * a * c), e = 2 * a;
because now the reader must figure out whether the types of d and e were declared correctly, or whether sqrt(b * b - 4 * a * c) yields a type, such as long double, that will lose information on being converted to double.
We can find another example of using auto to simplify code by looking at make_pair. If a has type A and b has type B, then make_pair(a, b) is an object of type pair<A, B>. So, for example, we can write
pair<int, double> p = make_pair(3, 4.5);
after which p.first is 3 and p.second is 4.5. I claim that restating the type pair<int, double> makes this code harder to read, not easier. Moreover, making the expressions more complicated increases the benefit of auto. Why write
pair<pair<int, double>, pair<int, double>> p = make_pair(make_pair(3, 4.5), make_pair(6, 7.8));
when you can write
auto p1 = make_pair(3, 4.5), p2 = make_pair(6, 7.8);
auto p = make_pair(p1, p2);
In short, I believe that auto is like so many programming tools: It can make programs easier or harder to read depending on how you use it.
{"url":"http://www.drdobbs.com/cpp/more-uses-for-auto/231300549","timestamp":"2014-04-16T21:51:29Z","content_type":null,"content_length":"95301","record_id":"<urn:uuid:9a161b67-c3a8-4d8d-b196-f88a1c51a504>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Subtlety in the definition of the Kobayashi metric
When defining the Kobayashi metric on a connected complex analytic space $X$, one makes the following auxiliary definition: A holomorphic chain from $x\in X$ to $y\in X$ is a finite sequence of holomorphic maps $f_1,\ldots ,f_n\colon\Delta\to X$ (where $\Delta$ is the unit disk in $\mathbb{C}$) together with points $z_1,\ldots ,z_n,w_1,\ldots ,w_n\in\Delta$ such that $f_1(z_1)=x$, $f_i(z_i)=f_{i+1}(w_{i+1})$ for $1\le i< n$ and $f_n(w_n)=y$. The length of a holomorphic chain, with this notation, is $\sum_{i=1}^nd(z_i,w_i)$ (Poincaré metric on $\Delta$). Finally the Kobayashi pseudo-distance on $X$ is obtained by setting $d(x,y)=$ infimum of lengths of all holomorphic chains from $x$ to $y$. This "pseudo-distance" is obviously symmetric, satisfies $d(x,x)=0$ and the triangle inequality. The space is called Kobayashi hyperbolic if $d$ is in addition non-degenerate, i.e. if $d$ is a metric.
Now one could as well begin with the following much simpler construction: Consider the function $\delta :X\times X\to [0,\infty ]$ with $\delta (x,y)=\inf d(z,w)$, the infimum running over all triples $(f,z,w)$ with $f:\Delta\to X$ holomorphic, $z,w\in\Delta$ and $f(z) =x$, $f(w)=y$. This is still symmetric and satisfies $\delta (x,x)=0$, but now it is unclear whether
a) $\delta (x,y)$ is finite, i.e. the set of triples $(f,z,w)$ is non-empty;
b) $\delta$ satisfies the triangle inequality.
Clearly, a) and b) together are equivalent to $d=\delta$, and $d$ can be obtained from $\delta$ by an easy construction. Finally, my questions: Under which circumstances is $d=\delta$? Is there a simple example where $d\neq\delta$? What is the logical relation between a) and b)?
2 Answers
Let me give an example where $d \neq \delta$. I learned it from A Survey on Hyperbolicity of Projective Hypersurfaces, Example 1.2.1. Consider $D$ as the following open subset of $\mathbb C^2$:
$$ D = \lbrace (z,w) \in \mathbb C^2 ; |z| < 1, |zw|< 1 \rbrace \setminus \lbrace (0,w) : |w| \ge 1 \rbrace . $$
The Kobayashi distance of $p=(0,0)$ and $q=(0,1/2)$ is zero. Indeed, if $p_n = ( 1/n,0)$ and $q_n = ( 1/n, 1/2)$ then
$$\lim_{n \to \infty} \delta(p_n,q_n) = 0,$$
and we can verify that this implies $d(p,q)=0$.
If $f: \Delta \to D$ is such that $f(0)=p$ and $f(a)=q$ then applying the Schwarz lemma to $f_2$, the second component of $f=(f_1,f_2)$, we see that $|a|\ge 1/2$. Therefore $\delta(p,q) = d_\Delta(0,1/2) = \frac{1}{2}\log 3 > 0$.
Notice that the Kobayashi pseudo-distance is continuous while $\delta$ is not. It seems reasonable to expect that the continuity of $\delta$ implies $\delta=d$.
This is actually more subtle than you might think. A classification of the spaces for which $\delta = d$ is far from known, even for domains in $\mathbb{C}^n$. However, if $\Omega \subset \mathbb{C}^n$ is convex (or biholomorphic to a lineally convex domain), then $\delta = d$, which was shown by Lempert [Lempert, László. La métrique de Kobayashi et la représentation des domaines sur la boule. Bull. Soc. Math. France 109 (1981), no. 4, 427--474.] There are also some other examples known where $\delta = d$.
One fairly simple example where $\delta$ fails to satisfy the triangle inequality is the following. Let
$$\Omega_\epsilon = \lbrace z \in \mathbb{C}^2 : |z_1| < 1, |z_2| < 1, |z_1z_2| < \epsilon \rbrace.$$
Also, let $P = (1/2, 0)$ and $Q = (0, 1/2)$.
You can check that $\delta(P,0)$ and $\delta(0,Q)$ (with respect to $\Omega_\epsilon$) are independent of $\epsilon$, but $\delta(P,Q) \to \infty$ as $\epsilon \to 0$. Hence, if $\epsilon$ is sufficiently small, $\delta_\Omega$ violates the triangle inequality.
The paper by Lempert mentioned above is available at numdam.org/item?id=BSMF_1981__109__427_0 – jvp Nov 14 '11 at 16:34
{"url":"http://mathoverflow.net/questions/80638/subtlety-in-the-definition-of-the-kobayashi-metric?sort=oldest","timestamp":"2014-04-17T04:22:36Z","content_type":null,"content_length":"56309","record_id":"<urn:uuid:1b4a4e1c-4626-4610-bd33-bf1c6115a751>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Dunwoody, GA Precalculus Tutor
Find a Dunwoody, GA Precalculus Tutor
I am a Georgia-certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including precalculus, statistics, geometry, SAT math
...Math and science have opened many doors for me and they can do the same for you! Differential Equations is an intimidating and potentially frustrating course. The course is usually taken by engineering students and taught by mathematics professors. The pure mathematical approach can be discouraging to engineering students and make the course seem like a waste of time.
15 Subjects: including precalculus, calculus, physics, algebra 2
I have a Ph.D. in sociology and am proficient in SPSS. Not only can I help with data analysis and writing syntax, but also I can explain the logic behind the statistical analyses. With a strong background in mathematics, I can explain the concepts easily.
9 Subjects: including precalculus, statistics, prealgebra, algebra 1
...I focus not only on the subject matter itself, but also on where my student is misunderstanding the material, so we can focus on that and correct it. In addition, I guide my students in AP Calculus exam-taking technique to maximize their score. I have taught Precalculus at the high school and college level for 25 years, as well as tutoring individual students.
5 Subjects: including precalculus, calculus, algebra 2, trigonometry
I graduated from Clemson University in December 2011. I majored in electrical engineering and currently work in the power industry. My love for math has grown since grade school, which prompted me to take all of the math courses that I could in college.
14 Subjects: including precalculus, calculus, geometry, algebra 2
Related Dunwoody, GA Tutors
Dunwoody, GA Accounting Tutors Dunwoody, GA ACT Tutors Dunwoody, GA Algebra Tutors Dunwoody, GA Algebra 2 Tutors Dunwoody, GA Calculus Tutors Dunwoody, GA Geometry Tutors Dunwoody, GA Math Tutors Dunwoody, GA Prealgebra Tutors Dunwoody, GA Precalculus Tutors Dunwoody, GA SAT Tutors Dunwoody, GA SAT Math Tutors Dunwoody, GA Science Tutors Dunwoody, GA Statistics Tutors Dunwoody, GA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Alpharetta precalculus Tutors Chamblee, GA precalculus Tutors Decatur, GA precalculus Tutors Doraville, GA precalculus Tutors Duluth, GA precalculus Tutors Johns Creek, GA precalculus Tutors Mableton precalculus Tutors Norcross, GA precalculus Tutors North Springs, GA precalculus Tutors Roswell, GA precalculus Tutors Sandy Springs, GA precalculus Tutors Smyrna, GA precalculus Tutors Snellville precalculus Tutors Tucker, GA precalculus Tutors Woodstock, GA precalculus Tutors
{"url":"http://www.purplemath.com/Dunwoody_GA_precalculus_tutors.php","timestamp":"2014-04-17T04:18:49Z","content_type":null,"content_length":"24366","record_id":"<urn:uuid:6cd5a431-411e-4665-9b69-c3e91ebca125>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
[Tutor] Python performance on Windows system
Roeland Rengelink r.b.rigilink@chello.nl
Sun, 28 Oct 2001 12:32:48 +0100
Hi Joel,
The problem is certainly the disk read/writes. As you found out, trying to manipulate a large list of integers consumes a lot of memory. A Python integer (an integer object) takes about 20 bytes (IIRC), so 100M of memory for a list of 5 million integers sounds about right. However, the integer value itself is only 4 bytes. So, if you use the array module to store the integers in an array, you already slash your memory consumption to 20M. However, there is a nice trick that can reduce the memory consumption a lot more. It's a variation on the trick you could have used with your file too. Note that your file lists all the integers from 1 to end, using 12 bytes per number (10 chars+'\r'+'\n'). However, to see if number n is a prime you compute an offset (n*12) and read the line to see if the number is n (then it's a prime) or 0 (then it's not). But, since you already know the number you're interested in, why store it? You could just as easily have made a file with either a 1 or a 0 on each line (costing only three bytes), use n to compute the offset (n*3), and get either a 0 (not prime) or 1 (prime). Or, just store it as one long string of 0s and 1s. The cost would have been one byte per number and the offset for number n would have been n. But a 0 or 1 isn't really a byte of information; it's a bit. It should be possible to store the result for eight numbers in 1 byte. That reduces the memory/disk cost from 20/12 bytes per number to 1 byte per 8 numbers. That's a factor of a hundred. Now, that's a significant factor. With your current method you'll start getting into trouble again if you try to do 80M numbers, since you'll then run into the 1GB limit on your file size. However, if you could reduce the cost to 1 byte per 8 numbers, those same 80M numbers would fit comfortably in about 10M of memory without any need to use disk access at all. Without further ado, here's a class (bitset) that packs the information you're interested in. It uses the fact that an integer contains 32 bits. So I can store n bits of information in an array of n/32 integers. I also include a version of your sieve algorithm that uses the bitset to store its result.
import math, array, time

class bitset:
    '''A class to hold a bitset of arbitrary length

    The bitset is stored in chunks of 32 bits.'''
    def __init__(self, size):
        self.size = size
        self.data = array.array('l', [0]*int(math.ceil(size/32.0)))
    def setbit(self, index):
        '''set bit at index to true'''
        arrindex, bitindex = divmod(index, 32)
        self.data[arrindex] |= (1<<bitindex)
    def flipbit(self, index):
        '''flip the value at index'''
        arrindex, bitindex = divmod(index, 32)
        self.data[arrindex] ^= (1<<bitindex)
    def unsetbit(self, index):
        '''set bit at index to false'''
        arrindex, bitindex = divmod(index, 32)
        self.data[arrindex] &= ~(1<<bitindex)
    def getbit(self, index):
        '''get the value of bit at index'''
        arrindex, bitindex = divmod(index, 32)
        if self.data[arrindex] & (1<<bitindex):
            return 1
        return 0

# And your sieve algorithm becomes:

def find_primes(end):
    '''return a bitset of size end

    The bit at index i is true if i is not a prime
    and false if i is a prime'''
    bs = bitset(end)
    endsqrt = math.floor(math.sqrt(end))
    x = 2
    while x <= endsqrt:
        for y in xrange(2*x, end, x):
            bs.setbit(y)           # mark every multiple of x as composite
        while 1:
            x += 1
            if not bs.getbit(x):   # advance x to the next unmarked (prime) value
                break
    return bs

if __name__ == '__main__':
    end = 10000L
    bs = find_primes(end)
    # print the primes (start at 2; 0 and 1 are not prime)
    for i in xrange(2, end):
        if not bs.getbit(i):
            print i,

As a final note, I didn't really think all this out myself. The method I describe here is almost straight from 'Programming Pearls' by Jon Bentley.

Hope this helps,

> Joel Ricker wrote:
> Hi all,
> I've been away from Python for a little while and so to stretch my
> programming legs a little bit, I've been working on a script to
> generate prime numbers. I've used a sieve algorithm and originally a
> list of numbers in an array and have reworked it to use a text file in
> the same way. I've noticed when working with big numbers, say finding
> all prime numbers between 1 and 5 million, my system becomes a little
> less responsive. It was real bad when I used a large array since
> sometimes the memory used by python would run into the 100 meg range
> but since I've switched to using a file it has improved alot but its
> still there.
> Basically what happens is while the script is running, other windows
> are a little slow to appear and starting new programs or closing
> windows takes several seconds to hapen. I'm running Windows 2000,
> with a 750 mhz processor and 128megs of memory. What is odd though is
> that using Task Manager, I see that Python isn't using that much CPU
> -- only 1 or 2 percent at the most and very little memory -- about
> 520k.
> Is it all the disk writes that is slowing things down? Anything I can
> do to my code to help things along? Or anything I can do to the
> python interpreter itself? Below is the main part of the code that is
> doing most of the work in the script. primein.txt is a text file
> containing a list of numbers between 0 and the max number to search
> to.
> Thanks
> Joel
> f = open('primein.txt','r+')
> x = 2
> endsqrt = math.floor(math.sqrt(end))
> while x <= endsqrt:
>     print x
>     for y in xrange(2, end/x+1):
>         f.seek(x * y * 12)
>         f.write('%10d\n' % 0)
>     f.flush()
>     f.seek(x * 12 + 12)
>     while 1:
>         x = f.readline()
>         if int(x) > 0:
>             x = int(x)
>             break
"Half of what I say is nonsense. Unfortunately I don't know which half"
{"url":"https://mail.python.org/pipermail/tutor/2001-October/009491.html","timestamp":"2014-04-21T09:15:12Z","content_type":null,"content_length":"8682","record_id":"<urn:uuid:fd6fb01c-f16d-483b-bc51-fe8a92e05129>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: Traffic Jam Applet tool, Traffic Jam Activity
Discussion: Traffic Jam Applet tool
Topic: Traffic Jam Activity
Related Item: http://mathforum.org/mathtools/tool/10/
Subject: RE: Traffic Jam: number of moves
Author: SteveW
Date: Aug 3 2004
Notes from our group discussion:
2. If you know how many people there are on each side, can you tell me the minimum number of moves it will take to complete the exchange?
First we wanted to get the "idea down." and then we wanted to find the minimum number of moves. We defined "n".
Question: Did you decide initially what "n" was? (Should I use a variable?) As a high school teacher and working towards the quadratics, you would have to define the two variables in some way. We had a communication problem to start with - are we talking about the number of people on each side or the total number of people? Once you have the number of minimum moves for specific cases, describe it algebraically, and then go on to write the description of that. Many of us arrived at some form of n^2 + 2n. And, connecting this expression to the patterns noticed in the moves, we noticed that the number of jumps is always n^2. When you count the number of steps, it's 2n. The number of jumps could be calculated making use of a variation of Gauss' formula - recognizing if you add the number of jumps that you go through (e.g. 1+2+3+4+3+2+1 when there are four pieces/people on each side), you are twice adding all of the integers, 1 to n-1, plus the "nth term" in the middle: 2*[n(n-1)/2] + n, which simplifies to n^2.
If you use n as odd, you get a different quadratic? (an odd number of n's) If you don't define the rules as clearly, you get different results. Let the students define their own rules and then discover the pattern. What about putting it on a square grid rather than just linear. For example a 6 by 6 or ? (Are there lots of empty spaces?) -- Cut The Knot
The "kind of move" came into play. We moved to the level of abstraction almost immediately so that we didn't really learn to "play the game" until we backed up a little. What in the patterns of the movement made the abstraction come out?
The lesson plan with the tool is important. There are key moments where the teacher can help with a well-timed question, for instance about connecting mathematical expressions to the pattern of moves.
Manipulatives vs virtual manipulatives vs paper/pencil vs (body)
Part of what you want them to do is use mathematical language to describe the activity. At the least I want the user to define the action. (Might you restrict it to letters?) No restriction would be better. You want the development of the patterns.
Could you use music? numeric, alphabetical (Some folks respond better to sound.)
Ways to extend the problem out further - "What if you didn't have an even number of figures on each side?" A good feature of this problem is its richness. It entertains from cradle to grave.
Even in presenting this to teachers, using the kinesthetic first is a good idea.
Have students try the activity with their bodies WITHOUT talking.
Conway's Game of Life - connection (point to some online instances) Used a binary code to view the idea.
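A quick Python check of the pattern discussed above (illustrative; n is the number of people on each side):

    def min_moves(n):
        # jumps follow the pattern 1+2+...+(n-1), twice, plus n in the middle,
        # which is 2*[n(n-1)/2] + n = n^2; slides contribute 2n more.
        jumps = 2 * sum(range(1, n)) + n
        slides = 2 * n
        return jumps + slides

    for n in range(1, 6):
        assert min_moves(n) == n**2 + 2*n
        print(n, min_moves(n))   # 3, 8, 15, 24, 35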
{"url":"http://mathforum.org/mathtools/discuss.html?id=10&context=tool&do=r&msg=12335","timestamp":"2014-04-20T12:00:26Z","content_type":null,"content_length":"19431","record_id":"<urn:uuid:d244ff01-d829-4505-9c46-eda5a24df521>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantitative Analysis of Systems Using Game-Theoretic Learning
Sanjit A. Seshia and Alexander Rakhlin
EECS Department
University of California, Berkeley
Technical Report No. UCB/EECS-2010-102
June 30, 2010
The analysis of quantitative properties, such as timing and power, is central to the design of reliable embedded software and systems. However, the verification of such properties on a program is made difficult by their heavy dependence on the program's environment, such as the processor it runs on. Modeling the environment by hand can be tedious, error-prone and time consuming. In this paper, we present a new, game-theoretic approach to analyzing quantitative properties that is based on performing systematic measurements to automatically learn a model of the environment. We model the problem as a game between our algorithm (player) and the environment of the program (adversary), where the player seeks to accurately predict the property of interest while the adversary sets environment states and parameters. To solve this problem, we employ a randomized strategy that repeatedly tests the program along a linear-sized set of program paths called basis paths, using the resulting measurements to infer a weighted-graph model of the environment, from which quantitative properties can be predicted. Test cases are automatically generated using satisfiability modulo theories (SMT) solving. We prove that our algorithm can, under certain assumptions and with arbitrarily high probability, accurately predict properties such as worst-case execution time or estimate the distribution of execution times. Experimental results for execution time analysis demonstrate that our approach is efficient, accurate, and highly portable.
BibTeX citation:
@techreport{Seshia:EECS-2010-102,
Author = {Seshia, Sanjit A. and Rakhlin, Alexander},
Title = {Quantitative Analysis of Systems Using Game-Theoretic Learning},
Institution = {EECS Department, University of California, Berkeley},
Year = {2010},
Month = {Jun},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-102.html},
Number = {UCB/EECS-2010-102},
Abstract = {The analysis of quantitative properties, such as timing and power, is central to the design of reliable embedded software and systems. However, the verification of such properties on a program is made difficult by their heavy dependence on the program's environment, such as the processor it runs on. Modeling the environment by hand can be tedious, error-prone and time consuming. In this paper, we present a new, game-theoretic approach to analyzing quantitative properties that is based on performing systematic measurements to automatically learn a model of the environment. We model the problem as a game between our algorithm (player) and the environment of the program (adversary), where the player seeks to accurately predict the property of interest while the adversary sets environment states and parameters. To solve this problem, we employ a randomized strategy that repeatedly tests the program along a linear-sized set of program paths called basis paths, using the resulting measurements to infer a weighted-graph model of the environment, from which quantitative properties can be predicted. Test cases are automatically generated using satisfiability modulo theories (SMT) solving. We prove that our algorithm can, under certain assumptions and with arbitrarily high probability, accurately predict properties such as worst-case execution time or estimate the distribution of execution times.
Experimental results for execution time analysis demonstrate that our approach is efficient, accurate, and highly portable.}
}
EndNote citation:
%0 Report
%A Seshia, Sanjit A.
%A Rakhlin, Alexander
%T Quantitative Analysis of Systems Using Game-Theoretic Learning
%I EECS Department, University of California, Berkeley
%D 2010
%8 June 30
%@ UCB/EECS-2010-102
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-102.html
%F Seshia:EECS-2010-102
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-102.html","timestamp":"2014-04-16T07:17:03Z","content_type":null,"content_length":"7802","record_id":"<urn:uuid:043af775-d6f1-4ac4-8f06-c71d3266636b>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Rationalizing the Denominator
Date: 07/20/2001 at 00:26:25
From: Matt Sellers
Subject: Rules for fractions w/ square roots
My teacher and I are having a discussion about square roots in the denominator. I have always been told that you must take out any square roots in the denominator. Over time I have accepted it as a rule of math. Am I wrong to believe that for a final answer using fractions you should take out the square roots from the denominator? If I am right, where can I find the rule so I may show my teacher?
Date: 07/20/2001 at 12:32:52
From: Doctor Rob
Subject: Re: Rules for fractions w/ square roots
Thanks for writing to Ask Dr. Math, Matt.
"Rationalizing the denominator" has been taught in schools for many, many years. See any older algebra book. This dates back to the time when computations had to be done by hand. In this situation, 1/sqrt(2) is harder to compute in decimal form than its equal, sqrt(2)/2. Both require the extraction of the square root, but the first involves a much more difficult division operation than the second. Thus it was deemed that the second form was "simpler" than the first. When simplifying, students were always taught to reduce the expression to its simplest form, which invariably included rationalizing any denominators, wherever possible.
In the modern world, where such calculations are usually done by computers or calculators, it is not clear that such "simplification" makes sense any more. A more complicated example of this can easily occur. For example, if cbrt(x) means the cube root of x, I think that 1/cbrt(2) is simpler than its equal, cbrt(4)/2. On the other hand, it is clear that 1 + cbrt(2) is simpler than its equal, 3/(cbrt(4) - cbrt(2) + 1). Sometimes simpler expressions result from rationalizing, and sometimes from not rationalizing. Furthermore, if you have nested radicals, things get even worse. I don't even want to write down the rationalized-denominator form of such an expression.
On the other hand, rationalizing denominators does tend to produce one single, well-defined "simplest" form. When you do this, you can easily compare two expressions in simplest form to see whether or not they are equal.
So, you see, there are arguments on both sides of this question. I hope that this satisfies your need. If not, write again.
- Doctor Rob, The Math Forum
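For readers who want to experiment, computer algebra systems mechanize exactly this transformation. Here is a small illustrative SymPy snippet (assuming SymPy is installed; it is not part of the original answer):

    from sympy import sqrt, radsimp

    # radsimp rationalizes denominators containing square roots
    print(radsimp(1 / sqrt(2)))         # sqrt(2)/2
    print(radsimp(1 / (1 + sqrt(2))))   # -1 + sqrt(2)

    # Both forms are, of course, numerically identical:
    print(float(1 / sqrt(2)), float(sqrt(2) / 2))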
{"url":"http://mathforum.org/library/drmath/view/52663.html","timestamp":"2014-04-20T19:34:50Z","content_type":null,"content_length":"7304","record_id":"<urn:uuid:d4681dc0-995c-4ca2-8103-d7e963a2d368>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Web Page of Minhyong Kim
Department of Mathematics
University College London
Gower Street
London WC1E 6BT
United Kingdom
Web Page at Oxford
Teaching blog
London number theory blog
Some Recent Publications
Selmer varieties for curves with CM Jacobians. (with John Coates) Kyoto J. Math. 50 (2010), no. 4, 827--852.
Massey products for elliptic curves of rank 1. J. of Amer. Math. Soc. 23 (2010), 725--747.
Appendix and erratum: Massey products for elliptic curves of rank 1. (with Jennifer Balakrishnan and Kiran Kedlaya) J. Amer. Math. Soc. 24 (2011), no. 1, 281.
p-adic L-functions and Selmer varieties associated to elliptic curves with complex multiplication. Annals of Math. 172 (2010), no. 1, 751--759.
The unipotent Albanese map and Selmer varieties for curves. Publ. Res. Inst. Math. Sci. 45 (2009), no. 1, 89--133.
The l-component of the unipotent albanese map. (with Akio Tamagawa) Math. Ann. 340 (2008), no. 1, 223--235.
A remark on fundamental groups and effective Diophantine methods for hyperbolic curves. Serge Lang memorial volume (to be published).
The motivic fundamental group of P^1-{0,1,\infty} and the theorem of Siegel. Invent. Math. 161 (2005), no. 3, 629--656.
The Hyodo-Kato theorem for rational homotopy types. (with Richard Hain) Math. Res. Lett. 12 (2005), no. 2-3, 155--169.
Some Unpublished Papers
On relative computability for curves. (2005)
A note on Szpiro's inequality for curves of higher genus. (2002)
A vanishing theorem for Fano varieties in positive characteristic. (2002)
Torsion points on modular curves and Galois theory (with Ken Ribet, 1999)
Some Expository Essays
Non-abelian fundamental groups in arithmetic geometry (for INI administration, 2009)
Galois Theory and Diophantine geometry (2009)
Fundamental groups and Diophantine geometry (Leeds colloquium, 2008)
Mathematical vistas (2007)
Diophantine geometry as Galois theory in the mathematics of Serge Lang. Notices Amer. Math. Soc. 54 (2007), no. 4, 476--497.
Motivic L-Functions (lecture at IHES summer school, 2006)
Some Recent Presentations
Lorentz Center workshop (May, 2011)
Essen workshop (February, 2010)
Heidelberg workshop (February, 2010)
Bordeaux workshop (January, 2010)
AMS-KMS joint meetings, Seoul (December, 2009)
Paris number theory seminar (November, 2009)
Oxford number theory seminar (November, 2009)
Colloquia at Leicester and Sheffield (Fall, 2009)
Cambridge anabelian workshop (August, 2009)
Cambridge workshop (July, 2009)
Heidelberg seminar (April, 2009)
Regensburg workshop on 'Finiteness for Motives and Motivic Cohomology' (February, 2009)
Pan Asian Number Theory, Pohang (January, 2009)
Cambridge number theory seminar (November, 2008)
Lille colloquium (November, 2008)
Fields lecture on Selmer varieties (October, 2008)
Muenster lecture (June, 2008)
Exeter colloquium (March, 2008)
Kings colloquium (March, 2008)
Bangalore lecture (March, 2008)
IMSc Chennai (January, 2008)
London-Paris number theory seminar (November, 2007)
Earlier Lectures
London Number Theory Seminar
Newton Institute Program on Non-Abelian Fundamental Groups in Arithmetic Geometry
2009 Workshop on non-commutative constructions in arithmetic and geometry
2008 Asian Conference on Arithmetic Geometry
2007 Asian-French Summer School in Arithmetic Geometry
{"url":"http://www.ucl.ac.uk/~ucahmki/","timestamp":"2014-04-20T12:08:43Z","content_type":null,"content_length":"5888","record_id":"<urn:uuid:ad80a609-a809-49b4-b113-5169ebecef9a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Hypercomputation and the physical Church–Turing thesis
Results 1 - 10 of 16
- http://arxiv.org/quant-ph/0504101 , 2005
"... Abstract. We give an overview of a quantum adiabatic algorithm for Hilbert’s tenth problem, including some discussions on its fundamental aspects and the emphasis on the probabilistic correctness of its findings. For the purpose of illustration, the numerical simulation results of some simple Diopha ..."
Cited by 12 (3 self)
Abstract. We give an overview of a quantum adiabatic algorithm for Hilbert’s tenth problem, including some discussions on its fundamental aspects and the emphasis on the probabilistic correctness of its findings. For the purpose of illustration, the numerical simulation results of some simple Diophantine equations are presented. We also discuss some prejudicial misunderstandings as well as some plausible difficulties faced by the algorithm in its physical implementations. “To believe otherwise is merely to cling to a prejudice which only gives rise to further prejudices... ” 1
- Philosophy of Science. Piccinini, G. (forthcoming b). “Computation without Representation,” Philosophical , 2007
"... According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most ..."
Cited by 9 (4 self)
According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation. I. Pancomputationalism and the Computational Theory of Mind The main target of this paper is pancomputationalism, according to which everything is a computing system. I have encountered two peculiar responses to pancomputationalism: some philosophers find it obviously false, too silly to be worth refuting; others find it obviously true, too trivial to require a defence. Neither camp sees the need for this paper. But neither camp seems aware of the other camp. The existence of both camps, together with continuing appeals to pancomputationalism in the literature, compel me to analyse the matter more closely. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation. The canonical formulation of pancomputationalism is due to Hilary Putnam: ‘everything is a Probabilistic Automaton under some Description’ [Putnam 1999: 31; ‘probabilistic automaton ’ is Putnam’s term for
"... This paper offers an account of what it is for a physical system to be a computing mechanism—a system that performs computations. A computing mechanism is a mechanism whose function is to generate output strings from input strings and (possibly) internal states, in accordance with a general rule tha ..."
Cited by 8 (5 self) Add to MetaCart This paper offers an account of what it is for a physical system to be a computing mechanism—a system that performs computations. A computing mechanism is a mechanism whose function is to generate output strings from input strings and (possibly) internal states, in accordance with a general rule that applies to all relevant strings and depends on the input strings and (possibly) internal states for its application. This account is motivated by reasons endogenous to the philosophy of computing, namely, doing justice to the practices of computer scientists and computability theorists. It is also an application of recent literature on mechanisms, because it assimilates computational explanation to mechanistic explanation. The account can be used to individuate computing mechanisms and the functions they compute and to taxonomize computing mechanisms based on their computing power. 1. Introduction. This - Theoretical Computer Science "... This paper reviews the Church-Turing Thesis (or rather, theses) with reference to their origin and application and considers some models of “hypercomputation”, concentrating on perhaps the most straightforward option: Zeno machines (Turing machines with accelerating clock). The halting problem is br ..." Cited by 5 (0 self) Add to MetaCart This paper reviews the Church-Turing Thesis (or rather, theses) with reference to their origin and application and considers some models of “hypercomputation”, concentrating on perhaps the most straightforward option: Zeno machines (Turing machines with accelerating clock). The halting problem is briefly discussed in a general context and the suggestion that it is an inevitable companion of any reasonable computational model is emphasised. It is suggested that claims to have “broken the Turing barrier ” could be toned down and that the important and well-founded rôle of Turing computability in the mathematical sciences stands unchallenged. - Theoretical Computer Science "... In 1950, Turing suggested that intelligent behavior might require “a departure from the completely disciplined behaviour involved in computation”, but nothing that a digital computer could not do. In this paper, I want to explore Turing’s suggestion by asking what it is, beyond computation, that int ..." Cited by 3 (1 self) Add to MetaCart In 1950, Turing suggested that intelligent behavior might require “a departure from the completely disciplined behaviour involved in computation”, but nothing that a digital computer could not do. In this paper, I want to explore Turing’s suggestion by asking what it is, beyond computation, that intelligence might require, why it might require it and what knowing the answers to the first two questions might do to help us understand artificial and natural intelligence. , 2003 "... Uncertainty is an inherent property of all living systems. Curiously enough, computational models inspired by biological systems do not take, in general, under consideration this essential aspect of living systems. In this paper, after introducing the notion of a multi-fuzzy set (i.e., multisets ..." Cited by 3 (1 self) Add to MetaCart Uncertainty is an inherent property of all living systems. Curiously enough, computational models inspired by biological systems do not take, in general, under consideration this essential aspect of living systems. 
In this paper, after introducing the notion of a multi-fuzzy set (i.e., multisets where objects are repeated to some degree), we introduce two variants of P systems with fuzzy components: P systems with fuzzy data and P systems with fuzzy multiset rewriting rules. By silently assuming that fuzzy data are not the result of some fuzzification process, P systems with fuzzy data are shown to be a step towards real hypercomputation. "... Digital circuits with feedback loops can solve some instances of NP-hard problems by relaxation: the circuit will either oscillate or settle down to a stable state that represents a solution to the problem instance. This approach differs from using hardware accelerators to speed up the execution of ..." Cited by 1 (1 self) Add to MetaCart Digital circuits with feedback loops can solve some instances of NP-hard problems by relaxation: the circuit will either oscillate or settle down to a stable state that represents a solution to the problem instance. This approach differs from using hardware accelerators to speed up the execution of deterministic algorithms, as it exploits stabilisation properties of circuits with feedback, and it allows a variety of hardware techniques that do not have counterparts in software. A feedback circuit that solves many instances of Boolean satisfiability problems is described, with experimental results from a preliminary simulation using a hardware accelerator. Keywords: NP-hard problem, Boolean satisfiability, digital circuit with feedback, relaxation, simulated annealing "... In the 1930s, Turing suggested his abstract model for a practical computer, hypothetically visualizing the digital programmable computer long before it was actually invented. His model formed the foundation for every computer made today. The past few years have seen a change in ideas where philosoph ..." Cited by 1 (1 self) Add to MetaCart In the 1930s, Turing suggested his abstract model for a practical computer, hypothetically visualizing the digital programmable computer long before it was actually invented. His model formed the foundation for every computer made today. The past few years have seen a change in ideas where philosophers and scientists are suggesting models of hypothetical computing devices which can outperform the Turing ma-chine, performing some calculations the latter is unable to. The Church-Turing Thesis, which the Turing machine model embodies, has raised discussions on whether it could be possible to solve undecidable prob-lems which Turing’s model is unable to. Models which could solve these problems, have gone further to claim abilities relating to quantum computing, relativity theory, even the modeling of natural biological laws themselves. These so called ‘hypermachines ’ use hypercomputational abilities to make the impossible possible. Various models belonging to different disciplines of physics, mathematics and philosophy, have been suggested for these theories. My (primarily research-oriented) project is based on the study and re-view of these different hypercomputational models and attempts to compare the different models in terms of computational power. The project focuses on the ability to compare these models of different disciplines on similar grounds and , 2009 "... The accelerated Turing machine (ATM) is the work-horse of hypercomputation. In certain cases, a machine having run through a countably infinite number of steps is supposed to have decided some interesting question such as the Twin Prime conjecture. 
One is, however, careful to avoid unnecessary discu ..." Add to MetaCart The accelerated Turing machine (ATM) is the work-horse of hypercomputation. In certain cases, a machine having run through a countably infinite number of steps is supposed to have decided some interesting question such as the Twin Prime conjecture. One is, however, careful to avoid unnecessary discussion of either the possible actual use by such a machine of an infinite amount of space, or the difficulty (even if only a finite amount of space is used) of defining an outcome for machines acting like Thomson’s lamp. It is the authors ’ impression that insufficient attention has been paid to introducing a clearly defined counterpart for ATMs of the halting/non-halting dichotomy for classical Turing computation. This paper tackles the problem of defining the output, or final message, of a machine which has run for a countably infinite number of steps. Non-standard integers appear quite useful in this regard and we describe several models of computation using filters.
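Several of these abstracts turn on a single timing trick behind Zeno machines and accelerated Turing machines: if each step takes half as long as the one before, infinitely many steps fit into finite external time. As a gloss on the listing (my illustration, not drawn from any of the papers above), the arithmetic in LaTeX:

    % If step n takes 2^{-n} seconds, the total running time is finite:
    \[
      \sum_{n=0}^{\infty} 2^{-n} \;=\; \frac{1}{1 - \tfrac{1}{2}} \;=\; 2 \text{ seconds.}
    \]

So an accelerating machine completes $\aleph_0$ steps by $t = 2$; the question the papers press is what, if anything, counts as its output at that instant (the Thomson's-lamp worry in the last abstract).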
Chapter 7: Computation of Derivatives from their Definition

We discuss ways to compute derivatives on a spreadsheet, with emphasis on repeating the symmetric approximation with exponentially decreasing d and extrapolating the results.

7.1 Introduction: the Obvious Approximation: f'(x) ~ (f(x+d) - f(x))/d
7.2 Round-off Errors and the Derivative
7.3 The Symmetric Approximation: f'(x) ~ (f(x+d) - f(x-d))/(2d)
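The scheme described in 7.3 is easy to sketch in code: halve d repeatedly and apply one step of Richardson extrapolation to the symmetric quotients. A minimal illustration (mine, not the chapter's spreadsheet; the test function and step sizes are arbitrary):

    import math

    def symmetric_diff(f, x, d):
        # central difference; error is O(d^2) for smooth f
        return (f(x + d) - f(x - d)) / (2 * d)

    def extrapolated_derivative(f, x, d=0.1, halvings=5):
        # halving d and forming (4*D(d/2) - D(d))/3 cancels the d^2 error term
        prev = symmetric_diff(f, x, d)
        for _ in range(halvings):
            d /= 2
            curr = symmetric_diff(f, x, d)
            best = (4 * curr - prev) / 3
            prev = curr
        return best

    # derivative of sin at x = 0 should be cos(0) = 1
    print(extrapolated_derivative(math.sin, 0.0))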
In the n-body problem, we seek to describe the interaction between a fixed number of objects under Newton's Law of Gravity. A simple equation given by Newton in 1687 describes the gravitational attraction between two objects. Using a computer, we can compute the sum of the forces acting on a fixed number of objects. Once… (A minimal sketch of this force sum appears after these excerpts.)

Actress Danica McKellar (Winnie Cooper from The Wonder Years) was a math major. She now advocates math education through bestsellers like Math Doesn't Suck and Kiss My Math. She is an inspiration to aspiring young women.

garden.irmacs.sfu.ca: Open Problem Garden is a collection of unsolved problems in mathematics.

Nine math problems of varying difficulty, with a separate pdf of solutions.
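The n-body excerpt above describes summing pairwise gravitational forces; here is a minimal sketch of that sum (my illustration only; the masses and positions are placeholders):

    # Newton's law of gravity: |F| = G * m1 * m2 / r^2, directed along
    # the line joining the two bodies; equal and opposite on each.
    G = 6.674e-11  # gravitational constant in SI units

    def net_forces(masses, positions):
        # positions: list of (x, y); returns the net (Fx, Fy) on each body
        n = len(masses)
        forces = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                dx = positions[j][0] - positions[i][0]
                dy = positions[j][1] - positions[i][1]
                r2 = dx * dx + dy * dy
                r = r2 ** 0.5
                f = G * masses[i] * masses[j] / r2
                fx, fy = f * dx / r, f * dy / r
                forces[i][0] += fx; forces[i][1] += fy
                forces[j][0] -= fx; forces[j][1] -= fy  # Newton's third law
        return forces

    # two 1 kg masses one metre apart: each feels a force of magnitude G
    print(net_forces([1.0, 1.0], [(0.0, 0.0), (1.0, 0.0)]))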
Shandean Postscripts to Politics, Philosophy, & Culture

Wittgenstein's Fright at Cultish Mathematicians: An Incident in the History of the Philosophy of Mathematics, or What did Wittgenstein mean by Cantor's theories being a "cancerous growth" on mathematics?

A question asked: When commenting on Cantor's ideas of uncountable sets and different levels of infinity, Wittgenstein called it a "cancerous growth on the body of mathematics". Cantor's ideas (and those of others, such as Dedekind) have since provided the basis for much of the development of mathematics thereafter. What could have led Wittgenstein to make such a remark? What did he mean by it?

The hard part in answering this question is trying to explain the pure mathematics in everyday language so that a common reader will know what was at issue between Wittgenstein and those, such as Bertrand Russell, who thought that Cantor, Weierstrass and Dedekind provided a solution to metaphysical problems of the foundations of mathematics. If I get the basic statement of the background wrong, please correct me. Still, I think it is necessary to state the problem in everyday language, because one must have a clear view of how much Cantor's discovery went against common sense. If the reader can understand this she will also be able to understand why so many philosophers and mathematicians thought that Cantor's theories of the infinite did not say anything that made sense. But more important for this note, the reader will be able to see how Wittgenstein's view differed from the other condemnations of Cantor's line of thinking.

Cantor considered the problems of infinite sets. The common logic since Aristotle had been that the infinite was not actual but only potential. But against common logic, Cantor showed that there are sets larger than the infinite set of natural numbers. He showed specifically that no infinite set could have as many elements as the set of all possible subsets of that infinite set. This led to a revolution in how we conceived of set theory and of the infinite. The infinite could no longer be considered an anomaly. In other words, there were different "kinds" of infinite sets. (Oh mathematicians, forgive my simplicity!)

What Cantor was able to show was that infinity was "actual", not just an unimaginably large number, not just "potential". He showed there are infinite sets that are larger than other sets that are also infinite. The best example is the set of all natural numbers versus the set of all irrational numbers. Both sets are infinite sets. But the set of all irrational numbers is "larger," or contains more members, than the set of all natural numbers. (Forgive me. I have merely stated the same notion in a number of ways while avoiding technical language. I did this in the hopes that non-mathematical readers will get my drift. Possibly I'm just furthering your confusion. Also, for those of you who may belong to the school of mathematical realists, forgive me for stating all of this as if it were just another kind of reality.)

When a mathematician comes to such conclusions, philosophers sneeze. Why? Because to decide that the infinite set of irrational numbers is larger than the infinite set of natural numbers is to indirectly decide questions posed at the origins of Aristotle's metaphysics, i.e. the metaphysical status of the infinite. Philosophers of mathematics recognized this if no one else did. Russell accepted the mathematics but spent much time trying to ground the insight in his own formal logic.
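For readers who want the bare bones of the theorem at issue, here is the standard statement and a one-line version of the diagonal argument (my gloss, not Monaco's):

    % Cantor's theorem: every set is strictly smaller than its power set.
    \textbf{Theorem (Cantor).} For any set $S$, $|S| < |\mathcal{P}(S)|$.
    \textit{Sketch.} Given any $f : S \to \mathcal{P}(S)$, let
    $D = \{\, x \in S : x \notin f(x) \,\}$. Then $D \neq f(x)$ for every
    $x \in S$, so $f$ is not onto. Hence no set maps onto its power set.

Applied to the natural numbers this yields uncountable sets, and a related diagonal argument gives $|\mathbb{N}| < |\mathbb{R}|$, which is the precise sense in which the irrationals outnumber the naturals.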
Wittgenstein rejected Cantor, but he was not the only one. Poincaré said, "There is no actual infinity; Cantorians forgot that and fell into contradictions. Later generations will regard Mengenlehre [set theory] as a disease from which one has recovered." Brouwer said that Cantor's theory was "a pathological incident in the history of mathematics from which future generations will be horrified." Another quote in my notebook is from Wittgenstein: "Cantor's argument has no deductive content at all." Yet I would distinguish this reaction from Poincaré and Brouwer. I take Wittgenstein to mean that he would not argue with the mathematics but would just proclaim it all irrelevant to any philosophical or logical view.

I think most of these reactions were simply a matter of an inability to reconceive ancient notions. But many mathematicians seized on Cantor's theory. Some philosophers were horrified. It didn't seem gentlemanly that these theories were being used as solutions to ancient problems of philosophy. Also, the mathematicians who seized on Cantor's theories treated them as if they were the second coming of the Pythagorean theorem or a new discovery of Pi. Cantor's theories made much of what was said previously in the philosophy of mathematics hard to justify. There were philosophers who were simply exasperated. Why don't mathematicians stop this nonsense, leave us alone, and get back to their equations?

What I wonder is whether there were many mathematicians with a philosophical bent who were discouraged by the narrowness of the philosophers. This is an historical determination that is hard to make. No one can ever know what was lost by way of dogmatism.

Wittgenstein was one of those who looked at all of this as an attempt to establish a New Pythagorean Cult around pure mathematics and formal logic. But even though I reject his view I think it should be fully understood. At base Wittgenstein had interesting reasons that I think can't be easily countered, unless one is a thoroughgoing rationalist or believes in a pragmatic realism that states in the long run we just work and see what works. (I am somewhere within those choices.)

Wittgenstein's view of mathematics was unique, and I doubt one could find more than two people who would have agreed with him in 1932. But I don't think he cared much about who agreed with him, except for Turing. When he was giving a course on these subjects it seems that the only person he cared to 'make see' his point of view was Turing, who would argue with W all the way. Wittgenstein thought that "belief" in mathematics was a kind of religion among intellectuals. He would throw out what must have seemed like Delphic statements at the time, such as:

"There is no religious denomination in which the misuse of metaphysical expressions has been responsible for so much sin as it has in mathematics."

"I shall try again and again to show that what is called a mathematical discovery had much better be called a mathematical invention."

The quote about "cancerous growth" is not referring directly to Cantor but rather to Russell's discussion of Cantor, Weierstrass and Dedekind. Russell believed that pure mathematics had laid the foundations which could ground mathematics in formal logic. For Wittgenstein, these mathematicians' solutions to problems of the infinitesimal, the infinite and continuity, and Russell's acceptance of these solutions as great achievements of mathematical logic, had "deformed the thinking of mathematicians and philosophers."
But Wittgenstein's position was not the same as that of the other philosophers and mathematicians who criticized Cantor et al. He did not question the mathematics of the solutions or criticize their premises; he questioned whether these solutions were solutions to mathematical problems at all. More precisely, he re-categorized the solutions to another context outside of mathematics and tried to demonstrate that the new context where these solutions must be discussed could be either accepted or rejected without affecting mathematics or logic at all.

Wittgenstein's reference to the 'cancerous growth' on mathematics encapsulates two related notions. In his view mathematicians had grafted onto mathematics the following: (1) the idea that mathematics somehow gave answers to what Wittgenstein believed were metaphysical questions, and (2) the idea that when doing certain kinds of 'pure mathematics' what you were doing had some connection to that other kind of game called 'formal logic.' It was these metaphysical 'answers' and the development of a formal logic that were the 'cancerous growth'. Cantor (and the way others developed Cantor) was just an example of this 'cancerous growth.'

To the extent that I understand the issues here, I think that Wittgenstein was being dogmatic. To the extent that I understand Wittgenstein's concern, I think he was trying to get the best mathematicians (mainly Turing, whom he much admired) to see how both mathematics and formal logic had no real 'foundation' but could be restated in ways that were not 'elegant'. These 'non-elegant' restatements would be equally 'true' in that they would come to the same conclusion without flaws, but would seem absurd. I think Wittgenstein was saying that sometimes the elegance of the solution tricks us into accepting it as fundamental or correct. If I remember correctly, some of what Wittgenstein wrote in his notebooks on these subjects was recently published (4 years ago?).

It seems to me that much of Wittgenstein's rhetoric comes from the fact that he simply could not get Turing to see that his (Wittgenstein's) picture of mathematics was one possible view of the cathedral. He just thought that all mathematicians were misled on the "reality" of Cantor's proofs and then compounded it all by proclaiming that here, at last, was the foundation of mathematics. Of course I may be too hard on Wittgenstein here. There was something in his whole notion about how the "game" of mathematics should be played in order to make sense in the world that also led him to reject Gödel's theorem. Who knows, maybe in the end we will find that the way Wittgenstein viewed the "game" of mathematics was a sort of anti-foundational foundationalism. I trust I am being appropriately obscure!

Again, these are very complicated questions, and unfortunately, unlike during the 80 years between 1860 and 1940, we don't seem to have great mathematicians who are interesting philosophers and great philosophers who are good mathematicians. The other possibility is that I don't know what I am talking about. It has been a long time since I studied these topics, a long time since those courses where very smart and inarticulate professors tried to explain to me (a very dumb but articulate student) the elegance of pure math. At the time I agreed with Wittgenstein on at least one point. The elegance seemed purely imaginary.
Jerry Monaco
New York City
9 March 2006 (originally written 5 February 2005)

The circumstantial evidence begins to mount that what we call mental illness is in fact "too much of a good thing." In other words, there is a fine line between physiological attributes of mental disorders that lead to behaviors we consider dysfunctional, and behavioral attributes that we generally define as "good" (i.e. "inventive," "creative," or "perceptive") but originate in the same physiological processes that are connected with mental illness. Below are quotes from reports on three recent studies that lead to this conclusion. Taken together these quotes are tempting to an evolutionary psychologist, but I will argue that the temptation should be resisted. We should not completely dismiss speculation about these matters, but we should keep clear the line between speculation, hypothesis, and theory.

Researchers at the Stanford University School of Medicine have shown for the first time that a sample of children who either have or are at high risk for bipolar disorder score higher on a creativity index than healthy children. The findings add to existing evidence that a link exists between mood disorders and creativity. Many scientists believe that a relationship exists between creativity and bipolar disorder, which was formerly called manic-depressive illness and is marked by dramatic shifts in a person's mood, energy and ability to function. Numerous studies have examined this link; several have shown that artists and writers may have two to three times more incidences of psychosis, mood disorders or suicide when compared with people in less creative professions. (Children of bipolar parents score higher on creativity test)

The more creative a person is, the more sexual partners they are likely to have... The lead author of the study, Dr Daniel Nettle, lecturer in psychology with Newcastle University's School of Biology, suggested two key reasons for the findings. He said: "Creative people are often considered to be very attractive and get lots of attention as a result. They tend to be charismatic and produce art and poetry that grabs people's interest." Dr Nettle added that the results suggested an evolutionary reason for why certain personality traits that serious artists and poets were found to share with schizophrenic patients are perpetuated in the population. He added: "These personality traits can manifest themselves in negative ways, in that a person with them is likely to be prone to the shadows of full-blown mental illness such as depression and suicidal thoughts. This research shows there are positive reasons, such as their role in mate attraction and species survival, for why these characteristics are still around." Yet although some 'schizotypal' traits are linked with high numbers of partners, schizophrenic patients do not experience this level of sexual activity.
These people tend to suffer from acute social withdrawal and emotional flatness, characteristics that the researchers found were linked with a reduced number of sexual partners. (Creativity determines sexual success. Also see the article in Nature, "Write poems, get lucky: They may be badly paid, but artists have more sexual success," by Tom Simonite.)

Surprisingly, people with mild depression are actually more tuned into the feelings of others than those who aren't depressed, a team of Queen's psychologists has discovered. "This was quite unexpected because we tend to think that the opposite is true," says lead researcher Kate Harkness. "For example, people with depression are more likely to have problems in a number of social areas." The researchers were so taken aback by the findings, they decided to replicate the study with another group of participants. The second study produced the same results: People with mild symptoms of depression pay more attention to details of their social environment than those who are not depressed.

The basic speculation among evolutionary psychologists is that "mental illness" is an evolutionary trade-off. The best example of an evolutionary trade-off is the sickle cell gene. Inheriting a sickle cell gene from a single parent promotes resistance to malaria. Inheriting a sickle cell gene from both parents causes anemia and death. In geographical regions of heavy malaria there is a trade-off between resistance to malaria provided by the sickle cell gene and the possibility of death from sickle cell anemia: more people sexually reproduce if they have one sickle cell gene and are able to resist malaria than if they have two sickle cell genes and die of anemia, or no sickle cell genes and are not able to resist malaria. (For full explanations see the following links: The Mosquito and the Bottle, The Loom, by Carl Zimmer; An Immune Basis for Malaria Protection by the Sickle Cell Trait; Malaria and the Human Genome, PDF.)

Similarly, there are aspects of brain physiology that lead to creativity, or the ability to perceive the world "realistically", or to perceive the social environment more empathetically, etc. These same aspects of brain physiology are also traits that are associated with various kinds of mental "illness," such as "manic-ness" and depression. If pushed beyond a tipping point these same aspects of brain physiology lead to dysfunctional mental breakdowns. The reproductive success and sexual attractiveness that redound to the person who is very creative or socially perceptive are offset by the possibility of dysfunctional (or non-functional) mental illness.

The problem with this line of reasoning is that it is a good story, but as a story it is not yet a testable hypothesis. We can test the correlation between creativity and bipolar mental disorder in various ways, ranging from statistical studies to studies of the physiology of the brain. But I have yet to see a research program to test the hypothesis of evolutionary trade-offs in relation to mental illness. The hypothesis is a good beginning but too broad. I am very skeptical that an evolutionary theory of mental illness can be developed by focusing on human beings at the level of behavior. I think that the level where such hypotheses can be tested is at the physiological level, or perhaps at the "modular" level of a mental system. To illustrate the problem of the appropriate level of study it is only necessary to observe why we know so much about sickle cell anemia.
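As an aside before those details, the arithmetic of the sickle-cell trade-off is the textbook case of heterozygote advantage, and it is simple enough to simulate. A minimal sketch (my illustration; the selection coefficients s and t are placeholders, not measured values):

    # Relative fitnesses under heterozygote advantage:
    #   w(AA) = 1 - s   (no sickle allele: susceptible to malaria)
    #   w(AS) = 1       (one sickle allele: protected, no anemia)
    #   w(SS) = 1 - t   (two sickle alleles: sickle cell anemia)
    # Standard result: the sickle allele settles at frequency q* = s / (s + t).

    def next_freq(q, s, t):
        # one generation of selection on the frequency q of the S allele
        p = 1.0 - q
        w_bar = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)  # mean fitness
        return q * (p + q * (1 - t)) / w_bar

    s, t = 0.15, 0.80  # placeholder selection coefficients
    q = 0.01           # start the allele rare
    for _ in range(200):
        q = next_freq(q, s, t)
    print(q, s / (s + t))  # simulated equilibrium vs. the analytic value

This balancing act, a harmful allele maintained because carriers out-reproduce both homozygotes, is exactly the shape of argument the trade-off speculation needs, and it shows how much quantitative detail (fitness differentials, inheritance pattern) the mental-illness version currently lacks.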
We know the genes that must be inherited in order to produce sickle cell anemia and the shape of a human blood cell when the genes are inherited from both parents. Further, we know the shape of the cell when the gene for sickle cell is inherited from only one parent. We know the geographical spread of sickle cell anemia, and we can calculate the differential between resistance to malaria provided by one gene and the possibility of inheriting two genes with the result of early death. In other words, we have a good way of estimating the differential of reproductive success between a sickle cell population and a non-sickle cell population in geographical regions rife with malaria. We can trace this back to the physiological and genetic level.

The problem with this line of reasoning about mental disorders is that for the most part we are only beginning to learn what mental disorders are and how they exhibit themselves in behavior. The basic descriptive problems of defining mental illness are well known. But the descriptive problems in defining such vague notions as "creativity" or "inventiveness" are even greater. We know creativity when we see it, but that is about all. I am not saying that an evolutionary explanation of mental disorder is impossible, only that at this point we must content ourselves with good hints and interesting stories. Turning a just-so story into a testable hypothesis is the hardest part of any scientific project.

Jerry Monaco
New York City
2 December 2005

Edward Herman does his usual incisive work in decoding the New York Times. "The biases of the New York Times surface in one or another fashion on a daily basis, but while sometimes awfully crude, these manifestations of bias are often sufficiently subtle and self-assured, with facts galore thrown in, that it is easy to get fooled by them. Analyzing them is still a useful enterprise to keep us alert to the paper's ideological premises and numerous crimes of omission, selectivity, gullible acceptance of convenient disinformation, and pursuit of a discernible political agenda in many spheres that it covers." (From "Fog Watch: The New York Times Versus The Civil Society: Protests, tribunals, labor, and militarization and wars," by Edward S. Herman, Z Magazine.)

But here is my basic question. Why? We need information, but why look at the New York Times at all? Why worry about it?

1) Because it has so much influence over the governing elite? Is this true anymore? Perhaps it guides the governing elite.

2) Perhaps by reading the NYT and the WSJ critically we gain insight into the ruling class and its aims? Is this true? In that case, if we can use those insights as an organizing tool then we are doing ourselves a service.

3) Because we don't have counter-hegemonic media of our own that establishes a grand world view for radical change and will set to crumbling the world view of the New York Times? This goes back to point one and the overwhelming influence the Times has on governing elites. That influence is bound to seep through to those who oppose the Rulers and Bosses, unless we counter the distortions and ideological spin and outright lies.
4) But in the end, the reason we have to spend so much time decoding the New York Times and other media of ruling class ideological "information" is because we are too weak to establish our own media for organizing and information.

So in short: Why pay attention to the New York Times? Because of the failure of the left to organize. Famously, in Lenin's What Is to Be Done? he argued that a regular paper of a working class party is an organizing tool. Bolshevik party organization was bound to be dictatorial, as Rosa Luxemburg realized early on, but the fact is that Lenin, before he took power, had deep insight into how to organize. It is part of the tragedy of Bolshevism and the atrocity of Stalinism that these organizational insights have been lost. The fact is that, as the left stands today in the Western capitalist republics, there is no network of radical media that is also used as an organizing tool. There are small networks of radical media and they are very loosely connected to organizing networks. But unless the organizing networks and the media networks are organically related we will never be able to make the first step toward constructing a counter-hegemonic world view. The South End Press collective and the people at Z Magazine have been trying to build such integrated networks for years, but unfortunately the network is too small and too loosely connected to other cooperative organizations and to unions. It is not their fault. People such as Michael Albert and Lydia Sargent seem to me to be near heroic in their commitment to a vision of radical democracy. But over and over again I keep coming back to the same point in my mind: we on the left must not be organizing correctly if we are not organizing better than, say, the right-wing Christers.

The Process of "Normalization": A suggestion for using Herman's & Chomsky's model to study legal institutions

Edward Herman continues: One very important feature of an establishment institution is that it gives heavy weight to official and corporate news and opinion and little attention to facts and opinions put forward by those disagreeing with the official/corporate view. Government and corporate officials are "primary definers" of the news, and experts affiliated with, funded by, and/or supporting them function to institutionalize those views. In a perverse process, the links of these experts to official and corporate sources give them a preferred position in the media despite the built-in conflict-of-interest, unrecognized by establishment institutions. (PBS has repeatedly turned down labor-funded programs on grounds of conflict-of-interest, but doesn't do the same for corporate-funded programs, as PBS officials have internalized the establishment's normalization of conflicts-of-interest involving the dominant institutions of society.) Those in opposition, even if representing very large numbers, even a majority of the population, have difficulty gaining access. Another way of expressing this is to say that the media, as part of the establishment, align themselves with other constituents of the establishment, and are very often at odds with and give little voice to the civil society.

First, a criticism. It is true that we can put this problem in terms of a conflict between "State" and "Civil Society." There is a long tradition of the radical Enlightenment for doing so. It is part of our tradition and we should claim it.
(On this matter I would suggest two books: Radical Enlightenment: Philosophy and the Making of Modernity 1650-1750 by Jonathan I. Israel, and A Trumpet of Sedition: Political Theory and the Rise of Capitalism, 1509-1688 by Ellen Meiksins Wood & Neal Wood.)

But this can be done only if we define the business entity that we call "the Corporation" as a state entity. In fact such corporate entities are very much like states, and act as little sovereignties with their own laws and arbitrary punishments. (This is a fact that right-wing libertarians will never comprehend.) Even so, the conflict is not simply between "state" and "civil society" but between those who own and manage society (its property, its productive resources and its capital) and those who don't. (Notice I am stating this conflict from a tradition that follows Marx but does not accept his terms as "scientific.")

The reason I begin with this criticism, a criticism that Herman would amend in his own way, is that when we try to understand the "normalization" of business interests we must understand that they are normalized as if they were the interests of society as a whole, i.e. civil society. Thus the national interest is the interest of "business" (meaning big corporate profiteering) as a whole. A person such as Herman, who speaks in defense of civil society, cannot even be heard by the editors of the New York Times, because as far as those editors are concerned, they are civil society, and so is the totalization of interests that surround General Electric, Disney, CBS, Microsoft, et al. As long as the worker of a corporation is considered to be a part of that corporation, or a small business owner is considered to be a part of business interests, then they are a part of civil society. But the immigrant who tries to form a union is not part of civil society and therefore does not deserve a voice equal to the New York Times. The same is true of the woman who gets fired from a corporation because she wishes to take care of her child before she arrives at work.

How such terms as "conflict-of-interest" are internalized and normalized is in fact the crucial question, both here and when studying the ideological normalization of similar terms in legal institutions. The processes are not the same but they are similar. I would suggest that one could write a study of legal institutions and their ideological filtering systems similar to Edward S. Herman's and Noam Chomsky's Manufacturing Consent: The Political Economy of the Mass Media. One might call it Manufacturing Legitimacy: The Political & Social Economy of Legal Institutions. The problem is that there are too many aspects of legal institutions that we would like to account for: not only courts and judges, but law schools, law firms, police departments, private "security" forces, administrative agencies, corporate-imposed "non-state" rules and regulations (both for workers and consumers), private and negotiated law such as emerges from contracts, etc. Each institution should be studied discretely of course, but I propose that an entry problem is distinguishing 'non-state' law from state law and showing how all of this integrates into the current political and social economy. On the ideological level the problem is the same. How do we describe the normalization of an ideological view, accurately and in detail, in a way that can lead to understanding and not further mystification?
Jerry Monaco's Philosophy, Politics, Culture Weblog is Shandean Postscripts to Politics, Philosophy, and Culture His fiction, poetry, weblog is Hopeful Monsters: Fiction, Poetry, Memories Notes, Quotes, Images - From some of my reading and browsing A young friend who is writing a paper on existentiallism asked me to explain the Sartre and Camus break-up to her. So I did. This is material that has been covered so often that I don't know if I have offered anything knew. Never-the-less I decided to post it here for those who might be interested. As an aside, it might be interesting to write an essay taking off from this about the whole notion of "choosing" with "in" history. This idea about history seems to me especially religious... as if history was a kind of god. The Break Between Sartre and Camus: Gossip, Invective, and the Meaning of History. : A Question from a Young Friend Your question: "Why did Sartre and Camus argue and split (or, as you put it. "have a falling out")?" Someday I would like to write an essay about intellectual fame and literary gossip and its meaning for philosophical issues... I think the "true meaning" of the "split" between Sartre and Camus, tells us more about the subject of the "literary star system" and the "ghost of gossip" that haunts every petty bourgeois intellectual enterprise than it tells us about the important historical issues behind the parting of ways . But some other time. Basically the feud between Sartre and Camus was about each individual's relation to resistance and violence, history and action. Sartre and Camus argued over some of the following issues -- political commitment, the nature of history, the relation of the "writer" to the struggles of the oppressed, the nature of violence and terrorism, the role of the individual, etc. All of this was in the context of the growing anti-colonial movements, especially movements against French Imperialism in Africa and Indochina and the postwar influence of Stalinism over the European working class and these same anti-colonialist movements. Sartre's emphasis was on opposing oppression in France and opposing French imperialism. Camus' emphasis was on opposing the tyranny of Stalinism and similar totalitarian tyrannies and would not support an anti-imperialist movement that would simply lead to another form of oppression. For Sartre, Camus' moral position provided backhanded political support for imperial oppression. For Camus, Sartre's political position provided moral cover for Stalinist domination. From this distance we can see that they were both correct and both fundamentally These I believe are the important issues in a nutshell. Readers can stop here if they feel no need to learn more about the interesting gossip or the entangled history. Like all else in the literary world the break between Sartre and Camus began as a feud over a bad book review, the book we know in English as Albert Camus' "The Rebel." In 1951 Camus published "L'Homme revolte". In 1952, soon after the publication, France was deep within one of its periodic political crises, involving Indochina, Algeria and national strikes. In the mean time the only writers with moral credit among the French working and middle classes were the intellectuals who had in one way or another participated in the fight against the Nazis. In this respect Sartre and Camus were the pre-eminent literary stars of the post-war era. They were often paired together as representing a style of revolt among the rising young intellectuals. 
The radical youth of the era grabbed at existentialism as representing their moral disgust at the hypocrisy of a bourgeoisie that so easily collaborated with Nazi occupation, and as representing their need for freedom of thought against the stultification of a mechanical Marxism as represented by the PCF. It was in this situation that Francis Jeanson wrote a scathing review of Camus' book in Sartre's journal "Les Temps Modernes." Camus in response wrote to Sartre accusing him of making a personal attack in order to gain political points with his leftist friends. Sartre wrote back accusing Camus of betraying the cause of the oppressed in order to advance his career as the popular writer of petty bourgeois angst.

Well, all of this is the usual literary gossip, and the Parisian literary culture can be especially vicious, probably because French "intellectuals" are not only "writers," "philosophers," and "artists" but are also caught in the frenzy of fame that elevates the writer to the equivalent of a rock star. It's hard to imagine now, but "Paris intellectual culture" once held an analogous place in French society to that which "Hollywood star culture" holds in the U.S. This meant that the friendship between Sartre and Camus was broken in public and the events were played out in the newspapers and broadcast from the lecture halls, in a way that is hard to imagine for a present day American. It would be as if some imagined feud between Richard Rorty and Stanley Fish were to be covered by the New York Times, the Daily News, and the Fox News channel. More than anything else this magnified the bitterness of the break. It also tended to obscure the issues behind the break, then and now.

Beneath the posturing, gossip, and frenzy of fame there were actually a few serious philosophical and political questions. And as far as those are concerned it is not easy to say who was more wrong-headed, Camus or Sartre. In current intellectual culture, with its automatic bourgeois self-satisfaction (which parades as democratic righteousness while obliterating democracy everywhere), it is usually Camus who is given the last word. Many U.S. writers today (especially those around the oddly jesuitical "New Republic" magazine) would turn him into Saint Camus. Yet when I was coming to awareness intellectually in the 1970s, at a time when U.S. atrocities in the Vietnam war were still obvious to U.S. intellectuals, Sartre was looked upon as the model of the committed intellectual and Camus was considered a naive, if unwitting, apologist for imperialism. Much of this is simply the clouded sensorium that is the politics of literary reputation, and has more to do with our current ideological battles than with history or moral principle. The issues behind the rise and fall of literary reputation are interesting, but not important for this particular post.

To understand the historical issues that give the little literary feud between Sartre and Camus some historical significance, it is necessary to understand what most left-leaning French intellectuals understood in the postwar years. They all knew that the French "bourgeoisie" had quickly given in to the Fascists and collaborated with German occupation. Most believed this was because the bourgeoisie feared the communists more than the fascists. They all believed that in the countries occupied by the Germans it was the communists and the socialists who organized the underground resistance to the Fascists.
In short, the Stalinist Communist parties emerged from World War II with moral credit for their resistance to the Nazis, and the ruling classes of France and Italy were largely discredited. For independent intellectuals, such as Sartre and Camus, who opposed the Nazi occupation with varying degrees of risk to their own lives, the significant question was what attitude to take toward the PCF, the French Communist Party. The best known of this group of independent intellectuals, besides Camus and Sartre, were Maurice Merleau-Ponty and Raymond Aron. But there were others who would make their reputations much later, such as Cornelius Castoriadis and the intellectuals around a little known but very interesting group called "Socialisme ou Barbarie". I mention this group because it was one of the few left intellectual formations that offered commentary on these issues that more than holds up today.

The first break, between Camus and Raymond Aron on one side and Sartre and Merleau-Ponty on the other, took place over how to characterize the Stalinist party and what attitude to take toward the newly reconstructing "bourgeois" parties. Basically, Sartre believed (at least up until 1956 and the Hungarian workers' rebellion against the Stalinist Communist Party) that the Communists were an oppressive party but were the only game going and represented the interests of the oppressed. Camus believed that all political parties were basically oppressive and that the leaders of these parties cynically claimed to represent the interests of the oppressed in order to become oppressors themselves. (I am highly oversimplifying.)

But being writers and intellectuals who were also French, Sartre and Camus were bound to create a theory of their disagreement that would bring it back to fundamental philosophical differences with world historical import. For Camus, individual rebellion, the ability of the individual to say "No" to the oppressive regime, was the highest value. (I suppose one could make Antigone the great patron saint of this attitude.) But the history of the previous 200 years seemed to Camus to call into question the very basis of "rebellion" as a collective act of liberation, of revolution. Collective rebellion would simply result in organized murder and, therefore, even though the individual "Rebel" should be honored for his act of resistance, that act of resistance being the basis for asserting human dignity, revolution itself would fail to constitute justice. For Camus, all collective action could only constitute more injustice. If Camus was willing to take collective action against the Nazis, it was only because Nazi injustice was all-invasive and total. This meant that any kind of rebellion at all was a Pascalian wager that had to be accepted. In fact, for Camus, the Nazis proved his point about the futility of collective rebellion, since the Nazis were simply one more example of that futility. All revolution led to greater terror, even when it was a reaction to the terror of the status quo.

Camus' solution to this "paradox" between individual rebellion, which establishes the basis for human dignity, and collective rebellion, which creates the basis for increasing repression, was the solution Sartre regarded as typical of the petty-bourgeois writer. Camus believed that one should essentially "privatize" rebellion, make rebellion into a moral standard of one's own life that could be expressed in the ethics of one's art.
Rebellion in Camus' view could not establish a world of justice, but when the rebellion of the individual is turned into the directed energy of human art, it can create a universe of meaning.

Sartre believed that the only way to resist oppression was to make a moral choice. So far he agreed with Camus. Sartre also believed that collective rebellion would inevitably lead to violence. But far from shrinking from this violence, Sartre tended to think that collective violence was one of the motors of history, and the only choice to make was on which side of history the individual would choose to fight. For Sartre and Camus the choice was moral, as well as political. But for Sartre the choice of rebellion was also the choice of history. It sounded to Sartre like a betrayal of the values of the Resistance to Nazi occupation to say that collective rebellion only leads to more violence. Later it would sound like a betrayal of the liberation movement of the anti-French Algerians to say to them that they should not rebel collectively. For Sartre it was merely a choice between supporting the violence and terrorism of the Algerian rebels against the French oppressors or supporting the violence and atrocities of the French colonialists against the Algerian people. To say that one should retreat into one's own art was simply to make a choice by default; it was to engage in an act of bad faith by pretending not to choose. For Sartre, personal retreat into art was merely another way of supporting the violence of the status quo.

If one remembers that, at this time (1952), France was actively trying to recover its empire in Indochina and Africa, and that Sartre was actively opposing French colonialism, whereas Camus believed that the anti-colonialists had no "moral legitimacy", then one can get a sense of what the feud was "really" about from Sartre's point of view. If one remembers that Sartre was trying to "existentialize" Marxism and therefore not offering very acute criticism of the "political acts" of the Stalinists, then one can get a sense of what the feud was "really" about from Camus' point of view.

For both writers the basic principle was "how" to oppose oppression. For Camus, "collective resistance" to oppression only leads to more oppression. For Sartre, Camus' "quietism" could only lead to the triumph of the oppressors. Camus believed that Sartre had become an ideologue giving cover to Stalinist domination, while he, Camus, was the advocate of individual human dignity. Sartre believed that Camus was an apologist for French imperialism, while he, Sartre, was simply choosing to be "in" history and Camus was choosing in "bad faith."

The question of who was "correct" in this argument is not the correct question. The question is how we can come to an historical understanding of the moral issues presented by Camus, and how we can come to a moral understanding of the historical issues presented by Sartre. In many ways, in 1952, each represented the missing center in the other's thought. Camus' refusal to see that any fight for the oppressed could be meaningful, and Sartre's refusal to see that his uncritical support of the "resistance" of the oppressed could lead to a glorification of violence, seem to me to dance around the same basic absence in the world view of each philosopher.
Quotes from Sartre and Camus

I offer below a few enjoyable quotes from Sartre's "Reply to Camus", which in French reads with the voyeuristic thrill of observing a distant intimacy, like hearing your best friends breaking up in the next room. Sartre constantly addresses Camus as "you, you, you,..." as if it were his version of "J'Accuse." These quotes are "fun" and the reader will get a good flavor of Sartre's side of the argument.

Sartre's "Reply to Albert Camus" is a polemic worth reading if only for its rhetoric of energizing invective. Sartre tells us that Camus is claiming to be tired of the fight. Sartre replies: "[I]f I were tired it seems to me that I would feel some shame in saying so. There are so many who are wearier. If we are tired, Camus, then let us rest, since we have the means to do so. But let us not hope to shake the world by having it examine our fatigue."

"[T]he only way of helping the enslaved out there is to take sides with those who are here."

Sartre speaks of Camus' relation to history, and of Camus' secondary relation to his own personality "outside of history", as if Sartre could perform an existential psychoanalysis on Camus, in the way he would later write about Baudelaire, Jean Genet, and Flaubert. "Your personality, alive and authentic as long as it was nourished by the event, became a mirage. In 1944, it was the future. In 1952, it is the past, and what seems to you the most intolerable injustice, is that all this is inflicted upon you from the outside, and without your having changed. ... Only memories are left for you, and a language which grows more and more abstract. Only half of you lives among us, and you are tempted to withdraw from us altogether, to retreat into some solitude where you can again find the drama which should have been that of man, and which is not even your own any more..."

Sartre continues: "Just like the little girl who tries the water with her toe, while asking, 'Is it hot?' you view history with distrust, you dabble a toe which you pull out very quickly and you ask, 'Has it a meaning?' ... And I suppose that if I believed, with you, that History is a pool of filth and blood, I would do as you and look twice before diving in. But suppose that I am in it already, suppose that, from my point of view, even your sulking is proof of your historicity. Suppose one were to reply to you, like Marx: 'History does nothing... It is real and living man who does everything. History is only the activity of man pursuing his own ends.' ... It is only within historical action that the understanding of history is given. Does history have a meaning? Has it an objective? For me, these are questions which have no meaning. Because History, apart from the man who makes it, is only an abstract and static concept, of which it can neither be said that it has an objective, nor that it has not. And the problem is not to know its objective but to give it one."

With this invective, Sartre could carry the reader with him. What is not remembered about Sartre is that he was one of the great polemicists of our time and wrote best when he was personally angry. Thus the young intellectuals of the time were more likely to read Sartre's side of this argument than Camus' side. It was only later, when reacting against Sartre's supposed "communism" (his commitment to fighting for the oppressed even if the oppressed used violence), that Camus' clear-eyed anti-Stalinism was used as a bludgeon against Sartre's wrestle with the French Communist Party. Sartre could be naive.
He could cheer any and all anti-colonial movements on the one hand, and cheer Israel as an exemplar of overcoming oppression on the other. But simple ignorance of the history of the time usually prevents most people from understanding the "argument" between Sartre and Camus. In the end, when Camus died, Sartre showed his grudging and admiring respect for Camus. The following is a quote from the obituary Sartre wrote for Camus:

"He [Camus] represented in this century, and against History, the present heir of that long line of moralists whose works perhaps constitute what is most original in French letters. His stubborn humanism, narrow and pure, austere and sensual, waged a dubious battle against events of these times. But inversely, through the obstinacy of his refusals, he reaffirmed the existence of moral fact within the heart of our era and against the Machiavellians, against the golden calf of realism."

Some quotes from Albert Camus:

"By definition, a government has no conscience. Sometimes it has a policy, but nothing more."

"A free press can of course be good or bad, but most certainly, without freedom it will never be anything but bad."

"The aim of art, the aim of a life can only be to increase the sum of freedom and responsibility to be found in every man and in the world. It cannot, under any circumstances, be to reduce or suppress that freedom, even temporarily."

"A man without ethics is a wild beast loosed upon this world."

"The evil that is in the world almost always comes of ignorance, and good intentions may do as much harm as malevolence if they lack understanding."

"Stupidity has a knack of getting its way."

Jerry Monaco
New York City
9 December 2005

Literature as Experience: A Hope for Literary Darwinism

A purpose of literature is to provide experience to humans. This is an expression of my hope that literature can be looked at from the point of view of evolutionary psychology. Some preliminary thoughts on Literary Darwinism.

There is much I object to in Joseph Carroll's idea of applying evolutionary psychology to literature. (See Literary Darwinism: Evolution, Human Nature, and Literature.) Yet I am in sympathy with the point that any rational view of literature, or of human culture in general, accepts the fact that humans are a product of natural processes and that all of human culture is a subset of our biological make-up. Humans share a common evolutionary heritage, and because of the contingencies of our biological history we share a set of species-properties, including common cognitive faculties. The species-specific cognitive faculty that is easiest to designate and investigate is the language-faculty. This is because it is relatively isolated from other cognitive faculties and is unique in the way it works in our brains.

First of all, it is probably true that "narrative" or story telling is a species-specific result of our biological history. But does that lead us to conclude that any particular aspect of narrative, or narrative itself, is what we should be trying to explain when developing an evolutionary theory of cognitive faculties?
Narrative may be a by-product of the combination of many other cognitive faculties, which, when combined with the very special faculty of language, brought about the possibility of narrative. The fact that what we call "narrative" is universally observable among homo sapiens does not necessarily mean that narrative, as a separate human faculty, provides the individual with a selective advantage. One would suppose that there must be certain aspects of narrative and story telling that do give selective advantage. If story-telling "merely" allows us to give a good description of how to find food or allows us to sound charming to a potential mate, then I would easily conclude that there must be some selective advantage for the "behavior" of some story-telling. But exactly which aspects of narrative provide this advantage? And how do we "find" and trace these aspects of narrative back to their evolutionary etiology?

In theory, all of what we call culture and society can be traced back to human potentials and physical structures that have emerged in the course of biological evolution. Whether these structures are specific adaptations or are spin-offs from other changes in the structures of our body-brain-minds - spin-offs which were necessitated in order to accommodate previous adaptations - does not matter for a biological explanation of culture. Also, it is possible to have a biologically cognizant explanation of culture and literature - relating our cultural products to cognitive structures in the brain - without providing an explanation of how any particular cognitive structures were selected for in the course of our evolutionary history. Of course, we should assume that such an explanation, if provided, would be theoretically important, even if not always pragmatically possible.

It is certainly true, as E.O. Wilson pointed out to us many years ago, that 'society' by any definition is not unique to humans. Moreover, what we call culture - a very loose and non-scientific term - is not unique to humans either. As an example I would suggest that the work of Frans de Waal would be a good starting point. I am currently reading his Chimpanzee Politics: Power and Sex among Apes, which shows that many of the cognitive processes and cultural relations that we consider uniquely human are in fact properties of our closest animal relatives.

But the problems of tracing any cultural or cognitive product back to its origins in our biological nature only begin once you accept the above - and the above must be accepted if the premises of evolutionary biology are accepted. But this does not mean that we are in the position to develop non-trivial theoretical descriptions and explanations of any particular aspect of culture from the point of view of evolutionary psychology.

Take the following suite of faculties, propensities, and abilities - chimpanzees can plan ahead; they make simple tools that show an ability to manipulate diverse aspects of the environment; they can establish coalitions to obtain leadership; chimpanzees seem to have a complicated ability to recognize and "make" patterns; they seem to have primitive mathematical abilities, which we can suppose grew in the course of human evolution; they display a need for both reciprocal and hierarchical sociability; all chimps "play" with their peers and the "play" seems both elaborate and sometimes rule-based. Suppose this suite of faculties, propensities, and abilities were ramped up by the language-faculty into the ability to make narrative.
Suppose, also, that narrative or telling stories is both a need and a pleasure. So when we study story-telling from the point of view of evolutionary psychology, what do we want to study first? An evolutionary psychologist can in fact make up many stories of why the ability to tell a good story will provide an adaptational advantage. If I wanted to be flip and snarky I would say that evolutionary psychologists are themselves a good example of the advantage provided by good story telling, since they often tell good stories and have had some success doing so. But my real question here is what should we study?

Joseph Carroll's answer is the following:

A primary concern of literary theory, then, must be to identify the level of analysis at which elements form meaningful units that join with other such units so as to fashion the larger structures of figuration. As the evolutionary psychologists John Tooby and Leda Cosmides rightly affirm, "Sciences prosper when researchers discover the level of analysis appropriate for describing and investigating their particular subject: when researchers discover the level where invariance emerges, the level of underlying order. What is confusion, noise, or random variation at one level resolves itself into systematic patterns upon the discovery of the level of analysis suited to the phenomena under study." (From Joseph Carroll's Rhetoric and the Human Sciences: The Conflict between Poststructuralism and Evolutionary Biology)

Do we know enough about human cognition to designate "meaningful units"? My reading of the literature is that beyond our basic study of language and how it grows we know very little in this area. A word might be called a "meaningful unit" but we are nowhere near understanding what makes a unit "meaningful" simply because meaning is a term or concept that has no scientific definition. The closest we can come in most cases is when we define units of perceptual information, but even here we are not yet on solid explanatory ground. When it comes to problems of meaning we are mostly lost. We need much more study of the basic cognitive faculties involved in perception before we reach the point where Carroll wants to begin.

The problem might also be grounded in our biology. One reason why we can define "meaningful units" in language is that human language is discrete (there is no such thing as "half" a word) whereas most other forms of animal information transfer seem to be continuous. There is no reason to think that the way we perceive narrative as a whole is discrete or that we can isolate a meaningful unit. If we can't define basic "meaningful units" of narrative then it may be very hard to look at the cognitive aspects of narrative in the way we have been able to look at language.

But let me suppose that we do know enough to begin some kind of study into the biological basis of narrative. Should we then jump directly to telling a story about its adaptational value, i.e. its evolutionary etiology? Perhaps it would be better to study the various cognitive faculties that go into the making of narrative. It is precisely here that I find the main problem with literary Darwinism. If the brain is made up of modular cognitive faculties - as I think the best evidence shows us it is - then on what level could we study something such as narrative?
As I have already indicated, I think that narrative is a by-product of a number of other faculties, propensities, and abilities, and that there is unlikely to be a separate "narrative"-faculty that can be studied as if it were a cognitive module. I hope that my previous sentence is incorrect. I hope that there is a sort of deep grammatical structure to narrative that is somehow separate from other cognitive faculties. I hope that this is true only because it would be interesting in many ways. But I don't think that we have the evidence for it. Without being able to isolate a cognitive module it will be next to impossible to give a biological and evolutionary explanation of narrative.

The above is the basic problem an evolutionary psychologist runs into when she tries to explain any aspect of evolution that cannot be defined as a bodily organ. The evolution of the eye is easy to define because the physical unit itself is discrete and definable. It is possible to study animal and human vision without knowing anything about how the eye evolved. It is also possible to study the evolution of the eye without knowing a whole lot about how the eye works inside the human brain. Similarly, it is possible to study human narrative as a biological product without necessarily knowing anything about how narrative evolved in our biological history, but in this case the reverse does not hold. Without knowing the biological basis of narrative it will be well nigh impossible to study how narrative evolved. That is because unless we are able to define the biological basis of narrative we will never be sure what evolved and why. (Similar criticism can be made of Carroll's concepts of "figurative structure" and "elements of figuration." See FN1.)

So let us suppose that we simply seek to give an explanation of the behavior that we call narrative. Then my question would be: what kind of behavior is it? Even if we overcome all other hurdles, I have a basic disagreement with Carroll on what literature actually is and how it functions among human animals. Carroll states:

The traditional categories--character, setting, and plot--can be explained and validated by invoking the largest principles of an evolutionary critical paradigm. If the purpose of literature is to represent human experience, and if the fundamental elements of biological existence are organisms, environments, and actions, the figurative elements that correlate with these biological elements would naturally assume a predominant position within most figurative structures. Evolutionary theory can thus provide a sound rationale for adopting the basic categories, and it can also provide a means for extending our theoretical understanding of how these categories work within the total system of figurative relations. This theoretical understanding can in turn provide a means for assessing traditional explanations or applications of the categories and measuring their central presuppositions against those of an evolutionary paradigm. (From Joseph Carroll's Rhetoric and the Human Sciences)

Is "the purpose of literature to represent human experience"? I am not quite sure of this. I think a better preliminary definition is that "the purpose of literature is to provide experience to humans." Of course the experience that is provided can also be a representation of certain kinds of human experiences, dilemmas, actions, etc. The important point is that the mere change of definition would provide a different focus for investigation into narrative.
For example, at times stories may be mere play, rehearsal, or simply a kind of logic game that exercises the mind. I would argue that all good stories have aspects of a logic game about them, but a logic game extended into a very particular kind of experience that somehow ramifies beyond the mere "logic" of the game. What I mean by this is that to a certain extent a structure of narrative may be working through problems that are internal to the mind's own cognitive processes. I think to some small extent dreams might work this way also. I am not making a point that is similar to Freud's point about dreams being a form of wish fulfillment. Rather, I am saying that some of the logical structures of narrative might actually be a working through and a building of logical structures of the mind/brain. Further, humans may need to build these logical structures throughout a long period of life (if not a whole lifetime) in order for certain areas of the brain to continue to grow... or at least not atrophy. (This would be in contrast to the growth of the language-faculty during childhood.) In other words this kind of narrative game playing may be an experience that the human mind/brain 'craves' for its own internal growth. This may just be one function of narrative and it may be a function that is similar to our attraction to rule-based games such as chess or poker, games that depend on pattern recognition, etc. In other words this kind of experience of narrative might be relatively independent from the representational aspects of narrative. But it is possible that narrative may help humans to connect social relations with pattern-making cognitive abilities.

There are other reasons for my redefinition. I think that the primary fact of literature is that it provides humans with a very special kind of human experience and that it is an experience that helps us to build our experience of the world beyond literature. It is this fact that I think is a primary starting point for any evolutionary or sociobiological (to use the taboo designation of the field) view of literature.

In one respect I would like to chastise Carroll. Carroll seems to assume that all leftists will be repelled by biological explanations of literature. I think this may be a result of his social position as an English professor. It is likely that the only 'leftists' he knows are in the English Departments of the academy. I am a radical leftist, informed by left marxism, anarcho-syndicalism, romantic radicals such as Shelley, and the tradition of the radical enlightenment. There are others like me who are repelled by the obscurantism that passes for politics in many English departments. Perhaps Carroll will think it strange that for me the cofounders of evolutionary psychology are two polemicists -- Kropotkin, the anarchist, and T. H. Huxley, Darwin's bulldog. I am sure many on both the left and the right will choke on the fact that I believe that both Kropotkin and Huxley are politically admirable and scientifically correct. Didn't Huxley produce a rationalization of the ideological justification of dog-eat-dog capitalism that is known to history as Social Darwinism? Didn't Kropotkin deny natural selection and put in its place "cooperation"? No, on both counts. I would suggest that we reread Huxley's The Struggle for Existence in Human Society and Kropotkin's Mutual Aid: A Factor of Evolution and reevaluate both in the light of current debates.
I think what a new reader will find is a rehearsal of the arguments over the origins of altruism and the evolution of cooperation. On a political note, suffice it to say for now that I don't believe evolutionary psychology or sociobiology are incompatible with radical democracy and libertarian socialism - a view that one aspect of human nature is a desire for freedom and self-determination, and that this desire can be best fulfilled by a radical democracy that would eliminate the monstrous human destructiveness of our current business forms.

I close with the first paragraph of the Joseph Carroll essay I have quoted in this comment:

Darwinian evolutionary theory has established itself as the matrix for all the life sciences. This theory situates human beings firmly within the natural, biological order, and evolutionary principles are now extending themselves rapidly into the human sciences: into epistemology, sociology, psychology, ethics, neurology, and linguistics. The rapidly developing and increasingly integrated group of evolutionary disciplines has resulted in an ever-expanding network of mutually illuminating and mutually confirming hypotheses about human nature and human society. If literature is in any way concerned with the language, psychology, cognition, and social organization of human beings, all of this information should have a direct bearing on our understanding of literature. It should inform our understanding of human experience as the subject of literature, and it should enable us to situate literary figurations in relation to the personal and social conditions in which they are produced. Up to this point, contemporary literary theory has not only failed to assimilate evolutionary theory, it has adopted a doctrinal stance that places it in irreconcilable conflict with the basic principles of evolutionary biology. (From Joseph Carroll's Rhetoric and the Human Sciences)

The results of the extension of evolutionary theory into the areas of epistemology, sociology, psychology, ethics, and neurology are yet to be seen. Can we go beyond the good hints that we now have and provide theoretical descriptions and explanations of human faculties and propensities that are more than mere truisms? Or is it possible that we have reached areas where there is too much hidden from us for us to come to firm testable scientific conclusions? What is for sure is that only a world view that "situates human beings firmly within the natural, biological order" is a contender for the production of knowledge. This excludes all forms of obscurantism, whether superstition, religion, or deconstruction.

Jerry Monaco
7 November 2005
New York City

[FN1 - To designate the total set of affective, conceptual, and aesthetic relations within a given literary construct, I shall use the term "figurative structure." Any element that can be abstracted from a figurative structure is ipso facto a figurative element. Thus, representations of people or objects, metrical patterns, rhyme schemes, overt propositional statements, figures of speech, syntactic rhythms, tonal inflections, stylistic traits, single words, and even single sounds are all elements of figuration. Figurative structure, like any other kind of structure, can be analyzed at any level of particularity. It remains to be shown empirically that there is any level of what Carroll is calling "figurative structure" that can be studied directly.
Let us assume, to make things easier, that there is a specific metrical mental module that has adapted over the course of biological time. The behavior of making poetry with metrical patterns may have nothing to do with the biological evolution of this "metrical module". One might want to start with any kind of testable hypothesis to study the evolutionary origins of this "metrical module." It may have had something to do with memory of sound that allowed our evolutionary ancestors to perceive or understand patterns. The sounds can be any kind of pattern, whether patterns in vocalization or in the rustling of trees. But it may have nothing to do with our current uses of metrical patterns. When studying non-mental physical phenomena such a divergence between the original function of an evolving "module" and its eventual function does not pose an insurmountable problem. For instance, feathers and wings may have first been insulation devices and later feathers and wings enabled flight. We were able to study how one function evolved into a different kind of function. The reason that this is less of a problem when studying physical evolution is because it is easier to isolate the physical entity (the organ or module) upon which adaptation is operating. This is not so when studying most mental phenomena. The isolation of the "module" we call "wings" and the module we call "feathers" was relatively easy. They were in effect predefined for us. If we were mistaken in isolating these specific modules then in the course of investigation we would have been able to clarify and modify our levels of analysis. Very little is predefined for us when we try to isolate the modules involved with general mental, emotional, and related phenomena. We have a hard enough time isolating the modules having to do with perceptual phenomena such as vision, smell, etc. The rare exception to this is probably the language faculty, which seems to be a relatively isolated mental module that can be studied separately.

Jerry Monaco
16 November 2005. ]

Selected by Tangled Bank #41 @ Flags and Lollipops: bioinformatics and genomics - news and views

This work is licensed under a Creative Commons License.

In a previous entry, "The Rule of Law" and Secrecy: CIA Prisons and the Plame Affair, I drew connections between the Plame Affair and the gulag of secret prisons run by the Central Intelligence Agency. I wrote:

If a CIA agent with a conscience knows where these prisons are located, if she knows the CIA operatives who run those prisons, if she knows the conditions of those prisons and the names of the people in the prisons, if she then reports on the activities of the CIA wardens and their hirelings who run these prisons, and if this person of conscience exposes all of the above, I would celebrate such a person. In my mind, such a person should be considered a courageous fighter for democratic openness. The law that would put such a person in jail should be repealed. All secret security agencies should be exposed to the light of day.

This is not a mere hypothetical. Think of Dana Priest's article exposing the CIA secret prisons. She wrote it without naming names. But she must have sources somewhere in order to write the article in the first place and those sources must know names.
The people running those secret CIA prisons are engaging in crimes against humanity, and the names of the CIA prison wardens and their accomplices should be exposed to democratic sunlight. Perhaps one reason that they are not so exposed is the threat of jail under the Intelligence Identities Protection Act.

According to the BBC, "The US Central Intelligence Agency has taken the first step toward a criminal inquiry into who told the media that it runs secret jails abroad, reports say."

Who are these prisons secret from in the first place? They are not secret from the people in the prisons or their families. They are assumed to exist by most people in countries that fear U.S. imperialism. The U.S. government of course can brush such speculations away as a conspiracy theory and "anti-Americanism" - because, as we know, the people who are under threat by the U.S. government's terror tactics are prone to such conspiracy theories. The truth is that these secret prisons are not meant to be secret from the purveyors of retail terrorism throughout the world. The U.S. government, the main purveyor of wholesale terror in the world today, means to keep these prisons secret from the domestic population of the U.S. and the populations of every country where these prisons are kept. Why? Because if such facts were widely known they would provoke outrage - not the outrage of terrorism, but the outrage of democratic protest.

These secret prisons were never so secret. More than a year ago I read about them. Here is one of the articles I read in June 2004 - Secret world of US jails: Jason Burke charts the worldwide hidden network of prisons where more than 3,000 al-Qaeda suspects have been held without trial - and many subjected to torture - since 9/11.

The people who leaked the information of these secret prisons to the Washington Post may have been playing their own bureaucratic games, but they have done a service to all of us who value the semblance of democracy that remains to us. Democracy is murdered in secret. The bare minimum of a conservative republican form of government, a government of due process and the rule of law, cannot be maintained when the government is maintained by secret organizations of political spies. The fact that our government runs secret prisons is only an end product of the permanent government of secrecy that has existed in the United States since it became a world empire. As I said in my previous entry, "The demand for the rule of law is a conservative demand in normal times but quickly turns into a radical call in times of 'emergency.'" We live in a time of emergency in the United States. The emergency lies in the wounds that are debilitating the republic.

Jerry Monaco
New York City
9 November 2005

This work is licensed under a Creative Commons License.

The Washington Post has an interesting article on CIA secret prisons, which proves that for the ruling class of the U.S. "the rule of law" and "due process" are applied selectively. I quote the beginning of the article and recommend that all who are interested read the complete report.

The CIA has been hiding and interrogating some of its most important al Qaeda captives at a Soviet-era compound in Eastern Europe, according to U.S.
and foreign officials familiar with the arrangement.

The secret facility is part of a covert prison system set up by the CIA nearly four years ago that at various times has included sites in eight countries, including Thailand, Afghanistan and several democracies in Eastern Europe, as well as a small center at the Guantanamo Bay prison in Cuba, according to current and former intelligence officials and diplomats from three continents.

The hidden global internment network is a central element in the CIA's unconventional war on terrorism. It depends on the cooperation of foreign intelligence services, and on keeping even basic information about the system secret from the public, foreign officials and nearly all members of Congress charged with overseeing the CIA's covert actions.

The existence and locations of the facilities -- referred to as "black sites" in classified White House, CIA, Justice Department and congressional documents -- are known to only a handful of officials in the United States and, usually, only to the president and a few top intelligence officers in each host country.

The CIA and the White House, citing national security concerns and the value of the program, have dissuaded Congress from demanding that the agency answer questions in open testimony about the conditions under which captives are held. Virtually nothing is known about who is kept in the facilities, what interrogation methods are employed with them, or how decisions are made about whether they should be detained or for how long.

While the Defense Department has produced volumes of public reports and testimony about its detention practices and rules after the abuse scandals at Iraq's Abu Ghraib prison and at Guantanamo Bay, the CIA has not even acknowledged the existence of its black sites. To do so, say officials familiar with the program, could open the U.S. government to legal challenges, particularly in foreign courts, and increase the risk of political condemnation at home and abroad.

CIA Holds Terror Suspects in Secret Prisons: Debate Is Growing Within Agency About Legality and Morality of Overseas System Set Up After 9/11
By Dana Priest, Washington Post Staff Writer
Wednesday, November 2, 2005; A01

The demand for the rule of law is a conservative demand in normal times but quickly turns into a radical call in times of 'emergency.' It is because of the fact that in the U.S. there are no conservatives left in politics that radicals must fill the vacuum. (A non-trivial question for radicals interested in the history of the U.S. ruling class is: Who was the last conservative? Perhaps Robert Taft.) It is the weakness of the left that we must be the conservatives demanding that these rulers of our lives keep to some minimum of the rule of law and provide basic due process.

I propose to use the occasion of the elite media's acknowledgment of secret prisons, and the exposure of an international CIA gulag, to make a small comment on the affair of Valerie Plame. The connection between the Plame Affair and CIA secret prisons may seem a bit odd, but I think they are thematically the same story. It is an indication of the ideological weakness of the U.S. left that the responses to the Plame affair have been limited to schadenfreude. We are happy that the likes of Karl Rove and Scooter Libby have been caught out in the cold of their own hypocrisy and lies. We would be happier still if they were sent to jail, but that seems to me unlikely. But is this the limit of our contribution to the Plame affair?
Is it possible that Rove and Libby were engaged in an unwitting service to democracy by their exposure of a covert operative? It seems to me completely unnecessary to further expose the pro-war propaganda campaign that the United States Government and the Bush regime engaged in during the lead-up to the invasion of Iraq. It was obvious at the time. Those who believed the Bush-Blair propaganda campaign need to look into themselves and ask what made them so susceptible to nationalist fantasy. They should make amends by becoming anti-war activists.

The lesson that the left should be teaching is simple skepticism of those in power. We should be pointing out that there has rarely been a war advocated by a powerful state that has been justified in retrospect. Yet, all wars are justified at the time by the propaganda of the state and the rulers, and war propaganda more often than not turns out to be cooked. The role of a well-functioning intelligence agency is to prop up the calls for war made by the rulers with the necessary scenery of enemy atrocities and threats. At times, the intelligence agency will also engage in covert operations that are elaborate stage productions aimed to convince the true enemies of the rulers of the U.S., in this case the U.S. people, that war is necessary and inevitable.

For those of us who oppose the war drums of the latest imperialist adventures the ideological enemy is patriotism, nationalism, jingoism and racism. One purpose of intelligence agencies and the state in general in the lead-up to a war is to lie to the domestic population, producing enough fear and hatred of the target country among the people that the frenzy of jingoism overwhelms reason. When the state and its intelligence agencies fulfill this purpose we on the left should not be surprised. Our duty is to educate people in the historical fact that this is always the way powerful states act in the lead-up to war. Powerful rulers lie and fix the facts in order to get the domestic population to tolerate what the rulers want.

Given this general historical viewpoint we should view the framing of the facts and the propaganda campaign as revealed in the Plame affair as politics as usual, except for one fact that the affair highlights: A section of the U.S. ruling class and its elite bureaucrats in the intelligence agencies were not cooperating with the Bush regime, led by Cheney and Rove. I think that we can conclude from this that the Bush regime is a relatively narrow clique of the ruling class. One of the reasons for the rampant irrationalism of its rhetoric is that a narrow regime has to constantly whip up the various groups of its base. Most of the rhetoric of the Bush regime and many of its actions, political appointments, etc. should be interpreted from the point of view of the narrowness of the Bush regime within the ruling class as a whole.

The reason the exposure of Plame is significant, and the only reason it has become an "affair", is that with Plame the Bush regime proclaimed that it has contempt for a portion of the ruling elite that is important to imperial domination. As Nicholas Lemann put it in a recent New Yorker article:

[T]he conservative foreign-policy position generated a vigorous subculture. Life inside it had many charms, one of which was the unassailability of the conservatives’ ideas ....
Conservatives were smarter, bolder, more strategic-minded, and more historically aware than moderate Republicans, being less vitiated by the need to appease interest groups and by the grind of running bureaucracies. When the Central Intelligence Agency or the State Department ... was mentioned in conversation with a foreign-policy conservative, the reference would usually draw a derisive chuckle or a rolling of the eyes: those organizations had been captured by the appeasers, and could be counted on to respond insufficiently to threats.

TELLING SECRETS - How a leak became a scandal, by NICHOLAS LEMANN, The New Yorker, Issue of 2005-11-07, Posted 2005-10-31

The ideological battle of the right-wing neo-conservatives has always been aimed against the entrenched bureaucracies of "liberal" imperialism, which they look at as a brake on the expansion of U.S. state and corporate power. Thus, attacking people such as Joseph Wilson (a career State Department official) and his wife Valerie Plame was simply attacking the representatives of the liberal foreign policy bureaucracy. Such attacks are just part of the game for the extreme reactionaries of the Bush Administration. And the fact that this is the way that they play the game, without regard for usual ruling class solidarity, is what separates them from the more 'conservative' elements of the U.S. ruling elite. But when powerful people undermine other powerful people an "affair" or a "scandal" will ensue. This is the simple lesson of the Watergate and Iran-Contra scandals. (See FN 1)

But this does not mean that we who consider ourselves radicals and internationalists should simply parrot those who wish to drive "the affair" for their own interests. Scandals such as the Plame Affair are most useful if we can use them to expose the usual workings of the state and the ruling class. But they are also useful to expose the hypocrisy of the application of "the rule of law." Thus once again I come back to the beginning of this comment. Let me make a thematic connection between the Valerie Plame Affair and the CIA archipelago of secret prisons.

Let us be clear: The law that gave Special Counsel Patrick Fitzgerald a mandate to investigate the Valerie Plame Affair is an anti-democratic law meant to protect the national security state against exposures of its 'secret' atrocities. The law is known as the Intelligence Identities Protection Act (IIPA) and it was passed in order to protect the criminals at the CIA from exposure. The secrecy of CIA operations is aimed at the domestic population. We are the ones who are not supposed to know the history of our government's subversion of democratic movements. The CIA is not simply an intelligence organization; it is also an organization that bribes foreign officials, undermines foreign elections, overthrows foreign governments, fosters foreign secret security agencies and trains them in torture and death-squad operations - in short the CIA is an organization meant to inspire fear in foreign civilian peoples through the use of violence and propaganda. In short, by definition, the CIA is engaged in terrorism. Exposing the CIA, its operations and its operatives is a democratic duty that we must fight to make a 'right.'

The Intelligence Identities Protection Act was passed in the early 1980s and was aimed at Philip Agee and the Covert Action Information Bulletin (CAIB). Agee made his own separate peace by defecting from the CIA to the multitude.
He published CIA Diary: Inside the Company in 1975 and soon after teamed up to publish CAIB. In both his book and in CAIB he exposed CIA operations and operatives. It was Agee's and CAIB's civic activism in exposing CIA secrets that led to the passage of the IIPA. The activities exposed by Agee were largely illegal activities which are condemned (with much usual nation-state hypocrisy) by international norms. Agee, no matter what his motivations, was a whistle blower, and the IIPA is an anti-whistleblower law that will be used mainly against the left. In the usual misapplication of the rule of law, those who harm the ruling class will be prosecuted and those who benefit the ruling class will not be prosecuted under this law.

Which brings us back to the CIA-run secret prisons. If a CIA agent with a conscience knows where these prisons are located, if she knows the CIA operatives who run those prisons, if she knows the conditions of those prisons and the names of the people in the prisons, if she then reports on the activities of the CIA wardens and their hirelings who run these prisons, and if this person of conscience exposes all of the above, I would celebrate such a person. In my mind she should be considered a courageous fighter for democratic openness. The law that would put such a person in jail should be repealed. All secret security agencies should be exposed to the light of day.

This is not a mere hypothetical. Think of Dana Priest's article exposing the CIA secret prisons. She wrote it without naming names. But she must have sources somewhere in order to write the article in the first place and those sources must know names. The people running those secret CIA prisons are engaging in crimes against humanity, and the names of the CIA prison wardens and their accomplices should be exposed to democratic sunlight. Perhaps one reason that they are not so exposed is the threat of jail under the Intelligence Identities Protection Act.

I am cynical enough to hope that despicable hypocrites, such as Karl Rove and Scooter Libby, will betray the norms of their class and expose covert agents, even if they do so only to further their very narrow political interests. In the end, if the Intelligence Identities Protection Act is consistently violated by those who rule this country, perhaps the act will become a dead letter. This is a mere modest proposal in favor of ruling class wolves eating their own puppies. In reality only an active and organized radical democratic left, which has its own organizations willing to expose the crimes and atrocities of the U.S. government and its secret agencies, can put some content into the notion of the "rule of law" and someday make such notions of law into a flexible instrument of pragmatic democratic justice.

Jerry Monaco
New York City
2 November 2005

[FN 1] Note that this internecine war between ruling class elite sectors is partially represented by the battle inside the intelligence agencies. Thus Dana Priest reports:

The secret detention system was conceived in the chaotic and anxious first months after the Sept. 11, 2001, attacks, when the working assumption was that a second strike was imminent. Since then, the arrangement has been increasingly debated within the CIA, where considerable concern lingers about the legality, morality and practicality of holding even unrepentant terrorists in such isolation and secrecy, perhaps for the duration of their lives.
Mid-level and senior CIA officers began arguing two years ago that the system was unsustainable and diverted the agency from its unique espionage mission. "We never sat down, as far as I know, and came up with a grand strategy," said one former senior intelligence officer who is familiar with the program but not the location of the prisons. "Everything was very reactive. That's how you get to a situation where you pick people up, send them into a netherworld and don't say, 'What are we going to do with them afterwards?' "

Put aside the official media-speak of these paragraphs and what you see is that the CIA has stepped outside its usual role and the "old hands" do not like it very much. In the good old days of U.S. imperialism the CIA trained other people to do their dirty work. The vision of the Bush regime sees a more active role for the CIA in torture and oppression, mainly because as U.S. military might has increased, it has lost political control over many of its foreign clients and servants. I suppose that one of the results of the reorganization of the intelligence agencies is to bring them under direct political control by the Bush regime.

Jerry Monaco
New York City
Originally Published 2 November 2005

This work is licensed under a Creative Commons License.
{"url":"http://monacojerry.blogspot.com/","timestamp":"2014-04-20T20:55:20Z","content_type":null,"content_length":"183654","record_id":"<urn:uuid:55e5fe49-64ef-4a5f-b798-6fdbfc92a91c>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
JDBC escape syntax for the fn keyword

{fn functionCall}

where functionCall is one of the following functions:

concat(CharacterExpression, CharacterExpression)
Character string formed by appending the second string to the first; if either string is null, the result is NULL. {fn concat(CharacterExpression, CharacterExpression)} is equivalent to the built-in syntax { CharacterExpression || CharacterExpression }. For more details, see Concatenation.

sqrt(FloatingPointExpression)
Square root of a floating point number. {fn sqrt(FloatingPointExpression)} is equivalent to the built-in syntax SQRT(FloatingPointExpression). For more details, see SQRT.

abs(NumericExpression)
Absolute value of a number. {fn abs(NumericExpression)} is equivalent to the built-in syntax ABS(NumericExpression). For more details, see ABS or ABSVAL.

locate(CharacterExpression, CharacterExpression [, startIndex])
Position in the second CharacterExpression of the first occurrence of the first CharacterExpression, searching from the beginning of the second character expression, unless startIndex is specified. {fn locate(CharacterExpression, CharacterExpression [, startIndex])} is equivalent to the built-in syntax LOCATE(CharacterExpression, CharacterExpression [, StartPosition]). For more details, see LOCATE.

substring(CharacterExpression, startIndex, length)
A character string formed by extracting length characters from the CharacterExpression beginning at startIndex; the index starts with 1.

mod(integer_type, integer_type)
MOD returns the remainder (modulus) of argument 1 divided by argument 2. The result is negative only if argument 1 is negative. For more details, see MOD.

Note: Any Derby built-in function is allowed in this syntax, not just those listed in this section.

TIMESTAMPADD(interval, integerExpression, timestampExpression)
Use the TIMESTAMPADD function to add the value of an interval to a timestamp. The function applies the integer to the specified timestamp based on the interval type and returns the sum as a new timestamp. You can subtract from the timestamp by using negative integers. Note that TIMESTAMPADD is a JDBC escaped function, and is only accessible using the JDBC escape function syntax. To perform TIMESTAMPADD on dates and times, it is necessary to convert them to timestamps. Dates are converted to timestamps by putting 00:00:00.0 in the time-of-day fields. Times are converted to timestamps by putting the current date in the date fields. Note that you should not put a datetime column inside a timestamp arithmetic function in WHERE clauses because the optimizer will not use any index on the column.

TIMESTAMPDIFF(interval, timestampExpression1, timestampExpression2)
Use the TIMESTAMPDIFF function to find the difference between two timestamp values at a specified interval. For example, the function can return the number of minutes between two specified timestamps. Note that TIMESTAMPDIFF is a JDBC escaped function, and is only accessible using the JDBC escape function syntax. To perform TIMESTAMPDIFF on dates and times, it is necessary to convert them to timestamps. Dates are converted to timestamps by putting 00:00:00.0 in the time-of-day fields. Times are converted to timestamps by putting the current date in the date fields. Note that you should not put a datetime column inside a timestamp arithmetic function in WHERE clauses because the optimizer will not use any index on the column.
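To make the escape syntax concrete, here is a minimal, hypothetical sketch of how these clauses might be used from JDBC against an embedded Derby database. The FLIGHTS table, its columns, and the database name are illustrative assumptions, not part of the reference material above; the driver rewrites each {fn ...} clause into its built-in equivalent before execution.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EscapeSyntaxDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical embedded Derby database; adjust the URL for your setup.
        Connection conn = DriverManager.getConnection("jdbc:derby:sampleDB");
        Statement stmt = conn.createStatement();

        // {fn concat ...} and {fn timestampdiff ...} are JDBC escape clauses;
        // SQL_TSI_HOUR is one of the standard JDBC interval codes.
        ResultSet rs = stmt.executeQuery(
            "SELECT {fn concat(origin, dest)}, "
            + "{fn timestampdiff(SQL_TSI_HOUR, depart_time, arrive_time)} "
            + "FROM flights");
        while (rs.next()) {
            System.out.println(rs.getString(1) + ": " + rs.getInt(2) + " hours");
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}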
{"url":"http://db.apache.org/derby/docs/10.1/ref/rrefjdbc88908.html","timestamp":"2014-04-16T10:18:54Z","content_type":null,"content_length":"9583","record_id":"<urn:uuid:e8455e0e-2189-4c0a-bee4-c8838e9f9516>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Unit 8: Normal Distribution

Text reading and homework: Read chapters 18 and 19 of FPP and do the following review exercises:
Chapter 17 (pages 304-306): 7, 8, 9, 12
Chapter 18 (pages 327-329): 2, 4, 5, 6, 7, 12, 13, 14
Chapter 19 (pages 351-353): 5, 10, 11

"Just Say No--To Bad Science," Newsweek, "On Science" column, May 7, 2007, page 57. Available on LexisNexis.

Possible Essay Questions:
• The column finds many problems with various studies of the same topic. How would you design a study of that topic?
• It is sometimes claimed that no educational experiment ever fails, because the experimenter is an enthusiastic teacher who got the idea. Does this relate to the present column, and if so, how?

Computer project: How close should the count of heads on 40 coin tosses be to 20? What is the spread in this count? Notice that the spread of the count is a different thing than the spread of the collection of flip results. The spread of the count is called the standard error (SE). The spread in the collection of flips is called the standard deviation (SD). Notice also that this means that the SE is the SD of many attempts to count heads. In this project we explore how the spread of the counts and the average of the counts change as the number of flips increases.

Preliminary Write-up
Using techniques from class, and before going to the computer, write up what theory predicts for the results as follows:
1. Estimate the number of heads you would expect to get and the spread in this number of heads for three cases of the number of flips: 10 flips, 40 flips and 160 flips.
2. As the number of flips is increased by a factor of 4 from 10 to 40 and from 40 to 160, how does this affect the expected value of the count (what factor of increase occurs)? By what factor does the standard error increase?
3. Consider doing the previous experiment 30 times and trying to get exactly half the flips being heads as many times out of the 30 as possible. Would you get exactly half more often with a 10-flip experiment, a 40-flip experiment or a 160-flip experiment?

Use Excel to perform the following coin tossing simulations. Use the random number (RAND) and IF functions to simulate a coin toss. (See the simulation instructions if needed.) Structure your spreadsheet in an organized fashion with, for example, each simulation being one column. The result of the simulation can then be placed conveniently at the top or bottom of the column. After you check to make sure one simulation is working you can copy that column 30 times and create summary statistics for the entire 30 simulations somewhere convenient.

a. Simulate tossing a fair coin 10 times and counting the number of heads. Do the simulation 30 times and compute the average and SD of the 30 counts. Note that the SD function is used here to measure the SE of the counts. Theory says the average of these 30 counts should be close to the expected value and the spread of the counts should be close to the SE of the counts.
b. Do 30 simulations of tossing a fair coin 40 times and counting the number of heads. Compute the average and SD of the 30 counts.
c. Do 30 simulations of tossing a fair coin 160 times and counting the number of heads. Compute the average and SD of the 30 counts.
d. The number of tosses was increased by a factor of 4 from (a) to (b) and from (b) to (c). How did this affect the average and SD (in terms of factor of increase -- i.e., unchanged, factor of 4, factor of 0.5, etc.)? Compare these factor increases with your preliminary predictions from 2) above.
e. Looking at the counts for each set of simulations, how many times out of 30 did you get exactly half of the flips being heads? In which case, (a), (b) or (c), are you most likely to get heads on exactly half the number of tosses? Compare with your preliminary theory from 3) above.
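The assignment above asks for Excel, but for readers who prefer code, the same simulation can be sketched in a few lines. The following Java version (an illustration, not part of the assignment) runs 30 simulations for each of the three flip counts and prints the average and SD of the head counts next to the theoretical SE of sqrt(n)/2 for a fair coin.

import java.util.Random;

public class CoinTossSE {
    public static void main(String[] args) {
        Random rng = new Random();
        int[] flipCounts = {10, 40, 160};
        int simulations = 30;
        for (int n : flipCounts) {
            double[] counts = new double[simulations];
            for (int s = 0; s < simulations; s++) {
                int heads = 0;
                for (int f = 0; f < n; f++) {
                    if (rng.nextBoolean()) heads++;   // one fair coin toss
                }
                counts[s] = heads;
            }
            // Average of the 30 counts should be near n/2 (the expected value).
            double mean = 0;
            for (double c : counts) mean += c;
            mean /= simulations;
            // SD of the 30 counts estimates the SE, which theory puts at sqrt(n)/2.
            double ss = 0;
            for (double c : counts) ss += (c - mean) * (c - mean);
            double sd = Math.sqrt(ss / simulations);
            System.out.printf("n=%d: average=%.2f, SD of counts=%.2f, theory SE=%.2f%n",
                    n, mean, sd, Math.sqrt(n) / 2.0);
        }
    }
}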
{"url":"http://math.colgate.edu/math102/Common/unit8.html","timestamp":"2014-04-18T23:14:42Z","content_type":null,"content_length":"4861","record_id":"<urn:uuid:65172ad8-e7f6-41bc-a752-513e5b5ea4f4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/whpalmer4/medals","timestamp":"2014-04-17T06:58:19Z","content_type":null,"content_length":"128266","record_id":"<urn:uuid:de51ffc2-840c-4e3a-9208-b1fc87f67598>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00588-ip-10-147-4-33.ec2.internal.warc.gz"}
RIMS Workshop ``Representation Theory and Analysis on Homogeneous Spaces"
Organizer: Hideko Sekiguchi (University of Tokyo)
August 21, 14:00 -- August 24, 12:00, 2006
Room 420, RIMS, Kyoto University

August 21 (Monday)
14:00 -- 14:50 Nobuhiko Tatsuuma: Duality Theorem for Inductive Limit Groups of Direct Product Type
15:05 -- 15:55 Toshiyuki Kobayashi (RIMS): On Compact Locally Symmetric Spaces
16:10 -- 17:00 Toshio Oshima (University of Tokyo): Systems of differential equations with regular singularities

August 22 (Tuesday)
10:00 -- 10:50 Masahiko Kanai (Nagoya University): Rigidity of the Weyl chamber flow and the classical vanishing theorems of Weil and Matsushima
11:05 -- 11:55 Ken-ichi Yoshikawa (University of Tokyo): A duality between K3 surfaces and Del Pezzo surfaces
13:30 -- 14:20 Taro Yoshino (RIMS): No English Title
14:35 -- 15:25 Kazuko Konno (Fukuoka University of Education) -- Takuya Kon-no (Kyushu University): On doubling construction for real unitary dual pairs
15:40 -- 16:10 Yasufumi Hashimoto (Kyushu University): Analytic properties of partial zeta functions

August 23 (Wednesday)
10:00 -- 10:50 Atsumu Sasaki (Waseda University): Visible actions on irreducible multiplicity-free spaces
11:05 -- 11:55 Katsuhiko Kikuchi (Kyoto University): Invariant polynomials and invariant differential operators for multiplicity-free actions of rank 3
13:30 -- 14:20 Takayuki Oda (University of Tokyo) -- Masao Tsuzuki (Sophia University): The secondary spherical functions and Green currents for certain symmetric pairs
14:35 -- 15:25 Gen Mano (RIMS): A continuous family of unitary representations with two hidden symmetries--an example
15:40 -- 16:30 Noriyuki Abe (University of Tokyo): Jacquet modules of principal series generated by the trivial $K$-type

August 24 (Thursday)
10:00 -- 10:50 Minoru Itoh (Kagoshima University): Schur type functions associated to polynomial sequences of binomial type and eigenvalues of central elements of universal enveloping algebras
11:10 -- 12:00 Junko Inoue (Tottori University): A characterization of certain spaces of $C^\infty$-vectors of irreducible representations of solvable Lie groups
Derivative Word Problem Help Please!~

March 9th 2013, 10:28 AM #1 (Junior Member, Jan 2013, new york)
Derivative Word Problem Help Please!~

An observatory is to be in the form of a right circular cylinder surmounted by a hemispherical dome. If the hemispherical dome costs 6 times as much per square foot as the cylindrical wall, what are the most economic dimensions for a volume of 16000 cubic feet? The radius of the cylindrical base (and of the hemisphere) is _____ ft. (Round to the nearest tenth).

March 9th 2013, 10:42 AM #2 (Mar 2013, BC, Canada)
Re: Derivative Word Problem Help Please!~

Here are some questions to help guide you. What are the equations of surface area for the lateral surface area of a cylinder, and that of a hemisphere? What would be the cost of the cylindrical part if the cost was C per ft^2 (if you need more help: you want to buy 2 kg of apples and the price is $3 per kg: what do you do to these two numbers to figure out the total cost?) Post your answers and further questions below and we'll go from there.

March 9th 2013, 12:17 PM #3 (Junior Member, Jan 2013, new york)
Re: Derivative Word Problem Help Please!~

Quote: Here are some questions to help guide you. What are the equations of surface area for the lateral surface area of a cylinder, and that of a hemisphere? What would be the cost of the cylindrical part if the cost was C per ft^2 (if you need more help: you want to buy 2 kg of apples and the price is $3 per kg: what do you do to these two numbers to figure out the total cost?) Post your answers and further questions below and we'll go from there.

π = pi
Area: A = 2πr^2 + 2πrh
C = (6)2πr^2 + 2πrh
Cost: C = (32000/r) + (72πr^2/3)

This could be wrong.. I'm not very good with these word problems. Are these equations correct?
Last edited by michellederz; March 9th 2013 at 12:25 PM.

March 9th 2013, 05:18 PM #4 (Mar 2013, BC, Canada)
Re: Derivative Word Problem Help Please!~

Good start. Because we are just calculating the wall of a cylinder the equation has to change to $A=2\pi rh$ (it doesn't include $2\pi r^2$). The area of a hemisphere is half of $4\pi r^2$, or $2\pi r^2$. Let the cost of the cylindrical portion be C, so the hemispherical portion is 6C. The cost will be $Cost = (CylindricalPortion) \times C + (HemisphericalPortion) \times 6C$.

Quote: Cost: C = (32000/r) + (72πr^2/3) This could be wrong.. I'm not very good with these word problems. Are these equations correct?

I get $Cost = C\left(\frac{32000}{r}+\frac{32}{3}\pi r^2\right)$ where C is the cost as above. We need to minimize cost, so this is an opportune time for the derivative (I'll let you try to fill in the details). As a final answer, I get $r=7.8$ ft.

March 10th 2013, 10:25 AM #5 (Junior Member, Jan 2013, new york)
Re: Derivative Word Problem Help Please!~

Okay I think I got it now! Thank you so very much!
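For completeness, here is one way to fill in the differentiation step that post #4 leaves as an exercise (a sketch added here, not part of the original thread; the constant factor C is dropped since it does not change where the minimum occurs):

$f(r) = \frac{32000}{r}+\frac{32}{3}\pi r^2, \qquad f'(r) = -\frac{32000}{r^2}+\frac{64}{3}\pi r$

Setting $f'(r)=0$ gives $\frac{64}{3}\pi r = \frac{32000}{r^2}$, so $r^3 = \frac{3\cdot 32000}{64\pi} = \frac{1500}{\pi}$ and $r = \left(\frac{1500}{\pi}\right)^{1/3} \approx 7.8$ ft, matching the answer above. Since $f''(r) = \frac{64000}{r^3}+\frac{64}{3}\pi > 0$ for $r>0$, this critical point is indeed a minimum.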
{"url":"http://mathhelpforum.com/calculus/214491-derivative-word-problem-help-please.html","timestamp":"2014-04-21T00:00:47Z","content_type":null,"content_length":"43029","record_id":"<urn:uuid:294ea667-8c1b-41c0-bacf-84cf916ec345>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Color: my MacBook LCD Monitor 2

In the previous post about my monitor, I used five colors: red, green, blue, white, and a pale purple. For each of them, I began by specifying RGB values… I used the DigitalColor Meter to find their XYZ values… and I demonstrated that the relationship was nonlinear when applied to the purple color. The relationship was…

XYZ = M (RGB)^1.801

where M, however, is linear (a matrix). I found the nonlinear part of the relationship from the following graph out of my ColorSync utility… and, although I found the linear part of the relationship (M) by construction, I showed you that I also found all of its entries in the ColorSync utility (and called that version of it TX since it is close to, but not exactly equal to, M):

Briefly, then, every time I specify a set of RGB values, I can get three or four sets of XYZ, and they are very, very close to each other:
• measured values from the DigitalColor Meter
• ColorSync values from the ColorSync calculator
• computed values from my nonlinear transformation using M
• computed values from my nonlinear transformation using TX.

Oh, I have not actually demonstrated that the ColorSync calculator matches the other three! Well, let's do that.

ColorSync calculator

So, let's look at the calculator. We can ask it to convert between RGB and XYZ, and we discover that it pretty well matches what we've done already — with two caveats.
1. We had better specify that the device is "Color LCD", i.e. my monitor.
2. We had better NOT specify "Absolute", whatever that is.

With those two constraints, we find that the calculator performs the nonlinear transformation between RGB and XYZ. Here, for my own reference as much as anything else, are the calculations for red… … for which our measured values (DigitalColor Meter) were 0.35733, 0.20605, 0.02618, which round to 0.3573, 0.206, 0.0262. ColorSync gets .3574, .2061, .0262, so the X and Y values disagree a little. Not bad. On the other hand, it may be worth noting that the two values do not agree exactly — that is, the DigitalColor Meter and ColorSync do not completely agree with each other.

Here's green… … for which our measured values (DigitalColor Meter) were 0.45224, 0.69995, 0.12091, which round to 0.4522, 0.7, 0.1209. ColorSync gets .4522, .7, .1209. Perfect.

Here's blue… … for which our measured values (DigitalColor Meter) were 0.15463, 0.09399, 0.6778, which round to 0.1546, 0.094, 0.6778. ColorSync gets .1546, .0940, .6778. Perfect again.

Here's white… … for which our measured values (DigitalColor Meter) were 0.9642, 1.00003, 0.82489, which round to 0.9642, 1., 0.8249. ColorSync gets .9642, 1.0000, .8249. Perfect again.

And finally, here's purple… … for which our measured values (DigitalColor Meter) were 0.28891, 0.2493, 0.37567, which round to 0.2889, 0.2493, 0.3757. ColorSync gets .2886, .2488, .3753. Those are all off a bit.

Now let's reverse it. This time the input is those measured purple XYZ… ColorSync got .5647, .3890, .6864… which compares nicely but not exactly to our original input RGB: 0.565, 0.388, 0.686. Okay by me. But once again I want to emphasize that none of these numbers came from my explicit nonlinear relationship; instead, it's the two software programs that do not exactly agree.

I'll check one more in the reverse direction; here's my measured XYZ for red… 0.35733, 0.20605, 0.02618. We see that ColorSync reads 1, 0, 0. How sweet it is!

So, the ColorSync calculator agrees substantively with the DigitalColor Meter. Now, just what are those choices?
There are 4 of them: Absolute, Perceptual, Saturation, and Relative. They are called rendering intents. You can find additional information about them at Apple… and at Wiki… (The ICC itself — who defined them! — doesn't even list them in its own glossaries. Maybe you can find them there, where I failed. On the other hand, it might be more important to look for their technical definitions rather than English-language descriptions. And the ICC should be the place to look.)

From reading the general descriptions, there are probably three key questions:

1. Does a rendering intent preserve any colors?
2. What does it do to out-of-gamut colors?
3. What does it do to the white point?

The fact is, however, that without explicit algorithms in my hands, I'm not at all sure what the different rendering intents really do. As usual, the English language just doesn't suffice. In particular, I do not understand the difference between absolute and relative in the treatment of the white points. (Maybe that's what I would find on the ICC site.) My rough reading of the general descriptions is:

• perceptual maps one entire color space to another; it changes all the colors;
• saturation preserves relative saturation (but what about out-of-gamut?);
• relative clips out-of-gamut colors, preserves in-gamut colors (including the white point);
• absolute changes the white point (but what else?).

Experimental evidence — I've only shown one datum, the purple color, but I've seen a lot more — tells me that for an in-gamut color, all three of perceptual, saturation, and relative give the same answer. That seems at variance with the general descriptions, since relative is supposed to preserve in-gamut colors, while saturation and perceptual are not. I could chatter about why this might be the case — but it would be better to have the actual algorithms. But I don't want to go do that now. Suffice it to say: I can use anything but "absolute" to duplicate the pairs of measurements taken by the DigitalColor Meter, and the (almost) equivalent calculations using the explicit nonlinear transformation.

We will see more experimental evidence about the absolute rendering intent later in this post.

There is additional information in the ColorSync Utility. In addition to the XYZ values provided by the "colorant tristimulus" data… we find another table of data called "phosphor values".

The immediate question is: Are these the same xy values we would get from the previous XYZ values? We will see that the answer is no. Then the next question is: How can we quantify the difference? While there may be other things we can do, I couldn't resist modifying the latest question: Don't we have enough information to compute a primary conversion matrix? (That's the calculation — which I've done before — that starts with a set of xy for red, green, blue, and white, and generates a transformation from RGB to XYZ.)

Let's answer the first question: are these the same numbers — do these xyz correspond to the given XYZ?

Red? The new xy (and z) values for red — which I might call the given xyz — are… 0.5921, 0.3466, 0.0613. (I computed z as z = 1 − x − y.) On the other hand, our XYZ values for the pure red disk were 0.35733, 0.20605, 0.02618, which lead to xyz of… 0.606096, 0.349498, 0.044406. ($x=\frac{X}{X+Y+Z}$, etc.) I might call those measured xyz.

We seem to have two different sets of xyz values for the red phosphor. Although we now know that the answer to the first question is no, we want (and need) some more data.
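(A quick numerical check of that chromaticity conversion, as one possible Python sketch; the helper name xyz_from_XYZ is mine, purely illustrative:)

    import numpy as np

    def xyz_from_XYZ(XYZ):
        """Chromaticity coordinates: x = X/(X+Y+Z), and likewise for y and z."""
        XYZ = np.asarray(XYZ, dtype=float)
        return XYZ / XYZ.sum()

    # measured XYZ for the pure red disk (DigitalColor Meter)
    print(xyz_from_XYZ([0.35733, 0.20605, 0.02618]))
    # -> approximately [0.6061, 0.3495, 0.0444], not the given 0.5921, 0.3466, 0.0613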
Green. The given xyz are… 0.333, 0.5472, 0.1198… and we confirm that they too are different from the xyz 0.355227, 0.5498, 0.0949729 of our measured XYZ, which were 0.45224, 0.69995, 0.12091.

Blue? The given xyz are… 0.1576, 0.0885, 0.7539… which are different from the xyz 0.166911, 0.101455, 0.731634 of our measured XYZ (0.15463, 0.09399, 0.6778).

Finally, the given white point xyz… 0.3127, 0.329, 0.3583… differs from the xyz 0.3457, 0.358547, 0.295753 of the measured XYZ, 0.9642, 1.00003, 0.82489.

Now is a good time to look at those white point numbers. They look familiar. A quick check of Hunt's "The Reproduction of Color" tells me that the measured xyz is the D50 white point, while the given xyz is the D65 white point. (That is, they are the chromaticity coordinates of the D50 and D65 standard illuminants.)

Primary Conversion matrix

Now let's proceed to use the given xy coordinates to get a transformation matrix. (I've just repeated the same link for this.) We set XYZ of the white point so that Y = 1… just divide xyz by y:

W = (0.950456, 1., 1.08906)

We construct an attitude matrix K whose rows are the given (not measured) xyz coordinates of the red, green, and blue phosphors…

$K = \left(\begin{array}{ccc} 0.5921 & 0.3466 & 0.0613 \\ 0.333 & 0.5472 & 0.1198 \\ 0.1576 & 0.0885 & 0.7539\end{array}\right)$

We compute $V = W\ K^{-1}$…

$V = \{0.571472,\ 1.27209,\ 1.19596\}$

… make a diagonal matrix G of that vector…

$G = \left(\begin{array}{ccc} 0.571472 & 0. & 0. \\ 0. & 1.27209 & 0. \\ 0. & 0. & 1.19596\end{array}\right)$

… and then compute N = G K:

$N = \left(\begin{array}{ccc} 0.338369 & 0.198072 & 0.0350312 \\ 0.423605 & 0.696086 & 0.152396 \\ 0.188483 & 0.105842 & 0.901631\end{array}\right)$

We have found N mapping RGB to XYZ. Recall, however, that N is an attitude matrix. I want a transition matrix T such that XYZ = T RGB, so T = N'. Call it Tg to distinguish it from TX.

$Tg = N^T = \left(\begin{array}{ccc} 0.338369 & 0.423605 & 0.188483 \\ 0.198072 & 0.696086 & 0.105842 \\ 0.0350312 & 0.152396 & 0.901631\end{array}\right)$

Check it, if only once. We apply Tg to red, i.e. to the vector (1, 0, 0), and we get XYZ = 0.338369, 0.198072, 0.0350312, from which we get xyz = 0.5921, 0.3466, 0.0613, and that is exactly the given xyz for the red phosphor, as it should be.

But why didn't they just give us this matrix? They almost did. Let me grab another item from ColorSync.

chromatic adaptation matrix

My guess is that PCS stands for profile connection space, but I'm not sure it matters.

$C = \left(\begin{array}{ccc} 1.04788 & 0.022919 & -0.050201 \\ 0.029572 & 0.990494 & -0.017059 \\ -0.009232 & 0.015076 & 0.751648\end{array}\right)$

Here's what does matter: applied to the computed Tg, that matrix (almost exactly) gives M (or TX): M = C Tg. Since we were actually given TX and C, we could have gotten Tg from $Tg = C^{-1}\ M$.

OK, that's all nice. They didn't give us Tg, but they gave us a chromatic adaptation matrix that would let us compute it from the M matrix. We didn't need to go to all the work of computing Tg. Or, I could have used Tg and TX to compute my monitor's chromatic adaptation matrix. Then I would have to wonder where the given xyz came from…. It's the chicken and the egg, at this point: I don't know whether C and TX are primary, and Tg is computed from them — or whether Tg and TX are primary, and C is computed from them.

Regardless of whether we construct Tg or compute it from TX and the chromatic adaptation matrix, we get effectively the same thing.
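(For anyone who wants to replay that construction, here is one possible Python sketch of the same computation; the variable names are mine, and the inputs are the given phosphor xyz and the D65 white point from above:)

    import numpy as np

    # attitude matrix K: rows are the given xyz of the red, green, blue phosphors
    K = np.array([[0.5921, 0.3466, 0.0613],
                  [0.3330, 0.5472, 0.1198],
                  [0.1576, 0.0885, 0.7539]])
    w = np.array([0.3127, 0.3290, 0.3583])   # given white point chromaticities (D65)

    W = w / w[1]                   # white point XYZ scaled so Y = 1
    V = W @ np.linalg.inv(K)       # V = W K^{-1}
    N = np.diag(V) @ K             # N = G K
    Tg = N.T                       # transition matrix: XYZ = Tg RGB

    print(Tg @ [1, 1, 1])   # ~ {0.9505, 1., 1.0891}, the media white point
    print(Tg @ [1, 0, 0])   # ~ {0.3384, 0.1981, 0.0350}, the first column of Tg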
Let's look at white. We have Tg.{1, 1, 1} = {0.950456, 1., 1.08906}, and we had better have that: it's what we set the white point to. But I've seen that somewhere (you haven't yet, but I have)… that is exactly what ColorSync gives me when I ask for absolute rendering intent!

I have to say this made me happy. Please understand: I just brought in the ColorSync calculator out of left field. As far as I knew, it had nothing to do with the primary conversion matrix Tg. But ColorSync absolute does have something to do with Tg. Unfortunately, it doesn't have much more to do with it than mapping the white point (see below).

Incidentally, the chromatic adaptation matrix I most often hear about is called the von Kries; here it is:

$\left(\begin{array}{ccc} 1.0161 & 0.0553 & -0.0522 \\ 0.006 & 0.9956 & -0.0012 \\ 0 & 0 & 0.7576\end{array}\right)$

NOTE that it is different from the one provided for my monitor:

$C = \left(\begin{array}{ccc} 1.04788 & 0.022919 & -0.050201 \\ 0.029572 & 0.990494 & -0.017059 \\ -0.009232 & 0.015076 & 0.751648\end{array}\right)$

They are similar, but mainly because they both have small off-diagonal terms and a (3,3) term rather less than 1. Both are predominantly diagonal matrices, and both significantly reduce "blue".

We have seen that the ColorSync calculator essentially delivers the XYZ coordinates measured by the DigitalColor Meter, or computed by the nonlinear transformation — provided I do not use the "absolute" rendering intent. (Strictly speaking, all I've really shown you is that "relative" works, but I have tested the others.) The linear part of that nonlinear transformation was described by ColorSync (TX), or could have been computed (M) from the XYZ coordinates of red (1,0,0), green, and blue.

Implicit in those XYZ coordinates for red, green, and blue are the corresponding xyz coordinates. But we have found a second set of xyz coordinates, and they are different. We also saw that the two white points are standard: one is D50, the other D65. We don't know why, but we do know what.

Still, we constructed a primary conversion matrix (Tg), and we discovered that ColorSync provided the information (C, a chromatic adaptation matrix) required to compute Tg from the supplied TX. The construction of Tg guarantees that Tg applied to white (1,1,1) gives us the media white point — that was one of the inputs to the computation of Tg! It's no accident. Finally, we saw that the ColorSync calculator, with rendering intent absolute, says that the XYZ coordinates of (1,1,1) are… the media white point.

And there it gets murky. Real murky. I don't know about you, but I certainly expected that Tg applied to red, green, or blue would match the ColorSync calculator using absolute rendering intent. After all, Tg applied to white matches the calculator. But this is not to be. Yes, I expected that the overall transformation is nonlinear, but I thought it would preserve zeros and ones. (If the nonlinear transformation raises each of R and G and B to a power, then it preserves zeros and ones.)

For red, I compute… Tg.{1, 0, 0} = {0.338369, 0.198072, 0.0350312}… and ColorSync shows .3523, .2061, .0346. These differences are larger than we've seen before.

For green, I compute Tg.{0, 1, 0} = {0.423605, 0.696086, 0.152396} and ColorSync shows .4458, .7, .1596.

Finally, for blue I compute Tg.{0, 0, 1} = {0.188483, 0.105842, 0.901631}… while ColorSync shows .1524, .0940, .8949.

These are all quite a bit different: the ColorSync calculator doesn't agree with applying Tg to red, green, and blue. I am shocked.
The nonlinear part of this transformation does not preserve zeros and ones — so it's something other than raising to a power (or even to three of them). Whatever it is doing, ColorSync maps the white point the same way Tg does, but it does not map red, green, and blue individually the same way Tg does. Gee, if I want to understand this, I'm going to have to look under the hood of the absolute rendering intent. We'll see.

I've shown you the pieces of ColorSync that I understand, and I've shown you a piece I do not understand. I'm not at all sure that I care much about the part I don't understand — for now, it may be enough to know that ColorSync (with absolute rendering intent) is doing something I can't reproduce. Once again, if you were thinking I was omniscient, forget it. There is stuff out there that I don't understand, and this is one of them.

I suppose I should close by reminding us that I do understand the nonlinearity of my monitor, so long as I stay with non-absolute rendering intents; we got that out of the previous monitor post.
{"url":"http://rip94550.wordpress.com/2010/06/28/color-my-macbook-lcd-monitor-2/","timestamp":"2014-04-17T21:54:27Z","content_type":null,"content_length":"106844","record_id":"<urn:uuid:bfefede9-d194-41bb-b745-1033be08fe7d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Bernice E. Holloway
Bellwood School District #88
1801 N. 36th Avenue
Stone Park, IL 60165

-to use grouping symbols and the standard order of operations to simplify numerical expressions.
-to use the order of operations to evaluate variable expressions.
-to use the calculator and computer to solve numerical expressions.

-calculators (generic brand)
-banners (computer printouts)

A. To bring out the comparison of punctuation marks in a sentence with grouping symbols in a numerical expression (signs with the expressions "Robin Lee Travis and I love computers"; "Slow Children Playing"; "Save Rags and Waste Paper"), ask if students can explain why the banners are ambiguous. Without commas, the sentence "Robin Lee Travis and I love computers" implies that two people love computers. Depending on where commas are inserted, the sentence can state that three or four people love computers.

Sample Activities:

1) Ask students to simplify 10+2*3-1 to get as many different answers as they can (use calculators and computers to compare results).

2) Discuss the need for a standard order of performing operations so that there is no ambiguity about the value of such expressions.

3) Discuss the steps of the standard order of operations and show how they would be used to simplify the expression above.

      10+2*3-1
      10+6-1     multiplication
      16-1       addition (from left to right)
      15         subtraction (from left to right)

4) Show how parentheses could be used to give different meanings to the same expression.

      (10+2)x3-1        (10+2)x(3-1)
      12x3-1            12x2
      36-1              24
      35

In expressions with more than one operation, grouping symbols such as parentheses or division bars are often used to indicate the order in which to do the operations. These grouping symbols can change the meaning of an expression, just as commas or other punctuation marks can change the meaning of a sentence. Whenever the order of operations is not indicated by grouping symbols, there is a standard order of operations to be followed. (Do exponents, multiplication/division, addition/subtraction from left to right.)

In mathematics, more than in some other forms of written expression, ambiguity must be eliminated. Otherwise, different people may assign different meanings to the same symbols, and communication is faulty. Ambiguity is eliminated using grouping symbols and the order of operations rule.

In examples #1 and #2, the expressions do not have grouping symbols, so the standard order of operations is used.

      #1  13-4x2-3        #2  2x3^2-4
          13-8-3              2x9-4
          5-3                 18-4
          2                   14

In examples #3 and #4, notice that the two expressions have the same numbers and the same operations, but the results are different because of grouping symbols. (Do operations within parentheses, exponents, multiplication/division, addition or subtraction from left to right.)

      #3  (8+5)x3         #4  8+(5x3)
          13x3                8+15
          39                  23

B. To give additional practice using the correct order of operations, have students:

1) replace the variable in each row or column to make a true equation in puzzle #1 (see handout).
2) write the operations sign (+, -, x, /) in each row or column to make a true equation in puzzle #2 (see handout).

C. To check progress of students, have them complete the Grouping Symbols-Review (see handout and below).

IV. COMMENTS/INFO.

The use of the calculator is so common to us that we tend to take certain things for granted... only with the wide use of personal computers are we being forced to reevaluate the function, the appropriate use, and the correct method(s) of teaching students certain mathematical concepts using both machines.
It should be pointed out to students that people communicate with computers by using programs. Programs tell the computer what to do. However, it is not always necessary for a person to be able to write a program in order to use a computer. Programs can be written in such a way that an operator can use them by answering a series of questions that are written into the program. Nevertheless, the best way to learn what a computer can and cannot do is to learn a little about programming.

To program arithmetic calculations in BASIC, you use the following symbols:

      +    addition
      -    subtraction
      *    multiplication
      /    division
      ( )  parentheses
      ^    raised to a power (exponent)

BASIC follows the order of operations.

Sample BASIC program:

      10 PRINT 21*34+35/7
      20 END

      719 (answer)

Grouping Symbols-Review

Select each answer from the choices in parentheses. Write the answer in the blank.

1) ab means a______________b. (plus, divided by, times)
2) a/b means a______________b. (plus, divided by, times)
3) a____________b means a is not equal to b. (=, <>, .)
4) Parentheses are an example of a ______________. (grouping symbol, value, variable)

Simplify each expression with the calculator. Translate each expression into BASIC. Use the computer to check answers. (Remember to type PRINT before the numerical expression.)

5) 7+(12-3)_____________________   6) (18-3)____________
7) (7x3)-(5x4)__________________   8) 10-(3+4)___________
9) 24-(63/(6+3))________________   10) 36/12+6____________
11) 15-5x2+8/4__________________
12) 20(12-8)-30/(10+5)___________

Simplify the expression on each side of the ----?----. Make a true statement by replacing the ? with the symbol = or <>. Check your answers using the computer. (If the computer prints 1, your answer is true; if your answer is false, the computer will print 0.) Remember, the numerical expressions must be in BASIC.

13) (16+3)/(8+4) ? (9+3)/(4-1)_______
14) (8-3x2) ? (8-3)x2_________
15) 3(5+2)/(3+2) ? 3x5+2______
16) 1+(16+4)/(8-2-2) ? 8+4x3____________
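An optional cross-check, not part of the original handout: Python follows the same standard order of operations as BASIC, so the worked examples and several review answers can be verified with a few print statements.

    # Order of operations: exponents, then multiplication/division,
    # then addition/subtraction, left to right.
    print(10 + 2 * 3 - 1)       # 15
    print((10 + 2) * 3 - 1)     # 35: parentheses change the meaning
    print((10 + 2) * (3 - 1))   # 24
    print(7 + (12 - 3))         # review question 5 -> 16
    print(15 - 5 * 2 + 8 / 4)   # review question 11 -> 7.0
    print(21 * 34 + 35 / 7)     # the sample BASIC program -> 719.0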
{"url":"http://mypages.iit.edu/~smile/ma8620.html","timestamp":"2014-04-18T05:55:11Z","content_type":null,"content_length":"8310","record_id":"<urn:uuid:94a23d6e-f753-45b0-8e30-44d0db051dce>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Categorical duals in Banach spaces

Near the bottom of the nlab page for Banach space I see "To be described: duals (p+q=pq)". Are $(\mathbb{R}^n)_p$ and $(\mathbb{R}^n)_q$ dual objects in the closed symmetric monoidal category of Banach spaces and linear contractions (with the tensor product described on that page)?

Edit: take n=2, p=1, q=∞. Then the question becomes whether $V \times V$ (which is $V^2$ with the $l_\infty$ norm) is isomorphic to $(\mathbb{R}^2)_\infty \otimes V$. But it seems to me that the functor $V \mapsto V \times V$ does not even commute with coproducts... is that right?

Tags: fa.functional-analysis, ct.category-theory

Comments:

[I deleted a somewhat rambling and confused comment.] I was wondering if that note was only referring to the fact that (ℝ^n)_p and (ℝ^n)_q are isometrically isomorphically each other's dual spaces. There isn't a strict adherence on that page to the category with linear contractions; for instance, it is stated without specifying the category that (ℝ^n)_p and (ℝ^n)_q are isomorphic. – Jonas Meyer Jan 5 '10 at 9:32

+1 for making me think about duals in category theory and analysis. – Andrew Stacey Jan 5 '10 at 11:31

@Jonas: yeah, that had me confused for a while too. – Reid Barton Jan 5 '10 at 15:03

While the question is good, there are several senses of "dual" used in category theory. We (authors of the nLab article) meant a pair of Banach spaces V, W equipped with an isometric isomorphism W --> hom(V, k) [where k = R or C as appropriate], where V is reflexive, so that the transpose V --> hom(W, k) is also an isometric isomorphism. I think this is a standard functional analysis sense of "V and W are dual to one another as Banach spaces", and it's also one of the meanings of "dual" used for smc cats, even if it's not the "compact closed" sense of the question above. – Todd Trimble Jan 5 '10 at 23:06

Thanks, Todd, for the clarification. (My main motivation in asking this question was actually to use Ban as a test case for a statement about categories with duals in the "compact closed" sense. I wasn't trying to suggest that there aren't other useful notions of "dual", or doubting the statements on the nlab page, although I can see how it could read that way.) – Reid Barton Jan 5 '10

1 Answer
My answer is still not as clear as it should be, because due to a sluggish and temperamental internet connection I'm having trouble looking up just what the axioms for categorical duals in a SMC are. But if I recall correctly the natural map from $I \to V \otimes V^\*$ should be given by multiplying a scalar by the vector $e_1\otimes e_1 + e_2\otimes e_2$ where $e_1,e_2$ is an o.n. basis of ${\mathbb R}^2$ -- and that vector does not have norm 1 in the proj t.p. althought it does have norm 1 in the inj t.p. And the injective and projective tensor products are non-isometric in this case for general V, right? – Reid Barton Jan 5 '10 at 17:46 I think so; see the updated entry. – Yemon Choi Jan 5 '10 at 18:02 1 Here is a concrete counterexample for someone to sanity check. It seems that "the set of extreme points of the unit ball" is an isomorphism invariant of a Banach space, and that it preserves coproducts and products, at least the ones used in forming $(R^{\amalg n})^{\times 2}$ and $(R^{\times 2})^{\amalg n}$. It then sends these spaces to sets of cardinality $ (2n)^2$ and $4n$, respectively, which are distinct for $n \ge 2$. This shows that $V \mapsto V \times V$ doesn't commute with coproducts. – Reid Barton Jan 5 '10 at 18:15 (Just to spell out exactly how this is related to my original question: Asking for a dual for an object $X$ of a closed symmetric monoidal category is the same as asking for an object $X^*$ and an identification $\mathbf{Hom}(X, {-}) = X^* \otimes {-}$. In particular (using closedness again) $\mathbf{Hom}(X, {-})$ must commute with colimits. In my case $X = (\ mathbb{R}^2)_1 = \mathbb{R} \amalg \mathbb{R}$, and so $\mathbf{Hom}(X, V) = V \times V$. – Reid Barton Jan 5 '10 at 18:30 add comment Not the answer you're looking for? Browse other questions tagged fa.functional-analysis ct.category-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/10789/categorical-duals-in-banach-spaces?sort=votes","timestamp":"2014-04-20T06:20:11Z","content_type":null,"content_length":"64739","record_id":"<urn:uuid:368494e9-893a-4e87-af32-5a32ae018ac6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6270252 - Predictive temperature measurement system

The present invention relates generally to improvements in thermometers and, more particularly, to electronic thermometers for more rapidly obtaining accurate temperature measurements.

It is common practice in the medical field to determine the body temperature of a patient by means of a temperature sensitive device that not only measures the temperature but also displays that temperature. Such temperature measurements are taken routinely in hospitals and in doctors' offices. One such device is a glass bulb thermometer incorporating a heat responsive mercury column that expands and contracts adjacent a calibrated temperature scale. Typically, the glass thermometer is inserted into the patient, allowed to remain inserted for a sufficient time interval to enable the temperature of the thermometer to stabilize at the body temperature of the patient, and subsequently removed for reading by medical personnel. This time interval is usually on the order of 3 to 8 minutes.

The conventional temperature measurement procedure using a glass bulb thermometer or the like is prone to a number of significant deficiencies. Temperature measurement is rather slow and, for patients who cannot be relied upon (by virtue of age or infirmity) to properly retain the thermometer for the necessary period of insertion in the body, may necessitate the physical presence of medical personnel during the relatively long measurement cycle, thus diverting their attention from other duties. Furthermore, glass bulb thermometers are not as easy to read and, hence, measurements are more susceptible to human error, particularly when the reading is made under poor lighting conditions or when read by harried personnel.

Various attempts have been made to minimize or eliminate these deficiencies of the glass bulb thermometer by using temperature sensing probes that are designed to operate in conjunction with direct-reading electrical thermometer instrumentation. In one such approach, an electric temperature sensitive device, such as a thermistor, is mounted at the end of a probe and inserted into the patient. The change in voltage or current of the device, depending on the particular implementation, is monitored and when that output signal stabilizes, a temperature is displayed in digital format. This is commonly referred to as the "direct reading" approach and it reduces the possibility of error by misreading the measured temperature. This approach has provided a significant contribution to the technology of temperature measurement.

An inherent characteristic of electronic thermometers is that they do not instantaneously measure the temperature of the site to which they are applied. It may take a substantial period of time before the temperature sensitive device stabilizes at the temperature of the site and the temperature indicated by the thermometer is representative of the actual temperature of the body or site measured. This lag is caused by the various components of the measurement system that impede heat flow from the surface of the body or site to the temperature sensor. Some of the components are the sensor tip, the tissue of the body, and any hygienic covering applied to the sensor tip to prevent contamination between measurement subjects.
One attempt to shorten the amount of time required to obtain a temperature reading of a subject involves the use of a temperature sensitive electronic probe coupled with prediction or estimation circuitry or programming to provide a digital display of the patient's temperature before the probe has reached equilibrium with the patient. With this approach, assuming the patient's temperature is not significantly changing during the measurement period or cycle, the temperature that will prevail upon thermal stabilization of the electronic thermometer with the patient is predicted from measured temperatures and is displayed before thermal stabilization is attained. Typically, prediction of temperature is performed by monitoring the measured temperature over a period of time as well as the rate of change thereof, and processing these two variables to predict the patient's temperature.

With an electronic thermometer that operates by predicting the final, steady state temperature, an advantage is that the temperature measurement is completed before thermal stabilization is attained, thereby reducing the time required for measurement. This lessens the risk that the patient will not hold the probe in the correct position for the entire measurement period, and requires less time of the attending medical personnel. Another advantage is that because body temperature is dynamic and may significantly change during the five minute interval typically associated with traditional mercury glass thermometer measurements, a rapid determination offers more timely diagnostic information. In addition, the accuracy with which the temperature is predicted improves markedly as the processing and analysis of the data are more accurately performed. This approach has also contributed significantly to the advancement of temperature measurement technology.

Electronic thermometers using predictive-type processing and temperature determination may include a thermistor as a temperature-responsive transducer. The thermistor approaches its final steady state temperature asymptotically, with the last increments of temperature change occurring very slowly, whereas the major portion of the temperature change occurs relatively rapidly. Prior attempts have been made to monitor that initial, more rapid temperature change, extract data from that change, and predict the final temperature at which the thermistor will stabilize and, therefore, determine the actual temperature of the tissue that is contacting the thermistor long before the thermistor actually stabilizes at the tissue temperature.

A prior approach used to more rapidly predict the tissue temperature, prior to the thermistor reaching equilibrium with that tissue, is the sampling of data points of the thermistor early in its response and, from those data points, predicting a curve shape of the thermistor's response. From that curve shape, an asymptote of that curve, and thus the stabilization, or steady state, temperature, can be predicted.

To illustrate these concepts through an example of a simpler system, consider the heat transfer physics associated with two bodies of unequal temperature, one having a large thermal mass and the other having a small thermal mass, placed in contact with each other at time t=0. As time progresses, the temperature of the small thermal mass and the large thermal mass equilibrate to a temperature referred to as the stabilization temperature.
The equation describing this process is as follows:

$T(t) = T_R + (T_F - T_R)(1 - e^{-t/\tau}) = T_F - (T_F - T_R)\,e^{-t/\tau}$  (Eq. 1)

where:

T(t) is the temperature of the smaller body as a function of time,
T_R is the initial temperature of the smaller body,
T_F is the actual, steady state temperature of the system,
t is time, and
τ is the time constant of the system.

From this relationship, when T is known at two points in time, for example T_1 at time t_1 and T_2 at time t_2, the stabilization temperature T_F can be predicted through application of Equation 2 below.

$T_F = \frac{T_2 - T_1\,e^{-(t_2 - t_1)/\tau}}{1 - e^{-(t_2 - t_1)/\tau}} = \frac{T_2\,e^{t_2/\tau} - T_1\,e^{t_1/\tau}}{e^{t_2/\tau} - e^{t_1/\tau}}$  (Eq. 2)

Further, for a simple first order heat transfer system of the type described by Equation 1, it can be shown that the natural logarithm of the first time derivative of the temperature is a straight line with slope equal to −1/τ as follows:

$\ln\left(\frac{dT}{dt}\right) = K - \frac{t}{\tau}$  (Eq. 3.1)

and also:

$T_F = T(t) + \tau\,T'(t)$  (Eq. 3.2)

where:

$\tau = -\frac{T'(t)}{T''(t)}$  (Eq. 3.3)

and where:

K = a constant dependent upon T_R, T_F, and τ,
T′ = first derivative,
T″ = second derivative.

Prior techniques have attempted to apply these simple first order relationships through the use of thermistor time constants established by the thermistor manufacturer. However, these techniques have failed to recognize that the temperature response curve cannot be modeled as first order and is to a great extent affected by factors not reflected by the thermistor's time constant. When the thermometer is placed in contact with body tissue, such as a person's mouth for example, the response curve depends on the physical placement of the probe in relation to that tissue, on the heat transfer characteristics of the particular tissue, and on the hygienic cover that separates the probe from the tissue. All of these factors contribute to the heat flow characteristics of the measurement system and they are not represented in the factory-supplied time constant of the thermistor alone. Moreover, the factors described above impede the flow of heat in series and with different resistance characteristics, thus causing an overall time response behavior that is not that of a first order system.

While electronic thermometers and prior predictive techniques have advanced the art of electronic thermometry, a need still exists for an electronic thermometer that can predict a stabilization temperature at an early stage of the measurement process and thereby shorten the amount of time taken to obtain a final temperature reading. Additionally, a need exists for a thermometer having an algorithm that can be computed in readily available, relatively simple, relatively inexpensive circuitry or processors. The invention fulfills these needs and others.

Briefly and in general terms, the present invention is directed to providing an electronic thermometer and a method for determining the temperature of an object or biological subject by predicting the steady state temperature at an early stage of the measurement process. The thermometer and method of the present invention relate certain variables determined from an early portion of the temperature rise curve to the predicted steady state temperature so that the predictive process requires a reduced process of data acquisition and a shortened data processing time while yielding accurate approximations of the stabilization temperature.

Thus, briefly and in general terms, in one aspect the present invention is directed to a thermometer incorporating a temperature sensor, a processor for predicting an object's temperature based on the average value, slope, and curvature of the initial reading of the object's temperature, and a display for displaying the predicted temperature. In a more detailed aspect, the processor comprises a finite impulse response filter connected so as to sample the temperature signal a plurality of times to calculate an estimate of the temperature of the subject and provide an estimated final temperature signal, and a display connected with the processor to receive and display the estimated final temperature signal. In yet further detail, the finite impulse response filter takes a linear combination of a plurality of samples in calculating the estimate of the temperature of the subject. In another aspect, the processor adds an offset coefficient based on ambient temperature to the estimate of the temperature provided by the finite impulse response filter in providing an estimated final temperature signal.
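Returning for a moment to the idealized first order model of Equations 1 through 3 above: the following is a purely numeric sketch (all values hypothetical) showing how Equation 2 recovers the stabilization temperature from two early samples when τ is known. It illustrates only the idealized model, not the invention's algorithm, which, as noted, does not assume first order behavior.

    import math

    # hypothetical first-order system: start 70 F, steady state 98.6 F, tau = 4 s
    T_R, T_F, tau = 70.0, 98.6, 4.0

    def T(t):
        """Equation 1: exponential approach to the stabilization temperature."""
        return T_F - (T_F - T_R) * math.exp(-t / tau)

    t1, t2 = 1.0, 3.0
    T1, T2 = T(t1), T(t2)

    # Equation 2: predict T_F from the two samples
    r = math.exp(-(t2 - t1) / tau)
    print((T2 - T1 * r) / (1 - r))   # ~98.6, recovering the assumed T_F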
Thus, briefly and in general terms, in one aspect of the present invention is directed to a thermometer incorporating a temperature sensor, a processor for predicting an object's temperature based on the average value, slope, and curvature of the initial reading of the object's temperature, and a display for displaying the predicted temperature. In a more detailed aspect, the processor comprises a finite impulse response filter connected so as to sample the temperature signal a plurality of times to calculate an estimate of the temperature of the subject and provide an estimated final temperature signal and a display connected with the processor to receive and display the estimated final temperature signal. In yet further detail, the finite impulse response filter takes a linear combination of a plurality of samples in calculating the estimate of the temperature of the subject. In another aspect, the processor adds an offset coefficient based on ambient temperature to the estimate of the temperature provided by the finite impulse response filter in providing an estimated final temperature signal. In another aspect, a thermometer for determining the temperature of an object is provided and comprises a sensor that provides a time varying temperature signal in response to sensing the temperature of the object, a processor that receives the temperature signal, samples the temperature signal over a time frame, determines the average value, the first derivative, and the second derivative of the signal over the time frame, combines the average value, first derivative, and second derivative, and thus calculates an estimate of the temperature of the object. In a more detailed aspect, the processor applies a weighting factor to each of the average value, the first derivative, and the second derivative of the signal, and further adds an offset factor selected in accordance with the ambient temperature, to calculate a prediction of the temperature of the object. In further detailed aspects, the processor further comprises finite impulse response filters to calculate the average value, slope, and curvature of the temperature data. In a more detailed aspect, the processor continues to sample the signal and calculate a new prediction for the temperature of the object with each new temperature data value sampled. In another detailed aspect, the processor monitors a predetermined number of the last predicted temperatures and calculates the final predicted temperature of the object based on an average of these last predicted temperatures. In yet another detailed aspect, the processor calculates the final predicted temperature when certain selected conditions have been met. In a still further aspect, the selected conditions include a predetermined time period that must lapse after the sensor has been in contact with the object, predetermined threshold values that the first derivative and the second derivative must reach, and a maximum difference between any two of a predetermined number of the last temperature estimates that must be less than a predetermined threshold. These and other features and advantages of the present invention will become apparent from the following more detailed description, when taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the invention. FIG. 1 is a view of an electronic clinical thermometer incorporating aspects of the present invention; FIG. 
FIG. 2 is a block diagram of a system in accordance with aspects of the present invention for determining the temperature of an object before final equilibrium of the temperature sensor with the object, using the thermometer shown in FIG. 1;

FIG. 3 presents a graph of a typical thermistor response curve to a temperature that differs from, and is higher than, its own temperature;

FIG. 4 is a diagram illustrating data flow and tasks performed by the system depicted in FIG. 2 which incorporates aspects of the invention;

FIG. 5 is a diagram illustrating the Initialize System task of FIG. 4;

FIG. 6 is a diagram illustrating the Acquire and Filter Temperature Data task of FIG. 4;

FIG. 7 is a diagram illustrating the Calculate Predicted Temperature task of FIG. 4; and

FIG. 8 is a block diagram of the processor functions performed by the system in accordance with aspects of the invention as depicted in FIG. 2.

In the following description, like reference numerals will be used to refer to like or corresponding elements in the different figures of the drawings. Although temperatures are given in both Fahrenheit and Celsius, the parameters provided are used only with Fahrenheit; parameters usable for Celsius have not been provided.

Referring now to the drawings, and particularly to FIG. 1, there is shown one embodiment of an electronic thermometer incorporating novel features of the present invention. The electronic thermometer 10 contains a probe 12 for sensing the temperature of a selected part of the body, connected by conductors 14 to the main body 16 of the thermometer. As shown in FIG. 1, the probe 12 has been removed from a storage well 17 in the main body. The main body 16 of the thermometer contains the electrical components and power supply of the thermometer, and also has a display 18 for displaying temperature values and error or alarm messages. A second probe 20 is included with the thermometer and is shown in the stored position inserted in a well 19 of the main body 16. Also shown is a hygienic cover 21 for placement over a probe 12 or 20 before insertion of the probe into the patient.

Referring to FIG. 2, the block diagram generally shows major electronic components of one embodiment of a thermometer 22 in accordance with the present invention. The temperature sensor 24 provides temperature signals in response to the temperature sensed during measurement. In the case where a thermistor is used as the temperature sensor 24, these signals are analog voltages or currents representative of the resistance of the thermistor and therefore representative of the sensed temperature. They are converted into digital form for further processing by an analog-to-digital converter 26. The analog-to-digital converter 26 is connected to a processor 28 that receives the digital temperature signals and processes them to determine the temperature of the subject being measured. A timer 30 provides time signals to the processor 28 used during the processing of the temperature signals, and a memory 32 stores the temperature and time signal data so that the signal data can be analyzed at a subsequent time. The memory 32 also stores empirically-derived constants used by the processor 28 to calculate the predicted temperature. Once the signals have been processed, the processor 28 provides a signal to the display 34 to display the predicted stabilization temperature.
Activating a switch 36 enables the temperature measurement functions of the thermometer 22. This switch is preferably located within the probe storage well such that removal of the probe enables the measurement.

Referring now to FIG. 3, a graph of measured temperature 38 plotted as a function of measurement time for a measurement system is shown. Although the relationship illustrated is similar in form to that specified by Equation 1, the measurement system of the present invention does not exhibit first order heat transfer behavior and therefore the curve 38 differs from the simple exponential function of Equation 1. As discussed above, the temperature indicated by the thermistor lags the actual temperature T_F of the subject being measured. This lag is shown by line 38. It can be seen that as the measurement progresses from a start time, t_0, the temperature rapidly increases from T_R to T_1 between times t_0 and t_1. The rate of increase in the indicated temperature 38 is reduced between times t_1 and t_2, and the temperature line 38 gradually tends toward the stabilization temperature T_F asymptotically as the time increases even more. As discussed above, the present invention is directed to a system capable of analyzing the temperature data gathered during an early period of the measurement, for example, between times t_1 and t_2, and predicting the final temperature T_F.

Referring now to FIG. 4, the general functions (tasks) of an embodiment of the system in accordance with aspects of the invention are shown, along with the data that flows among them. A data flow does not imply an activation sequence; control and activation are not shown on this diagram. The subsequent flow diagrams, FIGS. 5 and 6, illustrate the sequential flow of certain key tasks. The data flows are labeled with the data that is provided to one task by another. FIG. 8 presents the temperature computation functions performed by the processor 28.

With continued reference to FIG. 4, the Initialize System task 40 is run each time the thermometer is activated. It serves to execute all logic that needs to occur only once per measurement. It activates the Acquire and Filter Temperature Data task 42, which in turn activates the Calculate Predicted Temperature task 44. Once the predicted temperature has been calculated, it is displayed by the Display Predicted Temperature task 46.

FIG. 5 provides a flow diagram for the Initialize System task 40. It is initiated when a probe is removed from the well 60 and initializes, tests, and calibrates the hardware devices 62, initializes the FIR (finite impulse response) filter coefficients 63, and resets the clock counter "t=0" 64 and the running temperature estimates T_0, T_1, T_2, T_3, and T_4 66 to equal zero. The task 40 then proceeds to the Calculate Offset Coefficient C task 68. If the probe is not out of the well in step 60, the task continues to increment the probe-in-the-well timer 61. In accordance with this step, the amount of time that the probe 12 is in the well 17 of the body 16 is monitored to determine if the probe is at ambient temperature. If the probe has not been in the well 17 for a certain time period, such as one minute, the measurement system assumes that the probe is not at ambient temperature and a previously-saved ambient temperature is used. If the probe has been in the well 17 for more than a minute, it is considered to be at ambient temperature.
If the probe has been in the well 17 for more than a minute, it is considered to be at ambient In an alternative embodiment, the Initialize System task 40 may be triggered by other events, such as a rapid rise in the temperature of the probe signifying contact with the patient, or the lapse of a preselected length of time following the removal of the probe from the well, or the activation of the switch 36 (FIG. 2). In step 68, the offset coefficient C is calculated 68 as shown in Equation 4 below: C=B 0 +(B 1 ×Ta)+(B 2 ×Ta ^2)(Eq. 4) where: T[a ]is the ambient temperature, and parameters B[0], B[1 ]and B[2 ]are constant weighting factors empirically derived through the use of well known statistical regression techniques upon a large sample of actual clinical data. The offset coefficient C is used to factor the ambient temperature into the calculation of the predicted temperature, as detailed elsewhere in the specification. The calculation of the offset factor C, as shown in Equation 4 above, thus relies on the assumption that the initial temperature reading is equal or nearly equal to the ambient temperature T[a ]in the environment where the thermometer is to be used, and the algorithm determines whether to actually measure T[a ]or to use a previously stored value. FIG. 6 provides a flow diagram for the Acquire and Filter Temperature Data task 42. The illustrated steps, once initiated, are carried out with precise timing and without interruption to ensure no loss of data and no timing inaccuracy. The processor 28, timer 30, and analog-to-digital converter 26 (FIG. 2) acquire and filter incoming temperature sensor data to remove line noise and other interference that may affect the quality of the temperature determination. Various techniques well known in the area of signal processing may be used in this process. In a preferred embodiment, with each timer interrupt 76, the system updates the clock counter 77 and acquires voltage input 78. This voltage is related to the resistance, and thus the temperature, of the thermistor at the probe tip. A typical mathematical relationship applied to this voltage converts it to a temperature value 82. The system then performs range checks 83 on the temperature for the purpose of determining if the thermometer is broken or poses a possible hazard. Thresholds are set and if the thermometer provides a reading outside these thresholds, the probe is considered to be broken. As an example, if the probe reads less than 32 degrees or greater than 500 degrees F., the probe is considered to be broken (an open circuit or short circuit for example). If the probe reads at a temperature of 115 degrees F., it is considered a hot probe and a hazard alarm is provided. Temperature data samples are stored 84 in the memory 32. The mathematical relationship between the resistance of the thermistor and the temperature takes into account the thermistor characterization data supplied by its manufacturer, along with the particular circuit details having to do with the interconnection of the thermistor to the analog-to-digital converter, and is well-known to those skilled in the art. After the temperature value is stored in memory 84, the system determines if tissue contact exists 86 by analyzing temperature data for a temperature above 94 degrees F. Other approaches may be used to determine tissue contact. If contact does not exist, the system waits for the next timer interrupt 76 and repeats the above process until tissue contact is determined 86. 
At the time that tissue contact is detected, depending on whether enough samples have been acquired or not 87, the system then either awaits the next interrupt 76 to acquire another data sample 78 (if enough samples have not been acquired) or proceeds to calculate the estimate of steady state temperature T_0. In a preferred embodiment the timer runs at 0.1 second intervals and the system acquires twenty-one samples over a period of time of approximately two seconds prior to commencing to calculate the current temperature estimate T_0. This number of samples and sampling rate are currently known to provide the best compromise between temperature prediction accuracy and acceptable measurement time. During this period of time, the temperature signals provided by the probe rise along a curve approaching the temperature of the patient asymptotically, and the current temperature estimate T_0 is calculated by Equation 5 shown below:

$T_0 = A_1 T + A_2 T' + A_3 T'' + C$  (Eq. 5)

where:

T is the average value of the temperature based upon the twenty-one data samples stored in memory,
T′ is the first derivative, or slope, of the temperature curve described by the twenty-one samples stored in memory,
T″ is the second derivative, or curvature, of the temperature curve described by the twenty-one samples stored in memory,
parameters A_1, A_2 and A_3 are empirically derived constant weighting factors, and
C is the offset coefficient dependent on ambient temperature discussed previously.

Ideally, the number of actual clinical temperature data samples used in deriving parameters A_1, A_2, A_3, B_0, B_1, and B_2 should be as large as possible and the measured temperatures should be uniformly spread over the entire range of temperatures of potential interest, i.e. the entire range of temperatures that may be encountered by a thermometer according to the invention. However, by necessity the number of temperature data samples obtained was relatively limited and, furthermore, the majority of the temperatures measured were in the normal body temperature range. In particular, the clinical data employed consisted of approximately 240 temperature data samples that ranged from 95.5 to 104 degrees F. (35.3 to 40 degrees C.) measured at an ambient temperature ranging from 60 to 92 degrees F. (16 to 33 degrees C.). For this reason, standard regression analysis applied to these data samples produced parameters that tended to predict too low at high actual temperatures and predict too high at low actual temperatures, with a predicted temperature error that exhibited a trend, or relationship, with the stabilization temperature T_F, which is essentially equal to the actual body temperature of the patient.

Because the difference between the measured temperature T(t) and the actual temperature T_F tends to be a function of the slope T′(t) and the curvature T″(t) of the measured temperature rather than of the measured temperature T(t) itself, parameter A_1 was artificially constrained to equal 1.0 in order to eliminate the trend exhibited by the error in the computed temperature T_0. Thus, the remaining five parameters A_2, A_3, B_0, B_1, and B_2 were computed by regression analysis of the clinical data to yield the most accurate predictions, in terms of the lowest mean squared prediction error, across the measured temperatures, and in a preferred embodiment their values are:

A_1 = 1.0       B_0 = 15.8877
A_2 = 7.6136    B_1 = −0.3605
A_3 = 8.87      B_2 = 0.002123    (Eq. 5.1)
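A minimal sketch of Equation 5 with the Eq. 5.1 constants follows; the helper name and the sample inputs are illustrative assumptions, not values from the patent:

    A1, A2, A3 = 1.0, 7.6136, 8.87   # Eq. 5.1 weighting factors

    def predict_T0(T_avg, T_slope, T_curv, C):
        """Equation 5: current estimate of the steady state temperature."""
        return A1 * T_avg + A2 * T_slope + A3 * T_curv + C

    # hypothetical inputs: mean 97.0 F, slope 0.2 F/s, curvature -0.02 F/s^2, C = 0.84
    print(predict_T0(97.0, 0.2, -0.02, 0.84))   # ~99.19 F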
With continued reference to FIG. 6, once it is determined that tissue contact has been established 86 and a preselected number of samples (e.g. five) have been acquired 87 since the last temperature estimate was calculated, the system proceeds to compute the average value T of the temperature 88 and store it in memory 90, compute the first derivative T′ of the temperature 92 and store it in memory 94, compute the second derivative T″ of the temperature 96 and store it in memory 98, and then finally compute the current temperature estimate T_0 by Equation 5 defined above.

Referring now to FIG. 7, a flow diagram for the Calculate Predicted Temperature task 44 is shown. This task is performed immediately after the Acquire and Filter Temperature Data task 42 which, as shown in FIG. 4, provides filtered and processed data to the Calculate Predicted Temperature task 44. The first steps entail calculating a prediction 100 and updating the running temperature estimates 110, which are the last four predicted temperatures T_1, T_2, T_3, and T_4 in this case, although other numbers of predicted temperatures may be used in other embodiments. The temperature prediction process of the present invention is performed continuously by the system once the initial twenty-one data points have been acquired (see the discussion elsewhere in the specification regarding the use of finite impulse response filters), and a new temperature prediction is computed with every additional five (in this embodiment) data points subsequently acquired. Therefore, a new predicted temperature (i.e. the current temperature estimate T_0) is calculated with every five clock cycles, and thus the oldest of the running temperature estimates T_4 (or T_N, depending on how many temperature estimates N are employed) must be discarded in favor of T_0 by shifting all values over by one from T_1 to T_4 and finally updating T_1 with the value of T_0.

The purpose of tracking the last four (or N) temperature estimates is to better determine the first instant when a sufficiently accurate temperature prediction can be posted to the display, by determining when an optimal balance between prediction accuracy and elapsed time has been reached. Determining such an optimal balance must take into account various parameters, such as a predetermined minimum waiting time; the number of temperature data samples used to calculate the temperature mean T(t), slope T′(t), and curvature T″(t); a predetermined maximum slope T′(t); a predetermined maximum and minimum curvature T″(t); a predetermined maximum deviation among the running temperature estimates; and a predetermined number of prior predictions to consider for averaging.

As shown in FIG. 7, in the preferred embodiment of the present invention four such conditions were chosen for determining when the optimal balance between prediction accuracy and elapsed time has been reached. Namely, a minimum waiting time t, a maximum slope T′(t), a maximum and minimum curvature T″(t), and a maximum deviation among the running temperature estimates are determined. Each of these conditions must be met before the Display Predicted Temperature task 46 is activated. Thus, as shown in FIG. 7, in the preferred embodiment at least 5 seconds must have passed 112 since the probe was first activated and placed in contact with the patient's body, to ensure that transients have died out.
Next, the first derivative T′ must have a positive value no larger than 0.25 degrees F. (0.14 degrees C.) per second 114, to ensure that the temperature is not rising too quickly and that it is therefore "leveling off" and approaching steady state. Third, the second derivative T″ must also be between minus and plus 0.05 degrees F. (0.028 degrees C.) per second per second 116, again to ensure proper convergence toward the steady state temperature. Fourth, the difference between any two of the running temperature estimates T_1, T_2, T_3, and T_4 . . . T_N must be no greater than 0.3 degrees F. (0.17 degrees C.) 118 in the present embodiment, to ensure the accuracy of the final prediction. If any of these conditions is not met, the system initiates the Acquire and Filter Temperature Data task 42 to acquire the next temperature data sample point and recalculate the current temperature estimate T_0. The preferred values for these four conditions, as presented above, were determined empirically to provide an optimal compromise between the time necessary to compute the prediction and its accuracy.

When all four conditions have been met, the system proceeds to calculate the final temperature estimate T_f 120 by averaging the four running temperature estimates T_1, T_2, T_3, and T_4. T_f is the temperature predicted by the system, and as shown in FIG. 4 the Display Predicted Temperature task 46 can now be activated to display T_f upon the display 18 of the thermometer 10 (FIG. 1). By implementing these multiple conditions, the system of the invention substantially ensures the improved accuracy of the final result by evaluating the uniformity of the temperature data obtained over time and identifying the earliest instant at which the predicted temperature offers a sufficient degree of certainty to be displayed and thus relied upon by the user of the thermometer.
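One possible rendering of the four FIG. 7 posting conditions in code is sketched below, under the preferred-embodiment Fahrenheit thresholds; the function names are illustrative, not from the patent:

    def ready_to_post(t_elapsed, slope, curv, running_estimates):
        """The four posting conditions of FIG. 7 (preferred-embodiment values)."""
        if len(running_estimates) < 4:
            return False
        last4 = running_estimates[-4:]              # running estimates T1..T4
        return (t_elapsed >= 5.0                    # seconds since tissue contact
                and 0.0 < slope <= 0.25             # degrees F per second
                and -0.05 <= curv <= 0.05           # degrees F per second^2
                and max(last4) - min(last4) <= 0.3) # spread of running estimates

    def final_Tf(running_estimates):
        """When all conditions hold, Tf is the average of the four estimates."""
        last4 = running_estimates[-4:]
        return sum(last4) / 4.0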
Thus, through the use of the proper coefficients, FIR filters can be programmed to extract the average value T, the slope T′, and the curvature T″ of the sample data curve from the temperature data sample points acquired and stored by the system. These coefficients can be readily derived for an input sequence of twenty-one data points using known mathematical methods and, for a system of twenty-one samples (n = −10, −9, . . . 10, where n = 10 is the most recent sample) acquired at 0.1 second intervals, the coefficients for the FIR filters are as shown in the following equations:

$$T = \sum_{n=-10}^{10} \frac{1}{21}\, x(n) \qquad \text{(Eq. 6.2)}$$

$$T' = \sum_{n=-10}^{10} \frac{n}{0.1 \times 770}\, x(n) \qquad \text{(Eq. 6.3)}$$

$$T'' = \sum_{n=-10}^{10} \frac{2\left(n^2 - \frac{770}{21}\right)}{0.1^2\left(50666 - \frac{770^2}{21}\right)}\, x(n) \qquad \text{(Eq. 6.4)}$$

where:

$$\sum_{n=-10}^{10} n^2 = 770, \qquad \sum_{n=-10}^{10} n^4 = 50666 \qquad \text{(Eq. 6.5)}$$

With reference to FIG. 8, a block diagram depicting the operation of the processor, including the FIR filters, is shown. The last temperature data sample acquired, x(n), is provided by the analog/digital converter (not shown) to the processor 28, where it is simultaneously fed to FIR circuits FIR[1] 202, FIR[2] 204, and FIR[3] 206, which are configured to calculate the average value T, slope T′, and curvature T″, respectively, of the last twenty-one temperature data points provided by the analog/digital converter, including x(n). The average value T, slope T′, and curvature T″ are each multiplied by the respective weighting factors A[1] 212, A[2] 214, and A[3] 216, as per Equation 5 above, and then summed together 218. Finally, the offset factor C, which as detailed previously is a function of the ambient temperature T[a], is added 220 to the sum of the weighted average value T, slope T′, and curvature T″ to calculate the current temperature estimate T[0]. It must be noted that the curvature T″(t) may also be computed by calculating the slope of the slope T′(t), by concatenating two slope-calculating filters FIR[2]. Such an approach, however, was not selected for use in the preferred embodiment of the invention because it would require calculating the slope T′(t) prior to, rather than simultaneously with, the curvature T″(t), and would thus delay computation of the final temperature. By employing FIR filters to derive the average value, slope, and curvature of the temperature data, the system of the invention can utilize a rather sophisticated algorithm for predicting temperature with readily available, relatively inexpensive mathematical processors, such as commonly available eight-bit processors. As one example, the eight-bit processor with part no. μPD78064 from NEC may be used. Lastly, the algorithm of the invention was fine-tuned by applying it to actual data to empirically derive weighting factors that provide the most accurate results over the widest range of final, steady-state temperatures. In an alternative embodiment of the present invention, a single FIR filter may be programmed to extract the weighted sum of the temperature mean T(t), slope T′(t), and curvature T″(t).
However, the use of a single FIR filter would not allow the individual extraction of the mean T(t), slope T′(t), and curvature T″(t) from the sampled data, and therefore these parameters could not be individually monitored to determine when the optimal balance between prediction accuracy and elapsed time has been reached, as previously discussed. Furthermore, the use of individual FIR filters allows for the individual adjustment of each of these parameters, such as, for example, adjusting the size of each FIR filter to obtain a particular amount of smoothing for each parameter. While one form of the invention has been illustrated and described, it will be apparent that further modifications and improvements may additionally be made to the device and method disclosed herein without departing from the scope of the invention. Accordingly, it is not intended that the invention be limited, except as by the appended claims.
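Pulling the pieces of the specification together, the following Python sketch (an illustration only, not the patent's actual firmware) implements the 21-point FIR coefficients of Eqs. 6.2-6.4 and the four display conditions of FIG. 7. The weighting factors A1-A3 and the ambient-dependent offset C belong to Equation 5, which is not reproduced in this excerpt, so placeholder values are used for them.

import numpy as np

n = np.arange(-10, 11)            # sample indices; n = 10 is the newest
dt = 0.1                          # sampling interval, seconds
s2 = float((n ** 2).sum())        # 770   (Eq. 6.5)
s4 = float((n ** 4).sum())        # 50666 (Eq. 6.5)

h_mean = np.full(21, 1.0 / 21.0)                                         # Eq. 6.2
h_slope = n / (dt * s2)                                                  # Eq. 6.3
h_curv = 2.0 * (n ** 2 - s2 / 21.0) / (dt ** 2 * (s4 - s2 ** 2 / 21.0))  # Eq. 6.4

# Placeholder weighting factors and offset; the real values come from
# Equation 5 and were derived empirically, per the specification.
A1, A2, A3, C = 1.0, 1.0, 1.0, 0.0

def current_estimate(x):
    """Return (T0, T', T'') from the last 21 samples x, ordered oldest to newest."""
    x = np.asarray(x, dtype=float)
    T, dT, d2T = h_mean @ x, h_slope @ x, h_curv @ x
    return A1 * T + A2 * dT + A3 * d2T + C, dT, d2T

def ready_to_display(elapsed_s, dT, d2T, running):
    """The four empirically chosen conditions of FIG. 7 (degrees F units)."""
    return (elapsed_s >= 5.0                         # transients have died out
            and 0.0 < dT <= 0.25                     # slope check, F/s
            and -0.05 <= d2T <= 0.05                 # curvature check, F/s^2
            and max(running) - min(running) <= 0.3)  # spread of running estimates

# When all four conditions hold, the displayed temperature is the mean of
# the running estimates: T_f = sum(running) / len(running)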
{"url":"http://www.google.es/patents/US6270252?dq=flatulence","timestamp":"2014-04-19T14:56:00Z","content_type":null,"content_length":"136504","record_id":"<urn:uuid:2bf11710-dc38-4727-bd16-d5e7b2e1c19a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Bachelor of engineering degree curriculum

ENT 153/4 PRINCIPLES OF THERMOFLUIDS AND MATERIALS
At the end of the course, students are expected to understand basic thermodynamics, fluid mechanics and engineering materials concepts, and will be able to make analyses and calculations using thermofluids and materials knowledge.
Course Syllabus:
Introduction to Material Science: background; the importance of materials science and engineering; material types.
Mechanical Characteristics of Metals: introduction; concepts of stress and strain; stress-strain behavior; anelasticity; elastic properties of metals; tensile properties; true stress and strain; deformation under compression, torsion and shear; hardness; transition characteristics of materials; design factors and safety.
Principles of Fluid Mechanics: definition of a fluid; analysis methods; dimensions and units; characteristics of fluids and the continuum approach; stress and velocity fields; viscosity; classification and study of fluid flow; basic equations of static fluids; pressure variation in a static fluid; hydrostatic forces on plane and curved surfaces; buoyancy and stability.
Thermodynamic Concepts: thermodynamics and heat; dimensions and units; closed and open systems; types of energy; characteristics of a system in equilibrium; processes and cycles; pressure, temperature and the Zeroth Law of Thermodynamics.
First Law of Thermodynamics: heat transfer; work; characteristics of mechanical work; the first law of thermodynamics; specific heat; internal energy, enthalpy and specific heat of gases, solids and liquids.
Second Law of Thermodynamics: conservation of heat and energy; heat engines; refrigerators and heat pumps; perpetual motion machines; reversible and irreversible processes.
Laboratory: Tensile test on given material samples. Hardness test. Impact test (Charpy & Izod). Non-destructive testing (NDT). Fluid pressure experiment. Heat conduction experiment. Heat engine experiment. Refrigeration experiment.
References: Cengel, Y.A. and Turner, R.H. (2001). Fundamentals of Thermal-Fluid Sciences. 1st Ed. McGraw Hill. (text) Callister Jr., W.D. (2000). Materials Science and Engineering: An Introduction. 5th Ed. John Wiley. Bacon, D.H. and Stephens, R.C. (2000). Mechanical Technology. 3rd Ed. Crowe, C.T., Elger, D.F. and Roberson, J.A. (2001). Engineering Fluid Mechanics. 7th Ed. John Wiley. Smith, W.F. (2000). Principles of Material Science and Engineering. 2nd Ed. McGraw Hill. Fox, W. and McDonald, A.T. (1998). Introduction to Fluid Mechanics. Cengel, Y.A. (1997). Introduction to Thermodynamics and Heat Transfer. ISE Ed. McGraw-Hill.

ENT 161/4 ELECTRIC CIRCUITS
The purpose of this course is to introduce students to: DC and AC electric circuit systems; AC concepts such as inductance, capacitance and R-L-C circuits; impedance; three-phase systems; electric circuit analysis using the Laplace transform; the concept of frequency response for AC circuits; analysis of electric circuits using Fourier series; and the concept of two-port circuits.
Course Syllabus:
Circuit Elements and Variables: SI units; voltage and current; power; energy; basic circuit elements (passive and active); voltage and current sources; Ohm's law; Kirchhoff's laws; circuit models; circuits with dependent sources.
Inductors and Capacitors: introduction to the inductor; voltage, current, power and energy relationships; the capacitor; series-parallel combinations of inductors and capacitors.
Resistance Circuits: series/parallel resistors; voltage and current divider circuits; measurement of voltage and current; the Wheatstone bridge; equivalent Delta-Wye (Pi-Tee) circuits.
Circuit Analysis Methods: the node-voltage method, including dependent sources and special cases; introduction to the mesh-current method, including dependent sources and special cases; source transformation; Thevenin and Norton equivalent circuits; maximum power transfer; superposition.
Mutual Inductance: introduction to self-inductance; concepts of mutual inductance; polarity of mutually induced voltage; energy calculations; linear and ideal transformers; magnetically coupled coils in equivalent circuits; the ideal transformer in equivalent circuits.
First-Order Response of RL and RC Circuits: natural response of RL and RC circuits; step response of RL and RC circuits; general solution of the natural and step responses; sequential switching; introduction to the natural and step responses of RLC circuits.
Steady-State Sinusoidal Analysis: sinusoidal sources; sinusoidal response; the phasor concept; passive circuit elements in the frequency domain; impedance and reactance; Kirchhoff's laws in the frequency domain; circuit analysis techniques in the frequency domain.
Frequency Response of AC Circuits: frequency response (magnitude and phase plots, pass band, stop band); cutoff frequency; typical filter types; low-pass filters with RL and RC circuits; high-pass filters with RL and RC circuits; RLC band-pass filters; frequency response using Bode diagrams.
Steady-State Sinusoidal Power Calculations: instantaneous power; average and reactive power; power calculations and RMS values; complex power and the power triangle; maximum power transfer in terms of impedance.
Power Circuits and Systems: single- and two-phase systems; balanced three-phase source voltages; Y-Delta circuit analysis; power calculations in balanced three-phase circuits; average power measurement in three-phase circuits.
Laboratory: Introduction to lab equipment. Kirchhoff's laws. Series and parallel circuits. Norton and Thevenin theorems. Inductive reactance. RC and RL series circuits. RLC circuits. Sinusoidal response of RC series circuits. Impedance of RLC series circuits. Steady-state sinusoidal analysis. Balanced three-phase circuits.
References: Nilsson and Riedel. (1996). Electric Circuits. 5th Ed. Addison Wesley, Reading, Massachusetts. Dorf and Svoboda. (1996). Introduction to Electric Circuits. 3rd Ed. John Wiley & Sons.

ENT 162/4 ANALOG ELECTRONICS
The objective of this course is to give students basic knowledge in the field of analog electronics. Students will be exposed to amplifier design based on the bipolar junction transistor (BJT) and the FET, for single-stage and multistage amplifiers; power amplifier design; in-depth frequency response analysis; and special electronic devices such as the Shockley diode, the silicon-controlled switch (SCS), the DIAC and TRIAC, the unijunction transistor (UJT), the light-activated SCR (LASCR) and optical couplers. Students will also learn about the operation and functions of op-amps, basic design aspects and applications. In summary, this course is designed to introduce the basics of analog electronics through both theory and practice.
Course Syllabus:
Basic Introduction to Electronic Devices: semiconductor devices and their operational characteristics; semiconductor materials and p-n junctions; diodes and applications; the bipolar junction transistor (BJT); BJT biasing; FET transistors and biasing; double-base devices.
Small-Signal Transistor Amplifiers: small-signal operation; transistor AC equivalent circuits; the common-emitter amplifier configuration; the common-collector configuration; the common-base configuration; the approximate hybrid equivalent circuit; the complete hybrid circuit model.
Small-Signal FET Amplifiers: introduction to the FET small-signal model; the FET fixed-bias configuration; the FET self-bias configuration; the voltage-divider configuration; the common-drain configuration; the common-gate configuration.
Large-Signal Amplifiers: introduction to the types of amplifiers; Class A amplifiers; Class B amplifier operation; Class B amplifier circuits; amplifier distortion; Class C and D amplifiers; power transistors and heat sinks.
Frequency Response: introduction to basic concepts; Miller's theorem and decibels; low-frequency amplifier response; high-frequency amplifier response; total amplifier frequency response; frequency response measurement.
Thyristors and Special Devices: introduction to the Shockley diode; the silicon-controlled rectifier (SCR) and its applications; the silicon-controlled switch (SCS); the DIAC and TRIAC; the unijunction transistor (UJT); the light-activated SCR (LASCR); optical couplers.
Operational Amplifiers (Op-Amps): op-amp operation; differential and common-mode amplifiers; op-amp parameters; op-amp basics; practical op-amp circuits; op-amp datasheets.
Laboratory: 1. Introduction to diodes 2. The diode as a rectifier 3. Current and voltage characteristics of the BJT 4. Common-collector amplifier 5. Common-base amplifier 6. Common-emitter amplifier 7. Class A power amplifier 8. Class B push-pull amplifier 9. Controlled rectifier (SCR) 10. Op-amp comparator
References: Boylestad, R.L. and Nashelsky, L. (1999). Electronic Devices and Circuit Theory. 7th Ed. Prentice Hall. Floyd, T. (1997). Electronic Devices. 6th Ed. Prentice Hall.

ENT 163/4 FUNDAMENTALS OF ELECTRICAL ENGINEERING
The main objective of this course is to build basic knowledge of the theory and principles of electrical technology, to introduce students to the electrical and electromechanical devices used in industry, and to train students in basic electrical wiring and installation skills.
Course Syllabus:
Introduction to Electric Circuits: electron theory; electrical sources; resistance and the factors that influence it; types of electrical circuits; the relationship between voltage, current and resistance; electrical power; electrical energy; characteristics of series and parallel circuits; Ohm's law; Kirchhoff's laws; Thevenin's theorem and Norton's theorem.
Inductors and Capacitors: basic principles of the inductor and of the capacitor.
Magnetism and Electromagnetism: basic principles and characteristics of magnetism; basic principles of electromagnetism; factors influencing magnetic field strength; electromagnetic induction; magnetic circuits for electrical machines; field excitation by electromagnets and permanent magnets.
Introduction to Alternating Current (AC) Circuits: basic principles of AC circuits.
Transformers: principles of the transformer; construction and design; operating efficiency; efficiency of three-phase transformer operation; parallel transformer operation.
Three-Phase Systems: basic principles of the three-phase system; star and delta connections; applications.
Direct Current (DC) Electrical Machines: DC generators; DC machine construction; characteristics of DC motors; losses in DC motors; efficiency of DC motors.
Alternating Current (AC) Electrical Machines: AC generators; single-phase AC motors; three-phase AC motors; types of starters; the relationship between torque and speed; applications; motor speed control.
Electrical Safety: circuit disconnectors; residual current devices; contactors; relays; fuses; earthing; insulation; electrical wiring and installation rules.
Laboratory: Introduction to lab instruments and basic measurements. Kirchhoff's laws. Series and parallel circuits: voltage and current divider rules. Thevenin's and Norton's theorems. Single-phase transformer. Direct current (DC) series motor.
References: 1. Alexander, C.K., Sadiku, M.N.O. (2004). Fundamentals of Electric Circuits. 2nd Ed. McGraw Hill. 2. Nilsson, J.W. and Riedel, S.A. (2004). Electric Circuits. 6th Ed. Prentice Hall. 3. Naidu, M.S. Introduction to Electrical Engineering. 4. Bruce, C.A. Electrical Engineering: Concepts and Applications. 5. Hyatt, W.H. Engineering Electromagnetics. 6. Rajput, R.K. (2003). Electrical Machines. Laxmi Pub. 7. Wildi, T. (2002). Electrical Machines, Drives and Power Systems. Prentice Hall. 8. Bhattacharya, S.K. (1998). Electrical Machines. McGraw-Hill. 9. Sen, P.C. (1997). Principles of Electric Machines and Power Electronics. 2nd Ed. John Wiley & Sons.

ENT 164/4 SENSORS & MEASUREMENT
Introduction to measurement systems; basic measurement circuits; resistance-based transducers; magnetic transducers; capacitance-based transducers; self-generating transducers; electrochemical transducers; semiconductor transducers; mechanical transducers for flow, pressure, force and weight measurement; interfacing sensors and transducers with computers; and data acquisition.
Course Syllabus:
Introduction to Measurement Systems: fundamental terminology; elements of a measurement system; operational amplifiers; inverting, non-inverting and differential amplifiers; feedback capacitors; the Wheatstone bridge.
Resistance-Based Transducers and Sensors and Their Measurement: potentiometers; resistance thermometers; thermistors; strain gauges; examples of measurement applications.
Magnetic Transducers and Sensors and Their Measurement: the linear variable differential transformer (LVDT): specification, circuits, applications; variable-reluctance transducer circuits; applications of magnetic transducer measurement.
Capacitance-Based Transducers and Sensors and Their Measurement: fundamentals of capacitance; capacitance measurement circuits; applications of capacitive transducer measurement.
Self-Generating Transducers and Sensors and Their Measurement:
Thermocouples: thermocouple basics; types of thermocouples; applications of thermocouple measurement. Piezoelectrics: piezoelectric basics; types of piezoelectric devices; applications of piezoelectric measurement.
Electrochemical Transducers and Sensors and Their Measurement: potentiometric sensors; amperometric sensors; other electrochemical sensors; conductivity measurement; pH measurement; biosensor basics and biosensor applications.
Semiconductor Transducers and Sensors and Their Measurement: Hall-effect sensors; photodiodes; ion-sensitive MOSFET devices (ISFETs).
Mechanical Transducers and Sensors: flow, pressure, force and weight measurements.
Interfacing Sensors and Transducers with Computers, and Data Acquisition: analog-to-digital converters; computer networks; programming techniques for data acquisition; time-division multiplexers; typical data acquisition systems.
Laboratory: 1. Temperature measurement practical with a Wheatstone bridge circuit and a thermistor. 2. Linear variable differential transformer (LVDT) practical. 3. Thermocouple circuit practical. 4. Piezoelectric circuit practical. 5. Amperometric sensor practical. 6. Hall-effect sensor practical. 7. Pressure measurement practical using strain gauges. 8. Data acquisition practical.
References: 1. Doeblin, E.O. (2004). Measurement Systems: Application and Design. McGraw-Hill. 2. Sinclair, I. (2001). Sensors and Transducers. 3rd Edition. Newnes. 3. Holman, J.P. (2001). Experimental Methods for Engineers. 7th Edition. McGraw-Hill. 4. Harsanyi, G. (2000). Sensors in Biomedical Applications. Technomic Pub. 5. Usher, M.J. (1996). Sensors and Transducers. MacMillan. 6. Bell, D.A. (1994). Electronic Instrumentation and Measurements. 2nd Edition. Prentice Hall. 7. Beckwith, T.G., Marangoni, R.D. and Lienhard, J.H. (1993). Mechanical Measurements. 5th Edition. Prentice Hall. 8. Trietly, H.L. (1986). Transducers in Mechanical and Electronic Design. Marcel Decker.

ENT 165/4 INSTRUMENTATION
The main objective of the course is to introduce electronic instrumentation systems to students so that they are capable of making accurate measurements of electrical and mechanical quantities. Students are also given analytical and experimental exposure to instrumentation and an introduction to the measurement devices widely used in industry.
Course Syllabus:
Measurement and Error Analysis: definitions; accuracy and precision; significant digits; statistical analysis; probable error; limiting error.
Analog and Digital Equipment: multimeters (voltmeter, ammeter, ohmmeter); oscilloscopes; power supplies.
AC and DC Bridge Circuits: introduction; types of bridge circuits; the Wheatstone bridge circuit; the H-bridge circuit; applications.
Oscilloscopes: introduction; the cathode ray tube; cathode ray tube circuits; deflection systems; oscilloscope transducers; measurement with the oscilloscope; special-purpose oscilloscopes.
Signal Generation and Analysis: sine wave generators; frequency-synthesized signal generators; audio-frequency signal generators; analog and digital noise generators; wave analysis; distortion and spectrum analysis.
Analog Data Acquisition Systems: introduction; input signal conditioning; single-channel data acquisition systems; multi-channel data acquisition systems; data converters; A/D and D/A converters; input and output devices; analog recorders; multiplexed digital I/O; sample-and-hold circuits.
Sensors and Transducers: sensor classification; passive and active sensors; sensor characteristics.
Laboratory: 1. Introduction to types of errors 2. Introduction to analog and digital measurement 3. Construction of bridge circuits 4. Applications of ADCs and DACs 5. Introduction to sensors
References: 1. Figliola, R.S., Beasley, D.S. (1995).
Theory and Design for Mechanical Measurements. 2nd Edition. Wiley and Sons. 2. Dally, J.W., Riley, W.F., McConnell, K.G. (1993). Instrumentation for Engineering Measurements. 2nd Edition. J. Wiley and Sons. 3. Beckwith, T.G., Marangoni, R.D. (1990). Mechanical Measurements. Addison-Wesley. 4. Tse, F.S., Morse, I.E. (1989). Measurement and Instrumentation in Engineering. Marcel Dekker.

ENT 211/4 THERMOFLUID
Students are given ample exposure to thermodynamics and fluid mechanics. At the end of the course, students are able to relate these subjects to biomedical engineering and to apply thermofluids in solving problems in biomedical engineering.
Course Syllabus:
Thermodynamics: introduction to engineering thermodynamics; basic concepts and definitions; the first law of thermodynamics; the second law of thermodynamics; pure substances; reversibility; power cycles; ideal gases; properties of mixtures; thermodynamic cycles.
Fluid Mechanics: basic concepts; pressure measurement; the steady-flow energy equation and the Bernoulli equation; flow rate measurement; the momentum equation; flow in pipes; dimensional analysis and similarity; laminar and turbulent flow; principles of fluid machines; reciprocating pumps; rotodynamic pumps.
Laboratory: Heat flow. Insulation experiment. Pressure measurement. Flow-in-pipes experiment. Pump experiment. Ideal gas equation.
References: 1. Massoud, M. (2005). Engineering Thermofluids: Thermodynamics, Fluid Mechanics, and Heat Transfer. 1st Ed. Springer. 2. Cengel, Y.A., Boles, M.A. (2001). Thermodynamics: An Engineering Approach. 4th Ed. McGraw Hill. 3. Marquand, C. (2000). Thermofluids: An Integrated Approach to Thermodynamics and Fluid Mechanics Principles. John Wiley. 4. Sherwin, K., Horsely, M. (1999). Thermofluids. Nelson Thornes. 5. Kannapa, I. (1998). Applied Thermofluids. Prentice Hall.

ENT 212/4 BIOMEDICAL SIGNALS AND SYSTEMS
At the end of the course, students are able to understand different types of continuous and discrete signals. They are also able to identify linear systems and Fourier series and transforms, and to design systems and the filters involved.
Course Syllabus:
Signals: discrete-time and continuous-time signals; sinusoidal and exponential signals; impulse response and the unit step function; characterization of basic systems.
Linear Time-Invariant Systems: LTI systems and the convolution sum; characterization of LTI systems; continuous-time LTI systems and the convolution integral; differential equations; causality of LTI systems.
Continuous-Time Fourier Analysis: Fourier series for periodic continuous-time signals; characterization of the continuous-time Fourier series; Fourier series and LTI systems; representation of aperiodic signals; the continuous-time Fourier transform; characterization of the continuous-time Fourier transform; systems described by linear constant-coefficient differential equations.
Discrete-Time Fourier Analysis: the discrete-time Fourier transform; characterization of the discrete-time Fourier transform; system identification for discrete signals.
The Z-Transform: the z-transform and the inverse z-transform.
Laboratory: Introduction to signals. Differential equations and state variables. Frequency response of linear time-invariant systems. Convergence of Fourier representations of signals. Frequency response of systems and signal analysis in the frequency domain.
References: 1. Roberts, M.J. (2003). Signals and Systems: Analysis of Signals Through Linear Systems. McGraw-Hill. 2. Haykin, S., Van Veen, B. (2002). Signals and Systems. 2nd Ed. Wiley. 3. Oppenheim, A.V. (1996). Signals and Systems. 2nd Ed. Prentice Hall.
The objective here is to introduce students to the medical instruments used in hospitals and in the medical industry. By the end of the semester, students are expected to demonstrate a clear understanding of various medical instrumentation principles and the ability to design basic biomedical electronic circuits.
Course Syllabus:
Basic Concepts in Medical Instrumentation: terminology; principles of instrumentation; PC-based instrumentation; microcontroller-based instrumentation; electronically controlled instruments; electronically powered instruments; motor controllers.
Biopotential Amplifiers and Signal Processors in Medical Instrumentation: biopotential signals; biopotential amplifiers; instrumentation amplifier design; bioelectric amplifier design; active filtering; digital filtering; image processing and data reduction techniques.
Physiological Measurement: measurement of blood pressure and sounds; measurement of blood volume and flow; measurement of the respiratory system.
Sensors: electrodes; the electrode-skin interface; resistance sensors; bridge circuits; inductive sensors; capacitive sensors; piezoelectric sensors.
Equipment: ECG, EEG, defibrillators, pacemakers, respiratory assistance equipment, ultrasonic equipment, X-ray, CT scan.
Laboratory: Introduction to medical instrumentation. Design of medical sensors. Application of instrumentation amplifiers in biosignal detection. Design of biopotential filters. Application of bridge rectifiers in DC supply design. Fundamentals of ECG. Principles of the hemoglobin meter.
References: 1. Webster, J.G. (2003). Bioinstrumentation. Wiley. 2. Perez, R. (2002). Design of Medical Electronic Devices. Academic Press. 3. Carr, J.J. (2000). Introduction to Biomedical Equipment Technology. 4th Ed. Prentice Hall. 4. Webster, J.G. (1997). Medical Instrumentation: Application and Design. 3rd Ed. Wiley.

ENT 214/4 BIOMECHANICS
At the end of the course, students are competent to apply mechanical concepts to human motion analysis, human tissue analysis and rehabilitation analysis.
Course Syllabus:
Introduction to Analyzing Human Motion: concepts of kinematics and kinetics for human motion analysis.
Biomechanics of Human Skeletal Articulations and Muscle: the classification of joints based on motion capabilities, and the basic behavioral properties of the musculotendinous unit.
Biomechanics of the Human Upper and Lower Extremities: how anatomical structure affects the movement capabilities of upper- and lower-extremity articulations.
Biomechanics of the Human Spine: how anatomical structure affects the movement capabilities of the different regions of the spine.
Introduction to the Biomechanics of Gait, Running and Rehabilitation: the gait cycle as used to relate walking and running, and the application of biomechanics concepts in rehabilitation.
Force Analysis of the Equilibrium of the Human Body and Its Segments: application of engineering mechanics to the analysis of the equilibrium and motion of the body and its segments.
Reaction Forces on the Body and Its Segments: reaction forces on the body; the effect of reaction forces on body segments; mechanics of muscle; mechanics of joints.
Gait Analysis: force plates and transducers; foot pressure; normal and pathological gait analysis.
Laboratory: i) Application of basic kinematics and kinetics of the human body ii) Analysis of human body equilibrium iii) Analysis of the motion of human body segments iv) Analysis of reaction forces on the human body v) Normal gait analysis vi) Pathological gait analysis
References: 1. Basic Biomechanics, 5th Edition, 2007, Susan J. Hall.
2. Biomechanics and Motor Control of Human Movement, 3rd Edition, 2005, David A. Winter. 3. Biomechanical Basis of Human Movement, 2nd Edition, 2003, Joseph Hamill, Kathleen M. Knutzen. 4. Principles of Biomechanics & Motion Analysis, 3rd Edition, 2006, Iwan W. Griffiths.

At the end of this course, students should have a firm grasp of basic electromagnetics and be able to identify its effects on biosystems, covering bioelectric, bioelectromagnetic and biomagnetic phenomena. The knowledge encompasses the laws that govern electric and magnetic fields. Students will thus be able to understand the operational principles of electrical instrumentation and machines for biomedical engineering applications.
Course Syllabus:
Vector Analysis: scalar and vector quantities; the gradient; the curl of a vector field; the Laplacian operator; the divergence of a vector field; Stokes's theorem.
Electrostatic Fields: fundamental theorems: Coulomb's law and Gauss's law; electric flux density; electric field intensity; electric potential; Laplace's equation and Poisson's equation; boundary conditions; electrostatic fields in dielectrics; capacitance; electrostatic field strength.
Magnetostatic Fields: the Biot-Savart law; Ampere's circuital law; magnetic field intensity; magnetic flux density; magnetic force; magnetic materials.
Interaction of Humans with Electromagnetic Fields: bioelectromagnetism; the electromagnetic frequency spectrum; electrosmog (radiation pollution); bioeffects of ELF fields.
Laboratory: Analysis of fundamental principles of electromagnetic theory using MATLAB. Using a Gauss meter for performance analysis of electromagnetic signals. Analyzing magnetic properties using FEMM (Finite Element Method Magnetics) software. Measurement of EMF from biomedical appliances.
References: 1. William H. Hayt, Jr. and John A. Buck. Engineering Electromagnetics. 7th Ed. McGraw Hill International Ed. 2006. 2. Ulaby, F.T. (2003). Fundamentals of Applied Electromagnetics. Prentice Hall. 3. Kraus, J.D., Fleisch, D.A. (1999). Electromagnetics. 5th Ed. McGraw-Hill. 4. Cheng, D.K. (1992). Fundamentals of Engineering Electromagnetics. Prentice Hall. 5. Dragan Poljak. Human Exposure to Electromagnetic Fields. WIT Press, 2004.

ENT 250/3 MECHANICAL MANUFACTURING SKILLS
The aim of this course is to introduce and provide students with the theoretical and practical skills required to fabricate and manufacture mechanical parts and components. At the end of this course, students will be able to appreciate the various skills and technologies involved in manufacturing processes.
Course Syllabus:
Manufacturing Metrology: introduction to and use of manufacturing measurement tools; standards of measurement; dimensional measurement; straightness, flatness, roundness and profile.
Welding: welding terminology; safety procedures; workpiece preparation; electrodes; matching welding processes to materials and applications; welding processes: arc (SMAW), MIG (GMAW) and TIG (GTAW); weld testing.
Conventional Machining: introduction to conventional machining; safety procedures; material suitability and preparation; cutting tool preparation; machining processes: turning, milling and grinding.
CNC Machining: introduction to advanced machining; safety procedures; material suitability and preparation; cutting tool preparation; machine codes (G-codes and M-codes) and programming (canned cycles and subroutines); CNC machine set-up; machining processes: turning and milling.
EDM Machining: introduction to and the concept of EDM (electrical discharge machining); machine tooling and accessories; safety procedures; electrode and workpiece preparation; machine set-up; machine codes and programming; EDM machining.
Laboratory: 1. Metrology 2. Arc, MIG and TIG welding 3. Conventional lathe (turning) machining 4. Conventional milling 5. CNC lathe or CNC milling 6. EDM die-sinking or wire-cut
References: Krar, Steve F., Gill, Arthur R., Smid, Peter. (2005). Technology of Machine Tools. 6th Ed. McGraw-Hill. (text) Kalpakjian, S. and Schmid, S.R. (2001). Manufacturing Engineering and Technology. 4th Edition. Prentice Hall. Groover, M.P. (2002). Fundamentals of Modern Manufacturing. Prentice Hall. Schey, J.A. (2000). Introduction to Manufacturing Processes. 3rd Ed. McGraw Hill. Fitzpatrick, Michael. (2005). Machining and CNC Technology with Student CD-ROM. 1st Ed. McGraw-Hill.
{"url":"http://lib.znate.ru/docs/index-9742.html?page=4","timestamp":"2014-04-21T12:23:50Z","content_type":null,"content_length":"51452","record_id":"<urn:uuid:fd7f157e-b359-429e-8c99-6ae73b39b90d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the mass of the block of ice?
Course: Physics. Reference No.: EM131712

Problem 1. Two horses pull horizontally on ropes attached to a stump. The two forces \(\vec{F}_1\) = 1230 N and \(\vec{F}_2\) that they apply to the stump are such that the net (resultant) force \(\vec{R}\) has a magnitude equal to that of \(\vec{F}_1\) and makes an angle of 90° with \(\vec{F}_1\).
Part A: Find the magnitude of \(\vec{F}_2\).
Part B: Find the direction of \(\vec{F}_2\) (relative to \(\vec{F}_1\)).

Exercise 1.1. A dockworker applies a constant horizontal force of 84.0 N to a block of ice on a smooth horizontal floor. The frictional force is negligible. The block starts from rest and moves a distance of 12.5 m in a time of 4.80 s.
Part A: What is the mass of the block of ice?
Part B: If the worker stops pushing at the end of 4.80 s, how far does the block move in the next 5.90 s?

Exercise 1.2. At the surface of Jupiter's moon Io, the acceleration due to gravity is 1.81 m/s². A watermelon has a weight of 40.0 N at the surface of the earth. In this problem, use 9.81 m/s² for the acceleration due to gravity on earth.
Part A: What is its mass on the earth's surface?
Part B: What is its mass on the surface of Io?
Part C: What is its weight on the surface of Io?

Exercise 1.3. A chair of mass 14.5 kg is sitting on the horizontal floor; the floor is not frictionless. You push on the chair with a force F = 39.0 N that is directed at an angle of 42.0° below the horizontal, and the chair slides along the floor.
Part A: Draw a clearly labelled free-body diagram for the chair. Draw the vectors starting at the black dot. The location and orientation of the vectors will be graded. The exact length of your vectors will not be graded, but the relative length of one to the other will be graded.
Part B: Use your diagram and Newton's laws to calculate the normal force that the floor exerts on the chair.

Problem 2. A parachutist relies on air resistance (mainly on her parachute) to decrease her downward velocity. She and her parachute have a mass of 57.5 kg, and air resistance exerts a total upward force of 690 N on her and her parachute.
Part A: What is the combined weight of the parachutist and her parachute?
Part B: Draw a free-body diagram for the parachutist. Draw the force vectors with their tails at the dot. The orientation of your vectors will be graded. The exact length of your vectors will not be graded, but the relative length of one to the other will be graded.
Part C: Calculate the net force on the parachutist.
Part D: Is the net force upward or downward?
Part E: What is the magnitude of the acceleration of the parachutist?
Part F: What is the direction of the acceleration?

Problem 3. An athlete whose mass is 97.0 kg is performing weight-lifting exercises. Starting from the rest position, he lifts, with constant acceleration, a barbell that weighs 410 N. He lifts the barbell a distance of 0.65 m in a time of 2.0 s.
Part A: Draw a clearly labelled free-body force diagram for the barbell. Draw the force vectors with their tails at the dot.
The orientation of your vectors will be graded. The exact length of your vectors will not be graded, but the relative length of one to the other will be graded.
Part B: Draw a clearly labelled free-body force diagram for the athlete. Draw the force vectors with their tails at the dot. The orientation of your vectors will be graded. The exact length of your vectors will not be graded, but the relative length of one to the other will be graded.
Part C: Use the diagrams in parts A and B and Newton's laws to find the total force that his feet exert on the ground as he lifts the barbell. Express your answer using two significant figures.

Exercise 4. An adventurous archaeologist crosses between two rock cliffs by slowly going hand-over-hand along a rope stretched between the cliffs. He stops to rest at the middle of the rope (Figure 1). The rope will break if the tension in it exceeds 2.35×10⁴ N, and our hero's mass is 92.4 kg.
Part A: If the angle between the rope and the horizontal is θ = 11.8°, find the tension in the rope.
Part B: What is the smallest value the angle θ can have if the rope is not to break?

Exercise 5. A man pushes on a piano with mass 160 kg so that it slides at constant velocity down a ramp that is inclined at 13.9° above the horizontal floor. Neglect any friction acting on the piano.
Part A: Calculate the magnitude of the force applied by the man if he pushes parallel to the incline.
Part B: Calculate the magnitude of the force applied by the man if he pushes parallel to the floor.

Exercise 6. A 79.0-kg painter climbs a ladder that is 2.75 m long leaning against a vertical wall. The ladder makes a 32.0° angle with the wall.
Part A: How much work does gravity do on the painter?
Part B: Does the answer to part A depend on whether the painter climbs at constant speed or accelerates up the ladder?

Exercise 7. Use the work-energy theorem to solve each of these problems. You can use Newton's laws to check your answers. Neglect air resistance in all cases.
Part A: A branch falls from the top of an 86.0 m tall Australian cedar, starting from rest. How fast is it moving when it reaches the ground?
Part B: A volcano ejects a boulder directly upward 521 m into the air. How fast was the boulder moving just as it left the volcano?
Part C: A skier moving at 5.00 m/s encounters a long, rough horizontal patch of snow having coefficient of kinetic friction 0.220 with her skis. How far does she travel on this patch before stopping?
Part D: Suppose the rough patch in part C was only 2.90 m long. How fast would the skier be moving when she reached the end of the patch?
Part E: At the base of a frictionless icy hill that rises at 21.0° above the horizontal, a toboggan has a speed of 12.0 m/s toward the hill. How high vertically above the base will it go before stopping?

Exercise 8. A little red wagon with mass 6.80 kg moves in a straight line on a frictionless horizontal surface. It has an initial speed of 3.40 m/s and then is pushed 4.3 m in the direction of the initial velocity by a force with a magnitude of 10.0 N.
Part A: Use the work-energy theorem to calculate the wagon's final speed. Express your answer using two significant figures.
Part B: Calculate the acceleration produced by the force. Express your answer using two significant figures.
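Not part of the original problem set: a minimal Python sketch checking Exercise 8 with the work-energy theorem.

import math

m, v0, d, F = 6.80, 3.40, 4.3, 10.0    # kg, m/s, m, N

work = F * d                            # W = F*d = 43 J
ke_final = 0.5 * m * v0**2 + work       # work-energy theorem: KE_f = KE_0 + W
v_final = math.sqrt(2 * ke_final / m)   # ~4.9 m/s (two significant figures)

a = F / m                               # Newton's second law: ~1.5 m/s^2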
Exercise 9. A child applies a force \(\vec{F}\) parallel to the x-axis to a 10.0-kg sled moving on the frozen surface of a small pond. As the child controls the speed of the sled, the x-component of the force she applies varies with the x-coordinate of the sled as shown in the figure (Figure 1).
Part A: Calculate the work done by the force \(\vec{F}\) when the sled moves from x = 0 to x = 8.0 m. Express your answer using two significant figures.
Part B: Calculate the work done by the force \(\vec{F}\) when the sled moves from x = 8.0 m to x = 12.0 m. Express your answer using two significant figures.
Part C: Calculate the work done by the force \(\vec{F}\) when the sled moves from x = 0 to x = 12.0 m. Express your answer using two significant figures.

Exercise 10. A tandem (two-person) bicycle team must overcome a force of 175 N to maintain a speed of 9.00 m/s.
Part A: Find the power required per rider, assuming that each contributes equally. Express your answer in watts.
Part B: Express your answer in horsepower.
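For Exercise 10, power at constant speed is just P = Fv; a quick sketch (an addition, not from the original page):

F, v = 175.0, 9.00         # N, m/s
P_total = F * v            # 1575 W for the team
P_rider = P_total / 2      # 787.5 W per rider
P_hp = P_rider / 746       # ~1.06 hp (1 hp is about 746 W)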
{"url":"http://www.expertsmind.com/library/what-is-the-mass-of-the-block-of-ice-51712.aspx","timestamp":"2014-04-20T03:19:46Z","content_type":null,"content_length":"40091","record_id":"<urn:uuid:5954d5f9-941e-4ed1-93a5-1dee00092bbf>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Mahdi Majidi
Curriculum Vitae: Here is my Curriculum Vitae.
Research Interests: I am interested in Algebraic Geometry, Algebraic Dynamics and Number Theory. I received my PhD in 2005 from the CUNY Graduate Center under the supervision of Prof. Lucien Szpiro. In the past few years I have focused my research on Algebraic Dynamics, studying local and global invariants of self-morphisms of algebraic varieties. I am interested in applying tools and techniques from Algebraic Geometry, such as Grothendieck-Riemann-Roch theory and etale cohomology, to the study of these invariants and their interrelations. I am also interested in Derived Category methods and Fourier-Mukai Transforms in Algebraic Geometry.
Selected Teaching: Here is a Teaching Philosophy written by Nozomi Kato, a colleague at LaGuardia. I couldn't agree more with this. Here are some courses that I have taught:
Last modified on: 09/06/2013
{"url":"http://faculty.lagcc.cuny.edu/mmajidi/","timestamp":"2014-04-16T04:11:28Z","content_type":null,"content_length":"37768","record_id":"<urn:uuid:ed95e631-24db-4079-a826-1d4d09b95b9d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Financial Mathematics

April 18th 2010, 01:48 PM, #1 (joined Jan 2010)
This problem is in the sequences and series review section of my book, so could someone help me with the problem using sequences and series concepts?
Michele invested 1500 francs at an annual rate of interest of 5.25 per cent.
a) Find the value of Michele's investment after 3 years. Give your answer to the nearest franc.
b) How many complete years will it take for Michele's investment to double in value?
c) What should the interest rate be if Michele's initial investment were to double in value in 10 years?
What I did was...
a) 1500 x 1.0525^2 = $1662 (which is incorrect; the book has it as $1749)
b) I got the correct answer: 14 complete years
c) 3000 = 1500r^9, therefore the interest rate is 8% (which is incorrect; the book has it as 7.18%)
What did I do wrong, and what are the correct steps to the solutions? Thanks!

April 18th 2010, 03:00 PM, #2 (Senior Member, joined Oct 2009)
(Quoting the original post.) In both (a) and (c) you have used the wrong time. In (a) you should use 3, not 2; in (c) use 10, not 9. If you are using the a x r^(n-1) formula, you need to be careful with what n is. Draw a timeline.

April 18th 2010, 07:42 PM, #3 (joined May 2008)
For part (c), you need to find the following: $1500 \cdot r^{10} = 3000$. A bit of simple arithmetic gives $r^{10} = 2$. Taking the log of each side: $10\log(r) = \log 2$, which simplifies to $\log(r) = \frac{\log 2}{10}$. Taking the inverse log of this gives you the answer 1.07177: 7.18% interest.
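Not part of the original thread: a minimal Python sketch verifying all three parts numerically.

import math

P, r = 1500.0, 0.0525

a = P * (1 + r) ** 3                           # ~1748.87 -> 1749 francs
b = math.ceil(math.log(2) / math.log(1 + r))   # 14 complete years
c = 2 ** (1 / 10) - 1                          # ~0.0718 -> 7.18% per year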
{"url":"http://mathhelpforum.com/calculus/139902-financial-mathematics.html","timestamp":"2014-04-20T14:18:16Z","content_type":null,"content_length":"36337","record_id":"<urn:uuid:49f142cb-195c-4ec2-8e68-769f7491bd19>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
what is your favorite math for K-8

I'm trying to find math for K-8. I don't like Saxon or Abeka. Any ideas? We aren't big on manipulatives. We enjoy flashcards.

We love Math-U-See. They do use manipulatives, but my kids only used them at the very start, weaned off them quickly, and didn't need them. I've used flashcards and math copywork to learn facts. I love MUS because it's to the point, the videos are helpful and fun, and my kids are mostly independent. MUS is the only curriculum (besides handwriting) we haven't changed in our 7 years of homeschooling!

We have been using Teaching Textbooks since grade 5 for my son, who is now in the 6th grade. He hates math textbooks because they are visual overload for him. He was diagnosed as ADHD while in public school, which was my reason for pulling him out (he had to be medicated, and that DID NOT work out at all... long story). For my daughter (age 8), I used MUS just after pulling her out at the end of 2nd grade. She loved Mr. Steve and did the work wonderfully. I changed to Rod & Staff Math for grade 3, though, and will use R&S for grade 4; then she will pick up with Teaching Textbooks. MUS is great. We just do not use it anymore because she caught on so quickly after just the first book of MUS (we used the adding/subtracting-focused one).

Currently we are using MUS and Saxon. I like both for very different reasons, but would also like to use some of Right Start Math, especially the games and flashcards. I do like the way MUS is to the point, as Gina pointed out, and that the pages are not colorful or littered with pictures and whatnot.

My kids liked MUS but they seemed to have a hard time with retention. I don't think this is necessarily a problem with the program, but it wasn't a good fit for us. We're using Math Mammoth and I like it.

If you like manipulatives, Right Start Math is fantastic. You might want to check out this older thread while making your decision: http://simplycharlottemason.com/scmforum/topic/straight-math-advice-cont-rs-and-other

We love Ray's here. Practical Arithmetic is my favorite. We have gone from the second book into Algebra (two boys went to Art of Problem Solving Prealgebra and my eldest went to Foerster's Algebra). We have done some of the geometry in Practical Arithmetic Book 3. Good old-fashioned math.

Well, my favorite is MUS, but two of my ds's have done very well with Singapore Math.

We started using Life of Fred. We really love it!! We used Math-U-See before and I think that is a very good choice as well.

My 4th grader is using Teaching Textbooks. As soon as my 6YO is ready for that, I plan to get Teaching Textbooks for him, too. It is a very thorough, comprehensive program.

MUS works great for us! My DD7 finished K in a Christian private school and was B L A N K on math! We spent several weeks on the first few chapters in Alpha just to get her to learn the basics, but now she has caught up and we even expect to finish Alpha in time. Most importantly, she really enjoys doing math now; before, it was all tears and frustration.

We use Math-U-See here. If you like games and stuff like that, you should try "Family Math." It is a book of math teaching games you play with your kids. We have used some of these when we are stuck on a concept, or just for fun to reinforce. There is also a website for math games that you can play with your kids. You could probably go to IXL Math and find out what your state requires and do games based on that. You could even have your kids play the games on there if you don't mind them being on the computer.
Developmental Mathematics (my son) and Math Mammoth (my dd, and supplementary for my son in measurements). Inexpensive and nothing fancy, just an abacus. For some more word problems, I like Singapore Math Challenging Word Problems for Primary Mathematics, and Ray's just for review of math facts. Also, my dd being very auditory, I bought her Classical Math to Classical Music for memorization of facts.

We switched this year to Professor B Math and the kids are really enjoying it! Last year my dd had a lot of tears with math.
{"url":"http://simplycharlottemason.com/scmforum/topic/what-is-your-favorite-math-for-k-8","timestamp":"2014-04-19T11:56:11Z","content_type":null,"content_length":"26433","record_id":"<urn:uuid:b483b864-1ed8-4988-8837-30560a4c3aa4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Dunn Loring Algebra Tutor
Find a Dunn Loring Algebra Tutor

...I was on the Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help students better understand course materials and what is integral in extracting information from problems and solving them. I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated.
17 Subjects: including algebra 1, algebra 2, chemistry, physics

...I feel very strongly about helping students succeed in math because I believe a true understanding of math can make many other subjects and future studies easier and more rewarding. I graduated from the University of Virginia with a degree in economics and mathematics. While in college, I tutored calculus to many students.
22 Subjects: including algebra 1, algebra 2, calculus, geometry

Hey all! My name is Jenna and I'm an enthusiastic and well-rounded professional who loves to teach and help others. I have a BA in International Studies and spent the last year and a half in a village in Tanzania (East Africa) teaching in- and out-of-school youth a variety of topics including English, health, life skills, and civics.
35 Subjects: including algebra 2, algebra 1, English, Spanish

...I would describe myself as a knowledgeable math tutor with great communication skills and patience. I encourage dialogue with the student to understand where they are having trouble. I will work diligently with you to ensure comprehension and a comfort level with the concepts.
19 Subjects: including algebra 1, algebra 2, calculus, geometry

...I can teach them to utilize their textbooks and outside resources effectively and efficiently. I tutor in my home in Burke, VA. Feel free to contact me!
26 Subjects: including algebra 2, algebra 1, Spanish, English
{"url":"http://www.purplemath.com/Dunn_Loring_Algebra_tutors.php","timestamp":"2014-04-18T18:44:25Z","content_type":null,"content_length":"23964","record_id":"<urn:uuid:937b4d52-89a7-473a-9cf2-9daa90911e4b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
THE CITY COLLEGE OF NEW YORK, EDUC 20500: DR. HOPE HARTMAN

Cooperative Learning

Definition and description of the approach

Cooperative learning involves students working together towards a common goal in a teaching-learning situation. It is a relationship in a group of students that requires positive interdependence (a sense of "sink or swim together"), individual accountability (each of us has to contribute and learn), interpersonal skills (communication, trust, leadership, decision making, and conflict resolution), face-to-face promotive interaction, and processing (reflecting on how well the team is functioning and how to function even better). There are three basic forms of cooperative learning: tutoring (peer or cross-age), in which one student teaches another; pairs, who work and learn with each other; and small groups of students teaching and learning together. In other words, cooperative learning is a successful teaching strategy in which small teams, each with students of different levels of ability, use a variety of learning activities to improve their understanding of a subject. Each member of a team is responsible not only for learning what is taught but also for helping teammates learn, thus creating an atmosphere of achievement.

How it works as a teaching strategy

How does it work? Here are some typical strategies that can be used with any subject, in almost any grade, and without a special curriculum:

Group Investigations are structured to emphasize higher-order thinking skills such as analysis and evaluation. Students work to produce a group project, which they may have a hand in selecting.

STAD (Student Teams-Achievement Divisions) is used in grades 2-12. Students with varying academic abilities are assigned to 4- or 5-member teams in order to study what has been initially taught by the teacher and to help each reach his or her highest level of achievement. Students are then tested individually. Teams earn certificates or other recognition based on the degree to which all team members have progressed over their past records.

Jigsaw II is used with narrative material in grades 3-12. Each team member is responsible for learning a specific part of a topic. After meeting with members of other groups who are "expert" in the same part, the "experts" return to their own groups and present their findings. Team members are then quizzed on all topics.

There are three basic ways students can interact with each other as they learn. They can compete to see who is "best," they can work individualistically toward a goal without paying attention to other students, or they can work cooperatively with a vested interest in each other's learning as well as their own. Of the three interaction patterns, competition is presently the most dominant. Research indicates that a vast majority of students in the United States view school as a competitive enterprise where one tries to do better than other students. Cooperation lets students celebrate each other's successes, encourage each other to do homework, and learn to work together regardless of ethnic background or whether they are male or female, bright or struggling, disabled or not. Even though these three interaction patterns are not equally effective in helping students learn concepts and skills, it is important that students learn to interact effectively in each of these ways. Students will face situations in which all three interaction patterns are operating, and they will need to be able to be effective in each.
They also should be able to select the appropriate interaction pattern suited to the situation. An interpersonal, competitive situation is characterized by negative goal interdependence where, when one person wins, the others lose. In individualistic learning situations, students are independent of one another and are working toward a set criterion, where their success depends on their own performance in relation to that established criterion. The success or failure of other students does not affect their score. In a cooperative learning situation, interaction is characterized by positive goal interdependence with individual accountability. Positive goal interdependence requires acceptance by a group that they "sink or swim together." A cooperative spelling class is one where students work together in small groups to help each other learn the words in order to take the spelling test individually on another day. Each student's score on the test is increased by bonus points if the group is successful (i.e., the group totals meet specified criteria). In a cooperative learning situation, a student needs to be concerned with how he or she spells and how well the other students in his or her group spell. This cooperative umbrella can also be extended over the entire class if bonus points are awarded to each student when the class can spell more words than a reasonable, but demanding, criterion set by the teacher.

There is a difference between simply having students work in a group and structuring groups of students to work cooperatively. A group of students sitting at the same table doing their own work, but free to talk with each other as they work, is not structured to be a cooperative group, as there is no positive interdependence. Perhaps it could be called individualistic learning with talking. For this to be a cooperative learning situation, there needs to be an accepted common goal on which the group is rewarded for its efforts. If a group of students has been assigned to do a report, but only one student does all the work and the others go along for a free ride, it is not a cooperative group. A cooperative group has a sense of individual accountability, which means that all students need to know the material or spell well for the whole group to be successful. Putting students into groups does not necessarily produce a cooperative relationship; it has to be structured and managed by the teacher or professor. It is only under certain conditions that cooperative efforts may be expected to be more productive. These conditions are:

1. Clearly perceived positive interdependence
2. Considerable promotive (face-to-face) interaction
3. Clearly perceived individual accountability and personal responsibility to achieve the group's goals
4. Frequent use of the relevant interpersonal and small-group skills
5. Frequent and regular group processing of current functioning to improve the group's future effectiveness

Two specific examples

The following is a simple cooperative learning segment of a lesson from a senior honors mathematics class. The class was studying related-rate word problems. The problem was: A boat is pulled into a dock by means of a rope with one end attached to the bow of the boat, the other end passing through a ring attached to the dock at a point 4 ft higher than the bow of the boat. If the rope is pulled in at the rate of 2 ft/s, how fast is the boat approaching the dock when 10 ft of rope are out? Students were assigned to heterogeneous groups of 3-4 students to work on the problem.
Their task was to: a) individually generate questions for solving the problem, b) share their questions with the group, c) as a group decide on the best questions for this problem, d) individually solve the problem using the group's questions and e) share and compare individual solutions and explain how they were obtained from applying the questions selected. Students were taught to use self-questioning as a strategy for thinking through the problem solving process. Initially the students found it strange to be asked to write questions for a mathematics class. They learned how to use questioning to help them plan, monitor and evaluate problem solving. Examples of questions were modeled by the teacher thinking aloud how to use them when solving a problem. Then students generated and used their own questions. Questions generated for this problem were:

Group 1
Student 1: What should the diagram look like? Where do the values belong? What do I want to find?
Student 2: Where do I start? How do I find the desired answer? Where do the numbers belong in the formula? Which number goes to which part?
Student 3: What does the diagram look like? What variables should I use? Where does the 2 ft/s go? What derivatives do I have to find?
Student 4: How do I draw a picture to represent what the problem says? What parts of the diagram get labeled? What is the unknown? What equation do I use to get the derivative?

This group discussed their questions and made the following list for their group to use when solving the problem:
1. What should the diagram look like?
2. How should it be labeled?
3. What do we have to find?
4. What equation do we use to find the derivative?

While the groups worked on their questions and used them to solve the problem, the teacher walked around to watch and listen to each group to make sure they were on task and making reasonable progress. As she checked up on each group she saw that some students still could not solve the problem. She checked the individual and group lists of questions and realized they were incomplete, so she decided to have the groups share their questions, evaluate them as a class, and come up with a composite list. She guided the discussion to make sure the class generated questions for all three phases of the problem solving process (planning, monitoring and evaluating). The following is the composite list that emerged:

Planning:
1. Does this problem resemble a problem already done?
2. How should I diagram this problem?
3. What do I have to find?
4. What equation must I differentiate?

Monitoring:
1. Is my algebra correct?
2. Am I using the correct formula?
3. Is my diagram labeled correctly?

Evaluating:
1. Does the answer make sense?
2. Did I find what I was supposed to find?
3. How can I check my answer?

Students then returned to solving the problem with the new set of questions. Individuals within the group shared their answers with each other, decided on the correct answer, and raised their hands to let the teacher know when they were finished so she could check their solutions. She randomly asked students to explain their solutions to make sure everyone in the group understood the problem and solution process. Then she had students who had solved the problem help those who had difficulty. At the end of the lesson the class looked at how the questions related to each part of the problem solution.
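For reference, here is one worked solution to the related-rate problem, added editorially (it is not part of the original lesson transcript). Let $x$ be the horizontal distance from the bow to the dock and $L$ the length of rope out, so that $x^2 + 4^2 = L^2$. Differentiating with respect to time gives

\[ 2x\frac{dx}{dt} = 2L\frac{dL}{dt} \quad\Longrightarrow\quad \frac{dx}{dt} = \frac{L}{x}\,\frac{dL}{dt}. \]

When $L = 10$, $x = \sqrt{100-16} = \sqrt{84} = 2\sqrt{21}$; with the rope shortening at $dL/dt = -2$ ft/s,

\[ \frac{dx}{dt} = \frac{10}{2\sqrt{21}}(-2) = -\frac{10}{\sqrt{21}} \approx -2.18 \ \text{ft/s}, \]

so the boat approaches the dock at about 2.18 ft/s.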
Constructing, comparing, discussing and evaluating problem solving questions, individually, in small groups and with the entire class enriched students' understanding of what questions and strategies were best suited for the particular problem. Some of the students said that in the past, they had been so concerned with getting the right answer that they had never given as much thought to the thinking process. When each student has just his/her own knowledge, thoughts and questions, the perspective on problem solving is much more narrow and shallow. Mathematicians frequently discuss their solution strategies and outcomes with others. They know that others can sometimes detect limitations, suggest alternative approaches to and applications of problem solutions. By discussing problem solving with others, students learn to think more like mathematicians.

Another example: Compare a moment from a class in Self Science with the classroom experiences you can recall. A fifth-grade group is about to play the Cooperation Squares game, in which the students team up to put together a series of square-shaped jigsaw puzzles. The catch: their teamwork is all in silence, with no gesturing allowed. The teacher, Jo-An Varga, divides the class into three groups, each assigned to a different table. Three observers, each familiar with the game, get an evaluation sheet to assess, for example, who in the group takes the lead in organizing, who is a clown, who disrupts. The students dump the pieces of the puzzles on the table and go to work. Within a minute or so it's clear that one group is surprisingly efficient as a team; they finish in just a few minutes. A second group of four is engaged in solitary, parallel efforts, each working separately on their own puzzle, but getting nowhere. Then they slowly start to work collectively to assemble their first square, and continue to work as a unit until all the puzzles are solved. But the third group still struggles, with only one puzzle nearing completion, and even that looking more like a trapezoid than a square. Sean, Fairlie and Rahman have yet to find the smooth coordination that the other two groups fell into. They are clearly frustrated, frantically scanning the pieces on the table, seizing on likely possibilities and putting them near the partly finished squares, only to be disappointed by the lack of fit. The tension breaks a bit when Rahman takes two of the pieces and puts them in front of his eyes like a mask; his partners giggle. This will prove to be a pivotal moment in the day's lesson. Jo-An Varga, the teacher, offers some encouragement: "Those of you who have finished can give one specific hint to those who are still working". Dagan moseys over to the still-struggling group, points to two pieces that jut out from the square, and suggests, "You've got to move those two pieces around". Suddenly Rahman, his wide face furrowed in concentration, grasps the new gestalt, and the pieces quickly fall into place on the first puzzle, then the others. There's spontaneous applause as the last piece falls into place on the third group's final puzzle.

Why research and theory suggest it is useful as a teaching strategy

Cooperative learning has been shown to be effective for developing students' higher-level thinking strategies and abilities to work independently. It provides situations for students to teach each other. When students explain and teach concepts to each other, retention of these concepts improves.
Explaining also helps students connect their prior knowledge with the new information. This teaching strategy is a powerful instructional method for developing content knowledge and higher-level thinking skills across the curriculum. Academic work is usually much more fun and exciting to students when they work together cooperatively. The social context and active involvement make it more motivating to learn. Research has shown that cooperative learning increases confidence in students' abilities. It improves self-esteem as well as feelings of competence in specific subjects. Research has also documented the positive effects of cooperative learning on improving social relations among students of different ethnic and cultural backgrounds. It has been demonstrated to be an especially effective method of teaching in settings characterized by such diversity. It helps improve achievement from the elementary grades through graduate school. Cooperative learning promotes academic achievement, is relatively easy to implement, and is not expensive. Children's improved behavior and attendance, and increased liking of school, are some of the benefits of cooperative learning. Although much of the research on cooperative learning has been done with older students, cooperative learning strategies are effective with younger children in preschool centers and primary classrooms. In addition to the positive outcomes just noted, cooperative learning promotes student motivation, encourages group processes, fosters social and academic interaction among students, and rewards successful group participation. A review of 99 studies of cooperative learning in elementary and secondary schools that involved durations of at least four weeks compared achievement gains in cooperative learning and control groups. Of sixty-four studies of cooperative learning methods that provided group rewards based on the sum of group members' individual learning, fifty (78%) found significantly positive effects on achievement, and none found negative effects (Slavin, 1995). One theoretical perspective somewhat related to the motivational viewpoint holds that the effects of cooperative learning on achievement are strongly mediated by the cohesiveness of the group; in essence, students will help one another learn because they care about one another and want one another to succeed.

Relevant theory, theorists and research

Reciprocal Education

Reciprocal teaching is already established as a powerful technique for improving reading comprehension. Students and a tutor or teacher alternate roles leading text dialogues structured around modeling the strategies of predicting, clarifying, questioning and summarizing. This teaching procedure is based on a set of instructional principles that have practically unlimited application potential. Fantuzzo, King and Heller (1992) have successfully used a related instructional method, "reciprocal peer tutoring" (RPT), for elementary students in math computation. RPT is based on cognitive theory and research showing the academic benefit of explaining material to other students. In this strategy two or more students work together cooperatively and follow a structured format in which students teach, prompt, monitor, evaluate and encourage each other. Students alternate between teacher and student roles and engage in peer teaching, peer choice of rewards, and peer management.
Fantuzzo emphasizes that it is the combination of these components (peer teaching, peer choice of rewards, and peer management) that produces greater academic and motivational gain than using them in isolation. Reciprocal questioning involves students taking turns asking and answering questions about the material after a lesson or presentation. Students learn to ask questions through a scaffolding procedure in which the teacher provides question stem prompts such as “What do you think would happen if...?, What is a new example of...? and What are the strengths and weaknesses of...?” Eventually students can create their own questions without the teacher's stems (King, 1990). Reciprocal tutoring is a model in which all tutors first get experiences as tutees as part of their apprenticeship for becoming a tutor. This model provides tutors with an experiential basis for tutor-centered learning (Gartner & Riessman, 1993). There are other variations of this type of approach. The pair-problem solving method of Whimbey and Lochhead (1982) and I DREAM of A methods in this book involve reciprocal teaching types of activities. The varieties of reciprocal teaching types of procedures have a core of two common principles: students work with other students and students take on roles of both teachers and learners. Instructional models that share these basic elements can be called “reciprocal education”. The term “reciprocal” is used to reflect students taking turns, especially with other students. The term “education” is chosen to represent participation in both teaching and learning activities. Reciprocal education may be adapted for use in virtually any subject area. The major alternative to the motivationalist and social cohesiveness perspectives on cooperative learning, both of which focus primarily on group norms and interpersonal influence, is the cognitive perspective, which holds that interactions among students will in themselves increase student achievement for reasons which have to do with mental processing of information rather than with motivations. Cooperative methods developed by cognitive theorists involve neither the group goals that are the cornerstone of the motivationalist methods nor the emphasis on building group cohesiveness characteristic of the social cohesion methods. However, there are several quite different cognitive perspectives, as well as some which are similar in theoretical perspective but have developed on largely parallel tracks. These are described in the following sections. Cognitive theorists would hold that the cognitive processes that are essential to any theory relating cooperative learning to achievement can be created directly, without the motivational or affective changes discussed by the motivationalist and social cohesion theorists. This may turn out to be accurate, but at present demonstrations of learning effects from direct manipulation of peer cognitive interactions have mostly been limited to very brief durations and to tasks which lend themselves directly to the cognitive processes involved. For example, the Piagetian conservation tasks studied by developmentalists have few practical analogs in the school curriculum. Social cohesion theorists, in contrast, emphasize the idea that students help their groupmates learn because they care about the group. A hallmark of the social cohesion perspective is an emphasis on teambuilding activities in preparation for cooperative learning, and processing or group self-evaluation during and after group activities. 
Social cohesion theorists tend to downplay or reject the group incentives and individual accountability held by motivationalist researchers to be essential.

Outcomes a teacher can expect from using this teaching strategy

Cooperative learning and cooperative learning groups are means to an end rather than an end in themselves. Therefore, teachers should begin planning by describing precisely what students are expected to learn and be able to do on their own well beyond the end of the group task and curriculum unit. Regardless of whether these outcomes emphasize academic content, cognitive processing abilities, or skills, teachers should describe in very unambiguous language the specific knowledge and abilities students are to acquire and then demonstrate on their own. It is not sufficient for teachers to select outcome objectives: students must perceive these objectives as their own. They must come to comprehend and accept that everyone in the group needs to master the common set of information and/or skills. In selected strategies where groups select their own objectives, all members of each group must accept their academic outcomes as ones they all must achieve.

Advantages and disadvantages of using this approach

Academic benefits: promotes critical thinking skills, involves students actively in the learning process, improves classroom results, models appropriate student problem solving techniques, and allows large lectures to be personalized.

Social benefits: develops a social support system for students, builds diversity understanding among students and staff, establishes a positive atmosphere for modeling and practicing cooperation, and develops learning communities.

Psychological benefits: student-centered instruction increases students' self-esteem, cooperation reduces anxiety, and it develops positive attitudes towards teachers.

Cooperative learning can be ineffective if it is not handled right. Not all groupwork is cooperative learning. Students can sit side by side in a group and do their work completely independently without cooperating. Potential problems implementing cooperative learning in high school mathematics classes may be student-oriented or teacher-oriented. Student-oriented problems include: a group of students may become bored with each other, there may be inadequate leadership within a group, students may feel abandoned by the teacher, difficult problems may cause feelings of defeat while easy problems may be boring, and students may need a change of pace or more praise. Teacher-oriented problems include: teachers may feel uncomfortable not being the center of the classroom, they may not have explained the task adequately, and they may get mixed feedback about what students have learned. Although many students prefer working cooperatively to working independently, some students would rather work alone. Such students can inhibit effective group interaction. Another problem is that one or two students can do all the work solving problems while the others do not. Time can be a problem when implementing cooperative learning, and sometimes lessons end without summarizing what was learned and assessing the group process.

References

Goleman, D. (1995). Emotional Intelligence. Bantam Books.
Hartman, H. (1997). Human Learning & Instruction. City College of New York.
Slavin, R. E. (1995). Cooperative Learning: Theory, Research, and Practice (2nd ed.). Boston: Allyn & Bacon.
Slavin, R. E. (1995). Research on Cooperative Learning and Achievement: What We Know, What We Need to Know. Johns Hopkins University.
Stahl, R. J. (2000). The Essential Elements of Cooperative Learning in the Classroom. ERIC Digest.
{"url":"http://condor.admin.ccny.cuny.edu/~eg9306/marios%20research%20paper.htm","timestamp":"2014-04-19T09:23:49Z","content_type":null,"content_length":"72993","record_id":"<urn:uuid:22be3080-2365-4d41-a3c2-444c459133b5>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Memoirs on Differential Equations and Mathematical Physics

Table of Contents: Volume 34, 2005

A. Dzhishkariani, Approximate Solution of One Class of Singular Integral Equations by Means of Projective and Projective-Iterative Methods. Mem. Differential Equations Math. Phys. 34 (2005), pp. 1-76. download pdf file.

M. Grigolia, Some Remarks on The Initial Problems for Nonlinear Hyperbolic Systems. Mem. Differential Equations Math. Phys. 34 (2005), pp. 77-95. download pdf file.

Haishen Lü, Donal O'Regan, Ravi P. Agarwal, Nonuniform Nonresonance at the First Eigenvalue of the One-Dimensional Singular p-Laplacian. Mem. Differential Equations Math. Phys. 34 (2005), pp. 97-114. download pdf file.

G. Makatsaria, Correct Boundary Value Problems for Some Classes of Singular Elliptic Differential Equations on a Plane. Mem. Differential Equations Math. Phys. 34 (2005), pp. 115-134. download pdf file.

Abdur Rashid, A Three-Levels Finite Difference Method for Nonlinear Regularized Long-Wave Equation. Mem. Differential Equations Math. Phys. 34 (2005), pp. 136-146. download pdf file.

I. Kiguradze and B. Půža, On the Well-Posedness of Nonlinear Boundary Value Problems for Functional Differential Equations. Mem. Differential Equations Math. Phys. 34 (2005), pp. 149-152. download pdf file.

Koplatadze and G. Kvinikadze, On Oscillatory Properties of Ordinary Differential Equations of Generalized Emden-Fowler Type. Mem. Differential Equations Math. Phys. 34 (2005), pp. 153-156. download pdf file.

© Copyright 2005, Razmadze Mathematical Institute.
{"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/MDEMP/vol34/contents.htm","timestamp":"2014-04-20T06:19:09Z","content_type":null,"content_length":"3540","record_id":"<urn:uuid:f9e06ba1-4f20-4e21-9f46-3645819ba370>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Given a graph embedded on a torus, how many edges are necessary for noncontractible loops to be long?

If we are given a graph embedded on a torus, with the following properties, what is the minimum number of edges it can have?

• Any noncontractible loop is comprised of at least n edges.
• Any noncontractible dual loop is comprised of at least n edges.
• Any noncontractible loop drawn on the torus intersects the graph at least once.

(The third condition is just to rule out cases where we embed a small planar graph on the torus, and trivially satisfy the first two conditions, there being no noncontractible loops)

We use the following definitions:

• A loop is a series of edges, with each consecutive pair sharing a (different) common vertex, and with the first and last sharing a common vertex. It is noncontractible if the path formed by tracing along these edges is noncontractible on the torus.
• A dual loop is a series of edges, with each consecutive pair sharing a (different) common face, and with the first and last sharing a common face. The name is because these edges form a loop on the dual graph. Likewise, it is noncontractible if it is noncontractible on the torus.

I believe that the answer is $n^2$ for even n, $n^2 + 1$ for odd n. The equality case, I think, is a square lattice on the torus, but rather than identifying horizontal and vertical lines, as is usually done to put a grid on the torus, you identify lines at 45 degrees to the grid. (Or slightly off 45 degrees, if n is odd)

It seems like a simple statement, but I haven't been able to find out whether this is true. Thanks for any help!

Graham

Edit: Whoops - rather than face-width, the second condition is asking about the edge-width of the dual graph. Apologies for the confusion!

topological-graph-theory graph-theory co.combinatorics

If I take two cycles of length $n$ glued together at a vertex, and embed them on the torus as two non-homotopic non-contractible curves, do I satisfy the conditions? It seems like (1) and (3) are satisfied, and probably (2) as well since the dual graph only has one vertex. Maybe you want the minimum number of edges of a graph of face-width (representativity) $n$? – Tony Huynh Oct 20 '10 at

I would consider this graph to have lots of dual loops of length 1, in the same way as an edge from a vertex to itself which wraps around the torus could form a (primal) loop of length 1. Thanks for pointing me to the correct terminology - yes, by condition (2), I mean that the graph has face-width n. (And condition (1), that the graph has edge-width 1.) – Graham Oct 20 '10 at 1:40

The last comment should say edge-width n - apologies. – Graham Oct 20 '10 at 1:41

2 Answers

There should be a nice proof, but here is a reference that proves something stronger and weaker. This paper by de Graaf and Schrijver proves that every graph embedded on the torus with face-width at least $n \geq 5$ contains the toroidal $\lfloor 2n/3 \rfloor$-grid as a minor. Note that the toroidal $\lfloor 2n/3 \rfloor$-grid has (almost) $8n^2/9$ edges. So any graph on the torus with face-width at least $n$ has at least (almost) $8n^2/9$ edges, which is pretty close to the conjectured answer of $n^2$.

Tony, thanks for the response. If I understand correctly, you are only using the fact that the face-width is at least n, and not the edge-width condition. ie, using the second of my original conditions and not the first.
Why is this graph not a counterexample to your claim? Consider a graph with a single vertex, and 2n edges from this vertex to itself, n of which are horizontal loops on the torus, and the other n of which are vertical loops. This graph has face-width n, and only 2n edges. Am I misunderstanding the definition of face-width, and this graph actually has face-width 1? – Graham Oct 21 '10 at 1:54

Face-width at least n is actually your third condition with 1 replaced by n. That is, every non-contractible curve in the surface intersects the graph at least n times. So, yes, your graph has face-width 1. From this definition, we have that face-width $\leq$ edge-width because any short non-contractible curve in the graph yields a short non-contractible curve in the surface (just follow the edges in parallel). So, you might be asking about graphs of face-width at least $n$. Or, you might be asking about graphs with edge-width at least $n$, and whose duals have edge-width at least $n$. – Tony Huynh Oct 21 '10 at 10:53

Ah - apologies then. I didn't mean to ask about face-width, but rather, as you suggest, edge-width of the dual graph. – Graham Oct 22 '10 at 11:39

Here is the idea of the proof that the number of vertices in such a graph is at least $n^2$. See step 7 for the hole in the proof. [S:I believe it can be patched.:S] Edit: As was shown in comments there are a lot of problems with this attempt. I am in doubt whether it can be patched or not.

Step 1: Cut the graph at any shortest loop (its length $l\geq n$). You will get a graph on a cylinder (imagine it as a "vertical" cylinder) with a correspondence between edges and vertices on its bottom to edges and vertices at its top. Let's call these bottom edges $e_1,\dots,e_l$ and the top ones $e'_1,\dots,e'_l$.

Step 2: Take any shortest dual path between corresponding edges on the top and on the bottom of the cylinder. It corresponds to a noncontractible dual loop on the torus and its length $k\geq n$. Cut the cylinder at the vertices "just to the right of this path". Now we have a graph on the square with a correspondence between vertices on its left to vertices on its right. Denote the "left" vertices by $x_0,\dots, x_m$ and the right ones by $x'_0,\dots, x'_m$, sorted from the bottom: $x_0$ and $x'_0$ are vertices at the bottom corresponding to $x_m$ and $x'_m$.

Step 3: Start $n$ dual paths $p_1,\dots,p_n$ at edges $e_1,\dots,e_n$. Assume $x=x_0$ is the current left vertex and $x'=x'_0$ the current right vertex.

Step 4: Suppose on this step $x=x_i$ is the current left vertex and $x'=x'_j$ the current right vertex. "Move up one of these vertices": if $i< j$ let $\widetilde x=x_{i+1}$ be the next current left vertex, and if $j< i$ let $\widetilde x'=x'_{j+1}$ be the next current right vertex. In the case $i=j$ draw the shortest path from $x_{i+1}$ to $x'_{i}$ and the shortest path from $x_{i}$ to $x'_{i+1}$. At least one of them has length at least $n$. Choose it and let $\widetilde x= x_{i+1}$ or $\widetilde x'=x'_{i+1}$ correspondingly to the choice.

Step 5: Assume that we have just defined $\widetilde x$ on step 4 (otherwise we have defined $\widetilde x'$ and this step should be rewritten correspondingly). Prolong $p_1,\dots,p_n$ to the shortest path from $\widetilde x$ to $x'$ avoiding intersections (all edges of $p_1,\dots,p_n$ should be different). To show it is possible define $d(v)$ to be the distance from vertex $v$ to $x'$ (the length of the shortest path). Suppose the last edge of $p_j$ joins vertices $v$ and $w$.
Then $|d(v)-d(w)|=1$, since this edge is part of the shortest path from $x$ to $x'$. Let $d_j=\min(d(v),d(w))$. On this step add to $p_j$ only edges connecting vertices of the same type (i.e. $|d(v)-d(w)|=1$ and $\min(d(v),d(w))=d_j$). Use only vertices between the two shortest paths: between $\widetilde x$ and $x$ with $x'$. Note that all $d_j$ are different. This proves that $p_j$ will not intersect with the others. Finally set $x=\widetilde x$.

Step 6: If $x\neq x_m$ or $x'\neq x'_m$ go to step 4. Otherwise go to step 7.

Step 7: Note that now we have $n$ dual paths $p_1,\dots,p_n$ from the bottom to the top of the square. All we need is to ensure that they correspond to cycles of the initial graph (because in this case each of these cycles has length of at least $n$). To achieve this, I believe, we should do step 5 more accurately.

For n=3, the minimal triangulation of a torus has 7 vertices... – Gjergji Zaimi Oct 20 '10 at 8:22

I don't understand how it is related to the problem. It is about minimizing the number of edges. The minimal triangulation has 21 edges, while there is a solution with only 10 edges (as was shown by Graham). – Fiktor Oct 20 '10 at 8:56

Fiktor, Many thanks for the attempt. Unfortunately, I believe that the hole is very difficult to fill. To illustrate this, let me present a slightly different question. Imagine the question specified that horizontal loops (both primal and dual) had to have length at least n, and that vertical loops (primal and dual) had to have length at least m (just pick two arbitrary classes of loop to label vertical and horizontal). Your argument would be exactly the same for this question, and if valid, would let us conclude that such a graph had at least mn edges. This conclusion is false (continued...) – Graham Oct 20 '10 at 10:39

We can show that this stronger statement is false, by considering the graph with distance 3 and 10 edges. On this graph, horizontal loops have distance at least 3, vertical loops distance at least 3, and diagonal loops distance at least 4. If the above conclusion were true, then because we had two classes of loops with minimum distance 3 and 4, we would have at least 12 edges, which is clearly false. This shows that there is something special about the fact that the minimum distance is the same for vertical and horizontal loops, and I don't see how this can be used in your outline. – Graham Oct 20 '10 at 10:42
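Editorial note (not from the thread): the $8n^2/9$ figure in the first answer is easy to verify. The toroidal $k$-grid is the Cartesian product $C_k \,\square\, C_k$; it has $k^2$ vertices, each of degree 4, hence $2k^2$ edges. With $k = \lfloor 2n/3 \rfloor$ this gives

\[ 2\left\lfloor \frac{2n}{3} \right\rfloor^{2} \approx 2\cdot\frac{4n^2}{9} = \frac{8n^2}{9}, \]

matching the "(almost) $8n^2/9$ edges" quoted above.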
{"url":"http://mathoverflow.net/questions/42797/given-a-graph-embedded-on-a-torus-how-many-edges-are-necessary-for-noncontracti","timestamp":"2014-04-18T15:53:35Z","content_type":null,"content_length":"73816","record_id":"<urn:uuid:616e7fb4-a529-46ea-86e9-2151d64246ba>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Jamaica, NY Algebra 1 Tutor

Find a Jamaica, NY Algebra 1 Tutor

I have over 20 years experience in tutoring, primarily in high school and introductory college level physics. I know that physics can be an intimidating subject, so I combine clear explanations of the material with strategies for how to catch mistakes without getting discouraged. Keeping a good attitude can be a key part of mastering physics.
18 Subjects: including algebra 1, reading, calculus, algebra 2

...I have taught at Hofstra University and Polytechnic Institute at NYU. I am working with the United Way of Long Island to teach Robotics to middle school students in Westbury and Freeport. I am a member of the Long Island STEM Diversity Roundtable located at SUNY-Farmingdale.
16 Subjects: including algebra 1, physics, calculus, geometry

...In school, C was the main language we worked with for several semesters. From all this practice, I was able to learn both the syntax of the language itself as well as understand more complicated data structures and algorithms implemented in C. I think learning C is a good opportunity not only t...
37 Subjects: including algebra 1, chemistry, physics, calculus

...We also had another two semesters of a course called abstract algebra. Vector Spaces can be studied in a more general sense in abstract algebra as well. This area of math is concerned with abstract mathematical structures such as groups, rings, and fields.
11 Subjects: including algebra 1, physics, calculus, geometry

...Whether you want to understand the nuances of a subject or learn the basics I can help. I tutored undergraduates and high school students throughout my graduate school days. My lessons are tailored to the student and my emphasis is to make sure the fundamental concepts are clear.
11 Subjects: including algebra 1, calculus, GRE, GMAT
{"url":"http://www.purplemath.com/jamaica_ny_algebra_1_tutors.php","timestamp":"2014-04-19T14:55:26Z","content_type":null,"content_length":"24146","record_id":"<urn:uuid:ceaefc1e-9b83-4779-857c-2d30ecfc1ab4>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Public Function FutureValue( _
    ByVal vRate As Variant _
  , ByVal vNPer As Variant _
  , ByVal vPmt As Variant _
  , Optional ByVal vPV As Variant _
  , Optional ByVal vType As Variant _
  ) As Variant

Calculate the Future Value of an annuity based on fixed, periodic payments and a fixed interest rate.

Example: What is the value 20 years from now of a savings account if you invest $100 per month in an account that pays 5.25% annual interest, assuming that the interest is compounded monthly? Approximately $42,311.18.
FutureValue(0.0525 / 12, 20 * 12, -100) = 42311.1776128932

Example: What if interest is only compounded annually? Approximately $40,743.87.
FutureValue(0.0525, 20, -100 * 12) = 40743.8701285033

See the FutureValueVerify Subroutine for more examples of this function.

See also:
InterestRate Function
NumberPeriods Function
Payment Function
PresentValue Function
PaymentType Function
FV Function (Visual Basic)
FV Function (Microsoft Excel)

Summary: An annuity is a series of fixed payments (all payments are the same amount) made over time. An annuity can be a loan (such as a car loan or a mortgage loan) or an investment (such as a savings account or a certificate of deposit).

vRate: Interest rate per period, expressed as a decimal number. The vRate and vNPer arguments must be expressed in corresponding units. If vRate is a monthly interest rate, then the number of periods (vNPer) must be expressed in months. For a mortgage loan at 6% annual percentage rate (APR) with monthly payments, vRate would be 0.06 / 12 or 0.005. Function will return Null if vRate is Null or cannot be interpreted as a number.

vNPer: Number of periods. The vRate and vNPer arguments must be expressed in corresponding units. If vRate is a monthly interest rate, then the number of periods (vNPer) must be expressed in months. For a 30-year mortgage loan with monthly payments, vNPer would be 30 * 12 or 360. Function will return Null if vNPer is Null or cannot be interpreted as a number.

vPmt: Amount of the payment made each period. Cash paid out is represented by negative numbers and cash received by positive numbers. Function will return Null if vPmt is Null or cannot be interpreted as a number.

vPV: Optional present value (lump sum) of the series of future payments. Cash paid out is represented by negative numbers and cash received by positive numbers. vPV defaults to 0 (zero) if it is missing or Null or cannot be interpreted as a number.

vType: Optional argument that specifies when payments are due. Set to 0 (zero) if payments are due at the end of the period, and set to 1 (one) if payments are due at the beginning of the period. vType defaults to 0 (zero), meaning that payments are due at the end of the period, if it is missing or Null or cannot be interpreted as a number.

v2.0 Addition: This function is new to this version of Entisoft Tools.

Copyright 1996-1999 Entisoft
Entisoft Tools is a trademark of Entisoft.
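For readers without Visual Basic, the following Python sketch re-implements the standard annuity future-value formula that this function documents. It is an editorial illustration, not Entisoft's actual source code; the function name and defaults are chosen here for clarity.

def future_value(rate, nper, pmt, pv=0.0, when=0):
    """Future value of an annuity with fixed periodic payments.

    rate : interest rate per period (e.g. 0.0525 / 12 for monthly)
    nper : number of periods
    pmt  : payment per period (cash paid out is negative)
    pv   : optional present value (lump sum), default 0
    when : 0 = payments due at end of period, 1 = at beginning
    """
    if rate == 0:
        return -(pv + pmt * nper)
    growth = (1 + rate) ** nper
    return -(pv * growth + pmt * (1 + rate * when) * (growth - 1) / rate)

# Reproduces the documented examples:
print(future_value(0.0525 / 12, 20 * 12, -100))  # ~42311.18
print(future_value(0.0525, 20, -100 * 12))       # ~40743.87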
{"url":"http://www.entisoft.com/ESTools/MathFinancial_FutureValue.HTML","timestamp":"2014-04-17T06:42:02Z","content_type":null,"content_length":"4989","record_id":"<urn:uuid:ff372bc8-5cc4-46a5-b3fd-92155c66d819>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help (vectors and circle geometry)!

January 21st 2006, 08:04 AM

Need help (vectors and circle geometry)!

Vectors: i and j are unit vectors; i = 1 unit along the x axis, j = 1 unit along the y axis.

1. Given that a = 2i+3j, find the angle it makes with the positive x axis.

2. Given that a = i + j and b = 2i - j, find the angle each of the following makes with the positive x axis: a) 2a+b and b) 3b-a

3. A chord of a circle (x+2)^2+(y-9)^2=26 has the equation y=x+5. Find the coordinates of the end points of the chord and hence find the equation of the perpendicular bisector of the chord. Verify that the perpendicular bisector passes through the centre of the circle.

4. The points A(1,5) and B(7,9) are the ends of a diameter of a circle. Show that P(2,4) lies on the circle by showing that AP and BP are perpendicular and using the property that the angle in a semicircle is a right angle.

For question 4 I found that AP and BP are perpendicular after substituting the values of A, P and B (gradients are 1 and -1), however what else do I need to prove?

Thanks so much in advance if you can help :)

January 21st 2006, 08:26 AM
All you need is the converse of "the angle in a semicircle is a right angle", which in this case is something like "Given a circle on diameter AB, and a point P, then if angle APB is a right angle then P is on the circle". Then what you have shown with this property shows that P is on the circle.

January 22nd 2006, 04:56 AM

Hi, thanks guys for your help, however for questions 1 and 2, I have done that method yet I still can't get the right answer (I know what the answers are but I don't know the exact method). :confused:

January 22nd 2006, 05:00 AM

Filling in the correct numbers doesn't give you the answers you expect? What are the answers then?

January 30th 2006, 07:15 AM

Vectors: i and j are unit vectors; i = 1 unit along the x axis, j = 1 unit along the y axis.

1. Given that a = 2i+3j, find the angle it makes with the positive x axis.

This is quite easy... the vector i points in the x axis direction. So all we need to do is to find the dot product of a and i and divide it by |a| (the length of vector a) and |i| (which is 1 :)). It will be:

a \dot i = (2i+3j) \dot i = 2
|a| = sqrt(2^2 + 3^2) = sqrt(13)

This way we have found the cosine of the angle:

cos(alpha) = 2/sqrt(13) -----> alpha = arccos(2/sqrt(13))

2. Given that a = i + j and b = 2i - j, find the angle each makes with the positive x axis: a) 2a+b and b) 3b-a

The way will be similar (first the dot product, then the vectors' lengths, and in the end arccos):

a \dot i = 1
|a| = sqrt(2)
so the angle with the x axis: alpha = arccos(1/sqrt(2)) = pi/4 [rad] (oh, what a surprise!)

b \dot i = 2
|b| = sqrt(5)
so the angle with the x axis: beta = arccos(2/sqrt(5))

a \dot (2a+b) = 2|a|^2 + a \dot b = 4 + 1 = 5
|2a+b| = |4i+1j| = sqrt(17), |a| = sqrt(2)
thus the angle is: gamma = arccos(5/sqrt(34))

b \dot (2a+b) = 2(a \dot b) + |b|^2 = 2+5 = 7
the angle is: delta = arccos(7/sqrt(85))

a \dot (3b-a) = 3 (a \dot b) - |a|^2 = 3 - 2 = 1
|3b-a| = |5i -4j| = sqrt(41)
the angle is: epsilon = arccos(1/sqrt(82))

b \dot (3b-a) = 3 |b|^2 - a \dot b = 15 - 1 = 14
the angle is: zeta = arccos(14/sqrt(205))
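For completeness, here is an editorial worked solution to question 3 (this was not posted in the original thread). Substituting y = x+5 into (x+2)^2 + (y-9)^2 = 26 gives

(x+2)^2 + (x-4)^2 = 26
2x^2 - 4x + 20 = 26
x^2 - 2x - 3 = (x-3)(x+1) = 0,

so x = 3 or x = -1, and the endpoints of the chord are (3, 8) and (-1, 4). Their midpoint is (1, 6); the chord has gradient 1, so the perpendicular bisector has gradient -1 and equation y = -x + 7. The centre of the circle is (-2, 9), and indeed -(-2) + 7 = 9, so the perpendicular bisector passes through the centre.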
{"url":"http://mathhelpforum.com/calculus/1683-need-help-vectors-circle-geometry-print.html","timestamp":"2014-04-17T07:29:51Z","content_type":null,"content_length":"11390","record_id":"<urn:uuid:c8cdeafb-db76-4c72-b932-84a90cdc9e58>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-dev] optimizers module

dmitrey openopt@ukr....
Mon Aug 20 12:40:18 CDT 2007

Matthieu Brucher wrote:
> Hi again ;)
> I have already committed
> ./solvers/optimizers/line_search/qubic_interpolation.py
> tests/test_qubic_interpolation.py
> qubic should be cubic, no ?

yes, I will rename it now

> the problems:
> 1. I have implemented stop tolerance x as self.minStepSize.
> However, isn't it more correct to observe |x_prev - x_new| against the
> xtol given by the user, than to observe |alpha_prev - alpha_new| against
> xtol? If the routine is called from a multi-dimensional NL problem
> with known xtol, provided by the user, I think it's more convenient
> and more correct to observe |x_prev - x_new| instead of
> |alpha_prev - alpha_new| as the stop criterion.
> The basic cubic interpolation works on alpha. If you want to implement
> another based on x, no problem. I think that as a first step, we
> should add standard algorithms that are documented and described.
> After this step is done, we can explore.

yes, but all your solvers are already written in terms of an n-dimensional problem (x0 and direction, both of size nVars), so it would be more natural to use xtol (from the general problem), not alpha_tol (from the line-search subproblem)

> 2. (this is primarily for Matthieu): where should gradtol be taken from?
> It's the main stop criterion, according to the algorithm.
> Currently I just set it to 1e-6.
> It should be taken in the constructor (see the damped_line_search.py
> for instance)

> 3. Don't you think that a maxIter and/or maxFunEvals rule should be added?
> (I ask because I didn't see those in Matthieu's
> quadratic_interpolation solver.)
> That is a good question that was also raised in our discussion for the
> Strong Wolfe Powell rules, at least for maxIter.

As for SWP, I think a check should be made for when a solution with the required c1 and c2 can't be obtained and/or doesn't exist at all. For example objFun(x) = 1e-5*x while c1 = 1e-4 (IIRC this is the example where I encountered alpha = 1e-28 -> f0 = f(x0+alpha*direction)). The SWP-based solver should produce something different than a CPU hangup. OK, it turned out to be impossible to obtain a new_X that satisfies c1 and c2, but an approximation will very often be good enough to continue solving the NL problem involved. So I think a check for |x_prev - x_new| < xtol should be added; it would be very helpful here. You have something like that with alphap in lines 68-70 (swp.py), but this is very unclear and I suspect it may be endless for some problems (as well as the other stop criteria implemented for now in the SWP).
> This is were the limit between the separation principle and the object > orientation is fuzzy. > So the state dictionary is only responsible for what is specifically > connected to the function. Either the parameters, or different > evaluations (hessian, gradient, direction and so on). That's why you > "can't" put gradtol in it (for instance). I'm not know your code very good yet, but why can't you just set default params as I do in /Kernel/BaseProblem.py? And then if user wants to change any specific parameter - he can do it very easy. And no "very long and not readable line" are present in my code. > I saw that you test for the presence of the gradient method, you > should not. If people want to use this line search, they _must_ > provide a gradient. If they can't provide an analytical gradient, they > can provide a numerical one by using > helpers.ForwardFiniteDifferenceDerivatives. This is questionable, I > know, but the simpler the algorithm, the simpler their use, their > reading and debugging (that way, you can get rid of the f_and_df > function as well or at least of the test). I still think the approach is incorrect, user didn't ought to supply gradient, we should calculate it by ourselves if it's absent. At least any known to me optimization software do the trick. As for helpers.ForwardFiniteDifferenceDerivatives, it will take too much time for user to dig into documentation to find the one. Also, as you see my f_and_df is optimized to not recalculate f(x0) while gradient obtaining numerically, like some do, for example approx_fprime in scipy.optimize. For problems with costly funcs and small nVars (1..5) speedup can be significant. Of course, it should be placed in single file for whole "optimizers" package, like I do in my ObjFunRelated.py, not in qubic_interpolation.py. But it would be better would you chose the most appropriate place (and / or informed me where is it). Regards, D. More information about the Scipy-dev mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2007-August/007664.html","timestamp":"2014-04-17T09:43:53Z","content_type":null,"content_length":"8979","record_id":"<urn:uuid:b06cf240-1af3-4534-9012-8804bb9a419c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
KenDoku Dog Legs, sudokuwiki.org

In this example, the first KenDoku in the solver examples, some cages are indicated with their respective combinations. The easiest is the 9x cage, which can only be 1 x 3 x 3. But because this cage spans two boxes we can instantly place the two 3s and the 1. Placing them diagonally apart will mean the 3s don't conflict with each other; the one-number-per-row, -column and -box rule is preserved. The more complicated 12+ cage has five possible combinations. Two of them are highlighted to show they contain duplicates. The last cage indicated shows a 'dog leg' 3-cell cage with 9+. However, since it is entirely contained within a single box no duplicates can exist. Three combinations are possible.

KenDoku Dog Leg examples
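To make the combination counts above concrete, here is a small editorial Python sketch that enumerates cage fills. The grid size of the example puzzle is not stated on this page, so the digit range is a parameter; cages confined to a single box forbid duplicate digits, while cages spanning boxes may repeat one.

from itertools import combinations, combinations_with_replacement
from math import prod

def cage_combos(cells, target, op, max_digit, allow_dups):
    # Enumerate digit combinations of the given cage size whose sum or
    # product equals the cage target.
    gen = combinations_with_replacement if allow_dups else combinations
    agg = sum if op == '+' else prod
    return [c for c in gen(range(1, max_digit + 1), cells) if agg(c) == target]

# A 3-cell 9+ cage inside one box, digits 1-9:
print(cage_combos(3, 9, '+', 9, allow_dups=False))
# -> [(1, 2, 6), (1, 3, 5), (2, 3, 4)]  (the "three combinations" above)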
{"url":"http://www.sudokuwiki.org/Print_KenDoku_Dog_Legs","timestamp":"2014-04-17T01:16:51Z","content_type":null,"content_length":"4227","record_id":"<urn:uuid:b4da9d4c-f47b-44a1-9f71-af5ae0754fd3>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
The Central Limit Theorem made easy

Ok, so yesterday wasn't quite as basic as I planned on shooting for in this week or two of working on non-mathematical concepts. But the idea was too cool to resist. This isn't exactly a mathematically elementary subject either, but the concept can be grasped without needing to see the actual functions involved.

This is a random sample of a large number of women, arranged by height:

There are many people of average height in the world, and a smaller number of very tall and very short people. The more extreme the height, the rarer the people with that height. On the other hand, we could imagine a species in which there was a certain average height and every added inch dropped the probability by a constant amount. The way the heights were distributed wouldn't be a smooth curve (which happens to be called a Gaussian distribution, or a bell curve, or a normal distribution) as it is in the picture, but instead sort of a pyramid. But we never see that.

This isn't confined to biology either. Everything from the frequencies of photons emitted by a laser to the velocity components of a gas molecule do the same thing. That same smooth bell curve happens all throughout the sciences. It's inescapable. Why?

The answer is a mathematical fact called the central limit theorem. In slightly imprecise nonmathematical language it says the following: any time you have a quantity which is bumped around by a large number of random processes, you end up with a bell curve distribution for that quantity. And it really doesn't matter what those random processes are. They themselves don't have to follow the Gaussian distribution. So long as there's lots of them and they're small, the overall effect is Gaussian.

This is of dramatic importance in the sciences, and aside from that it happens to be a good thing to watch for – you'll start seeing it everywhere.

1. #1 ObsessiveMathsFreak February 5, 2009
The central limit theorem is the most underdiscussed aspect of the standard normal distribution and all work that follows on from it. Many will apply the assumption of a normal distribution when absolutely no justification for it exists. Case in point, the stock market and economics. Quants based, and still base, their predictions on the assumption that money trading of all kinds has some kind of bell curve distribution, meaning it's the result of random variables. As the recent global synchronised credit freeze showed, this assumption is not justified by evidence.

2. #2 toto February 5, 2009
"Bell curve" doesn't necessarily mean "Gaussian". There are "bell-curve" distributions that have very different properties from the Gaussian, such as, say, undefined mean and infinite variance. The Cauchy distribution is a famous example. Mandelbrot studied stock price variation back in the 60s (!) and found that the distribution was strongly "fat-tailed", apparently with a diverging variance. Anyone modelling stock price variation as Gaussian in this day and age is an idiot.

3. #3 Kent February 5, 2009
I can visualize the summation of probabilities even more easily when I recall those science museum exhibits of the dropping steel balls through a matrix of pins with bins underneath and a normal curve drawn on the surface of the glass covering the pin matrix. (Think of a pachinko machine.) Very nice, Matt. You've taken a very complicated statistical topic and started it off with a nicely intuitive demonstration.
This is just the sort of short story that might pique the interest of a young student introduced to statistics for the very first time. I must say, though, that from my googling of "central limit theorem" we get into the math deep end quite rapidly. If you had a link to share for further reading in the shallow end of the pool, that would be appreciated. This looks like a good way to introduce statistics.

4. #4 bwv February 5, 2009
For the CLT to hold, the underlying distributions do not have to be Gaussian but they do have to have finite variance. This is a problem for data that exhibits power-law tails, such as securities returns. There is a generalized version for stable distributions (from wiki): The central limit theorem states that the sum of a number of random variables with finite variances will tend to a normal distribution as the number of variables grows. A generalization due to Gnedenko and Kolmogorov states that the sum of a number of random variables with power-law tail distributions decreasing as $1/|x|^{\alpha+1}$ (and therefore having infinite variance) will tend to a stable Levy distribution $f(x;\alpha,0,c,0)$ as the number of variables grows. (Voit 2003 § 5.4.3)

5. #5 Matt Springer February 5, 2009
When writing this I came across a Wikipedia article with a very good illustration of the theorem. It starts with a single random variable with a very wonky-looking distribution, and then calculates the distribution for the sum of two of those random numbers. Then three, etc. The more terms, the more the distribution for their sum looks like a normal distribution.

6. #6 dWj February 5, 2009
One of the points I wanted to make is that the variance of the underlying distributions has to be finite; that has been well covered above. The other point I want to make is that, for a finite number of observations, the tails converge much more slowly than the center does; if you're looking at tens of thousands of samples (with a probability, say, proportional to 1/(x^4+1)), you're liable to find a curve that looks very Gaussian within two or three sigma but still has tails that look like the original distribution (in this case, 1/x^4).

7. #7 killinchy February 5, 2009
Re: some earlier comments... Even the distribution of the speed of gas molecules isn't Gaussian.

8. #8 Eric Lund February 5, 2009
That’s one of the things that happened in 1929: people forced to sell stock to meet margin calls pushed down the value of the stock being sold, causing other people to get margin calls, until essentially everybody who had stocks bought on margin was wiped out (at that time retail investors could have 10:1 leverage in stocks, vs. the 2:1 limit that has been in effect since the 1930s). 9. #9 Paul Johnson February 5, 2009 and as a result the most glorious tool of political science random sampling and measurable error 10. #10 Flathead Phillips February 5, 2009 I’m having trouble understanding that there isn’t a manmade element here. The women in the picture aren’t arranged by height. Arbitrarily-sized bins of women are arranged in height order. Looks like about 14 of them. If the bin size were much larger or much smaller, then the bell curve wouldn’t be as obvious. The photograph seems to illustrate a combination of the CLT and some truism about sampling. 11. #11 Chris P February 6, 2009 While random in nature many distributions will NOT be Gaussian. In engineering many of the distributions are Weibull distributions. These are the foundation fo reliability and wear out calculations. Chris P 12. #12 Brian February 6, 2009 Matt, you might be interested in an article I wrote last night that equates distribution of life on the planet to the partial pressure of gases dissolved in a liquid. I’ve labeled the partial pressure of life (pL) and among other things, show geographically from pole to pole, there appears to be a Gaussian distribution. 13. #13 Comrade PhysioProf February 7, 2009 Dude, if this is the beginning of a series of posts on the mathematical basis of statistics, then “w00t!!!!” 14. #14 JKB2 March 3, 2009 I wholeheartedly disagree with Bell Curve theory being not applicable to markets. What you see is oscillation being deviated over a center line. The market’s run (DOW) from 1k (1983) to 14k (2008) was the peak of accumulation by a large percentage of the population investing for retirement. Now, on the distributive side, divestiture is occurring although at a faster rate in a flight to safety. This is due to “investor IQ” of the thought to put one’s earnings into stocks and real estate. As this “IQ” becomes lessened by the rupture of this bubble, less and less will be inclined to continue. Hence, the Bell Curve shape of monetary amounts invested by population and percentage. (Less people involved, less money 15. #15 Richard De Veaux November 1, 2011 Unfortunately, this interpretation of the Central Limit Theorem is WRONG. The Central Limit Theorem says nothing about individual heights (or weights, or lifetimes). What it says is that AVERAGES of many heights will have a Normal distribution as the sample size gets large. The fact that women’s heights has a Normal distribution is not a consequence of the CLT. The fact that they are “bumped around by large number of random processes” isn’t enough. Lifetimes are bumped around by just as many, but they clearly aren’t Normally distributed.
{"url":"http://scienceblogs.com/builtonfacts/2009/02/05/the-central-limit-theorem-made/","timestamp":"2014-04-19T09:51:48Z","content_type":null,"content_length":"63106","record_id":"<urn:uuid:0bf603fa-93e4-4356-abc4-1a7fb22b116f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Pete's Webpage
• Inez is a constraint solver that implements the ILP Modulo Theories (IMT) scheme, as described in our CAV 2013 paper. An IMT instance is an Integer Linear Programming instance, where some symbols have interpretations in background theories. We have used the IMT approach to solve industrial synthesis and design problems with real-time constraints arising in the development of the Boeing 787. Inez can be used to solve problems involving linear constraints and optimization. Inez is OCaml-centric. The preferred mode of interacting with the solver is via scripts written in a Camlp4-powered superset of OCaml.
• BAT: The Bit-level Analysis Tool: A tool for deciding bounded model checking and k-induction queries for a powerful hardware description language that is strongly typed (with type inference), includes bit vectors, memories, and the standard operations on these data types, allows for user defined functions, functions which return multiple values, etc. BAT has been used to solve problems that cannot be handled by any other decision procedure we have tried. For example, BAT can prove that a 32-bit 5 stage pipelined machine model refines its ISA in approximately 2 minutes. These examples and many more are included.
• ACL2s: The ACL2 Sedan: ACL2s is a powerful theorem proving system based on the ACL2 theorem prover. Our goal is to bring formal reasoning to the masses. To that end, ACL2s features enhancements such as a modern graphical integrated development environment in Eclipse, levels appropriate for beginners through experts, state-of-the-art enhancements such as our recent improvements to termination analysis, etc.
• Bloom filter calculator: The Bloom filter calculator has been used over 20,000 times by a variety of users. It is a Web application that can be used to compute optimal settings, determine false positive rates, and much more.
• 3Spin and X3Spin: Modified versions of the Spin model checker with advances in the efficiency, configurability, and usability of probabilistic explicit-state verification.
• 3Murphi: A modified version of the Murphi verifier with advances in the efficiency, configurability, and usability of probabilistic explicit-state verification.
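The settings such a Bloom filter calculator reports follow from the standard sizing formulas. A sketch of the computation (my own, in Python; the function name and rounding choices are assumptions, not taken from the calculator itself):

    from math import ceil, exp, log

    def bloom_parameters(n, p):
        # Optimal bit count m and hash count k for n items at target
        # false-positive rate p, plus the rate the rounded (m, k) achieves.
        m = ceil(-n * log(p) / log(2) ** 2)
        k = max(1, round(m / n * log(2)))
        achieved = (1 - exp(-k * n / m)) ** k
        return m, k, achieved

    print(bloom_parameters(1_000_000, 0.01))
    # -> roughly 9.6 million bits, 7 hash functions, ~1% achieved rate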
{"url":"http://www.ccs.neu.edu/home/pete/software.html","timestamp":"2014-04-21T07:58:09Z","content_type":null,"content_length":"4883","record_id":"<urn:uuid:55db293a-16f8-4857-9d90-d374c3e2ca9a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
normed space and metric space
What are the differences between a normed space and a metric space? I thought a metric space means a space that comes with a "distance" defined, and likewise a normed space one with a norm defined. But isn't a norm a kind of distance? Any differences?
by Cheng Cosine
Feb/17/2k8 NC
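For reference, the standard answer fits in a few lines (textbook material, not part of the original post). Every norm induces a metric,

\[ d(x, y) = \|x - y\|, \]

but a norm-induced metric is special: it is translation-invariant, $d(x+z, y+z) = d(x, y)$, and homogeneous, $d(\lambda x, \lambda y) = |\lambda|\, d(x, y)$, and a norm only makes sense on a vector space, whereas a metric lives on an arbitrary set. A general metric need satisfy neither property: the discrete metric ($d(x, y) = 1$ whenever $x \neq y$) on a nontrivial vector space comes from no norm, since a norm would force $d(2x, 0) = \|2x\| = 2\|x\| = 2\,d(x, 0) \neq 1$ for $x \neq 0$.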
{"url":"http://sci.tech-archive.net/Archive/sci.math/2008-02/msg02934.html","timestamp":"2014-04-19T01:49:03Z","content_type":null,"content_length":"9688","record_id":"<urn:uuid:d18f1cd1-5379-417a-97ca-75b03b3481ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Moving data between R and Excel via the clipboard
These notes explain how to move data between R and Excel and other Windows applications via the clipboard.
R has a function writeClipboard that does what the name implies. However, the argument to writeClipboard may need to be cast to a character type. For example the code
> x <- "hello world"
> writeClipboard(x)
copies the string "hello world" to the clipboard as expected. However the code
> x <- 3.14
> writeClipboard(x)
produces the error message
Error in writeClipboard(str, format) : argument must be a character vector or a raw vector
The solution is to call writeClipboard( as.character(x) ), casting the object x to a character string.
All variables in R are vectors, but elements of a vector cannot have differing types. If one element of a vector is a character string, all elements will be cast to strings without the need for an explicit as.character statement. After a vector has been copied to the clipboard, the elements of the vector will be separated by newlines when pasted into a document.
The companion function for writeClipboard is readClipboard. The command x <- readClipboard() will assign the contents of the clipboard to the vector x. Each line becomes an element of x. The elements will be character strings, even if the clipboard contained a column of numbers before the readClipboard command was executed. If you select a block of numbers from Excel, each row becomes a single string containing tabs where there were originally cell boundaries.
You can use the scan function to copy a column of numbers from Excel to R. Copy the column from Excel, run x <- scan(), type Ctrl-v to paste into R, and press enter to signal the end of input to scan. Then x will contain the numbers from Excel as numbers, not as quoted strings. Note that scan only works with columns of numbers. R will produce an error message if the copied column contained a string. If there is an empty cell, only the numbers above the first empty cell will be copied into the R vector.
Note that scan works with columns in Excel. If you copy a row of numbers from Excel and call scan, the numbers will be concatenated into a single number in R. For example, if you copy horizontally adjacent cells containing 19 and 44 and run x <- scan(), then x will contain 1944. To copy a row from Excel, first transpose the row in Excel, then copy the result as a column.
The function scan() is not limited to Excel. It could be used to paste a column of numbers copied from other applications, such as Word or Notepad.
read.table and write.table
The functions above only work with columns of data; rows are combined into single entries. To move a block of cells from R to Excel, use write.table. The code
write.table(x, "clipboard", sep="\t")
will copy a table x to the clipboard in such a way that it can be pasted into Excel preserving the table structure. By default, the row and column names will come along with the table contents. To leave the row names behind, add the argument row.names=FALSE to the call to write.table.
write.table(x, "clipboard", sep="\t", row.names=FALSE)
Similarly, add col.names=FALSE if you do not want the column names to come over to Excel.
write.table(x, "clipboard", sep="\t", row.names=FALSE, col.names=FALSE)
Other R resources:
The R Project
R for programmers
Five kinds of subscripts in R
{"url":"http://www.johndcook.com/r_excel_clipboard.html","timestamp":"2014-04-16T18:57:24Z","content_type":null,"content_length":"6592","record_id":"<urn:uuid:8cff3d6d-c11f-4621-a7ef-da78754532bf>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
A Neural Network for Common Due Date Job Scheduling Problem on Parallel Unrelated Machines
Hamad, Abdelaziz and Sanugi, Bahrom and Salleh, Shaharuddin (2001) A Neural Network for Common Due Date Job Scheduling Problem on Parallel Unrelated Machines. Matematika, 17 (2). pp. 63-70. ISSN
Official URL: http://www.fs.utm.my/matematika/content/view/58/31...
This paper presents an approach for scheduling under a common due date on parallel unrelated machines, based on an artificial neural network. The objective is to allocate and sequence the jobs on the machines so that the total cost is minimized. This cost is composed of the total earliness and the total tardiness costs. A neural network is a suitable model in our study due to the fact that the problem is NP-hard. In our study, the neural network has proven to be effective and robust in generating near-optimal solutions to the problem.
Item Type: Article
Uncontrolled Keywords: Neural networks, unrelated machines, scheduling
Subjects: Q Science > QA Mathematics
Divisions: Science
ID Code: 1754
Deposited By: Dr Faridah Mustapha
Deposited On: 14 Mar 2007 07:48
Last Modified: 13 Aug 2010 02:33
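In symbols (my notation; the abstract itself does not fix weights): with common due date $d$ and completion time $C_j$ of job $j$ on its assigned machine,

\[ E_j = \max(0,\, d - C_j), \qquad T_j = \max(0,\, C_j - d), \]

and the objective is to minimize $\sum_j (E_j + T_j)$, or $\sum_j (\alpha_j E_j + \beta_j T_j)$ if earliness and tardiness are penalized unequally.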
{"url":"http://eprints.utm.my/1754/","timestamp":"2014-04-20T00:42:29Z","content_type":null,"content_length":"18581","record_id":"<urn:uuid:e52aec84-19e4-4914-91bf-a0a964fabbd0>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
[Fwd: On Pi and E]
Dear All,
You will find attached some comments of Iain Strachan regarding the discoveries of 'pi' and 'e' in the biblical text. As you may remember, Iain is a computer scientist with a specialised interest in biologically inspired algorithms, principally neural networks and genetic algorithms.
attached mail follows:
Dear Vernon,
You might like to forward the following comments of mine to the ASA group. I no longer subscribe to the ASA mailing list because I don't really have the time to keep up with a high-volume list like this, as I am already involved with another such list. However, Vernon kept me up to date on the exchanges with the group on this issue, and I would like to add the following comments.
A mixture of angry dismissal and scorn seems to have followed this post, rather than a considered opinion of what has been presented. I think that even if you reject Vernon's premise (that it indicates some form of Divine Design on the Biblical text), the numerical features are sufficiently striking at least to admit that they seem to be a scientific phenomenon worth investigating.
I will state here that when I first heard of the approximation to Pi in Genesis 1:1, I was a bit skeptical to start with. Initially, it was just a formula taking the letter product divided by the word product, and it gave a figure for pi/4 multiplied by an arbitrary power of 10. At that point, I reckoned that it was no more than "mildly interesting", as someone else put it. Then it became clear that the number of letters (28) was 4 times the number of words, so the correction to the formula (Num letters * letter product) / (Num words * word product) led to an approximation for Pi, times an arbitrary power of 10. This still wasn't quite enough to convince me that this was a genuine happening and not a fluke of coincidence; the formula seems an arbitrary one plucked out of thin air, and difficult to justify unless a different verse could be made to show a similar feature with exactly the same formula, which would confirm independently that the formula was in some sense significant.
It was when someone else plugged the numerical values of John 1:1 into precisely the same formula to arrive at a similarly accurate approximation for e (multiplied by a power of 10) that I was finally convinced that this merited further attention. It makes coincidence an extremely long shot, as the formula was not tweaked or altered in any way to produce the "e" result. It also validates the letter count/word count correction factor. If the original formula had been applied to John 1:1, there would have been a very accurate approximation to 17e/52, which would hardly have jumped out at anyone.
It is difficult to come up with a rational explanation for the above, barring coincidence, and there are long odds against this, given the independent occurrences of the formula in two verses so clearly linked. Furthermore, one can rule out deliberate contrivance by the author of John's Gospel; the number "e" was not defined by mathematicians till the 18th Century AD.
Other observations I have made subsequently involve evaluating "the formula" on every verse in the Torah (some 5000 verses - with a computer program). I can confirm that the formula effectively computes a random variable. The fractional part of the base-10 logarithm of the function gives a uniform distribution in the range 0 to 1, as one would expect. Genesis 1:1 is the verse that is closest to pi, differing by 10^-5.
If one selects another arbitrary constant (say the square root of two), then one normally finds that the closest verse is around 10^-4 distant, which is in accord with what one would expect with 5000 data samples. The second closest verse to pi has a difference of 10^-4; so Gen 1:1 is an order of magnitude closer to pi than it.
Leaving aside what it means, I would have thought that as scientists the above indicates strongly that a peculiar phenomenon is taking place that should not simply be dismissed because we don't like the look of it, or because it challenges our notions of what Scripture ought to be.
What is beyond doubt is that these two verses are pivotal to our faith. Gen 1:1, whatever our approach to Biblical hermeneutics, establishes God as the Creator; John 1:1 establishes the Word as equivalent to God, and later in the chapter shows the Word becoming Flesh. Together the two verses therefore assert the Deity of Christ. Is it therefore unreasonable to speculate that the numerical links between these two verses tend to harmonise with their textual links, and that this is for some purpose?
Iain Strachan.
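For concreteness, the statistic itself is mechanical to compute once letter values are fixed. A sketch (my own, in Python): the values below are the standard Hebrew gematria of Genesis 1:1 as best I can reconstruct them, so treat both them and the expected output as illustrative rather than authoritative.

    from math import prod

    def strachan_statistic(words):
        # (letter count * product of letter values) divided by
        # (word count * product of word values), where a word's value
        # is the sum of its letter values -- the formula described above.
        letters = [v for w in words for v in w]
        return (len(letters) * prod(letters)) / (len(words) * prod(sum(w) for w in words))

    # Genesis 1:1 -- 7 words, 28 letters (values reconstructed from memory).
    verse = [[2, 200, 1, 300, 10, 400], [2, 200, 1], [1, 30, 5, 10, 40],
             [1, 400], [5, 300, 40, 10, 40], [6, 1, 400], [5, 1, 200, 90]]
    print(strachan_statistic(verse))   # the email's claim: ~3.1416e17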
{"url":"http://www2.asa3.org/archive/asa/200106/0245.html","timestamp":"2014-04-19T07:29:40Z","content_type":null,"content_length":"7321","record_id":"<urn:uuid:7982b6b5-df73-4986-a0b9-6bc19de87e63>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Taalman and Rosenhouse on Sudoku
Jason Rosenhouse and Laura Taalman have written an excellent book, Taking Sudoku Seriously. Despite the title, this is not primarily a puzzle book; rather it uses Sudoku, and the idea that mathematicians are essentially puzzle solvers, as a gateway into explaining to the general reader what it is that mathematicians do. The book actually contains very few standard Sudoku puzzles but a large number of variations on the standard theme. This is more interesting than standard Sudoku, I find; once you learn how to do standard Sudokus you essentially are just executing an algorithm. (Don't borrow this book from a library, as I did, if you intend to do the puzzles, unless you're okay with hogging a copy machine for a while; also, even then, some puzzles in this book involve color in a meaningful way.)
Some Sudoku-related facts from this book that I found interesting:
How many filled-in Sudoku squares are there? The authors begin by carefully working through counting "Shidoku" puzzles; this is like Sudoku, but 4-by-4 instead of 9-by-9. The method is essentially to fill in the upper left 2-by-2 square; then finish the first row and first column. If we start with [figure: a Shidoku grid with the upper-left block, first row, and first column filled in] then there are three ways to complete this board. Each of these represents 96 possible Shidokus, obtained by switching the third and fourth columns, or not; switching the third and fourth rows, or not; and permuting the numbers 1, 2, 3, 4. So there are 288 possible Shidokus. But many of these are equivalent; it turns out that there are only two essentially different Shidoku puzzles: [figure: the two representative filled Shidoku boards] and all others can be reached from these by various symmetries, including relabeling the digits, permutation of certain rows, columns, or blocks, and rotations or reflections. These symmetries form the Shidoku group; unfortunately the orbits of its action on the set of Shidokus are of different sizes! (There are 96 Shidoku puzzles equivalent to the first one above, and 192 equivalent to the second, for a total of 288.)
Determining the number of Sudoku puzzles is harder. There are 948,109,639,680 ways to fill in the first three rows of a Sudoku, satisfying the Sudoku constraints. (See the Wikipedia article on the mathematics of Sudoku.) This actually gives a nice heuristic argument for the total number of ways to fill in a Sudoku. There are $(9!)^3 \approx 4.78 \times 10^{16}$ ways to fill in the first three rows of a Sudoku if we just force each row to have nine different numbers and ignore the constraint on the blocks. So if we fill in the first three rows by filling each row with a permutation chosen uniformly at random, then the probability of satisfying the block constraint is $948109639680/(9!)^3 \approx 1.98 \times 10^{-5}$; call this number $p$. Now imagine filling all nine rows of a Sudoku with permutations chosen uniformly at random; there are $(9!)^9$ ways to do this. But now the probability that the top three blocks satisfy the block constraint is $p$; the same is true for each of the three rows of blocks and each of the three columns. If we assume these constraints are independent, which they're not, then we estimate that the total number of 9-by-9 squares with each row containing nine different numbers, and each set of three blocks forming a row or column being valid in Sudoku, is $(9!)^9 p^6$. But these are just filled Sudokus, and that number is about $6.6 \times 10^{21}$.
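The Shidoku count is small enough to verify by brute force. This sketch (my own, in Python; the book does the count by hand) enumerates all 4-by-4 grids satisfying the row, column, and block constraints and prints 288:

    def shidoku_ok(grid, r, c, v):
        # Reject v if it already appears in row r, column c, or the 2x2 box.
        for i in range(4):
            if grid[r][i] == v or grid[i][c] == v:
                return False
        br, bc = 2 * (r // 2), 2 * (c // 2)
        return all(grid[i][j] != v
                   for i in range(br, br + 2) for j in range(bc, bc + 2))

    def count_shidokus(grid=None, cell=0):
        if grid is None:
            grid = [[0] * 4 for _ in range(4)]
        if cell == 16:
            return 1
        r, c = divmod(cell, 4)
        total = 0
        for v in range(1, 5):
            if shidoku_ok(grid, r, c, v):
                grid[r][c] = v
                total += count_shidokus(grid, cell + 1)
                grid[r][c] = 0
        return total

    print(count_shidokus())   # 288

And the heuristic in the last paragraph is a two-line exact computation (again my own sketch):

    from fractions import Fraction
    from math import factorial

    p = Fraction(948109639680, factorial(9) ** 3)   # ~1.98e-05
    estimate = factorial(9) ** 9 * p ** 6
    print(float(p), float(estimate))                # estimate ~6.66e+21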
Perhaps coincidentally, but perhaps not, the actual number of Sudokus is about $6.7 \times 10^{21}$, as shown by Felgenhauer and Jarvis; the heuristic is due to Kevin Kilfoil.
How are sudoku puzzles constructed? One way to do it is to set a "mask" — a set of squares within the puzzle which will be filled in — and then populate randomly and count the number of solutions. Then make random changes to the numbers filling in this mask, generally but not always reducing the number of solutions, until the puzzle has a unique solution. (The authors state that something like this was used to form the puzzles in the book by Riley and Taalman, No-Frills Sudoku.)
Sudoku puzzles can be encoded in terms of graph colorings. Each square in Sudoku becomes a node, and we draw an edge between two vertices that can't be filled with the same number; then the question is to extend a partial coloring of the Sudoku graph to a full coloring. But they can also be encoded in terms of polynomials. For these purposes we'll play Sudoku with the numbers -2, -1, 1, 2, 3, 4, 5, 6, 7. Let each cell of the Sudoku be represented by a variable. Then for each cell we have an equation $(w+2)(w+1)(w-1)(w-2)(w-3)(w-4)(w-5)(w-6)(w-7) = 0$ encoding the fact that it contains one of these nine numbers, and for each region of nine cells (row, column, or block) we have two equations $x_1 + x_2 + \cdots + x_9 = 25$ and $x_1 x_2 \cdots x_9 = 10080$ encoding the fact that each of the nine numbers is contained in the region exactly once. (Why not use the usual numbers, 1 through 9? Because there is another set of nine integers that has sum 45 and product 362880; can you find it? A short search for it appears after the comments below.)
I'll close with an extended quote that I think bears repeating (pp. 115-116):
Your authors have been teaching mathematics for quite some time now, and it has been our persistent experience that this lesson [that hard puzzles are more interesting to solve than easy puzzles], obvious when pondering Sudoku puzzles, seems to elude students struggling for the first time with calculus or linear algebra. When presented with a problem requiring only a mechanical, algorithmic solution, the average student will dutifully carry out the necessary steps with grim determination. Present instead a problem requiring an element of imagination or one where it is unclear how to proceed, and you must brace yourself for the inevitable wailing and rending of garments. This reaction is not hard to understand. In a classroom there are unpleasant things like grades and examinations to consider, not to mention the annoying realities of courses taken only to fulfill a requirement. Students have more on their minds than the sheer joy of problem solving. They welcome the mechanical problems precisely because it is clear what is expected of them. A period of frustrated confusion can be amusing when working on a puzzle, because there is no price to be paid for failing to solve it. The same experience in a math class carries with it fears of poor grades.
Finally, if you don't have time to spare to read the whole book, look at Taalman's article Taking Sudoku Seriously. If you like your math in podcast form, listen to Sol Lederman's interview with Rosenhouse and Taalman. If you could care less about math but want books of Sudoku: No Frills Sudoku (standard Sudoku, but starting with an astonishingly low 18 clues each), Naked Sudoku (Sudoku variants with no starting numbers), Color Sudoku (more graphically interesting).
4 Comments
1. Thanks for the link to the podcast.
◦ You're welcome!
2.
Nice summary. Unfortunately, some of your latex seems to have leaked or otherwise not compiled. 3. [...] God Plays Dice, Michael Lugo reviews Taking Sudoku [...]
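Returning to the puzzle posed above: restricting to positive integers, which I take to be the intended reading, a short pruned search settles it without giving the answer away. The sketch (mine, in Python) prints every nondecreasing 9-tuple of positive integers with sum 45 and product 362880, which includes (1, 2, ..., 9) and whatever alternative the authors have in mind:

    TARGET_SUM, TARGET_PROD = 45, 362880

    def search(start=1, left=9, s=0, p=1, chosen=(), out=None):
        # Depth-first over nondecreasing tuples, pruning on both targets.
        if out is None:
            out = []
        if left == 0:
            if s == TARGET_SUM and p == TARGET_PROD:
                out.append(chosen)
            return out
        v = start
        while s + left * v <= TARGET_SUM:   # remaining entries are all >= v
            if p * v <= TARGET_PROD and TARGET_PROD % (p * v) == 0:
                search(v, left - 1, s + v, p * v, chosen + (v,), out)
            v += 1
        return out

    print(search())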
{"url":"http://gottwurfelt.com/2012/03/13/taalman-and-rosenhouse-on-sudoku/","timestamp":"2014-04-20T18:22:39Z","content_type":null,"content_length":"59031","record_id":"<urn:uuid:2b23dba5-4ec1-4ff9-a840-8c153cfa8496>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
In a previous post I introduced the following game: Suppose you play the following game: Someone holds a set of cards with the numbers {1,2,…,N} in random order, opens up the first card and asks if the next card is greater or smaller. Every time you predict correctly, you get one point, while every wrong …
[Project Euler] – Problem 57
It is possible to show that the square root of two can be expressed as an infinite continued fraction. √2 = 1 + 1/(2 + 1/(2 + 1/(2 + … ))) = 1.414213… By expanding this for the first four iterations, we get: …
Machine Learning Ex3 – Multivariate Linear Regression
Part 1. Finding alpha. The first question to resolve in Exercise 3 is to pick a good learning rate alpha. This requires making an initial selection, running gradient descent and observing the cost function. …
clusterProfiler in Bioconductor 2.8
In recent years, high-throughput experimental techniques such as microarray and mass spectrometry can identify many lists of genes and gene products. The most widely used strategy for high-throughput data analysis is to identify different gene clusters based on their expression profiles. Another commonly used approach is to annotate these genes to biological knowledge, such as Gene Ontology (GO) and …
Machine Learning Ex2 – Linear Regression
Thanks to this post, I found OpenClassroom. In addition, thanks to Andrew Ng and his lectures, I took my first course in machine learning. These videos are quite easy to follow. Exercise 2 requires implementing the gradient descent algorithm to model data with linear regression. …
The easiest way to get UTR sequences
I just figured out the way to query UTR sequences from Ensembl with the BioMart tool. It is very simple compared with using BioPerl to parse gbk files to extract UTR sequences. …
Estimate Probability and Quantile
Simple root-finding and one-dimensional integration algorithms were implemented in previous posts. These algorithms can be used to estimate cumulative probabilities and quantiles. Here, take the normal distribution as an example. …
Single variable optimization
Optimization means seeking the minima or maxima of a function within a given domain. Where a function reaches a maximum or minimum, its derivative approaches 0. If we apply the Newton-Raphson root-finding method to f′, we can locate the optimum of f.
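The last summary compresses the whole method into one line: run Newton's root-finding iteration on f′. A minimal transcription (mine, in Python, while the posts themselves work in R; the function names here are placeholders):

    def newton_optimize(fprime, fsecond, x0, tol=1e-10, max_iter=100):
        # Newton-Raphson applied to f' to locate a stationary point of f.
        x = x0
        for _ in range(max_iter):
            step = fprime(x) / fsecond(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("did not converge")

    # Example: f(x) = x**2 - 2*x has f'(x) = 2*x - 2 and f''(x) = 2,
    # so the iteration should land on the minimizer x = 1.
    print(newton_optimize(lambda x: 2 * x - 2, lambda x: 2.0, x0=5.0))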
{"url":"http://www.r-bloggers.com/tag/english/page/2/","timestamp":"2014-04-17T09:56:24Z","content_type":null,"content_length":"36010","record_id":"<urn:uuid:325337f0-2aeb-4768-94db-36ae7300d561>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Determining the Number of Non-Spurious Arcs in a Learned DAG Model: Investigation of a Bayesian and a Frequentist Approach Determining the Number of Non-Spurious Arcs in a Learned DAG Model: Investigation of a Bayesian and a Frequentist Approach Jennifer Listgarten and David Heckerman, Microsoft Research To appear in Proceedings of Twenty Third Conference on Uncertainty in Artificial Intelligence, Vancouver, Canada, UAI Press, July 2007. (8 page pdf) Abstract: In many application domains, such as computational biology, the goal of graphical model structure learning is to uncover discrete relationships between entities. For example, in our problem of interest concerning HIV vaccine design, we want to infer which HIV peptides interact with which immune system molecules (HLA molecules). For problems of this nature, we are interested in determining the number of non-spurious arcs in a learned graphical model. We describe both a Bayesian and frequentist approach to this problem. In the Bayesian approach, we use the posterior distribution over model structures to compute the expected number of true arcs in a learned model. In the frequentist approach, we develop a method based on the concept of the False Discovery Rate. On synthetic data sets generated from models similar to the ones learned, we find that both the Bayesian and frequentist approaches yield accurate estimates of the number of non-spurious arcs. In addition, we speculate that the frequentist approach, which is non-parametric, may outperform the parametric Bayesian approach in situations where the models learned are less representative of the data. Finally, we apply the frequentist approach to our problem of HIV vaccine design.
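The Bayesian estimator the abstract describes can be written down directly (my formalization of the sentence above, not an equation quoted from the paper): with posterior $P(G \mid D)$ over structures and learned graph $\hat G$,

\[ \mathbb{E}\big[\#\,\text{non-spurious arcs in } \hat G \mid D\big] \;=\; \sum_{e \in \hat G} P(e \mid D), \qquad P(e \mid D) \;=\; \sum_{G \,:\, e \in G} P(G \mid D), \]

i.e., each arc of the learned model contributes its posterior probability of being present.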
{"url":"http://www.cs.toronto.edu/~jenn/UAI2007NumberArcs.html","timestamp":"2014-04-21T07:27:24Z","content_type":null,"content_length":"4366","record_id":"<urn:uuid:b246ca35-1773-4254-bd47-08e67f759dd6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
improper pointer/ integer combination?
11-15-2005 #1
Registered User
Join Date Nov 2005
improper pointer/ integer combination?
I seem to get a line 19: warning: improper pointer/integer combination: arg #1, when I compile this. I don't know why.

#define ARRAY 500

int getRandA(int min, int max);
void exchangeSmallest(int list[], int first, int last);
void selectionSort(int list[], int last);

int main(void)
{
    int i;
    int j;
    int first = 1;
    int last = 500;
    int list;

    for (i = 0; i < ARRAY; i++)
    {
        j = getRandA(1, 1000);
    }
    selectionSort(j, last);
}

int getRandA(int min, int max)
{
    static int I = 0;
    int rn;

    if (I == 0)
    {
        srand(time(NULL));
        I = 1;
    }
    rn = (rand() % (max - min + 1) + min);
    return rn;
}

void selectionSort(int list[], int last)
{
    int current;

    for (current = 0; current < last; current++)
    {
        exchangeSmallest(list, current, last);
    }
    return;
}

void exchangeSmallest(int list[], int current, int last)
{
    int walker;
    int smallest;
    int tempData;

    smallest = current;
    for (walker = current + 1; walker <= last; walker++)
    {
        if (list[walker] < list[smallest])
            smallest = walker;
    }
    tempData = list[current];
    list[current] = list[smallest];
    list[smallest] = tempData;
    return;
}

11-15-2005 #2
Registered User
Join Date Dec 2003
Try changing this

void selectionSort(int list[], int last);

to

void selectionSort(int list, int last);

Or change the actual parameter, whichever it was you were meaning to do. I'm assuming it's the actual parameter you want to change. Obviously then you need to declare j as a pointer.
Last edited by kalium; 11-15-2005 at 10:37 PM.

11-15-2005 #3
There are other warnings also....u also need to include time.h...but are u sure what u r doing
"Service of the poor and destitutes is the service of the God"
Normative Changes to ISO/IEC 9899:1990 in Technical Corrigendum 1
Incompatibilities Between ISO C and ISO C++

11-16-2005 #4
You don't need return; at the end of every void function. And you also need <time.h> (as mentioned) because of your use of time().
Seek and ye shall find. quaere et invenies.
"Simplicity does not precede complexity, but follows it." -- Alan Perlis
"Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra
"The only real mistake is the one from which we learn nothing." -- John Powell
Other boards: DaniWeb, TPS
Unofficial Wiki FAQ: cpwiki.sf.net
My website: http://dwks.theprogrammingsite.com/
Projects: codeform, xuni, atlantis, nort, etc.
Don't cross-post: Error..

11-16-2005 #5
What everyone's saying is that selectionSort() takes a pointer (or array), and you're giving it an int:

void selectionSort(int list[], int last);

int main(void)
{
    int i;
    int j;
    int first = 1;
    int last = 500;
    int list;

    for (i = 0; i < ARRAY; i++)
    {
        j = getRandA(1, 1000);
    }
    selectionSort(j, last);
}

So, either make selectionSort take an int argument, or pass it an array (or pointer).
{"url":"http://cboard.cprogramming.com/c-programming/72290-improper-pointer-integer-combination.html","timestamp":"2014-04-20T16:40:33Z","content_type":null,"content_length":"59627","record_id":"<urn:uuid:286fc867-83e7-4eeb-abcb-04dae1586c2e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Interpolation FP1 Formula

Re: Linear Interpolation FP1 Formula
What is the type that they never go for? The needy, drag-up guy who does their bidding?

Re: Linear Interpolation FP1 Formula
Not exactly, the guy everybody says is "a nice guy."
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Linear Interpolation FP1 Formula
Oh yes, that guy... I think that is what a lot of girls think about me.

Re: Linear Interpolation FP1 Formula
Is it true? That is more important than what they think.

Re: Linear Interpolation FP1 Formula
Yeah, I think so...

Re: Linear Interpolation FP1 Formula
Sorry, I am having connection problems. So what are you going to do about it?

Re: Linear Interpolation FP1 Formula
I could do some things to naturally be more aggressive, e.g. smile less often or say more direct things/be more argumentative. But another thing is that, maybe I e-mail them too much. For instance, part of me wants to e-mail adriana again instead of waiting for her to reply to my old e-mail from 4 days ago, and another part of me is just saying to wait, since the ball is in her court.

Re: Linear Interpolation FP1 Formula
The emailing is not the point. Whether you do or you do not again is not the point. What that post up there says is that "nice guys finish last."

Re: Linear Interpolation FP1 Formula
So what part of what I am doing is making me a nice guy?

Re: Linear Interpolation FP1 Formula
Beats me, I do not know enough about how British guys treat their women. What is considered the norm over there I do not know. I know that if you did not fit into one of the nine points in the earlier posts here in America you did not get any action.

Re: Linear Interpolation FP1 Formula
Hmm, I don't fit into one of those nine categories yet...

Super Member
Re: Linear Interpolation FP1 Formula
shouldnt this be in another cat.egory

Re: Linear Interpolation FP1 Formula
shouldnt this be in another cat.egory
We also discuss mathematics in here too.
Hmm, I don't fit into one of those nine categories yet...
Get into one of them fast! Use the techniques you have learned.

Re: Linear Interpolation FP1 Formula
bobbym wrote: 5) You can end up a greasy guy that knows how to talk and they will want you. This might be doable but I'm not sure.
adriana reacted positively to it, she ended up telling me every intimate detail about herself.

Re: Linear Interpolation FP1 Formula
Yes, it is the one I was going to suggest and it is the secret of my charm. But it creates a paradox.

Re: Linear Interpolation FP1 Formula
Yes, if you pretend you are #5 then you really are!

Re: Linear Interpolation FP1 Formula
I can do it but it really depends on my mood, sometimes I just don't feel like doing it, even if there is a girl waiting to be groomed or 'greased'. Unfortunately this has led to some potential missed opportunities.

Re: Linear Interpolation FP1 Formula
In the modern world it is not good to be introspective. Personally it is but you will be viewed with suspicion and mistrust. People are taught to make snap judgements about almost everything. They are constantly in a hurry. Also they can not concentrate longer than about 10 minutes on anything. That is why an interviewer thinks he can determine an applicant in about 8 minutes and a girl sizes you up in about 5 minutes. I do not agree with any of that but it is the world we live in and it will not change. Therefore you must adapt to it and even take advantage of their shallow thinking. That is why the first meeting with the girl is the most important!

Re: Linear Interpolation FP1 Formula
I think the first meeting is definitely important -- a couple of times I have been told that and it can really swerve their opinion of you, from my perspective. That girl from UCL talked about me and said that I was hot and wanted to get into contact with me, just from a first impression -- and it was what I did to make her think that of me that made her want to contact me so much in the first place. With adriana she found my performance at one of the maths Saturday classes impressive (I used a Fourier series to solve one of the problems, they found it cool), which was why she started to contact me a bit (and of course once the conversations got more intimate it drew her interest a bit more). That may not have happened if I didn't go up to adriana in person on the day and start talking to her. The power of being approached in that manner is what seems attractive to them.

Re: Linear Interpolation FP1 Formula
Like homework guy you will adapt.

Re: Linear Interpolation FP1 Formula
Slowly, but surely... I hope. Nothing from adriana yet, it has been 5 days. With Holly, we're still talking pretty much on a daily basis, no greasy talk or intimate details yet.
We are however talking about a teacher at my school who flirts with students and there's a rumour that she slept with one of them (hardly a rumour to be honest, there are texts on the student's phone that would strongly suggest it).

Re: Linear Interpolation FP1 Formula
That does happen but if she is caught she will be canned. Over here they can bring charges against her.

Re: Linear Interpolation FP1 Formula
It is the same over here. Another teacher that did that at my school before was fired, the boy was 15 or 16. Over here the age of consent is 16 though so there are no paedophilia charges if the current teacher was caught (my friend is 18), but she would definitely be fired.

Re: Linear Interpolation FP1 Formula
16 over here too. Because he is 18, she would just be fired because he is not a minor.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=257928","timestamp":"2014-04-16T16:06:27Z","content_type":null,"content_length":"36917","record_id":"<urn:uuid:cd215cae-571f-43c2-9726-0c13efb422c9>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
This is a typed-up copy of my lecture notes from the combinatorics seminar at KTH, 2010-09-01. This is not a perfect copy of what was said at the seminar, rather a starting point from which the talk was built. At some points, I've tried to fill in the most sketchy and un-articulated parts with some facsimile of what I ended up actually saying.
Combinatorial species started out as a theory to deal with enumerative combinatorics, by providing a toolset & calculus for formal power series (see Bergeron-Labelle-Leroux and Joyal). As it turns out, not only is the theory of species useful for manipulating generating functions, but it does this with a categorical approach that may be transplanted into other areas.
For the benefit of the entire audience, I shall introduce some definitions.
Definition: A category C is a collection of objects and arrows, with each arrow assigned a source and a target object, such that arrows with matching target and source can be composed, composition is associative, and every object has an identity arrow acting neutrally under composition.
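As a taste of the power-series calculus mentioned above (standard material from Bergeron-Labelle-Leroux, stated here for orientation): to a species F one attaches the exponential generating function

\[ F(x) \;=\; \sum_{n \ge 0} |F[n]|\, \frac{x^n}{n!}, \]

where $|F[n]|$ counts the F-structures on an n-element set. Sums, products and compositions of species then map to sums, products and compositions of power series; for instance, composing the species of sets with the species of nonempty sets counts set partitions and yields the Bell-number series $e^{e^x - 1}$.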
{"url":"http://blog.mikael.johanssons.org/archive/category/mathematics/algebra/operads-and-props/","timestamp":"2014-04-21T13:42:28Z","content_type":null,"content_length":"65355","record_id":"<urn:uuid:f21bb190-b29f-47c1-8225-057f354ade96>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Josiah Willard Gibbs
Josiah Willard Gibbs (February 11 1839 – April 28 1903) was an American theoretical physicist, chemist and mathematician. One of the greatest American scientists of all time, he devised much of the theoretical foundation for chemical thermodynamics, as well as physical chemistry and statistical mechanics.
From his writings
• One of the principal objects of theoretical research is to find the point of view from which the subject appears in the greatest simplicity.
◦ From Gibbs's letter accepting the Rumford Medal (1881). Quoted in A. L. Mackay, Dictionary of Scientific Quotations (London, 1994).
• His true monument lies not on the shelves of libraries, but in the thoughts of men, and in the history of more than one science.
◦ From Gibbs's obituary for Rudolf Clausius (1889). See The Collected Works of J. Willard Gibbs, vol. 2, (New York: Longmans, Green and Co., 1928), p. 267. Complete volume
• In all these papers we see a love of honest work, an aversion to shams, a caution in the enunciation of conclusions, a distrust of rash generalizations and speculations based on uncertain premises. He was never anxious to add one more guess on doubtful matters in the hope of hitting the truth, or what might pass as such for a time, but was always ready to take infinite pains in the most careful testing of every theory. With these qualities was united a modesty which forbade the pushing of his own claims and desired no reputation except the unsought tribute of competent judges.
• Mathematics is a language.
◦ At a Yale faculty meeting, during a discussion of language requirements in the undergraduate curriculum. Quoted in M. Rukeyser, Willard Gibbs, (Garden City, NY: Doubleday, Doran & Co., 1942), p. 280.
• The whole is simpler than its parts.
◦ Quoted by I. Fisher in "The Applications of Mathematics to the Social Sciences," Bulletin of the American Mathematical Society 36, 225-243 (1930). Full article
• Anyone having these desires will make these researches.
◦ About his own scientific work. Quoted in M. Rukeyser, Willard Gibbs, (Garden City, NY: Doubleday, Doran & Co., 1942), p. 431.
• I wish to know systems.
◦ Quoted in M. Rukeyser, Willard Gibbs (Garden City, NY: Doubleday, Doran & Co., 1942), p. 4.
• A mathematician may say anything he pleases, but a physicist must be at least partially sane.
◦ Quoted in R. B. Lindsay, "On the Relation of Mathematics and Physics," Scientific Monthly 59, 456 (Dec. 1944)
• If I have had any success in mathematical physics, it is, I think, because I have been able to dodge mathematical difficulties.
◦ Quoted by C. S. Hastings in "Biographical Memoir of Josiah Willard Gibbs 1839-1903," National Academy of Sciences Biographical Memoirs, vol. VI, (Washington, D.C.: National Academy of Sciences, 1909), p. 390. Complete memoir
{"url":"http://en.m.wikiquote.org/wiki/Josiah_Willard_Gibbs","timestamp":"2014-04-17T12:34:25Z","content_type":null,"content_length":"26351","record_id":"<urn:uuid:b229c2c0-aadd-4c4e-ae3e-b58e1ced6978>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 57

- Psychological Bulletin, 1996. Cited by 321 (3 self).
Distinctions have been proposed between systems of reasoning for centuries. This article distills properties shared by many of these distinctions and characterizes the resulting systems in light of recent findings and theoretical developments. One system is associative because its computations reflect similarity structure and relations of temporal contiguity. The other is "rule based" because it operates on symbolic structures that have logical content and variables and because its computations have the properties that are normally assigned to rules. The systems serve complementary functions and can simultaneously generate different solutions to a reasoning problem. The rule-based system can suppress the associative system but not completely inhibit it. The article reviews evidence in favor of the distinction and its characterization. One of the oldest conundrums in psychology is whether people are best conceived as parallel processors of information who operate along diffuse associative links or as analysts who operate by deliberate and sequential manipulation of internal representations. Are inferences drawn through a network of learned associative pathways or through application of a kind of "psychologic" ...

- 1998. Cited by 43 (17 self).
In this article, I will argue rather strongly that computational innovation---at least certain important facets of the processes of innovation---has been achieved, and that computational creativity is plausibly within our sights. Specifically, I will argue that modern research in genetic algorithms---search ...

- Psychological Bulletin, 1999. Cited by 27 (2 self).
This review integrates 4 major approaches to the study of science—historical accounts of scientific discoveries, psychological experiments with nonscientists working on tasks related to scientific discoveries, direct observation of ongoing scientific laboratories, and computational modeling of scientific discovery processes—by viewing them through the lens of the theory of human problem solving. The authors provide a brief justification for the study of scientific discovery, a summary of the major approaches, and criteria for comparing and contrasting them. Then, they apply these criteria to the different approaches and indicate their complementarities. Finally, they provide several examples of convergent principles of the process of scientific discovery. The central thesis of this article is that although research on scientific discovery has taken many different paths, these paths show remarkable convergence on key aspects of the discovery processes, allowing one to aspire to a general theory of scientific discovery. This convergence is often obscured by the disparate cultures, research methodologies, and theoretical foundations of the various disciplines that study scientific discovery, including ...

- Machine GRAPHICS & VISION 3(1/2), 1994. Cited by 21 (2 self).
Abstract. The rapidly developing field of diagrammatic knowledge representation and reasoning is surveyed. The origins and rationale of the field, basic principles and methodologies, as well as selected applications are discussed. Closely related areas, like visual languages, data presentation, and visualization are briefly introduced as well. Basic sources of material for further study are indicated. Key words: diagrammatic representation, diagrammatic reasoning, visual languages, diagrams, visual programming, data presentation, visualization, knowledge representation, computer graphics, qualitative physics, geometry theorem proving.

- 1965. Cited by 9 (3 self).
This discussion outlines and implements the theory of an inductive inference technique that automatically discovers classes among large numbers of input patterns, generates operational definitions of class membership with explicit levels of confidence, creates a continuously updated "self-organized" coded hierarchical taxonomic classification of patterns, and recognizes to which already discovered class or classes, if any, a new input belongs in an information-theoretically efficient way. Relationships to the "scientific method" and learning are discussed.

- Journal of Mind and Behavior, submitted, 2005. Cited by 8 (4 self).
Mental representations are based upon categories in which the state of a mental system is stable. Acategorial states, on the other hand, are distinguished by unstable behavior. A refined and compact terminology for the description of categorial and acategorial mental states and their stability properties is introduced within the framework of the theory of dynamical systems. The relevant concepts are illustrated by selected empirical observations in cognitive neuroscience. Alterations of the category of the first person singular and features of creative activity will be discussed as examples for the phenomenology of acategorial states. Harald Atmanspacher is also associate member of the Max-Planck-Center for Interdisciplinary ...

- Cited by 8 (3 self).
Although a general sense of the magnitude, quantity, or numerosity of objects is common in both untrained people and animals, the abilities to deal exactly with large quantities and to reason precisely in complex but well-specified situations—to behave formally, that is—are skills unique to people trained in symbolic notations. These symbolic notations typically employ complex, hierarchically embedded structures, which all extant analyses assume are constructed by concatenative, rule-based processes. The primary goal of this article is to establish, using behavioral measures on naturalistic tasks, that some of the same cognitive resources involved in representing spatial relations and proximities are also involved in representing symbolic notations—in short, that formal notations are a kind of diagram. We examined self-generated productions in the domains of handwritten arithmetic expressions and typewritten statements in a formal logic. In both tasks, we found substantial evidence for spatial representational schemes even in these highly symbolic domains. It is clear that mathematical equations written in modern notation are, in general, visual forms and that they share some properties with diagrammatic or imagistic displays. Equations and mathematical expressions are often set off from the main text, use nonstandard characters and shapes, and deviate substantially from linear symbol placement. Furthermore, evidence indicates that at least some mathematical processing is sensitive to the particular visual form of its presentation notation (Cambell, 1999; McNeil & Alibali, 2004, 2005). Despite these facts, notational mathematical representation is typically considered sentential and is placed in opposition to diagrammatic representations in fields as diverse as education ...

- Cited by 6 (2 self).
Abstraction in itself is not the goal: for Whitehead [117] "it is the large generalisation, limited by a happy particularity, which is the fruitful conception." As an example consider the theorem in ring theory, which states that if R is a ring, f(x) is a polynomial over R and f(r) = 0 for every element r of R then R is commutative. Special cases of this, for example f(x) is x^2 - x or x^3 - x, can be given a first order proof in a few lines of symbol manipulation. The usual proof of the general result [20] (which takes a semester's postgraduate course to develop from scratch) is a corollary of other results: we prove that rings satisfying the condition are semi-simple artinian, apply a theorem which shows that all such rings are matrix rings over division rings, and eventually obtain the result by showing that all finite division rings are fields, and hence commutative. This displays von Neumann's architectural qualities: it is "deep" in a way in which the symbol ...

- Annals of Mathematics and Artificial Intelligence, 1997. Cited by 6 (1 self).
The nature and history of the research area common to artificial intelligence and symbolic mathematical computation are examined, with particular reference to the topics having the greatest current amount of activity or potential for further development: mathematical knowledge-based computing environments, autonomous agents and multi-agent systems, transformation of problem descriptions in logics into algebraic forms, exploitation of machine learning, qualitative reasoning, and constraint-based programming. Knowledge representation, for mathematical knowledge, is identified as a central focus for much of this work. Several promising topics for further research are stated. As an introduction to the proceedings of the first international conference that was devoted specifically to symbolic mathematical computing (SMC) and artificial intelligence, we wrote a combination of a short survey and a summary of our predictions and suggestions for the future development of the territory common ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=993897","timestamp":"2014-04-19T05:39:26Z","content_type":null,"content_length":"37929","record_id":"<urn:uuid:055706ed-81a4-49d3-be98-a67d2eb0acf5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: SYSTEMS AND METHODS FOR LDPC DECODING WITH POST PROCESSING Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP Various embodiments of the present invention provide systems and methods for decoding encoded information. For example, a method for post processing error correction in a decoder system is disclosed. The method includes receiving and iteratively decoding a soft input to generate a hard output associated with the soft input. The method further includes post processing when a plurality of parity checks fail. At least one bit of the hard output is identified as being potentially incorrect. The identified bit is modified, and the plurality of parity checks is thereafter repeated. A system for performing LDPC decoding, the system comprising:an LDPC decoder, wherein the LDPC decoder receives a soft input that is decoded to provide a hard output; anda post processor, wherein the post processor identifies at least one bit in the hard output that is potentially incorrect, wherein the post processor modifies the at least one bit that is potentially incorrect, and wherein the post processor determines whether the modifying the at least one bit eliminated the potential error in the hard output. The system of claim 1, wherein the soft input is a reliability of a codeword bit, and wherein the length of the codeword is the same as the length of the hard output. The system of claim 1, wherein identifying the potential error in one or more bits of the hard output includes determining if one or more parity checks performed on the hard output failed. The system of claim 3, wherein determining whether the modifying the one or more bits of the hard output eliminated the potential error includes executing the one or more parity checks to determine if any of the one or more parity checks indicates an error. The system of claim 1, wherein two or more parity checks are executed on the hard code word, wherein identifying the potential error in one or more bits of the hard output includes determining that at least a first parity check and a second parity check of the two or more parity checks performed on the hard output failed. The system of claim 5, wherein the first parity check operates on at least a first and a second bit of the hard output, wherein the second parity check operates on the first and a third bit of the hard output. The system of claim 6, wherein the first bit of the hard output is indicated as a potential error because it is operated on by both the first parity check and the second parity check. The system of claim 7, wherein the post processor modifies the first bit of the hard output, and wherein the post processor executes the two or more parity checks on the hard output after modifying the first bit of the hard output. The system of claim 8, wherein the post processor determines that the two or more parity checks passed after modifying the first bit of the hard output, and wherein the post processor outputs the hard output with the first bit modified. The system of claim 8, wherein the post processor determines that at least one of the two or more parity checks failed after modifying the first bit of the hard output. 
11. The system of claim 10, wherein a fourth bit of the hard output is also identified as a potential error, wherein the post processor modifies the fourth bit of the hard output, and wherein the post processor executes the two or more parity checks on the hard output after modifying the fourth bit of the hard output.

12. The system of claim 11, wherein the post processor determines that the two or more parity checks passed after modifying the fourth bit of the hard output, and wherein the post processor outputs the hard output with the fourth bit modified.

13. A method for LDPC decoding, the method comprising: receiving a soft input; performing LDPC decoding on the soft input, wherein the LDPC decoding generates a hard output corresponding to the soft input; applying at least a first parity check and a second parity check to the hard output, wherein at least one of the first parity check and the second parity check indicates an error in the hard output; identifying at least one bit in the hard output that is a potential error; modifying the at least one bit; and applying the first parity check and the second parity check to the hard output after modifying the at least one bit.

14. The method of claim 13, wherein the method further comprises: determining that the first parity check and the second parity check passed after modifying the at least one bit; and outputting the hard output with the at least one bit modified.

15. The method of claim 13, wherein identifying the at least one bit that is a potential error comprises: determining a frequency of association of the identified bit with the first parity check and the second parity check.

16. The method of claim 15, wherein the frequency of association of the identified bit is greater than a frequency of association with the first parity check and the second parity check of another bit in the hard output.

17. A method for post processing error correction in a decoder system, the method comprising: receiving a soft input; decoding the soft input, wherein a hard output is generated; applying a plurality of parity checks to the hard output, wherein a subset of the plurality of parity checks fail; identifying at least one bit in the hard output that is a potential error; modifying the at least one bit; and applying the plurality of parity checks to the hard output after modifying the at least one bit.

18. The method of claim 17, the method further comprising: determining that the plurality of parity checks passed after modifying the at least one bit; and outputting the hard output with the at least one bit modified.

19. The method of claim 17, wherein identifying the at least one bit that is a potential error comprises: determining a frequency of association of the identified bit with a subset of the plurality of parity checks that failed.

20. The method of claim 19, wherein the frequency of association of the identified bit is greater than a frequency of association with the subset of parity checks of another bit in the hard output.

BACKGROUND OF THE INVENTION

[0001] The present invention is related to systems and methods for decoding information, and more particularly to systems and methods for LDPC decoding with post processing. A number of encoding/decoding schemes have been developed to meet the needs for, among other things, data storage and data transmission. As one example, low-density parity-check (LDPC) codes have been developed that provide excellent error correcting performance using a highly parallelized decoding algorithm.
Turning to FIG. 1, an exemplary transmission system 100 utilizing an LDPC encoder and a separate LDPC decoder is depicted. Transmission system 100 includes a transmission device 110 and a receiving device 160. Transmission device 110 includes an information source 120 that provides a stream of information to an LDPC encoder 130. LDPC encoder 130 encodes the received stream of information and provides an encoded data set to a transmitter 140. Transmitter 140 modulates the encoded data set to create a transmitted data set 150 that is received by a receiver 190 of receiving device 160. Receiver 190 demodulates the encoded data set and provides it to an LDPC decoder 180 that decodes the encoded data set and provides the decoded information as received information 170. If only a limited number of errors occur in transmitted data set 150, LDPC decoder 180 will, after a finite number of iterations, come to a result representing the actual information originally provided by information source 120. However, in some cases, insufficient bandwidth exists to perform sufficient iterations to derive the desired result. In other cases, too many errors exist in transmitted data set 150, and thus the desired result is not achievable using standard LDPC decoder 180. Hence, for at least the aforementioned reasons, there exists a need in the art for advanced systems and methods for decoding information.

BRIEF SUMMARY OF THE INVENTION

[0006] The present invention is related to systems and methods for decoding information, and more particularly to systems and methods for LDPC decoding with post processing. Various embodiments of the present invention provide systems and methods for decoding encoded information. For example, a method for post processing error correction in a decoder system is disclosed. The method includes receiving and decoding a soft input to generate a hard output associated with the soft input. The method further includes applying a plurality of parity checks to the hard output such that a subset of the plurality of parity checks fail. At least one bit of the hard output is identified as being potentially incorrect. The identified bit is modified, and the plurality of parity checks is thereafter repeated. In some instances of the aforementioned embodiments, the decoding is iterative LDPC decoding. In various instances of the aforementioned embodiments, the methods further include determining that the plurality of parity checks passed after modifying the at least one bit, and outputting the hard output with the at least one bit modified. In some instances, identifying the at least one bit that is a potential error includes determining a frequency of association of the identified bit with a subset of the plurality of parity checks that failed. In such instances, the frequency of association of the identified bit is greater than a frequency of association with the subset of parity checks of another bit in the hard output.

Other embodiments of the present invention provide systems for performing LDPC decoding. The systems include an LDPC decoder that receives a soft input that is decoded to provide a hard output, and a post processor. The post processor identifies at least one bit in the hard output that is potentially incorrect, modifies the identified bit that is potentially incorrect, and determines whether the modifying the at least one bit eliminated the potential error in the hard output. In some instances of the aforementioned embodiments, the soft input is a reliability of received bits.
In various instances of the aforementioned embodiments, identifying the potential error in one or more bits of the hard output includes determining if one or more parity checks performed on the hard output failed. In such instances, determining whether the modifying the one or more bits of the hard output eliminated the potential error includes executing the one or more parity checks to determine if any of the one or more parity checks indicates an error. This summary provides only a general outline of some embodiments of the invention. Many other objects, features, advantages and other embodiments of the invention will become more fully apparent from the following detailed description, the appended claims and the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] A further understanding of the various embodiments of the present invention may be realized by reference to the figures which are described in remaining portions of the specification. In the figures, like reference numerals are used throughout several drawings to refer to similar components. In some instances, a sub-label consisting of a lower case letter is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.

FIG. 1 is a prior art transmission system including an LDPC encoder and a separate LDPC decoder;

FIG. 2 is a decoder system including a post processor in accordance with one or more embodiments of the present invention;

FIGS. 3a-3c depict an exemplary LDPC decoding process that may be employed in relation to one or more embodiments of the present invention;

FIGS. 4 and 5 are flow diagrams showing decoding including post processing in accordance with various embodiments of the present invention for encoding and decoding information; and

FIG. 6 is a graphical depiction of a decision tree that may be followed for determining bit modification in a codeword used to test parity accuracy in accordance with some embodiments of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

[0017] The present invention is related to systems and methods for decoding information, and more particularly to systems and methods for LDPC decoding with post processing. Turning to FIG. 2, a decoder system 200 including a post processor 230 is depicted in accordance with one or more embodiments of the present invention. Decoder system 200 includes a codeword receiver 210, an LDPC decoder 220, a post processor 230, and an information receiver 240. Codeword receiver 210 may be any circuit or device capable of receiving an encoded data set. Thus, as just some examples, codeword receiver 210 may be, but is not limited to, a read channel associated with a magnetic storage medium, a receiver in a cellular telephone, or the like. LDPC decoder 220 may be any LDPC decoder capable of receiving a soft input and providing a decoded output therefrom. Post processor 230 may be any processor capable of receiving a decoded output and providing error checking and/or error correction on the decoded output. Information receiver 240 may be any device or system capable of accepting decoded information. Thus, for example, information receiver 240 may be, but is not limited to, a processor, a memory system, or the like. In operation, codeword receiver 210 receives a codeword that is passed to LDPC decoder 220 as a soft output 215 that is n-bits in length.
As used herein, the phrases "soft input" or "soft output" are used in their broadest sense to mean respectively any output or input that includes probability information. Thus, for example, a soft input may include a number of bits that are each associated with or represented by a probability that the bit is correct. LDPC decoder 220 performs LDPC decoding as is known in the art to provide a decoded output 225 to post processor 230. As one example, LDPC decoder 220 may perform a sum-product iterative decoding algorithm described in Moon, Todd K., "Error Correction Coding", section 15.5, John Wiley and Sons Inc., Hoboken, N.J., 2005. The entirety of the aforementioned reference is incorporated herein by reference for all purposes. Based on the disclosure provided herein, one of ordinary skill in the art will recognize a variety of different LDPC decoding approaches and even other systematic block decoding processes that may be used in relation to one or more embodiments of the present invention. Decoded output 225 is a hard output n-bits in length. As used herein, the phrases "hard input" or "soft input" are used in their broadest sense to mean respectively any output or input represented as absolute values. Thus, for example, a hard output may be a series of binary values without any interim probability information. Post processor 230 determines whether any errors remain in decoded output 225 after LDPC decoding, and if any, which bits of decoded output are most likely incorrect. Post processor 230 then applies one or more algorithms to correct any remaining errors. Once the errors are reduced or eliminated, post processor 230 strips any redundancy information from decoded output 225 and provides the stripped data as an output 235. Output 235 is k-bits in length, and represents the most likely and in many cases the actual information that was originally encoded using an LDPC encoding process. In some embodiments of the present invention, post processor 230 determines which bits are associated with a number of different parity checks, and which of the included bits are more frequently associated with failed parity error checks. These more frequently involved bits are identified as possible sources of the remaining error(s). Post processor 230 may then modify each of the identified bits either individually or in combination and re-run the parity checks. This process continues until either all of the parity checks indicate that there are no remaining errors, until all possible combinations modifying the identified probable error bits have been exhausted, or until the process timed out. Upon identifying the condition where all of the parity checks indicate that there are no remaining errors, post processor 230 provides the corrected codeword to information receiver 240. In some cases, the process times out or all possibilities are exhausted. In such cases, either the codeword originally provided from LDPC decoder 220 or the most likely codeword are provided to information receiver 240 along with an indication that an error occurred. Based on the disclosure provided herein, one of ordinary skill in the art will recognize other post processing approaches and/or modifications to the previously described post processing approach that may be used in accordance with one or more embodiments of the present invention. Among other things, one or more embodiments of the present invention may reduce the bandwidth required to obtain a result when compared with operating LDPC decoder 220 to its conclusion. 
Thus, LDPC decoding may be used in situations demanding high throughput and/or allowing for implementation with less chip area. Based on the disclosure provided herein, one of ordinary skill in the art will recognize a variety of other advantages that may be had through use of different embodiments of the present invention. Turning to FIGS. 3a-3c, an exemplary received information 301 and an LDPC decoding process are described. It should be emphasized that the described decoding process is merely exemplary, and that codewords of differing length may be processed in accordance with embodiments of the present invention. Further, it should be noted that the depicted decoding matrix is merely exemplary and that a number of different decoding matrices may be implemented in accordance with different embodiments of the present invention depending upon, among other things, the length of codewords that are to be processed and the amount of redundancy utilized in the system. Based on the disclosure provided herein, one of ordinary skill in the art will recognize a variety of different codewords, decoding matrices, and/or redundancy that may be used in relation to different embodiments of the present invention. As shown in FIG. 3a, received information 301 includes n-bits. The n-bits include k information bits 303 and (n-k) redundancy bits 305. Information bits 303 represent data that is originally received prior to encoding, and redundancy bits 305 represent bits that are added to information bits 303 during the encoding process. In this example, n is six and k is four. Received information 301 from a transmission channel is a soft input consisting of a number of probabilities indicating not only the particular binary value of the bits in the codeword, but also the probability that the particular bits have been correctly predicted. For this example, assume a bit that is a one with a one hundred percent probability of being correct is represented by a ten; when the probability is zero, the bit has a value of zero, and other probabilities are linearly represented between zero and ten. A bit that is a zero with a one hundred percent likelihood of being correct is represented by a negative ten, and by a zero when the probability is zero; again, other probabilities are linearly represented between zero and negative ten. In decoding received information 301, a decoding matrix 311 is utilized. In the abstract, where the product of the codeword multiplied by matrix 311 is equal to zero, a correct codeword has been identified. Where the matrix multiplication does not yield a zero, one or more errors remain in an estimation of received information 301. Iterative LDPC decoding performs a process of iteratively modifying received information 301 until the zero result is achieved. As the result of the iterative multiplication converges, increased confidence in received information 301 is achieved. In some cases, only a limited number of iterations may be performed to conserve both bandwidth and chip area. In such cases, convergence may not be possible and one or more errors remain in the codeword reported by LDPC decoder 220. In embodiments of the present invention, post processor 230 operates to correct the remaining errors using only limited bandwidth and chip area. Thus, as previously mentioned, among other things, various embodiments of the present invention provide higher bandwidth LDPC decoding and/or reduced chip area.
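To make the zero-product test concrete, the snippet below assembles a parity-check matrix from the row memberships described later in this text (Parity[3 . . . 0]). Because FIG. 3b itself is not reproduced in this excerpt, this H is an assumption and may not reproduce the exact pass/fail pattern of the worked example; it is meant only to show the mechanics of the test.

    import numpy as np

    # Columns correspond to codeword bits [5, 4, 3, 2, 1, 0]; rows are the four
    # parity checks, assembled from the memberships listed in the text (assumed).
    H = np.array([
        [1, 0, 1, 0, 1, 1],   # Row 3 involves bits 5, 3, 1, 0
        [1, 1, 0, 1, 0, 1],   # Row 2 involves bits 5, 4, 2, 0
        [0, 1, 0, 0, 1, 0],   # Row 1 involves bits 4, 1
        [0, 1, 1, 1, 0, 1],   # Row 0 involves bits 4, 3, 2, 0
    ], dtype=np.uint8)

    def parity_results(word_msb_first):
        """Multiply the hard word by H (mod 2) and report each row's result."""
        w = np.array(word_msb_first, dtype=np.uint8)   # bits listed [5..0]
        syndrome = H.dot(w) % 2
        return ["zero" if s == 0 else "non-zero" for s in syndrome]

    # A word is a valid codeword exactly when every entry reads "zero".
    print(parity_results([1, 0, 1, 0, 1, 0]))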
Based on the disclosure provided herein, one of ordinary skill in the art will recognize additional advantages that attend the various embodiments of the present invention. A generator matrix corresponding to the parity matrix is used to encode information bits 303 to produce received information 301. As will be appreciated by one of ordinary skill in the art, matrix 311 is merely exemplary and a number of decoding matrices may be used in accordance with the embodiments of the present invention depending upon a desired codeword length and implemented redundancy. Matrix 311 includes a number of columns 313 and a number of rows 315. The number of columns 313 corresponds to the length of received information 301. Thus, in this case, received information 301 is six bits in length. The number of rows 315 corresponds to the implemented redundancy applied to information bits 303. In particular, each of rows 315 corresponds to a different parity check that is built into received information 301 by a preceding encoding process. Matrix 311 may be represented by a Tanner diagram 361 that displays the relationship between the rows 315 and columns 313 of matrix 311. In particular, there is a circle for each column of matrix 311, and a square for each row of matrix 311. Where there is a binary `1` in matrix 311, it is represented by a path between the circle and square corresponding to the location of the `1` in the matrix. Thus, where there is a `1` corresponding to the intersection of column five and row three, a path is drawn between the square representing row three and the circle representing column five. Alternatively, where there is not a `1` at the intersection of column four and row three, there is not a path drawn between the square representing row three and the circle representing column four. Tanner diagram 361 shows all of the paths corresponding to the row/column intersections in matrix 311. Tanner diagram 361 provides an effective graphic for discussing the decoding algorithm. The algorithm begins by applying the probability value of each of the individual bits of received information 301 to the circle corresponding to the respective bit. To illustrate, the following exemplary probability values are used for received information 301, codeword[5 . . . 0]: 10, 9, -5, -6, -9 and 9. The value of 10 corresponding to bit 5 of received information 301 is assigned to the circle corresponding to column 5; the value of 9 corresponding to bit 4 is assigned to the circle corresponding to column 4; the value of -5 corresponding to bit 3 is assigned to the circle corresponding to column 3; the value of -6 corresponding to bit 2 is assigned to the circle corresponding to column 2; the value of -9 corresponding to bit 1 is assigned to the circle corresponding to column 1; and the value of 9 corresponding to bit 0 is assigned to the circle corresponding to column 0. These values are then applied to a formula implemented by each of the squares corresponding to the respective rows. The formula may be any number of formulas as are known in the art; however, for the purposes of this illustration the following formula is applied: row result = f(Σ f(column value)), where f(x) is the decoding function.
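One iteration of this exchange, including the transfer of row results back to the circles described next, might look like the sketch below. The excerpt does not fix the decoding function f, so the standard sum-product kernel -log(tanh(|x|/2)) is assumed purely for illustration, and the update is the simplified, non-extrinsic one the text describes (every square broadcasts a single result to all of its circles); the row memberships are likewise assumed.

    import math

    def f(x):
        # Assumed decoding function; the text leaves f unspecified.
        x = max(abs(x), 1e-9)                 # guard against log(0)
        return -math.log(math.tanh(x / 2.0))

    def row_result(values):
        """row result = f(sum of f(column value)), with signs handled separately."""
        sign = -1.0 if sum(v < 0 for v in values) % 2 else 1.0
        return sign * f(sum(f(v) for v in values))

    def iterate_once(rows, beliefs):
        """Each square computes a result from its circles; each circle attached
        to the square then aggregates that result into its running value."""
        updated = list(beliefs)
        for cols in rows:
            result = row_result([beliefs[c] for c in cols])
            for c in cols:
                updated[c] += result
        return updated

    # Assumed row memberships and the text's starting values for bits 0..5.
    rows = [[4, 3, 2, 0], [4, 1], [5, 4, 2, 0], [5, 3, 1, 0]]
    beliefs = [9, -9, -6, -5, 9, 10]          # beliefs[i] is the value for bit i
    print(iterate_once(rows, beliefs))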
The value for each of the row results is then transferred back to each circle attached to the row via a path of Tanner diagram 361, where the various results are aggregated. Another iteration is then performed using the newly identified values in the circles, and the process is repeated. This process continually accumulates the probability data. Where only a limited number of errors exist in received information 301, over a number of iterations the values maintained in the circles corresponding to the respective columns represent the decoded codeword. Thus, assume the aforementioned process ends with the following decoded codeword: 10, 5, -7, 5, -9 and 10. In this case, the hard output corresponding to the decoded codeword would be: 1, 1, 0, 1, 0, 1. As previously stated, a correct codeword is found where matrix 311 multiplied by the hard output of LDPC decoder 220 is equivalent to zero. Again, this may not always be possible due to, for example, a limit on the number of LDPC iterations performed in the LDPC decoding process. In such cases, it is possible that the product of the multiplication of the decoded codeword by matrix 311 will yield a zero in relation to some rows of the result matrix, but not all. This corresponds to passing a parity check corresponding to some rows (i.e., the rows yielding a zero) and failing a parity check corresponding to other rows (i.e., the rows yielding a non-zero result). These pass and fail results may be used in combination with Tanner graph 361 to describe a process in accordance with embodiments of the present invention for resolving errors remaining in the decoded codeword at the completion of LDPC decoding. Turning to FIG. 4 and FIG. 5, the process of using the results of parity checks upon completion of the LDPC decoding is discussed. In particular, FIG. 4 shows the process implemented by LDPC decoder 220 and an abstracted view of the processes implemented by post processor 230. A more detailed view of the process that may be implemented in one or more embodiments of post processor 230 is depicted in FIG. 5. Following flow diagram 400, an encoded codeword is received and LDPC decoding is performed on the codeword (block 405). This decoding may proceed as discussed above in relation to FIGS. 3a-3c, or using another approach to LDPC decoding known in the art. Upon completing each iteration of the LDPC decoding, it is determined if a maximum number of LDPC decoding iterations have been completed (block 410). The maximum number of LDPC decoder iterations may be selected based on a desired throughput of the LDPC decoder (i.e., based on a desired bandwidth of the LDPC decoder). In an ideal situation, a large number of iterations would be performed to allow the decoding algorithm to converge on the correct codeword. However, in some cases, convergence is not possible, either due to a large number of errors in the received codeword or due to insufficient available processing time to allow the decoding algorithm to converge. Where the maximum number of iterations have not yet been performed (block 410), another LDPC decoding iteration is performed (block 405). The process of LDPC decoding continues to iterate until the maximum number of iterations has been achieved. Where the maximum number of LDPC iterations have been performed (block 410), processing is turned over to the post processor (block 415).
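Rendered abstractly in software, blocks 405 through 415 amount to a fixed iteration budget followed by a handoff. The callables below are placeholders of my own naming, not APIs from the patent:

    def decode_then_post_process(soft_input, ldpc_iteration, post_process,
                                 max_iterations):
        """Abstract rendering of flow diagram 400, blocks 405-415 (illustrative).

        ldpc_iteration : performs one LDPC decoding iteration (block 405)
        post_process   : invoked once the iteration budget is spent (block 415)
        """
        state = soft_input
        for _ in range(max_iterations):       # block 410: budget not yet reached
            state = ldpc_iteration(state)     # block 405: one more iteration
        return post_process(state)            # block 415: hand off the result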
At this point, the resulting parity check results are provided to the post processor (block 415) along with the hard output representing the decoded codeword (block 420). The post processor then determines if any of the received parity check results are non-zero (block 425). As previously discussed, a non-zero parity check indicates one or more remaining bit errors in the hard codeword received from the LDPC decoder. Where there are not any non-zero parity checks (block 425), indicating no remaining bit errors in the hard codeword, the hard result is provided as an output (block 435). Alternatively, where one or more of the parity checks are non-zero (block 425), indicating one or more remaining bit errors in the hard codeword, post processing on the hard codeword received from the LDPC decoder is performed (block 430). Post processing includes determining one or more likely error bits within the hard codeword based on the received parity check information. Using this information, the most likely error bits are sequentially modified until either the parity check equations do not include any remaining non-zero results or until all bit modification possibilities are exhausted. Where the parity check equations result in all zero results, the resulting hard codeword including the modified bits is provided as an output (block 435). Alternatively, where the possibilities are exhausted and the parity checks still indicate one or more non-zero results, the hard codeword including bit modifications that coincide with the least likely remaining error bits is reported along with an error indication (block 440). An exemplary implementation process for performing the post processing of block 430 in accordance with one or more embodiments of the present invention is discussed in greater detail in relation to a flow diagram 500 of FIG. 5. Following flow diagram 500, the parity check results reported from the LDPC decoder are accessed (block 505). These results may be maintained in an array of parity check results (i.e., Parity[PCC], where PCC ranges from zero to the number of parity checks less one). There is one parity check result associated with each of the squares in Tanner diagram 361. Thus, continuing with the example discussed in relation to FIGS. 3a-3c above, there are four parity check results (i.e., one parity check result associated with each row in matrix 311). It should again be noted that matrix 311 and the encoding associated therewith are merely exemplary, and that other matrices may be used in accordance with one or more embodiments of the present invention. Thus, there may be more or fewer than the exemplary four parity checks in various implementations of the embodiments of the present invention. In addition, a parity check counter (PCC) is initialized to zero (block 510), and a number of codeword error bit counters (CWEBC[i]) are initialized to zero (block 520). The parity check counter is used to count through each of the parity checks that are received from the LDPC decoder. Thus, following the example of FIGS. 3 where four parity check results are available, the parity check counter increments from zero to three as each of the parity check results is accessed. Once the parity check counter has incremented to four (i.e., greater than the number of parity checks available), all of the parity check results will have been utilized. There is one codeword error bit counter (CWEBC[i]) for each bit in the codeword received from the LDPC decoder.
Thus, following the example of FIGS. 3 where the codeword is six bits long, the value of `i` ranges from zero to five, with each position representing a respective one of the six bits in the codeword. Each of these counters is used to account for the number of times that each bit in the received codeword is associated with a parity check result that indicates an error. Further, a codeword bit counter (CWBC) is initialized to zero (block 520). The codeword bit counter is used to count individual bits of the hard codeword received from the LDPC decoder. At this juncture, it should be noted that flow diagram 500 represents one particular method in accordance with some embodiments of the present invention, and that a variety of other flow diagrams could illustrate various implementations of one or more embodiments of the present invention. As one example, while shown in flow diagram 500, the CWBC counter is not necessary, and various implementations of the present invention may eliminate use of the CWBC counter. Based on the disclosure provided herein, one of ordinary skill in the art will recognize a variety of implementations using different counters and/or index tools. It is then determined if all of the parity checks have been examined in the process (block 525). Thus, following the example of FIGS. 3 where four parity check results are available, it is determined if the parity check counter has been incremented past three, as there are four parity checks that are to be examined. Where the parity check count is not yet equal to the maximum (block 525), the next parity check is processed in blocks 530 through 555. Alternatively, where the parity check count is equal to the maximum (block 525), an error correction process of blocks 560 through 590 is performed using the information identified in the parity check processing. Processing each of the parity checks includes determining whether the parity check exhibits a non-zero result (block 530). Where the current parity check (i.e., Parity[PCC]) is non-zero (block 530), one or more bits of the codeword used in the particular parity check are incorrect. In such a situation, a counter (CWEBC[i]) associated with each bit used to calculate the parity check under current examination is incremented, indicating that it may be part of the error identified by the parity check (block 535 and block 540). The matrix used for decoding provides an indication of which bits of a codeword are used in each of the parity checks. In particular, each row of the matrix represents an individual parity check, and each `1` in the row indicates a bit in the codeword that plays a part in the particular parity check. Thus, following the example of FIGS. 3, there are four parity checks corresponding to respective rows of matrix 311 (Parity[3 . . . 0]). In this case, Parity[3] is developed using bit five, bit three, bit one and bit zero of the codeword. This information is graphically depicted in Tanner graph 361 by the paths extending from the square associated with Row 3. Similarly, Parity[2] is developed using bit five, bit four, bit two and bit zero; Parity[1] is developed using bit four and bit one; and Parity[0] is developed using bit four, bit three, bit two and bit zero. Where a bit is not used in the parity check under examination (block 535), the counter (i.e., CWEBC[CWBC]) associated with the particular bit is not incremented.
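In software, the counting portion of flow diagram 500 (blocks 505 through 555) reduces to two nested loops, as sketched below. The row memberships are the ones just listed, and the input is the worked example discussed next in the text (Row 2 and Row 0 failing); this is an illustration of the flow, not the patented implementation.

    def count_error_associations(parity_results, rows, n_bits):
        """Blocks 505-555 of flow diagram 500 as nested loops (illustrative).

        parity_results : one result per parity check; non-zero means it failed
        rows           : rows[pcc] lists the codeword bits used by check pcc
        Returns CWEBC, the per-bit count of failed checks involving that bit.
        """
        cwebc = [0] * n_bits                          # codeword error bit counters
        for pcc, result in enumerate(parity_results):     # parity check counter
            if result == 0:
                continue                              # block 530: check passed
            for cwbc in range(n_bits):                # codeword bit counter
                if cwbc in rows[pcc]:                 # blocks 535/540: bit is used
                    cwebc[cwbc] += 1
        return cwebc

    rows = {0: [4, 3, 2, 0], 1: [4, 1], 2: [5, 4, 2, 0], 3: [5, 3, 1, 0]}
    parity = [1, 0, 1, 0]    # Row 0 fails, Row 1 passes, Row 2 fails, Row 3 passes
    print(count_error_associations(parity, rows, 6))
    # -> [2, 0, 2, 1, 2, 1], matching CWEBC[0..5] in the worked example below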
The process of checking each bit to determine if it played a part in the parity check under examination continues sequentially through each bit position of the codeword by incrementing the codeword bit counter (block 545). It is then determined if all of the bits in the parity check under examination have been examined (block 550). All of the bits have been examined when the codeword bit counter exceeds the number of bits of the provided codeword. Thus, following the example of FIGS. 3 where the codeword is six bits long, when the codeword bit counter equals six, all of the bits have been examined. Where all of the bits have not yet been examined (block 550), the processes of block 535 through block 550 are repeated for the next codeword bit. Alternatively, where all of the codeword bits have been examined (block 550), the parity check counter is incremented (block 555), and the processes of block 520 through block 555 are repeated for the next parity check. Where the parity check counter indicates that all of the parity checks have been examined (block 525), the codeword bits that are associated with the largest number of parity check failures are identified as potential error bits (block 560). The identified bits of the codeword are then modified one at a time and/or in combinations (block 565). After modifying the bits (block 565), each of the parity checks is again processed using the newly modified codeword (block 570). Where all of the parity checks provide a zero result (block 575), the newly modified codeword is identified as correct and provided as a hard result (block 435). Alternatively, where one or more of the parity checks indicates a non-zero result (block 575), it is determined if there are other potential bits of the codeword, or combinations thereof, that have not yet been considered (block 580). Where other bits or combinations thereof remain to be checked (block 580), the processes of block 565 to block 580 are repeated for the next possible bit modification. Alternatively, where the possibilities have been exhausted (block 580), the most likely hard codeword is provided along with an indication of a likely error remaining in the codeword (block 440). At this point, an example of the process of FIG. 4 and FIG. 5 is provided. Assume that matrix 311 is used, indicating a codeword length of six bits and four parity checks. The LDPC processing of block 405 and block 410 is repeated until the maximum number of iterations has been accomplished. Assume at this point that a hard codeword is provided by the LDPC decoder that fails the parity checks associated with Row 2 and Row 0, and passes the parity checks associated with Row 3 and Row 1. PCC, each CWEBC[i], and CWBC are initialized to zero. As the parity check of Row 0 failed, Row 0 of matrix 311 is examined bit by bit (block 530-block 550). Using matrix 311, CWEBC[0] is incremented as there is a `1` in the zero bit position of Row 0 of matrix 311 (block 535 and block 540). In contrast, CWEBC[1] is not incremented as there is a `0` in the one bit position of Row 0 of matrix 311 (block 535). This process is repeated for each bit of Row 0 of matrix 311. This results in the following counters being incremented in addition to CWEBC[0]: CWEBC[2], CWEBC[3] and CWEBC[4]. Once all of the bits of Row 0 have been considered (block 550), the parity check counter is incremented (block 555), indicating that the parity check associated with Row 1 will now be examined.
The parity check corresponding to Row 1 is passed over (block 530) as it indicates a correct result. The parity check counter is then incremented (block 555), indicating that the parity check associated with Row 2 will now be examined. The codeword bit counter is reset to zero (block 520) and the parity check corresponding to Row 2 is examined bit by bit as it is non-zero (block 530). Using matrix 311, CWEBC[0] is incremented as there is a `1` in the zero bit position of Row 2 of matrix 311 (block 535 and block 540). In contrast, CWEBC[1] is not incremented as there is a `0` in the one bit position of Row 2 of matrix 311 (block 535). This process is repeated for each bit of Row 2 of matrix 311. This results in the following counters being incremented in addition to CWEBC[0]: CWEBC[2], CWEBC[4] and CWEBC[5]. Once all of the bits of Row 2 have been considered (block 550), the parity check counter is incremented (block 555), indicating that the parity check associated with Row 3 will now be examined. As the parity check for Row 3 is zero, it is passed over as it does not indicate an error (block 530). At this point, all four parity checks corresponding to Rows 0-3 of matrix 311 have been examined and result processing is initiated (block 525). In this case, the CWEBC[i] counters have the following values: CWEBC[5]=1; CWEBC[4]=2; CWEBC[3]=1; CWEBC[2]=2; CWEBC[1]=0; and CWEBC[0]=2. The bits associated with the largest count values are then identified as potential error bits (block 560). In this case, the zero, two and four bits of the six bit codeword are identified as potentially incorrect, as each exhibits a count of two (i.e., CWEBC[4]=2, CWEBC[2]=2 and CWEBC[0]=2). This leaves seven possible bit combinations (i.e., 2^3 combinations less the existing incorrect combination) that may be tried to determine which of the aforementioned bits may be incorrect. For this, assume that the hard codeword reported by the LDPC decoder is: 1 0 1 0 1 0. The following Table 1 shows an exemplary output of the parity checks where each of bits four, two and zero are modified:

TABLE 1 - Exemplary Parity Results

  Bit Combination [4, 2, 0]   Row 3 Parity   Row 2 Parity   Row 1 Parity   Row 0 Parity
  000                         zero           non-zero       zero           non-zero
  001                         non-zero       zero           zero           zero
  010                         zero           non-zero       zero           zero
  011                         non-zero       non-zero       zero           zero
  100                         zero           non-zero       zero           zero
  101                         zero           zero           zero           zero
  110                         non-zero       non-zero       non-zero       non-zero
  111                         non-zero       zero           non-zero       zero

In this case, switching the zero and the four bit of the codeword received from the LDPC decoder from a zero to a one (block 565) causes a zero result in all four parity checks (block 570). Thus, the reported hard codeword (block 435) is: 1 1 1 0 1 1. It should be noted that the foregoing is merely an example of the process, and that a variety of different bit manipulations and parity check results are possible depending upon the codeword received from the LDPC decoder, the decoding matrix that is used, the length of the codeword, and a variety of other variables. Based on the disclosure provided herein, one of ordinary skill in the art will recognize a variety of possibilities that may occur in relation to one or more embodiments of the present invention. Some embodiments of the present invention use a decision tree process for determining which combination of bits to flip (block 565) when more than one bit is reported as a possible error, as was the case in the preceding example. As one example, list decoding may be performed following a decision tree 600 of FIG. 6 to test all possible combinations.
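Before walking the tree itself, note that the same exhaustive search can be written as a short enumeration. The sketch below mirrors the convention of Table 1 (each suspect bit is set to the value shown in the combination), with parity_of standing in for whatever evaluates the four parity checks, since matrix 311 is not reproduced here:

    from itertools import product

    def search_combinations(hard_word, suspect_bits, parity_of):
        """Try each combination of the suspect bits (block 565) until every
        parity check returns zero (blocks 570/575), as in Table 1 (illustrative).

        hard_word    : list of 0/1 values, indexed so hard_word[i] is bit i
        suspect_bits : bit positions tied for the highest CWEBC count
        parity_of    : callable returning the list of parity results for a word
        """
        for combo in product((0, 1), repeat=len(suspect_bits)):
            trial = list(hard_word)
            for bit, value in zip(suspect_bits, combo):
                trial[bit] = value            # set (not flip) each suspect bit
            if trial == list(hard_word):
                continue                      # skip the existing failing word
            if all(r == 0 for r in parity_of(trial)):
                return trial                  # all checks pass (block 435)
        return None                           # exhausted: report error (block 440)

Under this enumeration order, the combination [4, 2, 0] = 101 of Table 1 would be the first all-zero row encountered.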
As shown, decision tree 600 outlines bit modification where three possible error bits are identified: Bit X, Bit Y, Bit Z. In operation, Bit X may be set to a logic `0` and Bit Y is set to a logic `0`. Then, Bit Z is tested in its two possible settings. If a solution is not found, Bit Y is flipped to a logic `1`, and Bit Z is switched between its two possible settings. If a solution is still not found, Bit X is switched to a logic `1`, and the process of flipping Bit Y and Bit Z is continued until a solution is identified. It should be noted that larger or smaller decision trees similar to decision tree 600 may be used where more or fewer than three potential error bits are identified. Based on the disclosure provided herein, one of ordinary skill in the art will recognize a variety of methodologies that may be used to select combinations of bit modifications to be re-checked using the parity checks (block 565) in accordance with different embodiments of the present invention. It should be noted that the preceding example is somewhat simple and does not exhibit enough minimum distance to correct errors, but it does demonstrate post processing in accordance with one or more embodiments of the present invention. Based on the disclosure provided herein, one of ordinary skill in the art will recognize many applications and examples in accordance with the various embodiments of the present invention, including those with sufficient minimum distance. In conclusion, the invention provides novel systems, devices, methods and arrangements for decoding encoded information. While detailed descriptions of one or more embodiments of the invention have been given above, various alternatives, modifications, and equivalents will be apparent to those skilled in the art without departing from the spirit of the invention. Therefore, the above description should not be taken as limiting the scope of the invention, which is defined by the appended claims.
{"url":"http://www.faqs.org/patents/app/20080301517","timestamp":"2014-04-18T09:09:20Z","content_type":null,"content_length":"69527","record_id":"<urn:uuid:d52433df-b2ac-43f8-b8d9-493610e23efb>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
An orchard contains 62 peach trees, with each tree yielding an average of 50 peaches. For each 3 additional trees planted, the average yield per tree decreases by 12 peaches. How many trees should be planted to maximize the total yield of the orchard? (Give your answer as a whole number.)

The orchard contains 62 peach trees, each yielding an average of 50 peaches, and for each 3 additional trees planted, the average yield per tree decreases by 12 peaches. If x sets of three extra trees are planted, the yield per tree falls by 12 peaches per set, so the total yield is

Y = (62 + 3x)(50 - 12x) = -36x^2 - 594x + 3100

To maximize the yield, solve Y' = 0 for x:

Y' = -72x - 594 = 0 => x = -8.25

The optimum is negative, meaning the maximizing tree count lies below 62. Equivalently, in terms of the total number of trees n, each tree beyond 62 lowers the per-tree yield by 12/3 = 4 peaches, so Y = n(50 - 4(n - 62)) = n(298 - 4n), which is maximized at n = 298/8 = 37.25. Checking whole numbers, Y(37) = 37*150 = 5550 and Y(38) = 38*146 = 5548, so a total of 37 trees gives the maximum yield.

Since the orchard already has 62 trees, planting additional trees only decreases the total yield; no additional trees should be planted.
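A quick numerical check of the corrected model (the function name is mine):

    # Yield per tree with n trees: 50 - 4*(n - 62) = 298 - 4*n, since each
    # tree beyond 62 costs 12/3 = 4 peaches of average yield per tree.
    def total_yield(n):
        return n * (298 - 4 * n)

    best = max(range(1, 75), key=total_yield)
    print(best, total_yield(best))   # -> 37 5550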
{"url":"http://www.enotes.com/homework-help/an-orchard-contains-62-peach-trees-with-each-tree-433662","timestamp":"2014-04-16T05:16:10Z","content_type":null,"content_length":"25873","record_id":"<urn:uuid:88a79d10-5a77-4c54-bf85-c46cb2c20148>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof-Theoretic Characterisations of Logic Programming
James H. Andrews

Abstract: A characterisation of a logic programming system is given in the form of a natural deduction proof system. The proof system is proven to be "equivalent" to an operational semantics for the logic programming system, in the sense that the set of theorems of the proof system is exactly the set of existential closures of queries solvable in the operational semantics. It is argued that this proof-theoretic characterisation captures our intuitions about logic programming better than do traditional characterisations, such as those using resolution or
{"url":"http://www.lfcs.inf.ed.ac.uk/reports/89/ECS-LFCS-89-77/","timestamp":"2014-04-20T00:50:47Z","content_type":null,"content_length":"4688","record_id":"<urn:uuid:8ceaed0d-360e-4358-b8f5-ca2101bc9cde>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Monte Sereno, CA ACT Tutor

...It introduces trigonometry. Topics include equations and functions, factoring, graphing, conics (including quadratics), matrices, polynomials, and logarithms. I have helped tutor a number of students in Algebra 2.
32 Subjects: including ACT Math, reading, English, ADD/ADHD

...Learning organic chemistry, at large, requires a build-up from the physicochemical properties of molecules and functional groups, to reactions and mechanisms, to synthesis. Thus, the better the understanding of the prior steps, the better the understanding of the following steps. About half of my time was spent helping students with their lab reports.
24 Subjects: including ACT Math, reading, chemistry, calculus

...As a private tutor, I've had the privilege of guiding a wide variety of students through every level of physics, chemistry, and math including remedial pre-algebra, college-level calculus, and everything in between. In addition, I've prepared many students for the SAT 1, SAT 2, and ACT standardi...
14 Subjects: including ACT Math, chemistry, calculus, physics

...The subject matter of linear algebra was studied when I was in college and then used throughout my graduate studies and career as a research scientist. In general, I teach students how to solve problems, and then lead students through understanding why the subject is introduced and what is the c...
15 Subjects: including ACT Math, calculus, statistics, physics

...When students are too challenged they feel overwhelmed and discouraged. When students are not interested or challenged they get bored. I am an expert at finding each person's sweet spot where s/he is comfortable, interested and challenged - where learning feels less like work and more like play.
22 Subjects: including ACT Math, reading, English, physics
{"url":"http://www.purplemath.com/monte_sereno_ca_act_tutors.php","timestamp":"2014-04-17T21:51:52Z","content_type":null,"content_length":"23948","record_id":"<urn:uuid:f4b7962e-4dbb-4fbe-b82c-907c71de7d2e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00126-ip-10-147-4-33.ec2.internal.warc.gz"}