Another Newton

May 15th 2010, 06:18 PM  #1 (member since May 2010)
Use Newton's method to find the coordinates of the inflection point of the curve y = e^cos(x), 0 ≤ x ≤ π, correct to six decimal places.

May 15th 2010, 11:02 PM  #2 (Grand Panjandrum, member since Nov 2005)
Your notation is not clear, but an inflection point is a point at which the curvature changes sign; in practice this means that it is a local extremum of y'.

May 16th 2010, 02:26 AM  #3 (MHF Contributor, member since Apr 2005)
Do you mean $y = e^{\cos x}$? As Captain Black said, an inflection point occurs at a local extremum of y', which in turn means that y'' = 0 (since y'' always exists for this function). Take the second derivative of this function, set it equal to 0, and use Newton's method to solve the equation.
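The suggested approach is easy to carry out. For y = e^cos(x), y'' = e^cos(x) · (sin²x − cos x), so it suffices to apply Newton's method to g(x) = sin²x − cos x; a minimal sketch (the helper names are mine, not from the thread):

```python
import math

# Inflection point of y = e^cos(x) on [0, pi]:
# y'' = e^cos(x) * (sin(x)^2 - cos(x)), so solve g(x) = sin(x)^2 - cos(x) = 0.
def g(x):
    return math.sin(x) ** 2 - math.cos(x)

def g_prime(x):
    # d/dx (sin^2 x - cos x) = 2 sin x cos x + sin x
    return 2 * math.sin(x) * math.cos(x) + math.sin(x)

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x_star = newton(g, g_prime, 1.0)  # initial guess inside (0, pi)
print(round(x_star, 6))                       # x-coordinate, ~ 0.904557
print(round(math.exp(math.cos(x_star)), 6))   # y-coordinate
```

Note that g(x) = 0 is equivalent to cos²x + cos x − 1 = 0, so cos(x*) = (√5 − 1)/2, which gives an exact check on the Newton iterate.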
{"url":"http://mathhelpforum.com/calculus/144924-another-newton.html","timestamp":"2014-04-18T05:34:39Z","content_type":null,"content_length":"36261","record_id":"<urn:uuid:8d9f0dc3-d542-427d-8499-62d264040a24>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
An algebraic proof of Fermat's conjecture

Hi, suchness, you have made a clean exposition of your argument, that's nice. I do have a question. The heart of the matter seems to be that, when n>2, it is presumed that [tex]{n \choose k} \frac {y^k + (-1)^k c^k} {b^k}[/tex] cannot be an integer. I'm not sure I follow your reasoning here. Even if you assume y, c, b to be pairwise coprime, [itex]n \choose k[/itex] has no obligation, I think, to be coprime to b. And, for a large n and small k, [itex]n \choose k[/itex] could be larger than b^k. Actually, even if all fractions in the sum are proper fractions, why can't they end up adding to an integer? I am probably missing something, though.

I'm not a mathematician so I probably won't understand your question, but I'll give it my best shot. The remaining series, together with the value of n, would have to resolve to a value of either 0 when n is an odd number or -2 when n is an even number. When n is odd and equal to 3 or greater, the series should resolve to a positive number and therefore prevent -n plus the series from being equal to 0. When n is an even number greater than 2, the series will also resolve to a positive number, which together with -n will be greater than -2. I think this can be shown by substituting c + (-b) for y and showing that even if c^k/b^k is negative it will be canceled out by the first term of the binomial expansion of (c + (-b))^k. Then the remaining terms of (-b)^k, plus the remainders of that binomial expansion multiplied by the value of the factorial terms, would end in the results noted above. It occurred to me to explicitly prove this, but I wasn't sure it was necessary. If you are suggesting that it is, I think you may be right and I will work on that first thing this morning once I get a bit more rest. Thank you.
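Both of the first poster's objections are easy to illustrate with small numbers; a quick numeric check (the specific values here are mine, chosen for illustration, and are not tied to the identity above):

```python
import math
from fractions import Fraction

# Proper fractions can certainly sum to an integer: 1/2 + 1/3 + 1/6 = 1.
parts = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
total = sum(parts)
print(total)  # -> 1

# And a binomial coefficient need not be coprime to the base of the
# denominator: C(6, 2) = 15 shares the factor 3 with b = 3.
shared = math.gcd(math.comb(6, 2), 3)
print(shared)  # -> 3
```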
{"url":"http://www.physicsforums.com/showthread.php?t=572613","timestamp":"2014-04-21T12:20:22Z","content_type":null,"content_length":"54453","record_id":"<urn:uuid:74438983-f3a6-48ec-b592-2d6b301258ac>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: This Week's Finds in Mathematical Physics (Week 140)
Replies: 10  Last Post: Oct 27, 1999 9:53 PM

Flies, trains, and series summation
Posted: Oct 26, 1999 1:13 PM

[sci.physics.research removed]

In sci.physics Chris Hillman <hillman@math.washington.edu> wrote:
> On 22 Oct 1999, Phillip Helbig wrote:
>> The problem being: Two trains are 60 miles apart and approaching each
>> other at 30 miles per hour. There is a fly flying between the
>> cowcatchers of each train, at 60 miles per hour, back and forth, turning
>> around immediately. What is the total distance covered by the fly?
> The problem you mention certainly strikes me as much too simple to have
> the ring of truth, since anyone could sum -that- series in their head in a
> few seconds!

Then, "ian" <walkersystems@one.net.au> wrote:
> why sum a series?
> it takes the trains 1 hour to hit, the fly travels at 60 m/hr so the fly
> flies 60 miles...

When I heard about this puzzle a long time ago I got the idea that perhaps this way you can sum some complicated series, using the following procedure: construct a (complicated) trains-and-fly path such that the fly, flying to and fro, travels distances equal to the terms of your series. Then the distance traveled by the train(s) equals the sum of the series. Now, I was too lazy (and mathematically ignorant) to investigate this idea, but I'd love to hear if it wouldn't work and why (too trivial, probably).
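The leg-by-leg series the posters mention can be checked numerically; a small sketch, with speeds and distances taken from the puzzle as stated:

```python
# Two trains 60 miles apart, each at 30 mph; the fly at 60 mph.
# On each leg, fly and oncoming train close at 90 mph; then the gap shrinks
# to a third of its previous value, so the leg lengths 40, 40/3, 40/9, ...
# form a geometric series summing to 60.
gap = 60.0
v_fly, v_train = 60.0, 30.0
fly_distance = 0.0
while gap > 1e-12:
    t = gap / (v_fly + v_train)  # time until the fly meets the oncoming train
    fly_distance += v_fly * t
    gap -= 2 * v_train * t       # both trains kept moving during the leg

print(fly_distance)  # ~ 60.0, matching the 1-hour-at-60-mph shortcut
```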
Kresimir Kumericki  kkumer@phy.hr  http://www.phy.hr/~kkumer/
Theoretical Physics Department, University of Zagreb, Croatia

Date / Subject / Author
10/16/99  This Week's Finds in Mathematical Physics (Week 140)  john baez
10/18/99  Re: This Week's Finds in Mathematical Physics (Week 140)  Charles Francis
10/18/99  Re: This Week's Finds in Mathematical Physics (Week 140)  Ralph Frost
10/21/99  Re: This Week's Finds in Mathematical Physics (Week 140)  Christopher Hillman
10/22/99  Re: This Week's Finds in Mathematical Physics (Week 140)  Phillip Helbig
10/23/99  Re: This Week's Finds in Mathematical Physics (Week 140)  Christopher Hillman
10/25/99  Re: This Week's Finds in Mathematical Physics (Week 140)  ian
10/26/99  Flies, trains, and series summation  Kresimir Kumericki
10/27/99  Re: Flies, trains, and series summation  MARK FERGERSON
10/25/99  Re: This Week's Finds in Mathematical Physics (Week 140)  Barnaby Finch
10/25/99  Re: This Week's Finds in Mathematical Physics (Week 140)  Jeremy Boden
{"url":"http://mathforum.org/kb/message.jspa?messageID=208042","timestamp":"2014-04-23T17:38:26Z","content_type":null,"content_length":"29750","record_id":"<urn:uuid:0f7e0426-7706-4ba9-b46d-d1d0512d455b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
How To - Probability Estimation Package

Hi to all,
I was reviewing the orange.statistics package and the documentation is straightforward; however, in the Probability Estimation part there are no examples like in the other sections (Basic Statistics, ContingencyTable and Distributions). I was trying to instantiate and play around by myself with the EstimatorConstructors, Estimators and ConditionalEstimators, and I could neither figure out how to set them up, nor how to execute them, nor how to combine them. Therefore a good improvement to the documentation would be to add some examples. So my question is: can any of you help me with examples of how to use them (EstimatorConstructor, Estimators and ConditionalEstimators), starting with a toy dataset like "iris.tab"?
Thanks for your help,
Nestor Andres Rodriguez

The documentation is in fact wrong or misleading in some parts. There are also some bugs in the code itself; for instance, 'pass None for the distribution' does not even work. It also does not specify that only the *ByRows estimators can actually make use of the 'instances' argument; the others just raise an (uninformative) error if not also given the distribution. Example code:

Code: Select all
import Orange

iris = Orange.data.Table("iris")

# discrete class distribution
iris_dist = Orange.statistics.distribution.Distribution("iris", iris)

# m estimate constructor
mest_constructor = Orange.statistics.estimate.M(m=10)
# estimator
mest = mest_constructor(iris_dist)
print mest(iris[0]['iris'])  # prints 0.333... as expected

# petal length distribution
plength_dist = Orange.statistics.distribution.Distribution("petal length", iris)

# loess constructor
loess_est_constructor = Orange.statistics.estimate.Loess()
plength_dist.normalize()  # the Loess constructor does not normalize the distribution (this is a bug)
# loess estimator
loess_est = loess_est_constructor(plength_dist)
print loess_est(iris[0]['petal length'])

# contingency matrix for the conditional estimator
contingency = Orange.statistics.contingency.VarClass('petal length', iris)
conditional_loess_constructor = Orange.statistics.estimate.ConditionalLoess()
cloess_est = conditional_loess_constructor(contingency)
print cloess_est(iris[0]['petal length'])  # prints <0.980, 0.008, 0.012>

Hi Ales,
Thanks again for your great help. I will check your example so that I can understand how to use it.
Nestor Andres Rodriguez
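The 0.333 printed for the m-estimator is just the standard m-estimate of probability, P = (n_c + m·p_a) / (N + m); a pure-Python check, assuming the usual iris counts (50 per class out of 150) and a uniform prior of 1/3:

```python
# m-estimate of probability: (class_count + m * prior) / (total + m).
def m_estimate(class_count, total, prior, m):
    return (class_count + m * prior) / (total + m)

# Iris: 50 of 150 instances per class, uniform prior 1/3, m = 10:
# (50 + 10/3) / 160 = 1/3, which is why the estimator prints 0.333...
p = m_estimate(50, 150, 1.0 / 3.0, 10)
print(round(p, 3))  # -> 0.333
```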
{"url":"http://orange.biolab.si/forum/viewtopic.php?p=4920","timestamp":"2014-04-18T23:01:48Z","content_type":null,"content_length":"19973","record_id":"<urn:uuid:333260e2-63ff-401d-9b63-728b5ce4adf3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] 2D interpolate issues Mike Toews mwtoews@gmail.... Tue Mar 22 17:56:43 CDT 2011 I have a few questions regarding interpolate.interp2d, as I would like to do some bilinear interpolation on 2D rasters. I'll illustrate my issues with an example: import numpy from scipy import interpolate x = [100, 110, 120, 130, 140] y = [200, 210, 229, 230] z = [[ 1, 2, 3, 4, 5], First, why do I get an error with the following? >>> f1 = interpolate.interp2d(x, y, z, kind='linear', bounds_error=True) Warning: No more knots can be added because the additional knot would coincide with an old one. Probably cause: s too small or too large a weight to an inaccurate data point. (fp>s) kx,ky=1,1 nx,ny=8,4 m=20 fp=263.568959 s=0.000000 I do not get an error if I swap x, y: >>> f2 = interpolate.interp2d(y, x, z, kind='linear', bounds_error=True) but this is incorrect, as my z list of lists has 5 columns or x-values and 4 rows or y-values. Do I need to transpose my z? za = numpy.array(z).T f3 = interpolate.interp2d(x, y, za, kind='linear', bounds_error=True) The example in http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html does not use a transposed array. From the documented example, we can see that intuitively len(x) = columns and len(y) = rows in z Secondly, why does bounds_error do nothing? >>> f3(-100,-100) array([ 1.]) >>> f3(1000,1000) array([ 38.]) I've supplied x and y values far outside the range, and I do not get an error. Similarly, setting bounds_error=True, fill_value is not returned when x and y are out of bounds, as documented. Are these user errors or bugs? More information about the SciPy-User mailing list
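The shape question at the heart of the post is easiest to pin down without SciPy at all. A pure-Python bilinear sketch (my own helper, not SciPy's implementation) using the raster convention the poster expects, z[row][col] = value at (y[row], x[col]), i.e. len(z) == len(y) rows and len(z[0]) == len(x) columns:

```python
from bisect import bisect_right

# Minimal bilinear interpolation on a rectilinear grid, convention
# z[i][j] = value at (y[i], x[j]); out-of-range points are clamped to
# the outermost cell rather than raising, to mirror the behaviour the
# poster observed with bounds_error.
def bilinear(x, y, z, xi, yi):
    j = min(max(bisect_right(x, xi) - 1, 0), len(x) - 2)
    i = min(max(bisect_right(y, yi) - 1, 0), len(y) - 2)
    tx = (xi - x[j]) / (x[j + 1] - x[j])
    ty = (yi - y[i]) / (y[i + 1] - y[i])
    return ((1 - tx) * (1 - ty) * z[i][j] + tx * (1 - ty) * z[i][j + 1]
            + (1 - tx) * ty * z[i + 1][j] + tx * ty * z[i + 1][j + 1])

x = [100, 110, 120]
y = [200, 210]
z = [[1, 2, 3],   # values along y = 200
     [4, 5, 6]]   # values along y = 210
print(bilinear(x, y, z, 110, 200))  # -> 2.0 (a grid point)
print(bilinear(x, y, z, 105, 205))  # -> 3.0 (average of 1, 2, 4, 5)
```

With this convention a 4-row, 5-column z pairs naturally with len(y) = 4 and len(x) = 5, which is why transposing z made the poster's interp2d call behave.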
{"url":"http://mail.scipy.org/pipermail/scipy-user/2011-March/028767.html","timestamp":"2014-04-20T08:35:06Z","content_type":null,"content_length":"4191","record_id":"<urn:uuid:b3921ed1-22a8-417c-88d3-eac3a28622de>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
28 April 2003  Vol. 8, No. 17  THE MATH FORUM INTERNET NEWS
Exploratorium Store | A Brief History of Mechanical Calculators | ISSAC 2003 - Drexel University

EXPLORATORIUM STORE
The Exploratorium Store has three new publications for sale:
Math and Science Across Cultures: This book uses activities based on elements of other cultures to allow students to gain awareness of multicultural issues as well as an understanding of the day-to-day relevance of math and science for people of all backgrounds.
Math Explorer: A book filled with games, puzzles, and science experiments to help kids develop math skills while having fun.
Square Wheels: The latest in the Snackbook series, this volume features 31 all-new Science Snacks.

A BRIEF HISTORY OF MECHANICAL CALCULATORS by James Redin
Redin describes the most common non-electronic calculating devices, starting 2500 years ago with the abacus and ending 30 years ago with the introduction of the first electronic calculators.
Part I - The Age of the Polymaths: Includes the evolution of calculating devices up to the invention of the Stepped Wheel by Leibniz.
Part II - Crossing the 19th Century: Discusses commercialized designs and abandoned designs, and the quest for a keyboard.
Part III - Getting Ready for the 20th Century: Reviews the development of office machines until the 1960's, when the first electronic calculators appeared on the market.
See also: Mechanical Calculators Album | Related Museums

ISSAC 2003
The International Symposium on Symbolic and Algebraic Computation (ISSAC) will be held at Drexel University August 3-6, 2003. The international symposium provides an opportunity to learn of new developments and to present original research results in all areas of symbolic mathematical computation. Internet Accessible Mathematical Computation, a workshop at ISSAC 2003, will be held Thursday, August 7, 2003, at Drexel University.
The workshop is free for all ISSAC '03 attendees. Topics of the workshop include, but are not limited to:
- Remote access to mathematical software over the Internet
- Encoding of mathematical expressions
- Interoperability between software that create, transform or display mathematical expressions via ad hoc communication protocols and software architectures
- Web-based mathematics education
- Access and interoperability to mathematical knowledge
- Protocols, APIs, URL schemes, metadata, and other mechanisms for system interoperability, parallel/distributed computing, and standardization
- Application of IAMC for practical purposes
For more information on IAMC, current and past activities, and proceedings of previous IAMC Workshops, please visit the IAMC Information Site.

CHECK OUT OUR WEB SITE:
The Math Forum  http://mathforum.org/
Ask Dr. Math  http://mathforum.org/dr.math/
Problems of the Week  http://mathforum.org/pow/
Mathematics Library  http://mathforum.org/library/
Math Tools  http://mathforum.org/mathtools/
Teacher2Teacher  http://mathforum.org/t2t/
Discussion Groups  http://mathforum.org/discussions/
Join the Math Forum  http://mathforum.org/join.forum.html
Send comments to the Math Forum Internet Newsletter editors
Donations  http://deptapp.drexel.edu/ia/GOL/giftsonline1_MF.asp
{"url":"http://mathforum.org/electronic.newsletter/mf.intnews8.17.html","timestamp":"2014-04-18T21:51:33Z","content_type":null,"content_length":"8187","record_id":"<urn:uuid:90db3dda-6f75-4833-8020-1060363c2891>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00643-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Amelia object as data.frame

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From: Laura Maria Schwirz <schwirzl@tcd.ie>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Amelia object as data.frame
Date: Wed, 27 Feb 2013 10:33:34 +0000

Thanks, Billy. I haven't written my own programmes and am fairly new to this. Any suggestions on how to change the programme so that it supports rclass programmes? I also tried the cmdok option, which does not work for the same reason. I'd rather work within the same package and ideally work with Stata.

On 26 February 2013 15:27, William Buchanan <william@williambuchanan.net> wrote:
> Hi Laura,
> The issue that Daniel pointed out wasn't that MI doesn't support -msp-, but that -msp- does not support MI data. For more information, see -help program_properties##mi- which explains some of the information that programmers use to make their packages support MI data. One reason that the program is likely incompatible with the -mi estimate- command is that -msp- is an rclass program (a point that Nick Cox made yesterday in response to your previous thread).
> That being said, you could always use -msp- as a framework to develop your own procedure (say -msp2-) that would be an eclass program and would support MI commands.
> HTH,
> Billy
> On Feb 26, 2013, at 7:08 AM, Laura Maria Schwirz <schwirzl@tcd.ie> wrote:
>> Thanks for your advice. Stata's mi does in fact not support msp and appears to allow mainly for various types of regression. Using another programme to impute data is just one option, although as you very well point out I do need to bear in mind assumptions underlying mi and MSP.
>> On 26 February 2013 14:09, daniel klein <klein.daniel.81@gmail.com> wrote:
>>> Aside from the obvious -- R questions are not the topic to be discussed on Statalist -- based on your earlier question (http://www.stata.com/statalist/archive/2013-02/msg00960.html) I get the impression you switch software because Stata did not do what you wanted.
>>> That is not necessarily a problem, but just some words of caution. Changing to another software because Stata does not do what you want might not always be the best choice, given that there might be good reason why Stata does not do what you want. Just because a(nother) software does something, it does not mean that this something makes any sense, or is statistically "correct".
>>> In your case the apparent reason -msp- (SSC) does not work with -mi- is that its author did not implement it to work with -mi-. Whether there is a good statistical reason remains unclear to me. My point is that in any case you should think carefully about what you want to do. If you, for example, want to combine Loevinger's H, think about the distribution of this statistic. Is it normal? If not, applying Rubin's combination rules (regardless of software) might not be appropriate. This is discussed here:
>>> http://www.stata.com/support/faqs/statistics/combine-results-with-multiply-imputed-data/
>>> along with how to get Stata to combine results from commands that do not support -mi-.
>>> btw. Amelia seems to assume the data to follow a multivariate normal distribution, which might not be appropriate with items used for Mokken scale analysis, and you need to think about this, too.
>>> Best
>>> Daniel
>>> --
>>> Hi Stata Users
>>> I have run multiple imputations using R's Amelia package and would like to use the imputed dataset to analyse Mokken Scale Analysis. But mokken requires the object to be a data frame.
>>> I tried as.data.frame(x) and as.matrix(x) but it says that it cannot coerce class amelia into a data frame or matrix.
>>> australia93=as.data.frame(australia93)
>>> Error in as.data.frame.default(australia93) :
>>> cannot coerce class '"amelia"' into a data.frame
>>> coefH(australia93)
>>> Error in check.data(X) : Data are not matrix or data.frame
>>> Any help would be much appreciated.

--
Laura Schwirz
PhD Candidate and IRCHSS Scholar
Department of Political Science
Trinity College Dublin
Dublin 2
Republic of Ireland
Email: schwirzl@tcd.ie

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2013-02/msg01115.html","timestamp":"2014-04-18T10:46:29Z","content_type":null,"content_length":"20327","record_id":"<urn:uuid:b7c91a4b-548b-48b1-9fc0-56e0fa772a01>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] What should be the value of nansum of nan's?
Charles R Harris  charlesr.harris@gmail....
Mon Apr 26 12:03:38 CDT 2010

On Mon, Apr 26, 2010 at 10:55 AM, Charles R Harris <charlesr.harris@gmail.com> wrote:
> Hi All,
> We need to make a decision for ticket #1123 <http://projects.scipy.org/numpy/ticket/1123#comment:11> regarding what nansum should return when all values are nan. At some earlier point it was zero, but currently it is nan; in fact it is nan whatever the operation is. That is consistent, simple and serves to mark the array or axis as containing all nans. I would like to close the ticket and am a bit inclined to go with the current behaviour, although there is an argument to be made for returning 0 for the nansum case. Thoughts?

To add a bit of context, one could argue that the results should be consistent with the equivalent operations on empty arrays and always be:

In [1]: nansum([])
Out[1]: nan

In [2]: sum([])
Out[2]: 0.0

More information about the NumPy-Discussion mailing list
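The two conventions under discussion are easy to state side by side; a pure-Python sketch of both (my own helpers, not NumPy's actual implementation):

```python
import math

def nansum_zero(values):
    # Zero convention: ignore NaNs entirely, so an all-NaN (or empty)
    # input sums to 0, matching sum([]) == 0.
    return sum(v for v in values if not math.isnan(v))

def nansum_nan(values):
    # NaN convention: mark an all-NaN input by returning NaN,
    # flagging the array or axis as containing no valid data.
    kept = [v for v in values if not math.isnan(v)]
    return sum(kept) if kept else float("nan")

nan = float("nan")
print(nansum_zero([nan, nan]))       # -> 0
print(nansum_nan([nan, nan]))        # -> nan
print(nansum_zero([1.0, nan, 2.0]))  # -> 3.0
```

(NumPy ultimately resolved the ticket in favor of the zero convention: modern np.nansum returns 0 for all-NaN input.)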
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-April/050148.html","timestamp":"2014-04-18T18:27:08Z","content_type":null,"content_length":"4190","record_id":"<urn:uuid:bfdacb27-e72a-4b6e-ad7b-660570b7fc63>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Let the two roots of the quadratic equation 9x^2 - 7x - 6 = 0 be r1 and r2. Evaluate the following:
1. (1/r1) + (1/r2)
2. ((1/r1) + (1/r2))^2

• one year ago
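Assuming the garbled equation is 9x^2 - 7x - 6 = 0, Vieta's formulas answer both parts without finding the roots: 1/r1 + 1/r2 = (r1 + r2)/(r1·r2) = (7/9)/(-6/9) = -7/6, and its square is 49/36. A quick numeric check:

```python
import math

# Roots of the assumed equation 9x^2 - 7x - 6 = 0 via the quadratic formula.
a, b, c = 9.0, -7.0, -6.0
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)

part1 = 1 / r1 + 1 / r2   # Vieta: (sum of roots)/(product) = (7/9)/(-2/3) = -7/6
part2 = part1 ** 2        # 49/36
print(part1)  # -> -1.1666... (-7/6)
print(part2)  # -> 1.3611... (49/36)
```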
{"url":"http://openstudy.com/updates/500f0fbee4b009397c6737bb","timestamp":"2014-04-19T20:11:14Z","content_type":null,"content_length":"90677","record_id":"<urn:uuid:8eb69aef-1b53-48d4-9b04-b755ee4e33e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
PME28 - All papers by presenting author
PME28, Bergen, Norway, 14-18 July 2004
The 28th International Conference of the International Group for the Psychology of Mathematics Education

Abrahamson, Dor (PP) Problab: multi-agent interactive computer models for grounding probability in perceptual judgments of spatial proportions and in accessible mathematization
Acuña Soto, Claudia (PP) Synoptical and epistemological vision of points in a figural task on the cartesian plane
Ainley, Janet (RR) Constructing meanings and utilities within algebraic tasks
Alatorre, Silvia (RR) Proportional reasoning of quasi-illiterate adults
Alcock, Lara (RR) Uses of example objects in proving
Amato, Solange (RR) Improving student teachers' attitudes to mathematics
Amit, Miriam (RR) Time and flow as parameters in international comparisons: a view from an eighth grade algebra lesson
Anaya, Marta (SO) The development of mathematical concepts: the case of function and distribution
Anghileri, Julia (RR) Disciplined calculators or flexible problem solvers?
Antonini, Samuele (RR) A statement, the contrapositive and the inverse: intuition and argumentation
Applebaum, Mark (SO) Identification of mathematical mistakes by undergraduate students
Arnon, Ilana (RR) Solution - what does it mean? Helping linear algebra students develop the concept while improving research tools
Asghari, Amir Hossein (RR) Organizing with a focus on defining, a phenomenographic approach
Askew, Mike (RR) Mediation and interpretation: exploring the interpersonal and the intrapersonal in primary mathematics lessons
Attorps, Iiris (SO) Secondary school teachers' pedagogical content knowledge
Axiak, Cettina (RR) Being sensitive to students' mathematical needs: what does it take?
Back, Jenni (SO) Exploring the challenge of online mediation
Bairral, Marcelo (SO) Diversity of geometric practices in virtual discussion groups
Ball, Lynda (RR) A new practice evolving in learning mathematics: differences in students' written records with cas
Barabash, Marita (SO) Development of insight of future math teachers as a result of follow-up after development of mathematical concepts
Barash, Aviva (SO) Co-teaching by mathematics and special education pre-service teachers in inclusive seventh grade mathematics classes.
Barkai, Ruthi (SO) Is it a mathematical proof or not? Elementary school teachers' responses
Barnes, Hayley (SO) Investigating using the theory of realistic mathematics education to elicit and address misconceptions
Barwell, Richard (WS) Researching the teaching and learning of mathematics in multilingual classrooms
Baturo, Annette (RR) Empowering andrea to help year 5 students construct fraction understanding
Bayazit, Ibrahim (RR) Understanding inverse functions: the relationship between teaching practice and student learning
Becker, Joanne Rossi (SO) An investigation of beginning algebra students' ability to generalize linear patterns
Beckmann, Sybilla (SO) Singapore's elementary school mathematics texts and current research on whole number operations
Berg, Claire (SO) The role of learning communities in mathematics in the introduction of alternative ways of teaching algebra
Beswick, Kim (RR) The impact of teachers' perceptions of student characteristics on the enactment of their beliefs
Bikner-Ahsbahs (RR) Towards the emergence of constructing mathematical meanings
Bingolbali, Erhan (RR) Identity, knowledge and departmental practices: mathematics of engineers and mathematicians.
Blanton, Maria (RR) Elementary students' capacity for functional thinking
Bobis, Janette (RR) For the sake of the children: maintaining the momentum of a professional development program
Bobos, Georgeana (SO) Is theoretical thinking necessary in linear algebra proofs?
Borba, Marcelo (SO) Distance education in mathematics
Bosch, M. Asunción (SO) Changing prospective mathematics teachers' conceptions on assessment: a teacher training strategy
Botten, Geir (PP) Green response and green dialog - communication among teachers and students
Boufi, Ada (RR) From formal to semi-informal algorithms: the passage of a classroom into a new mathematical reality
Bragg, Philippa (RR) A measure of rulers - the importance of units in a measure
Breen, Chris (RR) In the serpent's den: contrasting scripts relating to fear of mathematics
Brendefur, Jonathan (SO) Elementary students' use of conjectures to deepen understanding
Cabral, Tania C. B. (RR) Formal inclusion and real diversity in an engineering program of a new public university
Callingham, Rosemary (RR) Primary students' understanding of tessellation: an initial exploration
Camacho, Matías (PP) Students' understanding of area and definite integral concepts within an enhanced computer learning environment
Chapman, Olive (RR) Facilitating peer interactions in learning mathematics: teachers' practical knowledge
Charalambous (RR) Towards a unified model on teachers' concerns and efficacy beliefs related to a mathematics reform
Chick, Helen (RR) What is unusual? The case of a media graph
Christou, Constantinos (RR) Proofs through exploration in dynamic geometry environments
Clark, Karen (RR) Establishing a professional learning community among middle school mathematics teachers
Clarke, David (RR) Patterns of participation in the mathematics classroom
Cooper, Tom (RR) Young white teachers' perceptions of mathematics learning of aboriginal and non-aboriginal students in remote communities
Cortes, Anibal (RR) Two important invariant tasks in solving equations: analyzing the equation and checking the validity of transformations
Cretchley, Patricia (SO) Mathematics confidence and approaches to learning: gender and age effects in two quite different undergraduate mathematics courses
Dawson, A. J. (Sandy) (SO) Project mentor: measuring the growth of mentor and novice teacher mathematics content knowledge
De Bock, Dirk (PP) The illusion of linearity: a literature review from a conceptual perspective
De Bock, Dirk (SO) Overcoming students' illusion of linearity: the effect of performance tasks
De Hoyos, Maria G. (RR) Uncertainty during the early stages of problem solving
Deloustal-Jorrand, Virginie (RR) Studying the mathematical concept of implication through a problem on written proofs
Di Martino, Pietro (RR) From single beliefs to belief systems: a new observational tool
Doig, Brian (RR) Assessment as a strategic tool for enhancing learning in teacher education: a case study
Domingo, Paola (RR) Patterns of reasoning in classroom
Dougherty, Barbara (RR) Generalized diagrams as a tool for young children's problem solving
Downs, Martin (RR) Correspondences, functions and assignation rules.
Dvora, Tali (RR) Unjustified assumptions based on diagrams in geometry
Earnest, Darrell S. (SO) The dots problem: third graders working with function
Eichler, Andreas (RR) The impact of individual curricula on teaching stochastics
Ejersbo, Lisser Rye (DG) Communication in mathematics classroom - questioning and listening
Elia, Iliada (RR) The functions of pictures in problem solving
Engelbrecht, Johann (SO) Comparing assessment modes and question formats in undergraduate mathematics
English, Lyn (RR) Mathematical modelling with young children
Esteley, Cristina (RR) Extending linear models to non-linear contexts: an in-depth study about two university students' mathematical productions
Evangelidou, Anastasia (RR) University students' conceptions of function
Evans, Jeff (SO) Mathematics, popular culture and inclusion: some findings and methodological issues
Fakir Mohammad, Razia (RR) Practical constraints upon teacher development in pakistani schools.
Falcade, Rossana (RR) Towards a definition of function
Favilli, Franco (PP) Sona drawings: a didactical software
Favilli, Franco (SO) Mathematics education in multicultural contexts: a vocational challenge for the italian teaching staff
Ferrara, Francesca (RR) "Why doesn't it start from the origin?": hearing the cognitive voice of signs
Ferrari, Pier Luigi (RR) Mathematical language and advanced mathematics learning
Ferreira, Rosa (SO) Improving written tests: what do student teachers think about it?
Filloy, Eugenio (RR) Arithmetic/algebraic problem-solving and the representation of two unknown quantities
Finnane, Maureen (SO) The role of assessing counting fluency in addressing a mathematical learning difficulty
Forgasz, Helen (DG) Examining theses
Forgasz, Helen (RR) Equity and computers for mathematics learning: access and equity
Frade, Cristina (RR) The tacit-explicit dynamic in learning processes
Francisco, John (SO) The interplay of students' views on mathematical learning and their mathematical behavior: insights from a longitudinal study on the development of mathematical ideas.
Freiman, Viktor (RR) Tracking primary students' understanding of the equal sign
Friedlander, Alex (RR) Levels of student responses in a spreadsheet-based environment
Fritzlar, Torsten (RR) Sensitivity for the complexity of problem oriented mathematics instruction - a challenge to teacher education
Fuglestad, Anne Berit (RR) ICT tools and students' competence development
Gagatsis, Athanasios (RR) The effects of different modes of representation on mathematical problem solving
Gallardo, Aurora (PP) On the possibilities of success or failure of a teaching model.
Algebraic blocks for low-school-performance students Gates, PeterDGThe role of mathematics in social exclusion/inclusion: foregrounding children’s backgrounds Georgiadou-Kabouridis, SOA newly-qualified teacher's responsibility for mathematics teaching Gervasoni, AnnPPExploring the mathematical knowledge of grade 1 and grade 2 children who are vulnerable in learning mathematics Giraldo, VictorRRDescriptions and conflicts in dynamic geometry Glanfield, FlorenceSOMathematics teacher understanding as an emergent phenomenon Glass, BarbaraRRStudents problem solving and justification Gomez-Chacon, Ines PPEmotion and affect in mathematical education exploring a theoretical framework of interpretation Gómez, PedroRRDidactical knowledge development of pre-service secondary mathematics teachers González-Martín, RRLegitimization of the graphic register in problem solving at the undergraduate level. The case of the improper integral. Gonzalez, María JoséSOGeneric and specific competences as a framework to evaluate the relevance of prospective mathematics teachers training syllabuses Gooya, ZahraSOWhy the mathematics performance of iraninan students in timss was unique? Gorev, DvoraRRWill “the way they teach” be “the way they have learned”? Pre-service teachers’ beliefs concerning computer embedding in math teaching. Grainger, HarrySOCreativity in schools: interpretations and the problematics of implementation. Groves, SusieRRProgressive discourse in mathematics classes ? 
The task of the teacher Guimarães, Luiz CarlosRRTeacher's practices and dynamic geometry Gusev, ValeryPPAbstraction in the learning of mathemics by fifth-graders in russia Gutierrez, AngelRRCharacterization of students’ reasoning and proof abilities in 3-dimensional geometry Haja, ShajahanSOWorkshop on developing problem solving competency of prospective teachers Hallagan, JeanRRA teacher's model of students' algebraic thinking about equivalent expressions Halverscheid, StefanRROn motivational aspects of instructor-learner interactions in extra-curriculum activities Hannula, MarkkuWSCreative writing Hannula, MarkkuRRDevelopment of understanding and self-confidence in mathematics * grades 5–8 Harel, GuershonRRMathematics teachers' knowledge base: preliminary results Hart, KathleenPPBenchmarks in an early number curriculum Healy, LuluRRThe role of tool and teacher mediations in the construction of meanings for reflection Healy, LuluPPThe appropriation of notions of reflection by visually impaired students Hegedus, StephenPPDynamic models of linear functions Hegedus, StephenWSSymbolic cognition in advanced mathematics Heinze, AisoRRThe proving process in mathematics classroom - method and results of a video study Heirdsfield, AnnSOEnhancing mental computation in year 3 Hoch, MaureenRRStructure sense in high school algebra: the effect of brackets Hopkins, SarahRRExplaining variability in retrieval times for addition produced by students with mathematical learning difficulties Horne, MarjRREarly gender differences Huillet, DanielleSOThe evolution of secondary school mozambican teachers knowledge about the definition of limits of functions Hähkiöniemi, MarkusRRPerceptual and symbolic representations as a starting point of the acquisition of the derivative Ilany, Bat-ShevaRRImplementation of a model using authentic investigative activities for teaching ratio and proportion in pre-service teacher education. 
Inglis, MatthewRRMathematicians and the selection task Iversen, KjærandPPStudent's attempt to solve several-step problems in probability Jahr, EinarSOWhat is a mathematical concept? Jirotková, DarinaRRInsight into pupils’ understanding of infinity in a geometrical context Jirotková, DarinaPPConstructivist aproaches in the education of future teachers, case of geometry Johnsen-Høines, MaritRRTextual differences as conditions for learning processes Juter, KristinaSOStudents'conceptions of limits and infinity Kaino, LucksonRRStudents' gender attitudes towards the use of calculators in mathematics instruction Kalyanasundaram, RRTeaching arithmetic and algebraic expressions Kaput, JamesAn introduction to the profound potential of connected algebra activities: issues of representation, engagement and pedagogy Karaagaç, Mehmet RRThe tension between teacher beliefs and teacher practice: the impact of the work setting. Kelleher, HeatherRRWhat a simple task can show: teachers explore the complexity of children?S thinking Kidron, IvyRRConstructing knowledge about the bifurcation diagram: epistemic actions and parallel constructions Klein, RonithSODo computers promote solving problems ability in elementary school? Kleve, BodilSOInterpretation and implementation of the l97's mathematics curriculum Knoll, EvaRRExperiencing research practice in pure mathematics in a teacher training context Ko, Ho-KyoungPPWow! It would be fun to learn math by playing a game! 
: number concept and mathematical strategy in the game yut-nori Koyama, MasatakaSOResearch on the process of understanding mathematics: the inclusion relation among fractions, decimals and whole numbers Kramarski, BrachaRREnhancing mathematical literacy with the use of metacognitive guidance in forum discussion Kratochvilová, JanaSOClassification as a tool for building structure Kubinova, MarieSOCharacteristics of the project as an educational strategy Kwon, Oh NamSORetention effect of rme-based instruction in differential equations Kyriacou, ChrisPPThe impact of the national numeracy strategy in England on pupils' confidence and competence in early mathaemtics Kaasila, RaimoSOThe connection between entrance examination procedures and pre-service elementary teachers' achievement in mathematics Laine, AnuSOPre-sevice elementary teachers' situational strategies in division Lamb, JaneenRRThe impact of developing teacher conceptual knowledge on students' knowledge of division. Lavy, IlanaRRKinds of arguments emerging while exploring on a computerized environment Lawrie, ChristineRRUsing solo to analyse group responses Ledesma Ruiz, Elena RRConnections between qualitative and quantitative thinking about proportion: the case of Paulina. 
Leikin, RozaRRTowards high quality geometrical tasks: reformulation of a proof problem Leppäaho, HenrySODeveloping of mathematical problemsolving at comprehensive school Leron, UriRRMathematical thinking and human nature: consonance and conflict Leu, Yuh-ChynRRThe mathematics pedagogical values delivered by an elementary teacher in her mathematics instruction: attainment of higher education and achievement Leung, AllenRRVisual reasoning in computational environment: a case of graph sketching Levenson, EstherRRElementary school students' use of mathematically-based and practically-based explanations: the case of multiplication Lewis, JenniferSOMathematics teaching as invisible work Liljedahl, PeterRRMathematical discovery: hadamard resurected Lin, Pi-JenRRSupporting teachers on designing problem-posing tasks as a tool of assesment to understand students' mathematical learning Lins, Abigail Fregni (bibi)DGTowards new perspectives and new methodologies for the use of technology in mathematics education Lins, Abigail Fregni (bibi)SOCabri-géometre: two ways of seeing it and using it Littler, GrahamPPTactile manipulation and communication Lo, Jane-JaneRRProspective elementary school teachers¡¦ solution strategies and reasoning for a missing value proportion task Mark - Zigdon, NitzaSOFirst graders’ and kindergarten children’s knowledge of grafic symbole system of numbers and addition and subtrraction Markovits, ZviaSOStudents' ability to cope with routine tasks and with number-sense tasks in israel and in korea Maschietto, MichelaRRThe introduction of calculus in 12th grade: the role of artefacts Matos, João FilipeSOLearning school mathematics versus being matematically competent – a problematic relationship Mcclain, KayPPA framework for action for mathematics teacher development Mcclain, KayRRThe critical role of institutional context in teacher development Mcdonough, AndreaRRStudents' perceptions of factors contributing to successful participation in mathematics 
Merenluoto, KaarinaRRThe cognitive-motivational profiles of students dealing with decimal numbers and fractions Mevarech, Zemira R.SOTeachers create mathematical argumentation Michaelidou, NikiRRThe number line as a representation of decimal numbers: a research with sixth grade students Middleton, JamesRRPreservice teachers conceptions of mathematics-based software Misailidou, ChristinaRRHelping children to model proportionally in group argumentation: overcoming the ‘constant sum’ error Misailidou, ChristinaPPDeveloping effective ‘ratio’ teaching in primary school: results from a case study Mitchelmore, MichaelRRAbstraction in mathematics and mathematics learning Miyakawa, TakeshiRRReflective symmetry in construction and proving Mkhize, Duduzile SOProfessionalism in mathematics teaching in south africa * are we transforming? Modestou, ModestinaRRStudents' improper proportional reasoning: the case of area and volume of rectangular figures Molina González, MartaSOIn the transition from arithmetic to algebra: misconceptions of the equal sign. 
Monaghan, JohnRRAbstraction and consolidation Monteiro, CarlosRRCritical sense in interpretations of media graphs Morselli, FrancescaRRBetween affect and cognition: proving at university level Mousley, JudithRRAn aspect of mathematical understanding: the notion of "connected knowing" Mousoulides, NikosRRAlgebraic and geometric approach in function problem solving Mulligan, JoanneRRChildren's development of structure in early mathematics Mungsing, WanchareeSOMathematical process: an analysis of the student communication on open-ended problem Måsøval, HeidiSOStudent authority in mathematics classroom discourse Nardi, ElenaRROn the fragile, yet crucial, relationship between mathematicians and researchers in mathematics education Neria, DoritRRStudents preference of non-algebraic representations in mathematical communication Nicol, CynthiaRRLearning to see in mathematics classrooms Nilsson, PerRRStudents`ways of interpreting aspects of chance embedded in a dice game. Nisbet, StevenRRThe impact of state-wide numeracy testing on the teaching of mathematics in primary schools Novotná, JarmilaDGResearch by teachers, research with teachers Noyes, AndyRRThe poetry of the universe”: new mathematics teachers’ metaphoric meaning-making Nunokawa, KazuhikoRRWhat studnets do when hearing ohters explaining Olive, JohnWSDeveloping algebraic reasoning in the early grades (k-8): the early algebra working group Oliveira, HéliaSOProfessional identity and professional knowledge: beginning to teach mathematics Oliveira, IsolinaSOCelebrating diversity: the role of mathematics in a curricular alternative to promote inclusion Oliveira, PauloSOMight students be knowledge producers? 
Olson, JoRRChanges in teachers’ practices while assuming new leadership roles Outhred, LynneRRStudents' structuring of rectangular arrays Ouvrier-Buffet, CecileRRConstruction of mathematical definitions: an epistemological and didactical study Ozmantar, Mehmet FatihRRMathematical abstraction through scaffolding Pang, JeongsukPPDevelopment of mathematics lesson plans using ict by prospective elementary school teachers Pantziara, MarilenaRRThe use of diagrams in solving non routine problems Paparistodemou, EfiRRDesigning for local and global meanings of randomness Paschos, TheodorusRRIntegrating the history of mathematics in educational praxis. An euclidean geometry approach to the solution of motion problems Pateman, NeilSOWhole school reform in mathematics Pehkonen, ErkkiSOElementary student teachers’ self-confidence as learners of mathematics Pehkonen, LeilaRRThe magic circle of the textbooks an option or an obstacle for teacher change Peled, IritRRSituated or abstract: the effect of combining context and structure on constructing an additive (part-part-whole) schema Pepin, BirgitPPMathematics textbooks and their use in secondary classrooms in England, France and Germany: connections, quality and entitlement Peretz, DvoraRRUsing graphical profiles to study the learning and teaching of mathematics Person, AxelleRRThe role of number in proportional reasoning: a prospective teacher's understanding Pierce, RobynRRLearning to use cas:voices from a classroom Pinto, MarciaRRTechnical school studentsïconceptions on tangent lines Pitta-Pantazi, DemetraRRElementary school students' mental representations of fractions Pittalis, MariosRRA structural model for problem posing Ponte, Joao-PedroSOUnderstanding and transforming our own practice by investigating it Povey, HilaryPPGirls' participation in some realisitic mathematics: reflections from student teachers Povey, HilaryRRSome undergraduates' experience of learning mathematics Price, AlisonSOIs it time to let go of 
conservation of number? Psycharis, GiorgosRRNormalising geometrical constructions: a context for the generation of meanings for ratio and proportion Radford, LuisRRThe sensual and the conceptual: artefact-mediated kinesthetic actions and semiotic activity Rangnes, Toril EskelandPPScaling in elementary school: understanding and learning through a web-based 'scaling workshop' Reikerås, Elin Kirsti LieSOConnections between skills in mathematics and ability in reading. Reynolds, SuzanneSOA study of fourth-grade students' explorations into comparing fractions Rico, LuisSOQuality in mathematics teachers training syllabuses Rivera, FerdinandRRA sociocultural account of students' collective mathematical understanding of polynomial inequalities in instrumented activity Robutti, OrnellaWSEmbodiment, metaphor and gesture in mathematics Robutti, OrnellaRRInfinity as a multi-faceted concept in history and in the mathematics classroom Rodd, MelissaRRSuccessful undergraduate mathematicians: a study of students in two universities
got any tricks to build up t-structures on derived categories

Are there any good tricks to construct a heart of a t-structure? (I'm thinking of the derived category of coherent sheaves of some variety.)

I'll start with the only one I know. If $(T,F)$ is a torsion pair on an abelian category $A$ then you can form the tilt inside $D(A)$. By definition, the tilt consists of the complexes in $D(A)$ whose minus-first cohomology lies in $F$, whose zeroth cohomology lies in $T$, and whose other cohomologies vanish. Unfortunately this method is only good for hearts concentrated in two degrees. An example of such a torsion pair: take $A = Ab$, the category of abelian groups, with $T$ the torsion groups and $F$ the torsion-free groups. In any abelian group you can find the largest torsion subgroup, and the quotient will be torsion free. This can be generalised to arbitrary integral domains and from that to arbitrary integral schemes, although the geometric meaning of the tilt (if there is one) escapes me.

Another example is Bridgeland's category of perverse coherent sheaves, which can be defined as a tilt. I would be very interested in generalisations of Bridgeland's category. There is also a notion of perverse coherent sheaf due to Bezrukavnikov, but from what I understand it is only useful with Artin stacks, as the jump in the dimension of the coarse space given by stabiliser groups is what allows interesting perversities. Comments on these latter perverse sheaves would also be very much appreciated, as I don't really understand the construction.

1 Answer

This will be a short overview of the techniques I am familiar with. For simplicity, I will talk about bounded t-structures, which are determined by their heart $\mathcal{A} = D^{\le 0} \cap D^{\ge 0}$, and about the bounded derived category of coherent sheaves $D^b(X)$ on a variety/stack.

1.
Tilting is in principle extremely powerful: $\mathcal{A}_1$ is obtained by tilting from $\mathcal{A}_2$ whenever objects of $\mathcal{A}_1$ have only two cohomologies with respect to $\mathcal{A}_2$, e.g. $\mathcal{A}_1 \subset \langle \mathcal{A}_2, \mathcal{A}_2[1]\rangle$. (See e.g. Lemma 1.1.2 in http://front.math.ucdavis.edu/0606.5013.)

2. Tilting can be iterated. As an example, using 1., it is a not-too-difficult exercise to see that Bezrukavnikov's t-structures of perverse coherent sheaves can be constructed by iterated tilting. When $D^b(X)$ is equivalent to a derived category of quiver representations, iterated tilting can often be described by iterated quiver mutations.

3. It is not very difficult to construct torsion pairs. For example, when $\mathcal{A}$ is Noetherian, then any subcategory $\mathcal{T} \subset \mathcal{A}$ that is closed under extensions and quotients is the torsion part of a torsion pair $(\mathcal{T}, \mathcal{F} = \mathcal{T}^{\perp})$. Alternatively, any notion of stability condition on a heart $\mathcal{A}$ produces torsion pairs, say via a slope function: $\mathcal{T}_{> \mu_0}$ is the extension-closed subcategory generated by stable objects $E$ with $\mu(E) > \mu_0$. (See http://front.math.ucdavis.edu/0307.5164, section 6.)

4. Given a semi-orthogonal decomposition of a triangulated category, one can construct a t-structure on the full category by gluing t-structures on the components - this is all in the original BBD.

5. It is much easier to construct (unbounded) t-structures in the unbounded derived category $D_{qc}(X)$ of quasi-coherent sheaves. (Any subcategory closed under [1], extensions and small coproducts is the $D^{\le 0}$-part of a t-structure.) Sometimes one can use this to construct bounded t-structures on $D^b(X)$ by showing that they restrict - see e.g. section 2 of http://front.math.ucdavis.edu/0606.5013. But in general t-structures on $D_{qc}(X)$ do not descend, and even if they do, it might be hard to prove.

6. As an example of the latter techniques, when $G$ acts freely on $X$, then $G$-invariant t-structures on $X$ are in 1:1 correspondence with t-structures on the quotient $X/G$ satisfying an additional assumption: tensoring with $f_* \mathcal{O}_X$ is right-exact.

7. Any derived equivalence $D^b(X) \cong D^b(\mathcal{A})$ induces a t-structure on $D^b(X)$ by pull-back - I guess you already knew that!

I realize that I talked mostly about tilting - I do think it's a very powerful method.

Comments:

yes, I already knew that ;) but the rest is incredibly informative: beautiful! – Jacob Bell Aug 2 '12 at 22:57

about iterated tilting: can you confirm if I understand? The first heart is given by those complexes with $H^0 \in T$, $H^{-1} \in F$. If I then have another torsion pair $(T',F')$ on the tilt, the second heart is given by those complexes with $H^0_{\mathcal{H}} \in T'$, $H^{-1}_{\mathcal{H}} \in F'$, where the clumsy $H^\cdot_{\mathcal{H}}$ stands for the "non-standard" cohomology with respect to the first heart. – Jacob Bell Aug 2 '12 at 23:00

Correct. In other words: to tilt, you only need to assume that $\mathcal{A}\subset D$ is the heart of a bounded t-structure; you don't need $D \cong D^b(\mathcal{A})$. – Arend Bayer Aug 2 '12 at 23:09

yes, that's the right way to put it. thanks again, I'll wait a little bit just in case someone else wants to chip in before accepting your answer. – Jacob Bell Aug 2 '12 at 23:14

There is a problem with iterating tilting --- after the very first tilting the heart may be (and frequently is) non-Noetherian! So, the trick as in 3. does not work. – Sasha Aug 3 '12 at 7:58
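As a concrete illustration of the tilting construction discussed above (my own sketch, not part of the thread; conventions as in the comments, with $H^{0} \in T$ and $H^{-1} \in F$): for the torsion pair $(T,F)$ on $Ab$, every abelian group $G$ fits in a canonical short exact sequence, and the tilted heart rearranges the two pieces:

```latex
0 \to T(G) \to G \to G/T(G) \to 0,
\qquad
\mathcal{A}^{\sharp} \;=\; \bigl\{\, E \in D^{b}(Ab) \;:\;
  H^{-1}(E) \in F,\; H^{0}(E) \in T,\; H^{i}(E) = 0 \text{ otherwise} \,\bigr\}.
```

For instance, $\mathbb{Z}/n$ lies in the tilted heart, and so does $\mathbb{Z}[1]$, but $\mathbb{Z}$ itself does not; a quick two-term check of this kind is an easy sanity test for a proposed heart.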
20 Moves or Less Will Solve Rubik’s Cube

People have been solving Rubik’s Cube since it was invented in 1974, and some have gotten quite fast at it. But just how fast is possible? In other words, if you were an omniscient solver, what is the most moves it would ever take you to solve any starting position of Rubik’s Cube? Since there are 43,252,003,274,489,856,000 possible starting positions, many of which require different strategies to solve, answering that question seemed outside the realm of possibility: even a computer checking a million positions per second would need over a million years to examine them all. But by using some high-powered mathematics to chop the problem down to size, a team consisting of a mathematician, a Google engineer, a math teacher, and a programmer (and yes, a powerful computer; even with those reductions, the search consumed roughly 35 years of CPU time) have indeed answered the question, determining that all starting positions are solvable in 20 moves or less. The story was picked up by a number of news outlets, including NPR, Discover, BBC News, and others.
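As a quick sanity check on that position count (my own illustration, not from the original article), the standard counting argument multiplies the arrangements of the corner and edge pieces, then divides out the orientation and parity constraints:

```python
from math import factorial

# 8 corner pieces can be permuted and each twisted 3 ways; 12 edge pieces can
# be permuted and each flipped 2 ways.  Total corner twist, total edge flip,
# and permutation parity are each constrained, cutting the naive count by
# 3 * 2 * 2 = 12.
positions = (factorial(8) * 3**8 * factorial(12) * 2**12) // 12
print(positions)  # -> 43252003274489856000
```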
Physics Forums - View Single Post - Young's Double Slit Experiment - Slit Separation Calculation

1. The problem statement, all variables and given/known data

Calculate the slit separation (d) given that:
Wavelength = 650 nm (plugged in 6.5*10^-7 m)
m = 1 (plugged in 1)
Distance to screen (D) = 37.5 cm (plugged in 0.375 m)
Distance between centre to side order (y) = 0.7 cm (plugged in 0.007 m)

2. Relevant equations

We were only given one equation in our lab manual (the same equation they gave us for a single-slit, slit-width problem, except instead of d they had a there to represent slit width):

d = (m*Wavelength*D)/y, where d is the slit separation

3. The attempt at a solution

I plugged in the numbers and I produced a solution equal to 0.0348 mm. (I made sure to convert to metres before plugging into the equation and then converted back to millimetres by multiplying by 1000.)

What ails me is that the theoretical, or given, slit separation is 0.25 mm. This makes my relative error approximately 88%, and I am positive I did not do the experiment that poorly. Surprisingly though, the answer produced is VERY similar to the given SLIT WIDTH (0.04 mm). Now I checked this a million times and I think I may be stuck in a rut of not seeing something that is supremely obvious but is making me get the wrong answer. Or the person who designed my lab did not supply me with a proper equation to solve this problem. Any help is appreciated.
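For what it's worth, the plug-in arithmetic in the attempt checks out; here is a quick sketch of the computation (my own code, not from the original post):

```python
# Double-slit fringe relation y = m * wavelength * D / d, solved for d.
wavelength = 650e-9   # light wavelength in metres
m = 1                 # fringe order
D = 0.375             # slit-to-screen distance in metres
y = 0.007             # centre-to-fringe distance in metres

d = m * wavelength * D / y   # slit separation in metres
d_mm = d * 1000
print(f"{d_mm:.4f} mm")      # prints "0.0348 mm", the poster's value
```

The code only confirms the arithmetic; whether that number should be compared against the slit separation or the slit width is the physics question the poster is asking.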
Solving for X in a complex log problem

Hello all! I just got my AP Calculus summer assignment, and on it, I found a problem that's really giving me trouble. Anyone care to help? Here it is:

log_3 x^2 = 2log_3 4 - 4log_3 5

I hope that's clear. :3 Thank you in advance to anyone kind enough to help me!

burnbird16 wrote: log_3 x^2 = 2log_3 4 - 4log_3 5

The first step is to use a log rule to convert the subtraction of the two logs on the right-hand side of the equation into one log containing a division. Since you will then have "log[3](one thing) = log[3](another thing)", you can set (one thing) equal to (another thing) and solve the resulting quadratic equation. Don't forget, though, that you cannot have negatives inside logarithms. Only one of the quadratic equation's solutions will be valid within the original logarithmic equation.
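Following the tutor's hint, the right-hand side combines into a single log (a worked sketch of my own, reading the left side as $\log_3(x^2)$):

```latex
2\log_3 4 - 4\log_3 5 \;=\; \log_3 4^2 - \log_3 5^4 \;=\; \log_3\frac{16}{625},
\qquad\text{so}\qquad
x^2 = \frac{16}{625}, \quad x = \pm\frac{4}{25}.
```

If the left side is first rewritten as $2\log_3 x$ (valid only for $x > 0$, the usual textbook treatment), only $x = 4/25$ remains, which matches the tutor's warning about the domain.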
Posts by
Total # Posts: 33

The sides of a quadrilateral are 3, 4, 5, and 6. Find the length of the shortest side of a similar quadrilateral whose area is 9 times as great.

4. You are at Lowes and a man is leaning against three sections of fencing that are 6, 7 and 12 ft long. A Lowes person tells him to discard the 12 ft section and buy a 10 ft section. She informs the customer that this would save money and would fence a bigger triangular a...

3. Your neighbor puts a tiger on his flat roof shed. You and another neighbor argue over the height of the tiger. At a distance of 80 ft from the shed, your angle of elevation from ground level to the top of the shed is 5.74 degrees. The angle of elevation to the top of the tiger ...

2. A spin balancer rotates the wheel of the car at 480 revolutions per min. If the diameter of the wheel is 26 in, what road speed is being tested in miles per hour? Round to the nearest tenth.

1. A hill with a 35 degree grade (incline with horizontal) is cut down for a roadbed to a 10 degree grade. If the distance from the base to the top of the original hill is 800 ft, how many vertical ft will be removed from the hill, rounded to the nearest ft?

College Algebra: Which of the following is equivalent to (2x-3)^2 = 25? A) 2x - 3 = 5 B) 2x - 3 = -5 C) 2x - 3 = 5 and 2x - 3 = -5

How do you make 9 = 6 (using roman numerals) be equal on both sides?

In the ballroom of a historic palace near Paris, a chandelier hangs from the ceiling by a cable. The mass of the chandelier is 390 kg. Find the tension in the cable.

A 10-kg duffle bag is undergoing a vertical acceleration of 0.050 m/s2 (positive for up, negative for down), while its owner is holding it with an upward force. What is the magnitude of this force?

A physics professor lifts her 9.0-kg briefcase at constant speed with an upward force. Find the magnitude of her force.

Actually I didn't know that formula haha. Thanks!

A 32.0-kg penguin on frictionless ice holds onto the end of a horizontal spring with spring constant 250 N/m and oscillates back and forth. What is its period of oscillation?

A 50-kg penguin stands on ice. A helium balloon is attached to the penguin by means of a harness and pulls upward with a force of 90 N. What is the normal force of the ice on the penguin?

How do we know the ith row of an invertible matrix B is orthogonal to the jth column of B^-1, if i is not equal to j?

What is the simplified sum when you evaluate 10/15 + 1/4?

12th grade: I'd say... b

algebra 1: x^2 - 18x + ___ = ( )^2

12th grade algebra 2: For the first one, you put X = x². Then you have (variable change): X² - 6X + 8 = 0, and you use common tools to solve that (use delta). You find X = 2 or X = 4. You have x² = X, so x = sqrt(X) or x = -sqrt(X). Here, the solutions are sqrt(2), -sqrt(2), 2, -2. F...

12th grade: Simple equation: X = -sqrt(2)/sec(1/4) = -sqrt(2)*cos(1/4). It's not a round value.

12th grade: I don't agree, Anonymous. To solve 2nd degree equations (aka find the roots of a trinomial function), you first have to put it in this form: ax² + bx + c = 0 (where a, b and c are reals). Then you have to calculate its discriminant (its delta): Δ = b² - 4ac...

Real GDP calculates the GDP for a longer period of time.

What holds atoms together in a molecule?

What is matter in which every particle is identical?

What is the increase in volume as temperature increases?

I need help on the things like whether every square is a rectangle. Please help!

I know that a rectangle is a square but I am not sure of the rest.

Is every rectangle a square, or is every rhombus a parallelogram? I forgot these things; need help!

Fourth Grade Geometry: Is every rectangle a square? Or is every parallelogram a rhombus? I forgot.
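For the spin-balancer question above (my own sketch, not part of the original posts): the wheel's circumference times rpm gives the distance rolled per minute, which then converts to miles per hour.

```python
import math

rpm = 480.0          # wheel revolutions per minute
diameter_in = 26.0   # wheel diameter in inches

inches_per_min = rpm * math.pi * diameter_in   # distance rolled per minute
mph = inches_per_min * 60 / 63360              # 63,360 inches per mile
print(round(mph, 1))  # -> 37.1
```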
a difficult geometry proof
May 1st 2009, 02:54 PM #1

Here is the problem. Consider an isosceles triangle ABC. Please see the diagram below and IGNORE the dots, which are used to show the positions of the points. Point D is a point on side AB. Suppose that side AB = side AC, angle BAC = 20 degrees, and angle BDC = 30 degrees. Prove that side AD = side BC. Please help. Thanks.

May 1st 2009, 03:38 PM #2

Using the law of sines we have $\frac{\sin (20)}{DC}=\frac{\sin (10)}{AD}$ and $\frac{\sin (80)}{DC}=\frac{\sin (30)}{BC}.$ Dividing these two relations gives us: $\frac{AD}{BC}=\frac{\sin (80) \sin (10)}{\sin(20) \sin (30)}=\frac{2 \cos(10) \sin(10)}{\sin (20)}=1.$

May 1st 2009, 05:31 PM #3

non-trigonometric proof
I guess I forgot to state a requirement for this problem. We should prove it without using trigonometry. Sorry about my oversight.
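As a quick sanity check on the Law of Sines argument in the second post, the ratio AD/BC can be evaluated numerically (a minimal sketch, not part of the original thread):

```python
import math

# AD/BC = sin(80°)·sin(10°) / (sin(20°)·sin(30°)), from the two
# Law of Sines relations in triangles ADC and BDC.
r = math.radians
ratio = (math.sin(r(80)) * math.sin(r(10))) / (math.sin(r(20)) * math.sin(r(30)))
print(ratio)  # → 1.0, up to floating-point error
```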
Organic Text Part I
October 4, 2010

For those who did not attend 360|Flex, Paul Taylor and I collaborated on an organic text TinyTLF demo. We are used to thinking about organic text in terms of path deforming, that is, arranging the letters in the text along a general curve or path. While I've illustrated that process for a single spline, the next step is to extend the method to a general path, which may be discontinuous both spatially and in first derivative. That work is in progress. Another way to think about organic text is for one of the bounds (top, bottom, left, right) to be nonlinear. Instead of having text flow inside strict rectangular bounds, consider text flowing around a left or right spline boundary. Our 360|Flex demo used TinyTLF to render text inside a container whose left boundary was a quadratic Bezier spline. The right boundary could be a spline as well, but the demo was simplified to have only one nonlinear boundary. Paul also got a version of the demo running after his presentation where the spline was animated. This particular quadratic spline is G-1 continuous and non-interpolative. The spline is only guaranteed to pass through the first and last knots in the sequence. The first and last quadratic Beziers are degenerate; they are actually lines, so the quadratic coefficient is zero. This causes the spline to be closer to the beginning and ending boundaries than a typical quadratic segment would be. There is also a tension parameter, which works best for values in the 0.2 - 0.6 range. I personally like about 0.3. I don't know the original attribution for this spline, so the best I can do is credit my computational geometry professor, Dr. R. Tennyson.
The algorithm goes back to at least the late 1970s so the only possible contribution I might be able to claim is addition of tension :) The spline uses instances of a custom quadratic Bezier class for its constituent segments under the assumption that the y coordinates are non-decreasing from segment to segment. This allows a single value to returned from the x-at-y method. A horizontal sweep-line is moved from the top to the bottom of a container. If the sweep line is outside the range of one spline, the x-value at either the minimum or maximum y is used to bound the container, allowing virtual bounds to extend infinitely in either direction. This is illustrated in the screen shot below. If the sweep line falls within either spline boundary, then the y-coordinate is already known. The Quadratic Bezier x-at-y algorithm returns a single x-coordinate that completes identification of the boundary. You can imagine the red line as a proxy for text lines that are laid out by TinyTLF. I’ll post again whenever Paul gets the presentation and demos online so you can see it in practice. In the mean time, you can view the current demo from which Paul developed the spline layout. It’s only been moderately tested, but should be far enough along to deconstruct how the algorithm works. 1. October 4, 2010 at 3:09 pm | Hi! thanks for sharing, this is really interesting, one day I was playing around with an algorithm dealing with liquid and I got some tough questions about how to put it in bottles of different shapes… hmmmmm… this article gave some ideas for finding the bottle limits. It is interesting I have used the hermite interpolation for audio stretching, quite similar to the quadratic bezier. I just wondering what the results are using the quadratic bezier instead to achieve the same. take a look on it http://www.yoambulante.com/en/labs/interpolation.php cheers! 2. 
October 11, 2010 at 6:50 am | Please find attached my blog presenting my new mathematical theory of consciousness. I would hereby like, if you are willing, for the scientists in your community to send me their comments. Dr Clovis Simard,phD
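The x-at-y query described in the post reduces to solving a quadratic in the Bezier parameter t. The original demo is ActionScript, but the idea can be sketched in Python (function and parameter names here are mine, not TinyTLF's, and monotone non-decreasing y is assumed, as the post states):

```python
def quad_bezier_x_at_y(p0, p1, p2, y):
    """Return the single x on the curve at height y; p0, p1, p2 are (x, y) control points."""
    # y(t) = (1-t)^2*y0 + 2t(1-t)*y1 + t^2*y2, rearranged as a*t^2 + b*t + c = 0
    a = p0[1] - 2 * p1[1] + p2[1]
    b = 2 * (p1[1] - p0[1])
    c = p0[1] - y
    if abs(a) < 1e-12:                # degenerate segment: y is linear in t
        t = -c / b
    else:
        disc = b * b - 4 * a * c
        roots = [(-b + s * disc ** 0.5) / (2 * a) for s in (1, -1)]
        # monotone y guarantees exactly one root lies in [0, 1]
        t = next(r for r in roots if -1e-9 <= r <= 1 + 1e-9)
    # evaluate x(t) with the same Bernstein basis
    return (1 - t) ** 2 * p0[0] + 2 * t * (1 - t) * p1[0] + t ** 2 * p2[0]
```

A sweep line at height y then uses this x as the left (or right) text boundary for that line.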
Descriptive Summary
Stiles, Frederick Arthur, 1944-
F. A. Stiles Papers, 1963-1971. Materials are written in English.
2005-215
8 in.
Dolph Briscoe Center for American History, The University of Texas at Austin

Frederick A. Stiles (1944-1992) earned both a Master of Arts in Mathematics (1968) and a Ph.D. in Mathematics Education (1971) from the University of Texas at Austin. This collection consists of class notes and term papers for courses taught in the Mathematics Department at the University of Texas at Austin as well as Stiles' Master's thesis and Ph.D. dissertation.

Biographical Note
Frederick A. Stiles (1944-1992) earned both a Master of Arts in Mathematics (1968) and a Ph.D. in Mathematics Education (1971) from the University of Texas at Austin. At the time of his death Stiles was an Assistant Professor of Mathematics at San Antonio College in San Antonio, TX.

Scope and Contents
This collection consists of class notes and term papers for courses taught in the Mathematics Department at the University of Texas at Austin as well as Stiles' Master's thesis and Ph.D. dissertation. Many of the class notes are for courses taught by Dr. R. L. Moore. Also included are draft copies of "Concerning Dean John R. Silber and the Proposed Dismissal of Professor R. L. Moore" with handwritten notes.
Forms part of the Archives of American Mathematics

Access Restrictions
Unrestricted Access

Preferred Citation
F. A. Stiles Papers, 1963-1971, Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas at Austin.

Index Terms
Subjects
Moore, R. L. (Robert Lee), 1882-
Stiles, Frederick Arthur, 1944-
University of Texas at Austin
Mathematics--Study and teaching

Detailed Description of the Papers
4RM137 "Concerning Dean John R. Silber and the Proposed Dismissal of Professor R. L. Moore" [includes handwritten notations], 1969
4RM137 "Concerning Dean John R. Silber and the Proposed Dismissal of Professor R. L.
Moore” [photocopy of materials], 1969 4RM137 “Some Theorems Regarding Quasi-Continuity” [Master’s thesis], August 1968 4RM137 “An Analysis of Two Approaches to the Teaching of Mathematics Courses for Prospective Teachers” [Ph. D. dissertation], 1971 Term papers: 4RM137 “The Basic Number Axioms and Their Applicability to the Private Secondary School” [M315], August 23, 1963 4RM137 “The Slope Theorems and Their Applicability to the Private Secondary School” [M613a], January 24, 1964 4RM137 “Summations, Their Statistical Applications, the ‘High-Point’ and Basic Integral Theorems” [M613b], May 31, 1964 4RM137 “The Basic Theorems of Induction and Positive Numbers Raised to Positive Integral Powers” [M360K], January 2, 1965 4RM137 “Straight Lines and Matrices” [M333L], March 23, 1965 4RM137 “Development of the Properties of the Counting Numbers from Five Axioms After the Manner of Professor Edmund Landau” [M385.3], 1967 4RM137 “Nine Basic Integral Theorems” [M683a], 1968 4RM138 “Proposal for a Study Concerning an Analysis of the Differences Between the Wall Approach Versus the Current Approach in the Teaching of Mathematics Courses for Elementary Education Majors at the University of Texas at Austin” [Ed. C. 382E.5], January 20, 1969 4RM138 Untitled [M322K (385.1)], undated Notes: 4RM138 Untitled [M624a and M624b with R. L. Moore], 1964-1965 4RM138 Foundations of Geometry [M331 with R. L. Moore], 1965 4RM138 Teaching Problems in Algebra, [M333K with Mr. Reed], 1965 4RM138 Solid Geometry [M309], 1966 4RM138 Geometry [M385.1 with W. T. Guy], 1968 4RM138 Foundations of Mathematics [M688 with R. L. Moore 1968-1969 4RM138 Theory of Sets [M389N with R. L. Moore], 1969 Teaching notes: 4RM138 Mac Nerney, J. S., “An Introduction to Analytic Functions with Theoretical Implications,” 1968 4RM138 Traylor, D. Reginal, “Advanced Mathematics,” 1970 4RM138 Wall, H. S., “Theory of Equations,” undated Work by others: 4RM138 Justice, J. 
V., “Solutions of LaPlace’s Equation in Spherical Coordinates” [MA thesis supervised by H. J. Ettlinger], August 1968 4RM138 Sherrill, J. M., “Some Work with Integrals” [term paper for M375.1 with H. S. Wall], undated
Sequential numeric
Posted Friday, July 05, 2013 1:52 PM #1
SSC Rookie

I have a need to update numeric values in a table column so that they are unique and sequential, the order of which is determined by another column in the same table, and by data in a 2nd table. I am running into a problem when there is duplicate data, and I can't figure out how to get the numeric values to be unique. For the data below, I want the seq_nbr column in TABLE1 to be in order 1,2,3,4 based on the order the codes are listed horizontally in TABLE2.

TABLE1
seq_no                                seq_nbr  dx_code
E3CD8342-1294-4CBA-9201-D51C07E9FB0C  1        366.16
997312BA-8C90-4773-B0FC-1838C46A4728  3        370.03
5DC781A2-71BC-4148-9D56-DA95D3F8F081  4        362.52
E65354B3-F404-430B-8153-EDD7D1921431  4        362.52

TABLE2
dx_code1  dx_code2  dx_code3  dx_code4
366.16    362.52    370.03    362.52

CREATE TABLE Table1 (seq_no UNIQUEIDENTIFIER, seq_nbr INT, dx_code VARCHAR(6))
CREATE TABLE Table2 (dx_code1 VARCHAR(6), dx_code2 VARCHAR(6), dx_code3 VARCHAR(6), dx_code4 VARCHAR(6))

INSERT INTO Table1 (seq_no, seq_nbr, dx_code) SELECT NEWID(), '1', '366.16'
INSERT INTO Table1 (seq_no, seq_nbr, dx_code) SELECT NEWID(), '3', '370.03'
INSERT INTO Table1 (seq_no, seq_nbr, dx_code) SELECT NEWID(), '4', '362.52'
INSERT INTO Table1 (seq_no, seq_nbr, dx_code) SELECT NEWID(), '4', '362.52'
INSERT INTO Table2 (dx_code1, dx_code2, dx_code3, dx_code4) SELECT '366.16', '362.52', '370.03', '362.52'

-- Pivot TABLE2
CREATE TABLE #diag_codes (dx INT IDENTITY, sequence CHAR(8), dx_code VARCHAR(6))

INSERT INTO #diag_codes (sequence, dx_code)
SELECT sequence, dx_code
FROM (SELECT dx_code1, dx_code2, dx_code3, dx_code4 FROM Table2) d
UNPIVOT (dx_code FOR sequence IN (dx_code1, dx_code2, dx_code3, dx_code4)) AS unpvt_assess

SELECT * FROM #diag_codes
DROP TABLE #diag_codes
I tried some queries with the pivot of TABLE2 to get my numeric values updated, but was running into difficulty due to the duplicate dx_code values.

Posted Sunday, July 07, 2013 6:57 PM

I hesitate to say this but I don't think you've got enough information to solve this. Any query that might appear to solve it would need to rely on the ordering of the rows in Table1, which SQL does not guarantee.
Posted Monday, July 08, 2013 12:59 AM
SSC-Addicted

Try this:

IF OBJECT_ID('tempdb..#TempTable') IS NOT NULL
    DROP TABLE #TempTable;

CREATE TABLE #TempTable
(seq_nbr INT IDENTITY(1,1), dx_code1 VARCHAR(6), dx_code2 VARCHAR(6), dx_code3 VARCHAR(6), dx_code4 VARCHAR(6))

INSERT INTO #TempTable (dx_code1, dx_code2, dx_code3, dx_code4)
SELECT '366.16', '362.52', '370.03', '362.52'
--SELECT * FROM #TempTable

IF OBJECT_ID('tempdb..#diag_codes') IS NOT NULL
    DROP TABLE #diag_codes;

CREATE TABLE #diag_codes (rownum INT, seq_nbr INT IDENTITY(1,1), sequence CHAR(8), dx_code VARCHAR(6))

INSERT INTO #diag_codes (rownum, sequence, dx_code)
SELECT ROW_NUMBER() OVER (PARTITION BY dx_code ORDER BY sequence) AS rownum, sequence, dx_code
FROM (SELECT seq_nbr, dx_code1, dx_code2, dx_code3, dx_code4 FROM #TempTable) d
UNPIVOT (dx_code FOR sequence IN (dx_code1, dx_code2, dx_code3, dx_code4)) AS unpvt_assess

SELECT *
FROM #diag_codes AS d
WHERE rownum = 1
ORDER BY seq_nbr

Posted Monday, July 08, 2013 11:44 AM
SSC Rookie

My original post may not have been very clear as to what I would like for the end result. Taking it a step farther to help illustrate the direction I am going, I did a simple join of Table1 and #diag_codes on the dx_code field.

SELECT * FROM Table1 t JOIN #diag_codes dc ON t.dx_code = dc.dx_code

Now that leaves me in a sense with two duplicate rows, the only difference being the seq_no column. I am trying to figure out if there is any logical way to remove the duplicates or prevent the duplicates from occurring, and then update the column Table1.seq_nbr with the value in the field #diag_codes.dx.

I have uploaded two attachments to help illustrate what I am trying to achieve. The "join with duplicates" attachment shows the results of the above query, where I have crossed out the duplicates that I want removed. The "desired result" attachment shows what the data in Table1 should look like after updating the seq_nbr column.
Posted Monday, July 08, 2013 12:41 PM

I have to second what Dwain already stated. You don't have any way to force the order here. Why is Row 2 correct and Row 3 not correct? How do you know what order those will appear? You don't, because you have no ORDER BY, and you have nothing to use as an ORDER BY. For that matter, how do you know if dx_code2 belongs to dx 2? Why can't it be dx_code4? There is nothing other than the perceived order of the unpivot to know this. If you had a normalized table for Table2 this would be not only possible, it would be easy.
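The matching the original poster is after (each Table2 position used at most once, so duplicate codes get distinct numbers) can be sketched outside SQL. This is a hypothetical Python outline of the logic, not T-SQL, and the caveat above still applies: which duplicate row gets which number is arbitrary.

```python
order = ["366.16", "362.52", "370.03", "362.52"]   # Table2's dx_code1..dx_code4
rows  = ["366.16", "370.03", "362.52", "362.52"]   # Table1 rows, in some order

# Queue up the positions available for each code, in Table2 order.
available = {}
for pos, code in enumerate(order, start=1):
    available.setdefault(code, []).append(pos)

# Assign each row the next unused position for its code.
seq_nbr = [available[code].pop(0) for code in rows]
print(seq_nbr)  # → [1, 3, 2, 4]
```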
La Mesa, CA Prealgebra Tutor
Find a La Mesa, CA Prealgebra Tutor

Hello prospective learners. I am a credentialed teacher that has worked for the San Diego Unified School District for the last 14 years. Amongst my experience, I have taught students from pre-K through high school, working in many multicultural environments in group or one-on-one situations.
11 Subjects: including prealgebra, reading, English, writing

My name is Maddie and my background is in mathematics. I understand the difficulties in learning math and am excited at the prospect of teaching people how fun math can be. I have a BS and an MS in Mathematics and am currently teaching and tutoring at a community college.
8 Subjects: including prealgebra, calculus, geometry, algebra 1

"Never let yesterday's disappointments overshadow tomorrow's dreams." I am a graduate of U.C. Irvine (B.A. in Political Science) and have a Master's in Ed/Teaching Credential from Cal State San Marcos. Ten years ago, I was an elementary school teacher in Riverside City.
21 Subjects: including prealgebra, reading, geometry, Chinese

...My schedule is flexible and I am willing to meet my students where they feel comfortable. Nothing is more important to me than my student's success, and my students always leave my lessons feeling that they have learned something valuable. Thank you for reading my profile and I look forward to ...
24 Subjects: including prealgebra, reading, English, calculus

...However, the time and effort put into preparing can be futile if not done in a strategic fashion. I have a natural gift of communication and find that I am great at teaching. If you are willing to put in the time and effort with me, I can get you prepared for the ACT.
9 Subjects: including prealgebra, algebra 1, algebra 2, economics
I = m(v - u), where I is impulse in Newton seconds, m is mass in kilograms, v is final velocity in metres per second and u is initial velocity in metres per second.

Moment = (magnitude of force) x (perpendicular distance from line of action of force to pivot). Moment is measured in Newton metres, force in Newtons, and distance in metres.

Conservation of momentum: m1u1 + m2u2 = m1v1 + m2v2, where m is mass in kilograms, u is initial velocity in metres per second, and v is final velocity in metres per second, i.e. momentum before collision = momentum after collision (providing no external forces act).

F = μR, where F is maximum friction measured in Newtons, μ is the coefficient of friction between the two surfaces, and R is the normal contact force.

Last edited by Daniel123 (2007-12-22 02:46:12)
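The impulse and momentum formulas above can be checked with a short script. A minimal sketch, where the masses and velocities are made-up example values:

```python
# Two carts collide and stick together (perfectly inelastic collision).
m1, u1 = 2.0, 3.0   # kg, m/s
m2, u2 = 1.0, 0.0

# Conservation of momentum: m1*u1 + m2*u2 = (m1 + m2)*v
v = (m1 * u1 + m2 * u2) / (m1 + m2)

# Impulse delivered to the second cart: I = m*(v - u)
impulse = m2 * (v - u2)
print(v, impulse)  # → 2.0 2.0
```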
Equation of a Line
January 23rd 2009, 10:57 AM #1

Let me get this straight.. there are three equations:
slope - y intercept form: y = mx + b
standard form: ac + by + c = 0
point-slope form: y - y1 = m(x + x1)

Now: how do you convert from one form to another form? And I'm not sure I quite understand what each equation represents. For example, in standard form, what does C represent? In slope-y intercept form, what does x represent? This is so confusing. And the thing is, I know the information and the principles, but I can't apply it in actual math questions. Does anybody have any hints on how to learn when to use one equation rather than another, or how to solve problems with these equations?

I'm sure for the standard form you MEANT to write $ax + by + c = 0$. Consider the following:

$ax + by + c = 0$
$by = -ax - c$
$y = \frac{-a}{b}x - \frac{c}{b}$

That is the same as slope-intercept form! The gradient of the line is given by $m = \frac{-a}{b}$, and the y-intercept is the point $\big(0, \frac{-c}{b}\big)$. The point-slope form comes from the fact that the gradient of a line, m, is defined as:

$m = \frac{y - y_1}{x - x_1}$
$y - y_1 = m(x-x_1)$

Originally Posted by ifailatmath
Let me get this straight..
there are three equations:
slope - y intercept form: y = mx + b
standard form: ac + by + c = 0 <-- This should be ax + by + c = 0
point-slope form: y - y1 = m(x + x1) <-- this should actually be $(x-x_1)$

First, think about why each formula is called what it is.

Slope-intercept gives you exactly what it says it does: a slope (m) and a y-intercept (b).

Point-slope also gives you what it says it does: $y - y_1$ is the difference of the y-coordinates of two points somewhere on your line, and $x - x_1$ is the difference of the x-coordinates of those same two points. The slope IS already in the point-slope equation, you just haven't solved for it yet. It is a good equation to use if you are given the graph of a line and asked to find the equation of it.

Look closer at the point-slope equation. If you got "m" by itself by dividing both sides by $(x-x_1)$, that would be your slope, right? $(x-x_1)$ is the "run" part of your slope and $(y-y_1)$ is the "rise" part:

$\frac{\text{change in y values}}{\text{change in x values}}$

1) How do you convert from one form to another form?

I personally like slope-intercept form best. It is easy to work with because it tells you exactly where your line hits the y-intercept, or "b" in that equation. You can get all of these into slope-intercept form by solving for "y":

a) $ax + by + c = 0$
$by = -ax - c$
$y = \frac{-ax}{b}-\frac{c}{b}$

***a, b & c are just coefficients (the numbers in front of your variables x & y).

2) And I'm not sure I quite understand what each equation represents. For example, in standard form, what does C represent?

c is just a number; since it does not have an "x" or "y" attached to it, it will end up as part of your y-intercept. If the letters confuse you, just replace them with a number, any number. That is what is neat about algebra: regardless of what numbers are in your equation you can solve for an answer. While you are trying to understand the concept, try to use numbers that will reduce each other, like 2x + 8y + 10 = 0. Solve for y in that equation and it will put it into slope-intercept form for you.

3) In slope-y intercept form, what does x represent?

Look at the graph of y = x. It is just a line through (0,0) on your graph, right? y = x is actually in slope-intercept form: y = 1x + 0. The slope is understood to be 1, so we leave it out, and since the graph intercepts the y axis at 0, we leave that off too!

This is so confusing.

You'll get it! Hang in there! It takes practice!!

4) Does anybody have any hints on how to learn when to use one equation rather than another, or how to solve problems with these equations?

When you look at a problem you are given, try not to be overwhelmed with all the "stuff" in it. See if you can get "y" by itself using addition, subtraction, multiplication or division. Then once you get y by itself, see if it looks like one of the line equations: slope-intercept, point-slope or standard.
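The conversion described in answer 1) is mechanical enough to express in code. A minimal sketch (the function names are mine, and it assumes b ≠ 0):

```python
def standard_to_slope_intercept(a, b, c):
    """Rewrite ax + by + c = 0 as y = m*x + k; returns (m, k). Assumes b != 0."""
    return -a / b, -c / b

def point_slope_to_slope_intercept(m, x1, y1):
    """Rewrite y - y1 = m*(x - x1) as y = m*x + k; returns (m, k)."""
    return m, y1 - m * x1

# The thread's suggested practice line, 2x + 8y + 10 = 0:
print(standard_to_slope_intercept(2, 8, 10))  # → (-0.25, -1.25), i.e. y = -x/4 - 5/4
```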
Mt Vernon, NY Geometry Tutor Find a Mt Vernon, NY Geometry Tutor ...The majority of students encounter two problems in this subject: trouble with word problems and insufficient math discipline. There are some formulae that need to be memorized, of course, but ultimately a student's success depends on five factors. These are: 0. 15 Subjects: including geometry, reading, algebra 2, algebra 1 ...Even some certified math teachers are not fluent in this subject. I spent 36 years as a mathematics teacher and 22 of those years supervising a department of approximately 30 mathematics teachers. Teaching students how to study was always a priority in our professional development meetings. 9 Subjects: including geometry, algebra 1, algebra 2, precalculus ...I have been tutoring since my days on the Math Team at Stuyvesant HS. I was a high school teacher for a brief time and, more recently, I taught Probability and Statistics at Stony Brook University for a few years, where I received the President's Award for Excellence in Teaching. I believe that... 16 Subjects: including geometry, calculus, statistics, GRE ...I have been playing violin since the 4th grade and have undertaken leadership roles during my time in orchestras. I have been a section leader in school orchestras and concert mistress of a community orchestra. I have also played in small ensembles, in musical pit accompaniment, and in fiddle groups. 17 Subjects: including geometry, chemistry, calculus, statistics ...Best, Rhonda Sarrazin I have over 23 years of successful teaching experience in elementary education teaching grades k-12. I have a BS in elementary education and a master's degree in educational administration. I am currently a student at Union Theological Seminary and in the process of obtaining a Master's of Divinity degree. 
32 Subjects: including geometry, reading, writing, English
● Approximately 2,000 small, uniformly shaped objects (kernels of corn, dried beans, wooden markers, plastic beads)
● 10 paper cups or small beakers
● A 250-ml or 400-ml beaker

Instructional Strategy

Engage
Initiate a discussion on human population with such questions as: How long have humans been on the earth? How do you think the early rate of human population growth compares with the population growth rate today? Why did this rate change? Tell students that this investigation represents a model of population growth rates.

Explore
Have student groups complete the following activities.
● Place the glass beakers on their desks. Begin by placing two objects (e.g., corn or plastic beads) in each. The beaker represents the limits of an ecosystem or ultimately the earth.
● Place 10 cups in a row on their desk. In the first cup, place two objects. In the second cup, place twice as many objects as the first cup (four). Have students record the number of objects on the outside of the cup. Continue this procedure by placing twice as many objects as in the former cup, or doubling the number, in cups 3 through 10. Be sure students record the numbers on the cups.
● Take the beaker and determine its height. Have students indicate the approximate percentage of volume that is without objects. Record this on the table as time 0.
● At timed intervals of 30 seconds, add the contents of cups 1 through 10. Students should record the total population and the approximate percentage of volume in the beaker that is without objects.
● Students should complete the procedure and graph their results as total population versus time. Students may question the need for the 30-second intervals. The length of the time interval is arbitrary; any time interval will do. Preparation of the graph can be assigned as homework.

Range of Results
The mathematics involved in answering the questions may challenge some students.
Assist students when necessary to enable them to accomplish the objectives of the investigation. Table 1 shows the population and the percent of the beaker's volume without objects. A typical student graph is shown in Figure 1.

Explain
Ask the students to explain the relationship between population growth and biological evolution in populations of microorganisms, plants, and animals. Through questions and discussion, help them develop the connections stated in the learning outcome for the activity. Evolution results from an interaction of factors related to the potential for species to increase in numbers, the genetic variability in a population, the supply of essential resources, and environmental pressures for selection of those offspring that are able to survive and reproduce.

Elaborate
Begin by having students explain the results of their activity. During the discussion of the graph, have the students consider some of the following: Are there any limitations to the number of people the earth will support? Which factor might limit population growth first? How does this factor relate to human evolution?

Table 1. Population growth

Time Interval   Population   Percentage of empty volume (400-ml beaker)
0               2            99%
1               4            99%
2               8            99%
3               16           98%
4               32           97%
5               64           95%
6               128          93%
7               256          88%
8               512          80%
9               1024         70%
10              2048         50%
11              4096         0%
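The doubling pattern in Table 1 is easy to reproduce in a few lines of code. A minimal sketch; the 4096-object capacity is inferred from the table's last row, where the 400-ml beaker is full, so the printed percentages are exact and differ slightly from the table's rough classroom estimates:

```python
def population(t, p0=2):
    """Population after t doubling intervals, starting from p0 objects."""
    return p0 * 2 ** t

CAPACITY = 4096  # objects that fill the 400-ml beaker (row 11 of Table 1)

for t in range(12):
    pct_empty = 100 * (1 - population(t) / CAPACITY)
    print(t, population(t), round(pct_empty))
```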
A circle is inscribed in equilateral triangle ABC such that point D lies on the circle and on line segment AC, point E lies on the circle and on line segment AB, and point F lies on the circle and on line segment BC. If line segment AB = 6, what is the area of the figure created by line segments AD, AE and minor arc DE?

How do I get the value of OE or OD (O is the center of the circle)? [asked 27 Sep 2008]

scthakur [28 Sep 2008]:
Unable to draw the figure here. Since ABC is an equilateral triangle, point D, E or F will divide AC, AB or BC into two halves. Now, if O is the center of the circle and we focus on triangle OBF, then BF = 3 and, since OBF is a 30-60-90 triangle, we know OF = BF/sqrt(3) = sqrt(3). Now we can find the area of the circle, subtract it from the area of the triangle, and divide the result by 3 to get the answer.

dk94588 [27 Sep 2008]:
The formula for the radius of a circle inscribed in an equilateral triangle is r = a*sqrt(3)/6. (I know that most people don't know that; I had to look it up.) So you take 6*sqrt(3)/6, which equals sqrt(3). The formula for the area of a circle is pi*r^2, so it is 3.14 * (sqrt(3))^2 = 3.14 * 3, and the area of the circle is about 9.42. There will inherently be an inscribed equilateral triangle inside this circle, with sides 3, 3, 3, using the side bisectors D, E, and F as its vertices. The area of this triangle can be found by halving the height times the base.
So you make half the triangle into a right triangle with a leg of 1.5 and a hypotenuse of 3 (a 30-60-90 triangle), which means that the height of the triangle is 1.5*sqrt(3). So 1.5*sqrt(3) times 3 divided by 2 is 2.25*sqrt(3), the area of the triangle, which is about 3.8971. The area of the circle minus the area of the triangle gives the aggregate area of the three minor arc segments: 9.42 - 3.8971 = 5.5229, so the area of one arc segment is about 1.84097. Take the area of triangle ADE (which is the same as DEF), 3.8971, minus the area of the arc segment, 1.84097; the area bounded by AD, AE, and minor arc DE is approximately 2.056.

dk94588 [27 Sep 2008]:
As for getting O, I'm not sure; someone please figure it out and post it, because I really want to know. Another way to remember the inscribed-circle radius is that it is 1/3 the altitude, and the circumscribed radius is 2/3 the altitude. These kinds of formulas are crucial for the GMAT, although I have never seen a geometry problem quite as difficult as this one in prep or on the GMAT.

dk94588 [28 Sep 2008]:
OK, so because OB bisects angle EBF, you can conclude that angle OBF is 30 degrees and that angle OFB is 90, so that OF = x, BF = sqrt(3)*x, and OB = 2x, with OF being the radius of the circle and BF = 3. That makes sense; I think my way is quicker, though.

Reply [28 Sep 2008], quoting scthakur's solution above:
Unable to draw the figure here. Since ABC is an equilateral triangle, point D, E or F will divide AC, AB or BC into two halves.
Now, if O is the center of the circle and we focus on triangle OBF, then BF = 3 and, since OBF is a 30-60-90 triangle, we know OF = BF/sqrt(3) = sqrt(3). Now we can find the area of the circle, subtract it from the area of the triangle, and divide the result by 3 to get the result.

I got as far as BF = 3; however, I failed to recognize that triangle OBF is a 30-60-90 triangle. The area of an equilateral triangle is s^2 * sqrt(3)/4, which leaves us with 9*sqrt(3). The area of the circle is 3*pi. Required area = (9*sqrt(3) - 3*pi)/3.
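A numeric sanity check of the thread's final answer (a sketch added here, not part of the original posts; the variable names are ours):

```python
import math

s = 6                                    # side of equilateral triangle ABC
r = s / (2 * math.sqrt(3))               # inradius = side/(2*sqrt(3)) = sqrt(3)
triangle = (math.sqrt(3) / 4) * s**2     # area of ABC = 9*sqrt(3)
circle = math.pi * r**2                  # area of inscribed circle = 3*pi
corner = (triangle - circle) / 3         # region bounded by AD, AE, minor arc DE

print(round(corner, 4))                  # about 2.0546
# Agrees with the closed form (9*sqrt(3) - 3*pi)/3 = 3*sqrt(3) - pi.
exact = 3 * math.sqrt(3) - math.pi
assert abs(corner - exact) < 1e-12
```

The small discrepancy with dk94588's 2.05613 comes from rounding pi to 3.14 in the hand computation.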
Theorems for nothing (and the proofs for free)

Some theorems give far more than you feel they ought to: a weak hypothesis is enough to prove a strong result. Of course, there's almost always a lot of machinery hidden below the waterline. Such theorems can be excellent starting-points for someone to get to grips with a new(ish) subject: when the surprising result is no longer surprising then you can feel that you've gotten it. Let's have some examples.

Tags: soft-question, big-list

I like the Dire Straits reference ;) – Grétar Amazeen Nov 13 '09

Time to put this one to bed. (ie, time to close it, I deem.) – Andrew Stacey Jun 23 '10

What happened between November and June that rendered this question no longer relevant? – I. J. Kennedy Oct 9 '10

Wow Andrew. I am not normally surprised when I see you on the list of closers of a question (and most times it is actually there for a good reason), but I did not expect you to go that far, particularly leaving us the mystery of how your question all of a sudden became no longer relevant. Did you intend to use the theorems-for-nothing list, e.g. for a popular talk? If yes, what did you use and how well was it absorbed? But even then, I am not sure whether the question is no longer relevant just because it is not relevant to you... – darij grinberg May 4 '11

Darij: Vaguely relevant meta threads: tea.mathoverflow.net/discussion/210 and tea.mathoverflow.net/discussion/459 (If you really want to discuss this, start a thread on meta about it) – Andrew Stacey May 4 '11

[Closed as no longer relevant by Andrew Stacey, Harry Gindi, S. Carnahan♦ on Jun 23 '10.]
25 Answers

Every compact metric space is (unless it's empty) a topological quotient of the Cantor set. What, every compact metric space? Yes, every compact metric space.

That's quite surprising! What are some good references for this? – Justin DeVries Nov 13 '09

Surprising, yes, but once you know about it, it seems easy enough to cook up a proof. Just write the set as a union of two closed subsets, decide to map the left half of the Cantor set onto one and the right half onto the other, then do the same to each of these two sets, and so on. In the limit you have the map you want, provided you have arranged for the diameters of the parts to go to zero. – Harald Hanche-Olsen Nov 13 '09

Right! Once you know it, it's fine. But I think it's capable of changing one's intuition about what spaces and maps are. After all, the Cantor set is just a sprinkling of dust; how could it be capable of covering a big fat space like the 3-ball? – Tom Leinster Nov 14 '09

After this theorem has done its job changing your intuition, though, it's pretty easy to believe. A surjective continuous map glues stuff together. And the Cantor set is not "just" a sprinkling of dust; it's a sprinkling of a whole lot of rather clumpy dust. So it shouldn't be surprising that you can glue all this clumpy dust into many different forms. – Mark Meckes Dec 2 '09

I also like the fact that every countable compact Hausdorff space is homeomorphic to a countable successor ordinal equipped with its order topology. This is the Mazurkiewicz-Sierpinski theorem, published originally in French (I think) but also available in English in Z. Semadeni's book 'Banach spaces of continuous functions', in section 8 (the chapter on compact 0-dimensional spaces).
A proof of the Alexandroff-Hausdorff theorem (i.e., every compact metric space is a continuous image of the Cantor set) is also there, as well as a bunch of other tasty topology. – Philip Brooker Feb 21 '10

For me, the theorem that every subgroup of a free group is free is a good example of this: it seems to come for free from covering spaces and the fundamental group, but really all the heavy machinery is just moved underground.

Wedderburn's theorem: "Every finite division ring is a field." This is really astonishing if you think of quaternions: nothing analogous in the finite case. Then of course the classification of finite fields is also very beautiful: exactly one with $p^n$ elements ($p$ a prime and $n$ an integer) and no others. And as a bonus, Wedderburn's theorem is one of the crispest in all of mathematics: seven words (or six and a half if you replace division ring by skew-field).

You can save one word by replacing "a field" by "commutative" (but maybe we should count syllables rather than words). – Andreas Blass Jun 20 '11

Great idea, Andreas, thanks! – Georges Elencwajg Jun 21 '11

I had that feeling of getting more than you ought to a couple of weeks ago when reading the first chapter of Rota and Klain's Introduction to Geometric Probability. In particular, I was familiar with the usual derivation of the probability of Buffon's needle crossing a line. So it was amazing to read the solution to a harder problem, Buffon's noodle, which is solved by appealing to a much simpler-seeming general symmetry argument. And like you describe, it forms a kind of teaser trailer to draw you into the rest of the subject.

I agree, this is a completely wonderful argument. It's also a spectacular example of a more general theorem that's easier to prove.
– Tom Leinster Nov 13 '09

Here is a related discussion: gilkalai.wordpress.com/2009/08/03/… – Gil Kalai Nov 21 '09

Isn't almost every theorem in mathematics an example of a theorem "for free"? One defines natural numbers, and then it follows that each of them is a sum of four squares; one defines a notion of a continuous function and of Euclidean space, and Brouwer's fixed point theorem follows. Surely, that is amazing! With that said, here are a handful of examples that lie closer to the surface:

1) Complex-differentiable functions are infinitely differentiable, and in fact analytic.

2) A function of several complex variables that is holomorphic in each variable separately is holomorphic in all of them (if it reminds you of the 'theorem' that a function that is continuous in each variable separately is continuous... well, then, it should). That is Hartogs' theorem.

3) Any bound on the error term in the prime number theorem of the form $\psi(x)=x+O_{\varepsilon}(x^{a+\varepsilon})$ implies the bound $\psi(x)=x+O(x^a \log x)$.

4) Morally related to (3) is the tensor power trick, of which the earliest widely-known example is perhaps the proof of the Cotlar-Stein lemma. One of my favorite examples is Lemma 2.1 from a paper of Katz and Tao on the Kakeya conjecture.

Four squares is nothing; every natural number is also the sum of three triangular numbers. – Zsbán Ambrus Apr 16 '10

My "canonical example" is Banach-Steinhaus in functional analysis: that, in nice locally convex topological spaces (Banach will do), weakly bounded (or pointwise bounded) implies bounded. The machinery is quite technical, usually involving the Baire category theorem, but the result is very simple and very surprising. One especial point I like about this is that when you compare normed vector spaces with Banach spaces, the process of adding more stuff (i.e. completion) actually limits the things that can go wrong.
My intuition is that if you want to limit the bad behaviour then you need to work in smaller spaces rather than larger.

My intuition (for this kind of issue, anyway) is actually the opposite. If you work with a larger space, then there's more "stuff" that nice things (functions, sequences, whatever) have to play nicely with. So the bigger the space, the nicer they must be. – Mark Meckes Nov 13 '09

I agree with Mark: adding stuff tends to rigidify things; think for example of localization. – Alex Collins Nov 13 '09

I agree. It always does seem like you get something for nothing. – Dinakar Muthiah Nov 13 '09

There is Zabreiko's theorem, which extracts the juice of Baire category; by invoking it, the Banach-Steinhaus, Open Mapping and Closed Graph theorems come easily. It says: every countably subadditive seminorm on a Banach space is continuous. Unfortunately I don't know a good reference. – Abhishek Parab Feb 21 '10

@Abhishek Parab: Zabreiko's theorem is proved in Megginson's book 'An Introduction to Banach Space Theory'. It is near the beginning of Section 1.6, which is entitled 'Three Fundamental Theorems'. – Philip Brooker Feb 21 '10

Faithfully-flat descent: it tells you that you can construct quasicoherent sheaves locally on a faithfully-flat cover. This is pretty amazing, because quasicoherent sheaves are, a priori, only Zariski local. So to specify a sheaf requires a lot less data than it initially appears.

Kuratowski's theorem is a great example of a theorem of the form "the only obstructions are the obvious ones," which are always fun to learn about.

I can't resist mentioning the Cayley-Hamilton theorem. Something intuitively correct turns out to be mathematically correct too, but for non-intuitive reasons!
I still remember: its proof (I'm here referring to the one using the correspondence between operation and representation) worked from my perspective like magic: clear, simple, non-trivial and beautiful. It also made me interested in algebra, beyond the lecture in linear algebra for first-year students. It was a nice time...

Indeed this is a wonderful theorem. Why is it intuitively correct? Of all the first-year algebra theorems it was the one where I had no intuition whatsoever. – Gil Kalai Nov 22 '09

The cheezy-easy proof that works over the real or complex numbers is to observe that diagonalizable matrices are dense in the space of matrices, and the theorem is true for diagonalizable matrices (by computation); then notice that the set of matrices satisfying the theorem is closed. If you want to avoid this kind of argument you can enhance your intuition with the Jordan Canonical Form. :) – Ryan Budney Nov 22 '09

Then you just realize that det(tI-A) evaluated at A is some matrix whose entries are monstrously complicated polynomials in the n^2 entries of the matrix A, and since they're identically 0 on C^{n^2} each of those entries must be the zero polynomial; thus the theorem holds over any commutative ring as well. – Steven Sivek Nov 22 '09

Gil: maybe what was meant was the following: consider det(tI-A), and plug in A for t. Personally I wouldn't say this makes C-H "intuitively correct"; instead C-H is suggested by this simple heuristic. – Mark Meckes Nov 23 '09

The Kline sphere characterization, proven by Bing: a compact connected metric space (with at least two points) is the 2-sphere if and only if every circle separates and no pair of points does.

Nitpick: A singleton set seems to be an exception. The wikipedia article seems to have missed that. I'll edit it later if nobody beats me to it. (I have a bus to catch.) – Harald Hanche-Olsen Nov 13 '09

Thanks. Corrected.
– Richard Kent Nov 13 '09

Tychonoff's theorem (the product of any collection of compact spaces is still compact) is amazing and incredibly useful.

It is not surprising if you think of net convergence and note that the product topology is not the box topology (which is not compact). – Martin Brandenburg Dec 30 '09

Once the machinery of (co)homology is developed, Brouwer's Fixed Point theorem seems to come for free: it's extremely straightforward to prove and has quite a lot of important
Also, it has some amazing corollaries: the integral of the gaussian curvature over a compact orientable surface up vote is a topological invariant (${\int\int}_{S}{K}d\sigma = 2\pi\chi(S)$, where $\chi(S)$ is the Euler-Poincaré characteristic of $S$); every compact regular surface with positive gaussian 5 down curvature is homeomorphic to the sphere $S^2$; and so on. add comment Unfortunately, a lot of these kinds of statements in combinatorics are only conjectural. One example (again, only conjectural) that came up in conversation the other day doesn't give a particularly natural result, but it's hugely surprising: the Erdos-Gyarfas conjecture in graph theory, which has pretty much the weakest possible condition for any statement of its form. up vote 4 Now that I think about it, though, Ramsey theory is all about "theorems for nothing." I'm a big fan of the sunflower lemma when it comes to Ramsey-theoretical statements that deserve to down vote be better known -- the only condition there is that your sets have to be relatively small, and there have to be a lot of them. (And that second part is conjecturally not even add comment To me, the canonical example is the Poincare Conjecture. Why SHOULD a three dimensional manifold with trivial fundamental group actually be the sphere? In higher dimensions, there are up vote 4 LOTS of simply connected things, but in two and three, simply connected and compact manifold determines the manifold uniquely. down vote 6 The proof in this case seems rather pricey. – Ryan Budney Nov 13 '09 at 18:59 1 Well, there's a lot of machinery hidden underneath it, yeah. But the statement looks like you're getting a huge amount of specificity from just a small hypothesis. 
– Charles Siegel Nov 13 '09 at 19:26 add comment That there are infinitely many primes has some simple proofs, but I remember being shown that the sum of the reciprocals of the primes diverges which had some more machinery in it up vote 4 down that was kind of neat to my mind. add comment Although not exactly what you're after, the question reminds me of Reynolds' parametricity theorem, or as Philip Wadler puts it: Theorems for Free! up vote 4 The basic idea is that a polymorphic construction (in a polymorphic lambda calculus) must behave uniformly, and so must preserve relations. For example, any term of type $\Pi X. X\to X$ down vote must be the identity function, and every term of type $\Pi X Y. X\times Y\to X$ must be the first projection. add comment I'd say the Tutte-Berge formula, which is a wonderful result that tells you (almost) everything you want to know about matchings in graphs. Although there are many proofs of this theorem, there is a beautiful proof for free using matroids. Strictly speaking, there is a proof for free of Gallai's Lemma (from which Tutte-Berge follows easily). Gallai's Lemma. Let $G$ be a connected graph such that $\nu(G-x)=\nu(G)$, for all $x \in V(G)$. Then $|V(G)|$ is odd and def$(G)=1.$ Remark: $\nu(G)$ is the size of a maximum matching of $G$, and def$(G)$ denotes the number of vertices of $G$ not covered by a maximum matching. up vote 4 Proof for free. In any matroid $M$ define the relation $x \sim y$ to mean $r(x)=r(y)=1$ and $r(\{x,y\})=1$ or if $x=y$. (Here, $r$ is the rank function of $M$). We say that $x \sim^* y$ down vote if and only if $x \sim y$ in the dual of $M$. It is trivial to check that $\sim$ (and hence also $\sim^*$) defines an equivalence relation on the ground set of $M$. Now let $G$ satisfy the hypothesis of Gallai's Lemma and let $M(G)$ be the matching matroid of $G$. By hypothesis, $M(G)$ does not contain any co-loops. Therefore, if $x$ and $y$ are adjacent vertices we clearly have $x \sim^* y$. 
But since $G$ is connected, this implies that $V(G)$ consists of a single $\sim^*$ equivalence class. In particular, $V(G)$ has co-rank 1, and so def$(G)$=1, as required. Edit. For completeness, I decided to include the derivation of Tutte-Berge from Gallai's lemma. Choose $X \subset V(G)$ maximal such that def$(G-X) -|X|=$ def$(G)$. By maximality, every component of $G-X$ satisfies the hypothesis in Gallai's lemma. Applying Gallai's lemma to each component, we see that $X$ gives us equality in the Tutte-Berge formula. add comment The Riesz-Thorin interpolation theorem; the complex analysis behind it never fails to surprise me. up vote 3 down vote add comment Artin-Schreier Theorem: If k is a field of characteristic p and strictly contained in its algebraic closure K and such that [K:k] is finite THEN (was surprising for me..) p is actually 0 and K = k(sqrt(-1)) and k is a real closed field! up vote 3 A not so well known but deserving result from the "failed" thesis of Abhyankar: If K and L are algebraically closed fields contained in another algebraically closed field, then the down vote compositum KL is not necessarily algebraically closed. 1 Abhyankar's result is probably not that surprising to many of us. But I was simply amazed since we take algebra in undergrad and know algebraically closed fields and compositums and we hardly ask that question.. I needed to answer that question later while writing my PhD and to my surprise Abhyankar was doing the same in his thesis. – Jose Capco Nov 21 '09 at 22:01 add comment Oh! From uniqueness of the countable dense linear order without endpoints: take (for instance) a countable ordinal $&lambda;$, and consider the anti-lex order on $\mathbb{Q}\times&lambda;$. This is a countable dense linear without endpoints, so it's order-isomorphic to $\mathbb{Q}$; in particular, $\mathbb{Q}$ contains a subset with order-type $&lambda;$ --- e.g. the isomorphs up vote 3 of anything $(\frac{5}{8},j)$. 
The same result for subsets of $\mathbb{R}$ is a more usual application of transfinite induction/AC/Zorn's lemma; here it's all hidden in the $\ down vote aleph_0$-categoricity result about dlow/oep. add comment I like the theorem, I think it's Gallagher's, that says: Most polynomials with integer coefficients are irreducible and have the full symmetric group as Galois group (over the rational up vote 2 The precise formulation asserts that the number of bad polynomials, i.e., the number of polynomials $X^r + a_1 X^{r-1} + \cdots + a_r$ with $|a_i|\leq N$ that DO NOT have the full down vote symmetric group as Galois group is $$O(r^3(2N+1)^{r-\frac{1}{2}}\log N)$$ (out of $(2N+1)^r$ polynomials). add comment Another good example is the Johnson-Lindenstrauss Lemma that says that any $n$ points in a Hilbert space can be embedded in a $O(\log n)$-dimensional Euclidean space with distances preserved upto any factor. It turns out that JL-style results crop up in many different versions, the main result itself has proofs ranging from 1 page to 10 pages, and it just keeps on up vote 2 giving :) down vote add comment Not the answer you're looking for? Browse other questions tagged soft-question big-list or ask your own question.
An analogue of Lefschetz hyperplane theorem for complements to subvarieties in $\mathbb C^n$?

Let $V^{2k}$ be a complex subvariety of dimension $2k$ (real dimension $4k$) in $\mathbb C^n$. Let $A$ be a complex $(n-k)$-dimensional plane in $\mathbb C^n$.

Question. Is it true that the inclusion induces an injective map $H_{2n-2k-1}(A\setminus (V\cap A))\to H_{2n-2k-1}(\mathbb C^n\setminus V)$?

We don't require $V^{2k}$ to be smooth, but $V^{2k}$ must be equidimensional, i.e. all its irreducible components have dimension $2k$.

Tags: ag.algebraic-geometry, at.algebraic-topology

I was about to direct you to Katz's article in Motives I but... oops, you are not asking for affine Lefschetz hyperplane! – shenghao May 10 '11

1 Answer (accepted)

Yes. Indeed, all irreducible components of $V\cap A$ have positive dimension. So the map is injective, since $H_{2n-2k-1}(A\setminus (V\cap A))=0$, as is shown in the answer to the following question: "A bound on the top homology of a complement to a variety in $\mathbb C^n$".
Results 1 - 10 of 64

- ACM Trans. Math. Software, 1982. Cited by 337 (18 self).
An iterative method is given for solving Ax = b and min ||Ax - b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned. Categories and Subject Descriptors: G.1.2 [Numerical Analysis]: Approximation (least squares approximation); G.1.3 [Numerical Analysis]: Numerical Linear Algebra (linear systems, direct and ...)

- IMA J. Numer. Anal., 1982.
We discuss the numerical solution of the Lyapunov equation ...
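The problem min ||Ax - b||_2 that LSQR targets can be illustrated with CGLS, the conjugate-gradient-on-the-normal-equations method that the abstract above says LSQR is analytically equivalent to. This is a minimal pure-Python sketch (not the LSQR bidiagonalization itself; for real sparse problems use a tested implementation such as scipy.sparse.linalg.lsqr):

```python
def matvec(A, x):
    # y = A x, with A as a list of rows
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def matvec_T(A, y):
    # z = A^T y
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def cgls(A, b, iters=50):
    """Conjugate gradients applied implicitly to A^T A x = A^T b."""
    n = len(A[0])
    x = [0.0] * n
    r = b[:]                          # residual b - A x
    s = matvec_T(A, r)                # gradient direction A^T r
    p = s[:]
    gamma = sum(v * v for v in s)
    for _ in range(iters):
        q = matvec(A, p)
        alpha = gamma / sum(v * v for v in q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = matvec_T(A, r)
        gamma_new = sum(v * v for v in s)
        if gamma_new < 1e-28:         # A^T r = 0: normal equations satisfied
            break
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x

# Overdetermined 3x2 system: least-squares fit of a line y = c0 + c1*t
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [1.0, 2.0, 2.0]
print(cgls(A, b))   # approximately [7/6, 0.5]
```

With exact arithmetic this converges in at most two iterations here, since A has two columns; LSQR's advantage, per the abstract, is better numerical behavior on large ill-conditioned problems.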
A dual-port analysis of the Wiener filter leads to a decomposition based on orthogonal projections and results in a new multistage method for implementing the Wiener filter using a nested chain of scalar Wiener filters. This new representation of the Wiener filter provides the capability to perform an information-theoretic analysis of previous, basis-dependent, reduced-rank Wiener filters. This analysis demonstrates that the recently introduced cross-spectral metric is optimal in the sense that it maximizes mutual information between the observed and desired processes. A new reduced-rank Wiener filter is developed based on this new structure which evolves a basis using successive projections of the desired signal onto orthogonal, lower dimensional subspaces. The performance is evaluated using a comparative computer analysis model and it is demonstrated that the low-complexity multistage reduced-rank Wiener filter is capable of outperforming the more complex eigendecomposition-based methods. - SIAM J. Matrix Anal. Appl , 1990 "... Abstract. This paper presents an improved version of incremental condition estimation, a technique for tracking the extremal singular values of a triangular matrix as it is being constructed one column at a time. We present a new motivation for this estimation technique using orthogonal projections. ..." Cited by 41 (2 self) Add to MetaCart Abstract. This paper presents an improved version of incremental condition estimation, a technique for tracking the extremal singular values of a triangular matrix as it is being constructed one column at a time. We present a new motivation for this estimation technique using orthogonal projections. The paper focuses on an implementation of this estimation scheme in an accurate and consistent fashion. In particular, we address the subtle numerical issues arising in the computation of the eigensystem of a symmetric rank-one perturbed diagonal 2 2 matrix. 
Experimental results show that the resulting scheme does a good job in estimating the extremal singular values of triangular matrices, independent of matrix size and matrix condition number, and that it performs qualitatively in the same fashion as some of the commonly used nonincremental condition estimation schemes. AMS(MOS) subject classifications: 65F35, 65F05. Key words: condition number, singular values, incremental condition estimation.

- SIAM J. Matrix Anal. Appl., 2008. Cited by 39 (9 self).
Many data analysis applications deal with large matrices and involve approximating the matrix using a small number of "components." Typically, these components are linear combinations of the rows and columns of the matrix, and are thus difficult to interpret in terms of the original features of the input data. In this paper, we propose and study matrix approximations that are explicitly expressed in terms of a small number of columns and/or rows of the data matrix, and thereby more amenable to interpretation in terms of the original data. Our main algorithmic results are two randomized algorithms which take as input an m x n matrix A and a rank parameter k. In our first algorithm, C is chosen, and we let A' = CC+A, where C+ is the Moore-Penrose generalized inverse of C. In our second algorithm C, U, R are chosen, and we let A' = CUR. (C and R are matrices that consist of actual columns and rows, respectively, of A, and U is a generalized inverse of their intersection.)
For each algorithm, we show that with probability at least 1 − δ, ‖A − A ′ ‖F ≤ (1 + ɛ) ‖A − Ak‖F, where Ak is the “best ” rank-k approximation provided by truncating the SVD of A, and where ‖X‖F is the Frobenius norm of the matrix X. The number of columns of C and rows of R is a low-degree polynomial in k, 1/ɛ, and log(1/δ). Both the Numerical Linear Algebra community and the Theoretical Computer Science community have studied variants - in Computational Inverse Problems in Electrocardiology, ed. P. Johnston, Advances in Computational Bioengineering , 2000 "... The L-curve is a log-log plot of the norm of a regularized solution versus the norm of the corresponding residual norm. It is a convenient graphical tool for displaying the trade-off between the size of a regularized solution and its fit to the given data, as the regularization parameter varies. The ..." Cited by 29 (2 self) Add to MetaCart The L-curve is a log-log plot of the norm of a regularized solution versus the norm of the corresponding residual norm. It is a convenient graphical tool for displaying the trade-off between the size of a regularized solution and its fit to the given data, as the regularization parameter varies. The L-curve thus gives insight into the regularizing properties of the underlying regularization method, and it is an aid in choosing an appropriate regularization parameter for the given data. In this chapter we summarize the main properties of the L-curve, and demonstrate by examples its usefulness and its limitations both as an analysis tool and as a method for choosing the regularization parameter. 1 Introduction Practically all regularization methods for computing stable solutions to inverse problems involve a trade-off between the "size" of the regularized solution and the quality of the fit that it provides to the given data. What distinguishes the various regularization methods is how... "... 
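The first randomized approximation in the SIAM J. Matrix Anal. Appl. (2008) abstract above, A′ = CC⁺A, is easy to experiment with in NumPy. The sketch below is a simplified illustration rather than the paper's algorithm: it samples columns uniformly at random with a fixed count, whereas the abstract's (1 + ε) guarantee depends on a judiciously chosen, data-dependent sampling distribution; the function name and the test matrix are ours.

```python
import numpy as np

def column_projection_approx(A, c, rng):
    """Approximate A by C C^+ A, where C holds c randomly chosen columns of A.

    Uniform sampling is used here for simplicity; the paper's guarantees
    rely on a data-dependent sampling distribution.
    """
    cols = rng.choice(A.shape[1], size=c, replace=False)
    C = A[:, cols]
    return C @ np.linalg.pinv(C) @ A

rng = np.random.default_rng(0)

# A nearly rank-3 test matrix: rank-3 signal plus tiny noise.
m, n, k = 60, 40, 3
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
A += 1e-3 * rng.standard_normal((m, n))

A_approx = column_projection_approx(A, c=10, rng=rng)

# Compare with the best rank-k approximation A_k from the truncated SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k]

err_cols = np.linalg.norm(A - A_approx, "fro")
err_svd = np.linalg.norm(A - A_k, "fro")
print(err_cols, err_svd)
```

On a matrix this close to low rank, both errors sit near the noise level, which is the kind of behavior the abstract's ‖A − A′‖_F ≤ (1 + ε)‖A − A_k‖_F bound quantifies.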
(Cited by 28, 2 self)

We consider the problem of selecting the "best" subset of exactly k columns from an m × n matrix A. In particular, we present and analyze a novel two-stage algorithm that runs in O(min{mn², m²n}) time and returns as output an m × k matrix C consisting of exactly k columns of A. In the first stage (the randomized stage), the algorithm randomly selects O(k log k) columns according to a judiciously chosen probability distribution that depends on information in the top-k right singular subspace of A. In the second stage (the deterministic stage), the algorithm applies a deterministic column-selection procedure to select and return exactly k columns from the set of columns selected in the first stage. Let C be the m × k matrix containing those k columns, let P_C denote the projection matrix onto the span of those columns, and let A_k denote the "best" rank-k approximation to the matrix A as computed with the singular value decomposition. Then, we prove that ‖A − P_C A‖_2 ≤ O(k^{3/4} log^{1/2} ...

- Argonne Preprint ANL-MCS-P559-0196, Argonne National Laboratory, 1996 (Cited by 26, 1 self)

... this paper, and we give only a brief synopsis here. For details, the reader is referred to the code. Test matrices 1 through 5 were designed to exercise column pivoting. Matrix 6 was designed to test the behavior of the condition estimation in the presence of clusters for the smallest singular value. For the other cases, we employed the LAPACK matrix generator xLATMS, which generates random symmetric matrices by multiplying a diagonal matrix with prescribed singular values by random orthogonal matrices from the left and right. For the break1 distribution, all singular values are 1.0 except for one. In the arithmetic and geometric distributions, they decay from 1.0 to a specified smallest singular value in an arithmetic and geometric fashion, respectively. In the "reversed" distributions, the order of the diagonal entries was reversed. For test cases 7 through 12, we used xLATMS to generate a matrix of order ...

- SIAM J. Matrix Anal. Appl., 1995 (Cited by 20, 0 self)

We describe an algorithm to compute a rank-revealing sparse QR factorization. We augment a basic sparse multifrontal QR factorization with an incremental condition estimator to provide an estimate of the least singular value and vector for each successive column of R. We remove a column from R as soon as the condition estimate exceeds a tolerance, using the approximate singular vector to select a suitable column. Removing columns, or pivoting, requires a dynamic data structure and necessarily degrades sparsity. But most of the additional work fits naturally into the multifrontal factorization's use of efficient dense vector kernels, minimizing overall cost. Further, pivoting as soon as possible reduces the cost of pivot selection and data access. We present a theoretical analysis that shows that our use of approximate singular vectors does not degrade the quality of our rank-revealing factorization; we achieve an exponential bound like methods that use exact singular vectors. We prov...
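The column-removal policy in the sparse QR abstract above (reject a column as soon as the condition estimate exceeds a tolerance) can be illustrated with a toy dense version. This is a demonstration of the idea only, with names of our own choosing: it recomputes an exact SVD at every step, whereas the paper's contribution is doing this cheaply with an incremental condition estimator inside a sparse multifrontal factorization.

```python
import numpy as np

def greedy_rank_revealing(A, cond_tol=1e8):
    """Greedily keep columns of A while the kept set stays well conditioned.

    Returns indices of accepted columns. A column is rejected when adding
    it would push sigma_max/sigma_min of the kept block above cond_tol.
    """
    kept = []
    for j in range(A.shape[1]):
        trial = A[:, kept + [j]]
        s = np.linalg.svd(trial, compute_uv=False)
        if s[-1] > 0 and s[0] / s[-1] < cond_tol:
            kept.append(j)
    return kept

# Example: 5 columns where column 2 is (numerically) a copy of column 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
A[:, 2] = A[:, 0] + 1e-12 * rng.standard_normal(8)

kept = greedy_rank_revealing(A)
print(kept)
```

Here column 2 is numerically dependent on column 0, so it is the one rejected; the surviving columns form a well-conditioned basis.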
User Sam Pinkus (MathOverflow activity summary)

Member for 7 months · last seen Mar 8 at 22:43 · 19 profile views

Feb 22: awarded Scholar; accepted an answer to "Lines in image; are they significant to prime numbers; if so how?"
Feb 19: revised the question (Tidy up. Slightly better 1st image as per suggestions.)
Feb 18: comment: "@WillSawin, I think I get the general idea, but a small correction: $P_{i}/6$ is not always $\pm 1/3$ plus some integer. Ex. $11/6 = 1 + 5/6$."
Feb 18: comment: "Hi @WillSawin. I want to accept this answer, but your argument is not completely clear to me. Proof omitted, is it this: for any prime number $P_{i}$ and some other positive number $d < P_{i}$, there exists another number $M < P_{i}$ s.t. $P_{i}/M = d + r$, where $r \in \pm\{1/d, 2/d, \ldots, (d-1)/d\}$? So, for example, for $P_{i} = 23$ and $d = 4$ we have $23/8 = 4 - 1/8$. This leads to: at, say, $\pi/2$ rads, for each prime number there is only a small set of possible offsets from that vector on which the point can lie, $\pm\{1/4, 2/4, 3/4\}$, which gives the impression of a contour."
Feb 18: revised the question twice (added 113 and 301 characters in body)
Feb 18: comment: "@jmc I've uploaded the original image, and full code in PHP. I used gnuplot to do the initial plot, then GIMP to apply the Gaussian smoothing. I'll try one with a bigger radius and upload it if it turns out."
Feb 17: asked "Lines in image; are they significant to prime numbers; if so how?"; revised it (added original image); awarded Student
Sep 17: awarded Teacher; commented on "Can one measure the infeasibility of four color proofs?" ("True, it is O(1) in Big-O notation, but Big-O is not a very helpful measure of complexity in this example."); revised the answer ("I can comment?")
Sep 16: answered "Can one measure the infeasibility of four color proofs?"; revised it (added 168 and 23 characters in body); awarded Editor
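The back-and-forth in the Feb 18 comments about offsets of $P_i/6$ comes down to a standard fact: every prime $p > 3$ is congruent to $1$ or $5 \pmod 6$, so $p/6$ differs from an integer by exactly $\pm 1/6$ (not $\pm 1/3$). A quick empirical check (our own illustration, not code from the thread, which was in PHP):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Every prime p > 3 leaves residue 1 or 5 mod 6, so p/6 lands an
# exact 1/6 above or below an integer.
residues = {p % 6 for p in primes_up_to(10_000) if p > 3}
print(residues)
```

Restricting primes to a handful of residue classes like this is consistent with the "small set of possible offsets" idea in the comment above.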
Toroidal polyhedron

From Wikipedia, the free encyclopedia

In geometry, a toroidal polyhedron is a polyhedron which is also a toroid (a g-holed torus), having a topological genus of 1 or greater.^1 Non-self-intersecting toroidal polyhedra are embedded toroids, while self-intersecting toroidal polyhedra are toroidal as abstract polyhedra, which can be verified by their Euler characteristic (0 or less); their self-intersecting realization in Euclidean 3-space is a polyhedral immersion. If a toroidal polyhedron is non-orientable then it cannot be embedded in 3-space; for example, the Klein bottle is a non-orientable toroid of Euler characteristic 0. If the toroid has an odd-valued Euler characteristic then it cannot be embedded; however, the Klein bottle demonstrates that the converse is not true.

Stewart toroids

A special category of toroidal polyhedra are constructed exclusively from regular polygon faces, with no intersections, and with the further restriction that adjacent faces may not lie in the same plane. These are called Stewart toroids, named after Professor Bonnie Stewart, who explored their existence. Stewart also defined them as quasi-convex toroidal polyhedra if the convex hull creates no new edges (i.e., the holes can be filled by single planar polygons).

Császár and Szilassi polyhedra

The Császár polyhedron is a seven-vertex toroidal polyhedron with 21 edges and 14 triangular faces. It and the tetrahedron are the only known polyhedra in which every possible line segment connecting two vertices forms an edge of the polyhedron. Its dual, the Szilassi polyhedron, has seven hexagonal faces that are all adjacent to each other. The Császár polyhedron has the fewest possible vertices of any toroidal polyhedron, and the Szilassi polyhedron has the fewest possible faces of any toroidal polyhedron.
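The vertex, edge, and face counts just given determine the genus through Euler's formula, χ = V − E + F = 2(1 − g) for closed orientable surfaces. A quick check (an illustration of ours, not part of the article):

```python
def genus(V, E, F):
    """Genus of a closed orientable polyhedral surface via Euler's formula.

    chi = V - E + F = 2(1 - g), so g = 1 - chi / 2.
    """
    chi = V - E + F
    assert chi % 2 == 0
    return 1 - chi // 2

print(genus(7, 21, 14))   # Csaszar polyhedron: chi = 0, genus 1
print(genus(14, 21, 7))   # Szilassi polyhedron (its dual): also genus 1
print(genus(8, 12, 6))    # cube: chi = 2, genus 0
```

The dual swap of vertex and face counts leaves χ, and hence the genus, unchanged, which is why the Császár and Szilassi polyhedra are both toroids.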
Self-intersecting tori

Allowing faces to intersect produces toroidal polyhedra that are hard to recognize except by determining their Euler characteristic: χ = 2(1 − g). Such polyhedra are toroidal as abstract polyhedra, and their self-intersecting realization in Euclidean 3-space is a polyhedral immersion. Examples include the octahemioctahedron, the cubicuboctahedron, and the great dodecahedron.

Crown polyhedra

A crown polyhedron or stephanoid is a toroidal polyhedron which is also noble, being both isogonal (equal vertices) and isohedral (equal faces). Crown polyhedra are self-intersecting and topologically self-dual.^2

References

1. ^ Stewart (1964).
2. ^ Grünbaum, B.; Polyhedra with hollow faces, Proc. NATO-ASI Conf. on polytopes: abstract, convex and computational, Toronto 1983, Ed. Bisztriczky, T. et al., Kluwer Academic (1994), pp. 43–70.
Physics I Physics 100 Kramer 4 credits An introductory course, employing calculus, which presents the unifying principles of physics, a historical perspective on the development of physical sciences, and practice in analysis of physical phenomena. Topics include linear and rotational motion, Newton’s laws, work, energy, momentum, gravitation, and waves. Students enrolled in this course participate in the laboratory, for which there is a laboratory fee. Corequisite: Mathematics 210. This course is generally offered once a year. Physics II Physics 101 Bergman 4 credits This course continues the calculus-based physics sequence begun in Physics 100. Topics include thermodynamics, electricity, magnetism, optics, special relativity, and wave mechanics. Accompanying laboratory required. Prerequisite: Physics 100. Corequisite: Mathematics 211. This course is generally offered once a year (in the spring). Physics of Sound and Music Physics 204 Sharpe 3 credits This course investigates the physical and mathematical foundations of sound, musical scales, and musical instruments. Acoustic spectra and the construction of instruments are studied, along with sound reproduction and synthesis. Several laboratory sessions demonstrate and investigate many of the effects studied. Prerequisite: Placement in Mathematics 109. Last taught S11. Analog and Digital Electronics Physics 210 Bergman 4 credits This course introduces analog and digital electronic circuitry through both theory and laboratory work. It is suitable for science students wishing to become comfortable working in the laboratory, students with an interest in electronic art and music, students interested in computer science, and also those simply wanting a deeper understanding of the innards of integrated circuits. Analog topics include direct and alternating current circuits, filters, diodes and rectification, bipolar and field effect transistors, operational amplifiers, and oscillators. 
Digital topics include combinational and sequential logic, gates, flip-flops, and memory. Other topics may include audio signals, transducers, analog/digital conversion, and microprocessor basics. Prerequisite: Mathematics 210 and permission of the instructor. This course is generally offered once every two years. Last taught F09. Introduction to Quantum Physics Physics 220 Bergman 3 credits This course examines the observations that led to the quantum theory, in particular, the wave nature of matter and the particle nature of light. Topics include the Bohr semi-classical model of the atom, the de Broglie wave-particle duality, Fourier analysis, the Heisenberg uncertainty principle, the Schrodinger equation and the probabilistic interpretation of quantum mechanics, orbital and spin angular momentum, the hydrogen atom, the Pauli exclusion principle, and multi-electron atoms. The course provides an introduction to physics at the small scale that is necessary for those intending further study in physics and chemistry. Philosophical issues raised by the quantum theory are discussed. Prerequisite: Physics 101. Suggested corequisites: Mathematics 220 and Physics 230. This course is generally offered once a year. Relativity, Cosmology, and Astrophysics Physics 221 Kramer 3 credits A detailed study of the theory of special relativity, including kinematics, dynamics, and electrodynamics. Elements of general relativity and particle physics, with applications to cosmology and astrophysics. Corequisite: Physics 101. This course is generally offered once every two years. Last taught S10. Modern Physics Laboratory Physics 230 Bergman 1 credit Experiments may include e/m of the electron, the photoelectric effect, the hydrogen and deuterium spectra, the Zeeman effect, electron spin resonance, X-ray diffraction, holography, and astronomical observations. Extended laboratory experiments and written reports. Prerequisite: Physics 220 (may be taken concurrently).
This course is generally offered once a year. Classical Mechanics Physics 303 Kramer 4 credits Classical mechanics is a study of matter and energy in the limits that the quantization of nature is not observable and the speed of light can be considered to be infinitely fast. Topics include the harmonic oscillator, celestial mechanics, rigid body motion, rotation, and the Lagrangian formulation of mechanics. Other possible topics include fluids, statics, and nonlinear systems. Prerequisite: Physics 101. This course is generally offered once every two years. Last taught S12. Electricity and Magnetism Physics 304 Bergman 4 credits Electromagnetic forces pervade nature, responsible for such diverse phenomena as chemical bonding and friction. Maxwell’s formulation of electromagnetic theory remains the most complete and elegant description of any of the fundamental forces of nature. Topics include vector calculus, electrostatics, electric fields in matter, magnetostatics, magnetic fields in matter, electrodynamics, and Maxwell’s equations. Prerequisite: Physics 101. This course is generally offered once every two years. Last taught F10. Physics 306T Kramer 4 credits Covers a range of topics at the interface of physics, chemistry, and biology. Topics may include: The shape and function of biological macromolecules, solute transport in organisms via diffusion and fluid flow, aspects of muscle contraction and vision, and an introduction to biomechanics. Prerequisites: Physics 101 and Math 221 and permission of the instructor. This course is generally offered as a tutorial. Fluid Mechanics Physics 308T Bergman 4 credits Fluid mechanics is of great practical importance to such fields as aerodynamics, chemical engineering, meteorology, oceanography, and geophysics. Although an understanding of the basic equations is a century old, aspects of fluid mechanics such as turbulence are also among the last, basic, unsolved problems in classical physics. 
In this course we will study the origin of the governing (Navier–Stokes) equations and the concept of nondimensional numbers, in particular the Reynolds number. We will then study the limits of low Reynolds number (viscous) flow and high Reynolds number (inviscid) flow. Further topics include boundary layers, drag and lift, convection, stratified flow, and rotating fluids. We will then study instabilities and transition to turbulence. The emphasis in this course will be on the physical phenomena, though the course will use mathematics freely. Prerequisite: Physics 101. This course is generally offered as a tutorial. Statistical Thermodynamics Physics 320T Kramer 4 credits Statistical thermodynamics connects the microscopic world with the macroscopic. The concepts of microscopic states (configuration space) and equilibrium are introduced, from which follow macroscopic quantities such as heat, work, temperature, and entropy. The partition function is derived and used as a tool to study ideal gases and spin systems. Other topics include free energy, phase transformations, chemical equilibrium, and quantum statistics and their application to blackbody radiation, conduction electrons, and Bose-Einstein condensates. This course is recommended for those with an interest in physical chemistry. Prerequisite: Physics 220; no previous course in statistics necessary. This course is generally offered as a tutorial. Quantum Mechanics I Physics 420T Bergman, Kramer 4 credits A formal course in quantum mechanics. Operators, state vectors, observables, and eigenvalues. Solutions of Schrodinger’s equation with applications to the harmonic oscillator, the hydrogen atom, and solids. Suggested for those intending to go to graduate school in physics. Prerequisites: Physics 220 and Mathematics 220. Some knowledge of electrodynamics is helpful but not required. This course is generally offered as a tutorial. 
Quantum Mechanics II Physics 421T Bergman, Kramer 4 credits A continuation of Physics 420T. Topics include the time-dependent Schrodinger equation, with applications to radiation, perturbation theory, and applications of quantum mechanics to multi-electron atoms and nuclear physics. Suggested for those intending to go to graduate school in physics. Prerequisite: Physics 420T. This course is generally offered as a tutorial. Solid State Physics Physics 422T Bergman 4 credits Solid state physics is the study of the properties that result from the distribution and interaction of electrons in metals, insulators, and semiconductors. Topics include crystal structures, the reciprocal lattice, lattice vibrations, free electron theory, the Bloch theorem, band structure and Fermi surfaces, semiconductors, superconductivity, magnetism, and defects. Prerequisite: Physics 220. Some knowledge of statistical thermodynamics is helpful but not required. This course is generally offered as a tutorial. General Relativity Physics 440T Kramer 4 credits Covers Einstein’s theory of gravity and its applications. Topics include the treatment of vectors and tensors in curved space-time, the Einstein field equations, the motion of particles in curved space-time, a thorough analysis of black holes, and (time-permitting) an introduction to cosmology. Prerequisites: Physics 221 and Mathematics 351 or permission of the instructor. This course is generally offered as a tutorial. Physics Tutorial Physics 300/400 Staff 4 credits Under these course numbers, juniors and seniors design tutorials to meet their particular interests and programmatic needs. A student should see the prospective tutor to define an area of mutual interest to pursue either individually or in a small group. A student may register for no more than one tutorial in any semester.
NASA: Practical Uses of Math And Science (PUMAS) PUMAS Examples Currently the PUMAS collection contains 84 examples. The PUMAS examples are aimed primarily at helping pre-college teachers enrich their presentation of topics in math and science. • You may find a number of examples that relate to your area of interest, perhaps written in different styles, and possibly taking different approaches to the material. There may also be comments/ lesson plans filed with some of the examples, written by previous users. • Use these examples as a resource -- Select, adapt, recontextualize, and present the material to your students in a way that you judge will best meet your students' needs, abilities, and • You may have ideas related to a particular example that might be helpful to subsequent users. There is an opportunity, on the "Display an Example" page associated with that example, for you to submit your comments/lesson plans. • PUMAS examples are citable references. If you use material from PUMAS examples in other work, please cite them appropriately, e.g.: Chambers, L.H., "How Now, Pythagoras?", 07_10_98_1, The PUMAS Collection, http://pumas.nasa.gov, 1998. Search for Header Content Important header content can be searched by using the Search box in the upper-right-hand corner of your screen. Sort by Field This is a good way to start, or to find examples added after a certain date. Click links below to get the list of all the PUMAS Examples, arranged by that field. Examples by Title by Grade Level | by Date Accepted | by Keywords | by Benchmarks | by Author
Tenafly Algebra 2 Tutor

Find a Tenafly Algebra 2 Tutor

...I have lived, worked, and studied in South America. During my junior spring semester, I directly enrolled in a Bolivian university in La Paz (la Universidad Mayor de San Andres). My coursework, which included history and anthropology, took place entirely in Spanish with other Bolivian students. I have also taken summer courses in Buenos Aires, Argentina.
13 Subjects: including algebra 2, Spanish, geometry, English

...Why do I care about being intuitive? Because the best games are intuitive. Imagine if, in order to play tennis with the Wii, you had to spin the remote. That does not make much sense, right?
2 Subjects: including algebra 2, algebra 1

...I have worked over 20 years in research in the oil, aerospace, and investment management industries. I also have extensive teaching experience -- both as a mathematics tutor and an adjunct professor. I have a Ph.D. in chemical engineering from the California Institute of Technology, and a minor concentration in applied mathematics.
11 Subjects: including algebra 2, calculus, algebra 1, SAT math

I am a Mathematics/Computer Science student entering my junior year at Purchase College. I have been tutoring as a primary job since the 9th grade. I have always tutored independently, and this is my first time working with a tutoring agency.
23 Subjects: including algebra 2, reading, statistics, Java

...My specialties are Math/Algebra and History. I'm firm, fair, understanding, and very funny. Fast Results Almost Guaranteed.
26 Subjects: including algebra 2, reading, biology, algebra 1
A "thread" chosen from Topic of the Moment

Designing the Secondary Math Curriculum: What form should a geometry course for prospective secondary teachers take?

geometry.pre-college

From: Lou Talman
Subject: Geometry Course for Prospective Secondary Teachers
Date: 16 May 1995 16:25:56 -0400

I am currently developing a new geometry course aimed at fulfilling the State of Colorado's certification requirement (for mathematics in secondary ed.) of a three-hour course in geometry. (Of course, there're a number of other requirements it should also meet -- like being useful to practicing teachers later in their careers...) I'm doing this under the auspices of the Rocky Mountain Teacher Education Collaborative, an NSF-funded consortium comprising Colorado State University, the University of Northern Colorado, and my own institution, Metropolitan State College of Denver. Its goal is the improvement of teacher education across the board. My course is supposed to incorporate current knowledge of the way students learn, including collaborative learning, and to use current technology such as Geometer's Sketchpad or Cabri. The course is to be offered at the junior level; I'm to offer it for the first time in the fall of this year.

□ What, in the opinion of the members of this forum, belongs in such a course? What doesn't?
□ What do practicing secondary teachers think of the geometry courses they took (should have taken)? What did those courses do right? Wrong? What did they supply? What should they have?
□ What do practicing teacher educators say about the same things?
□ What do practicing geometers say about the same things?
□ What do practicing mathematicians who are not particularly interested in geometry as mathematics say about the same things?
□ What are the (de-)merits of Sketchpad vs. Cabri? Are there particularly striking (or even moderately important) things one can do with one but not the other?
□ What other technology is available?
□ What forms of collaborative learning do Forum members have (un-)favorable experience with? What else do we know about the ways in which students learn that might be of interest in building a working solution to this problem?
□ What other relevant things have I forgotten to ask? What are the answers to those unasked questions?

Lou Talman
Metropolitan State College of Denver

From: Walter Whiteley
Subject: Re: Geometry Course for Prospective Secondary Teachers
Date: 16 May 1995 18:40:23 -0400

In response to Lou's query: A Geometry Course for High School Teacher Candidates

I just finished teaching a year-long course for a group of current high school mathematics teachers. For the first half of the course, I had an additional group of teacher candidates (Concurrent Education) and Math Majors who are considering a further program in Education. By the way - I use a large definition of geometry - including some topology of surfaces, Euler's formula, and many visualization issues. [Where else will they see this?]

I have a couple of strong suggestions - and some other possibilities:

□ I would definitely use one of Sketchpad or Cabri. (I used some Sketchpad, but not enough.) I think the ideal setting would include a number of weeks in which one class would be held in a computer lab (two per machine) with items to explore.
□ If the equipment will support it, I would get them onto the internet in general (and the Geometry Forum in particular). About half my class got into this, but others had inadequate access.
□ I would use lots of physical manipulatives and devices - lots of emphasis on visualizing, playing with objects, asking 'what if ... ?', etc. Among others, I used Polydron (for making polyhedra and simple plane tilings), spheres (for spherical geometry - see below), sticks, string, clay, paper to cut and fold, and other materials for 3-d examples. [One reason for me to do this is to model this behaviour, so they do this in their classrooms.]
□ For part of the course, I used a preprint version of a book that will be published this summer: Experiencing Geometry, by David Henderson. It was primarily about spherical geometry - with the explicit exploration of which plane properties apply to the sphere, and which change. Lots of emphasis on writing, playing with physical models (what is a 'straight line' on a sphere? Why? Which properties, local, global symmetry, etc. are being used?) I would use it again - it re-explores SAS etc. (including AAA). I would probably supplement it with more examples of 'plane proofs' and what generalizes. This time, I took the Forum discussion on 'symmetries of a quadrilateral' and asked them to apply this to the sphere. [I could send you a version of this assignment if you wish.]
□ For this you must have physical spheres. Key Curriculum Press is working on some nice plastic models. (You might contact them for information - they let people 'borrow' a trunk of them for several weeks at a time this year.) I made reasonable ones out of craft store 'xmas decorations' - clear spheres (cut off the hanging tabs) and clear tori as bases. A cut-off large yogurt container fit over a hemisphere as a 'spherical ruler' for drawing with overhead transparency pens. Alternatives some students used included elastics on tennis balls.
□ I used projects - presented to the class and written up for me (after the presentation). This allowed a variety of interesting topics to be introduced [4-D and the hypercube, knot theory, plane symmetries, icosahedral symmetries, visualization and geometry, quasi-crystals, the 4-colour problem, finite geometries, axioms for projective geometry, rigidity and projective geometry, projections of spheres and map making, ...]. This experience will assist them in asking their students to do projects.
□ I emphasized asking questions about geometry and asking geometric questions.
Every assignment ended with their questions, their responses to my responses to their questions, etc. This dialogue was time-consuming - but it generated MOST of the topics for projects, as well as interesting issues related to teaching geometry, the role of proofs, etc., and some reflections on their own struggles with geometry.
□ I introduced (and tried to return at regular intervals to) Klein's hierarchy of geometries. For each project, the LISTENERS were asked to report which level(s) of geometry were appropriate. I don't (yet) have good material for this - so it was only partially successful. However, I think it is a critical topic for modern users of geometry. Anyone using geometry in applied areas such as computational geometry, robotics, CAD, etc. should know the levels of geometry, and the value of moving up the hierarchy to simplify their problems. High school students who ask the right questions should have a teacher who can point them in the right direction.
□ An interesting source for problems related to 'transformations' (Euclidean, similarity, affine, projective) is the three-volume series Geometric Transformations, by Yaglom (MAA). These were written for Russian high school students - so they are tough, but the books include solutions. A number of them fit nicely with Sketchpad (Cabri) constructions and animations to 'see' what the solution looks like.

These are my thoughts immediately after the course.

Walter Whiteley
York University
Toronto, Ontario

From: Margaret Sinclair
Subject: Re: Geometry Course for Prospective Secondary Teachers
Date: 17 May 1995 21:05:49 -0400

I was part of Walter Whiteley's geometry class for teachers. Everything he mentioned in his reply was of benefit to us -- the push to get us on the Internet, the Russian questions, his lectures on Klein's hierarchy, the introduction to Sketchpad, the projects, and so forth. But two things stand out:

First, many (all?) math teachers think they're pretty good at math.
They've forgotten how hard really new material can be. Attempting spherical geometry was a humbling experience. We had no frame of reference! We were so used to working from known theorems that it was shocking to be unsure. It's important, I believe, to get teachers exploring, and the questions Professor Whiteley asked us to consider made us think, but they also forced us to experiment, and this brings me to the other point.

Second, because we had to experiment we needed to use models. I had played around with Cuisenaire rods, algebra tiles, and so forth at teacher's college and at school, but I had never worked with them as a person who didn't know the answer. It was eye-opening to all of us to see how much we needed the hands-on materials to discover the answers. They weren't window dressing; they were essential. My children couldn't believe that I was drawing great circles with magic markers on plastic spheres to discover how many quadrilaterals were formed and which symmetry groups they belonged to when the sides of one quadrilateral were extended.

Prof. Whiteley could have taught the course from a theoretical point of view, with blackboard diagrams and lots of formulas, but he chose to have us explore, and along with the spherical geometry we learned a great deal about what makes learning possible and enjoyable.

Margaret Sinclair

From: Anthony D Thrall
Subject: Re: Geometry Course for Prospective Secondary Teachers
Date: 18 May 1995 03:21:53 -0400

I am gratified that Lou has raised this question in an urgent context; I trust that he will elicit many deeply felt and thoughtful responses. Related to Lou's requests for input are the broad issues of how much and what kind of geometry we should offer and encourage in high school. We have discussed "how much geometry?" in previous volleys, and Lou has set this aside for us since he's talking about a one-year course (at the junior level).
On the one hand I do not want to distract us from Lou's urgent requests; on the other hand I want to take this opportunity to make a plug for a longer-term, ongoing discussion of the broader issues. In particular, we have an opportunity, through Annie Fetter's wonderful recording of our wrangling, to remember, resume, and build upon our discussions to a degree that was not convenient in the past. I say this because I am dismayed by our institutional amnesia for previous discussions, e.g., Behnke et al. (1960), or Tuller (1967).

The preceding plea for community memory is my major point, but I feel obliged to mention my personal inclinations about high school geometry curriculum, which are as follows. (Lou: I am an applied statistician-mathematician, as well as a recent graduate of the Stanford Teacher Education Program.)

I believe we must restore and update: (i) our notion of a liberal education; and (ii) the place of mathematics in such education. Certainly it is appropriate to marvel at and coo over the brilliant toddlers of human civilization, the Greeks. But it is unseemly for us now, perhaps in our pre-pubescence, to ape these toddlers. Geometry as they practiced it was a contemplation of several related, near-religious topics, such as the structure of the world and our place in it. A similar contemplation today must sooner or later reckon with the subsequent two and a half millennia of science, including Kepler, Newton, and Einstein. The task before us is to present the important ideas, both in their technical substance and in their historical development, within the allotted time and attention span that our students have for such study.

These considerations lead me to strongly endorse Klein's conception of geometry as pertaining to the invariants of specified transformations. In general, I think we need to elaborate a few simple themes, helping students to discover the power of these ideas through technical investigations.
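Klein's conception can even be sketched in a few lines of code, something students who program will recognize at once (the particular transformations below are just convenient examples for illustration, not material from the course): a rotation, a Euclidean transformation, leaves distance unchanged, while a shear, which is merely affine, does not.

```python
import math

# A geometry is characterized by what its transformation group leaves
# invariant: distance survives a rotation (Euclidean geometry) but not
# a shear (affine geometry).

def rotate(p, theta):
    """Rotate point p = (x, y) about the origin by angle theta."""
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def shear(p, k):
    """Apply the shear (x, y) -> (x + k*y, y)."""
    x, y = p
    return (x + k * y, y)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b = (0.0, 0.0), (3.0, 4.0)
print(dist(a, b))                              # 5.0
print(dist(rotate(a, 1.2), rotate(b, 1.2)))    # still 5.0 (up to rounding): a Euclidean invariant
print(dist(shear(a, 2.0), shear(b, 2.0)))      # changed: distance is not an affine invariant
```

The same experiment with other quantities (collinearity, ratios of areas) sorts them into their places in the hierarchy.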
At center stage is the notion of a group of transformations. Why not introduce this in high school geometry? The students already know how to compose and "undo" actions on the computer. I would count the year a success if we could get this idea across, along with some appreciation for the potential power of (future) elaborations of the idea. Behnke et al. (1960) discuss the technical difficulties of this and other proposals. Tuller (1967) considers and classifies the respective justifications for such proposals.

Behnke, H., G. Choquet, J. Dieudonne, W. Fenchel, H. Freudenthal, G. Hajos, and G. Pickert (1960). Lectures on Modern Teaching of Geometry and Related Topics. Proceedings of the seminar held at Aarhus from 30 May 1960 to 2 June 1960 by the International Commission for Mathematics Instruction (ICMI). Aarhus, Denmark: Aarhus Universitet, Mathematics Institute.

Tuller, Annita (1967). A Modern Introduction to Geometries. Princeton, New Jersey: D. Van Nostrand Company, Inc.

Tony Thrall, PhD
Etak -- The Digital Map Company
Menlo Park, CA

Sarah Seastone
3 July 1995
Teaching Bits: A Resource for Teachers of Statistics

Journal of Statistics Education v.6, n.3 (1998)

Robert C. delMas
General College
University of Minnesota
333 Appleby Hall
Minneapolis, MN 55455

William P. Peterson
Department of Mathematics and Computer Science
Middlebury College
Middlebury, VT 05753-6145

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Bob abstracts information from the literature on teaching and learning statistics, while Bill summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. We realize that due to limitations in the literature we have access to and time to review, we may overlook some potential articles for this column, and therefore encourage you to send us your reviews and suggestions for abstracts.

From the Literature on Teaching and Learning Statistics

Statistical Education -- Expanding the Network

eds. Lionel Pereira-Mendoza (Chief Editor), Lua Seu Kea, Tang Wee Kee, and Wing-Keung Wong (1998). Proceedings of the Fifth International Conference on Teaching of Statistics, Singapore, June 21-26, 1998.

This three-volume set contains over 200 papers presented by statistics educators from around the world.
Each paper falls into one of the following broad categories:

VOLUME 1
Statistical Education at the School Level
Statistical Education at the Post-Secondary Level
Statistical Education for People in the Workplace
Statistical Education and the Wider Society

VOLUME 2
An International Perspective of Statistical Education
Research in Teaching Statistics
The Role of Technology in the Teaching of Statistics

VOLUME 3
Other Determinants and Developments in Statistical Education
Contributed Papers

The three-volume set can be purchased from:

CTMA Ltd., 425 Race Course Road, Singapore 218671
Telephone: (65) 299 8992
FAX: (65) 299 8983

The cost is:
IASE/ISI Member: $65 plus shipping/handling
Non-member: $80 plus shipping/handling

"Dice and Disease in the Classroom" by Marilyn Stor and William L. Briggs (1998). The Mathematics Teacher, 91(6), 464-468.

This article presents an interesting mathematical modeling project. The goal of the activity is to model the exponential growth of communicable diseases by adding a risk factor. The required equipment is fairly simple: each student needs a die and a data sheet that is illustrated in the article. To simulate disease transmission, a student walks around the classroom and meets another student. The two students then roll their dice and sum the two outcomes. If the sum is below some predetermined cutoff, the encounter is designated as "risky," meaning that if either student was a carrier, the disease has been passed on to the other student. At the end of the activity, one student is randomly chosen to be the initial carrier of the disease, and the spread of the disease through the classroom is tracked. The article describes how the data are collected, graphed, and analyzed, and also presents suggestions for discussion and variations on the activity.

"Roll the Dice: An Introduction to Probability" by Andrew Freda (1998). Mathematics Teaching in the Middle School, 4(2), 8-12.
The author describes a simple dice game that helps students understand why it is important to collect data to test beliefs. The game can also be used to help students develop an understanding of why large samples are more beneficial than smaller samples. To play the game, two students each roll a die. The absolute difference of the two outcomes is computed. Player A wins if the difference is 0, 1, or 2. Player B wins if the difference is 3, 4, or 5. Most students' first impressions are that this is a fair game. Over several rounds of the game the students come to realize that the stated outcomes are not equiprobable. The activity promotes skills in data collection and hypothesis testing, as well as mathematical modeling. The author presents examples of a program written in BASIC and a TI-82 calculator simulation, both of which can be used to help students explore the effects of sample size in modeling the dice game.

The American Statistician: Teacher's Corner

"A One-Semester, Laboratory-Based, Quality-Oriented Statistics Curriculum for Engineering Students" (with discussion) by Russell R. Barton and Craig A. Nowack (1998). The American Statistician, 52(3), 233-243.

This article describes a new laboratory-based undergraduate engineering statistics course being offered by the Department of Industrial and Manufacturing Engineering at Penn State. The course is intended as a service course for engineering students outside of industrial engineering. We describe the topics covered in each of the eight modules of the course and how the laboratories are linked to the lecture material. We describe how the course is implemented, including facilities used, the course text, grading, student enrollment, and the implementation of continuous quality improvement in the classroom. We conclude with some remarks on the benefits of the laboratories, the effects of CQI in the classroom, and dissemination of course materials.
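Returning to Freda's dice game above: why the outcomes are not equiprobable can be checked by enumerating all 36 equally likely rolls of two dice. The short sketch below is our illustration, not code from the article (which uses BASIC and a TI-82).

```python
from fractions import Fraction
from itertools import product

# Tally the 36 equally likely (die1, die2) outcomes by absolute difference.
# Player A wins when the difference is 0, 1, or 2; Player B otherwise.
wins_a = sum(1 for d1, d2 in product(range(1, 7), repeat=2) if abs(d1 - d2) <= 2)
p_a = Fraction(wins_a, 36)

print(p_a)        # 2/3 -- Player A wins twice as often as Player B
print(1 - p_a)    # 1/3 -- Player B (difference 3, 4, or 5)
```

The exact enumeration makes a nice follow-up to the students' simulated data: the empirical proportion of A's wins should settle near 2/3 as the number of rounds grows.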
"The Blind Paper Cutter: Teaching About Variation, Bias, Stability, and Process Control" by Richard A. Stone (1998). The American Statistician, 52(3), 244-247.

The intention of this article is to provide teachers with a student activity to help reinforce learning about variation, bias, stability, and other statistical quality control concepts. Blind paper cutting is an effective way of generating tangible sequences of a product, which students can use to address many levels of questions. No special apparatus is required.

"Expect the Unexpected from Conditional Expectation" by Michael A. Proschan and Brett Presnell (1998). The American Statistician, 52(3), 248-252.

Conditioning arguments are often used in statistics. Unfortunately, many of them are incorrect. We show that seemingly logical reasoning can lead to erroneous conclusions because of lack of rigor in dealing with conditional distributions and expectations.

"Some Uses for Distribution-Fitting Software in Teaching Statistics" by Alan Madgett (1998). The American Statistician, 52(3), 253-256.

Statistics courses now make extensive use of menu-driven, interactive computer software. This article presents some insight as to how a new class of PC-based statistical software, called "distribution fitting" software, can be used in teaching various courses in statistics.

Teaching Statistics

A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes.
The Circulation Manager of Teaching Statistics is Peter Holmes, ph@maths.nott.ac.uk, RSS Centre for Statistical Education, University of Nottingham, Nottingham NG7 2RD, England. Teaching Statistics has a website at http://www.maths.nott.ac.uk/rsscse/TS/.

Teaching Statistics, Autumn 1998
Volume 20, Number 3

"Lawn Toss: Producing Data On-the-Fly" by Eric Nordmoe, 66-67.

The author describes an activity based on common lawn tossing games such as horseshoes, flying disc golf, and lawn darts. The activity involves helping students to design and carry out a two-factor experiment. The two factors are Distance to the Target (short or long) and Hand Used for Throwing (left or right). An example of a tabulation sheet for collecting the data is provided, and a simple paired t-test is described for testing whether people tend to have a dominant hand. Suggestions are also provided for how to use the data to illustrate the effects of outliers, conduct a two-sample t-test, explore issues of experimental design, and conduct nonparametric tests.

"Why Stratify?" by Ted Hodgson and John Borkowski, 68-71.

The article describes an activity to help students understand the benefits of using stratified random sampling. The materials consist of equal numbers of red cards and black cards. Each card has a number written on one side. The values on the red cards are consistently smaller than those on the black cards, which creates two strata in the population of cards that differ with respect to the measure of interest. Students draw simple random samples and stratified random samples of the same size from the population of cards and create distributions for the sample means. Students can determine that the average sample mean from both types of samples provides a good estimate of the population mean. Comparison of the two distributions illustrates that the sample means from stratified random samples show much less variability.
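The variance reduction that the Hodgson and Borkowski card activity is designed to reveal shows up quickly in a simulation. The sketch below is ours, not from the article; the particular card values, sample sizes, and replication count are made-up illustrations.

```python
import random
import statistics

# Illustrative population: 20 "red" cards with small values and 20 "black"
# cards with larger values, giving two well-separated strata.
red = list(range(1, 21))        # 1..20, stratum mean 10.5
black = list(range(31, 51))     # 31..50, stratum mean 40.5
population = red + black        # overall mean 25.5

rng = random.Random(42)
srs_means, strat_means = [], []
for _ in range(5000):
    # Simple random sample of 8 cards from the whole deck.
    srs = rng.sample(population, 8)
    srs_means.append(statistics.mean(srs))
    # Stratified sample: 4 cards from each stratum.
    strat = rng.sample(red, 4) + rng.sample(black, 4)
    strat_means.append(statistics.mean(strat))

# Both sampling schemes center on the population mean of 25.5, but the
# stratified sample means spread out much less.
print(statistics.mean(srs_means), statistics.mean(strat_means))
print(statistics.stdev(srs_means), statistics.stdev(strat_means))
```

Plotting the two lists of means side by side reproduces, in software, the pair of distributions the students build from their cards.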
"Introducing Dot Plots and Stem-and-Leaf Diagrams" by Chris du Feu, 71-73.

The author describes an activity that introduces dot plots and stem-and-leaf diagrams to elementary school students by capitalizing on a basic skill that is well rehearsed: measuring the length of things with a ruler. The list of equipment includes a worksheet, a ruler, a protractor, and a piece of string. An example of the worksheet, which contains lines and angles that can be measured, is provided for photocopying. One line is curved in a complex way, so the piece of string is used to match the curved line, and then a ruler is used to measure the length of the string as an estimate. The measurements produced by the children for a particular line typically show some variation. A dot plot or stem-and-leaf diagram of the measurements can visually demonstrate that there is variability, yet one measurement occurs more often than other measurements. The author has found this activity to spontaneously generate discussion of many statistical ideas.

"A Constructivist Approach to Teaching Permutations and Combinations" by Robert J. Quinn and Lynda R. Wiest, 75-77.

The authors present an approach to teaching about permutations and combinations where students are given an opportunity to explore these concepts within the context of a problem situation. Students are not instructed in the formal mathematics of permutations and combinations. Instead, they are asked to solve a problem regarding how many different ways someone can wallpaper three rooms given that there are four different patterns to choose from. Students work in groups to come up with answers to the question. Each group reports back to the class with their answer and provides arguments for why their answer is reasonable. The instructor then identifies which groups have produced answers based on permutations and which have produced answers based on combinations.
The formal terminology, symbolic notation, and methods are then introduced, and students are shown how the approach taken by a group is modeled by the formal approach. The authors find that this approach empowers students by reinforcing the validity of their own intuitions.

"BUSTLE -- A Bus Simulation" by John Appleby, 77-80.

The author describes a computer simulation that demonstrates to students how statistical modeling can be used to account for the perception that buses always arrive in bunches. The program assumes that the rate at which passengers arrive at a bus stop follows a Poisson process, and that the delay caused as passengers board can account for buses eventually bunching up along a route. The program allows various parameters to be changed, such as the initial delay between bus departures from the terminal, the number of buses, and the number of stops along the route. The author provides a web address from which the program can be downloaded and used for educational purposes.

"Testing for Differences Between Two Brands of Cookies" by Rhonda C. Magel, 81-83.

The author describes two activities that involve measurements of two different brands of chocolate chip cookies. In the first activity, students count the number of chips in cookies from samples of each brand. The students use the data to conduct a two-sample t-test, first testing for assumptions of normality. In the second activity, each student provides a rating for the taste of each brand. This provides an opportunity for students to conduct a matched-pairs t-test, as well as an opportunity to see the distinction between the two-sample and matched-pairs situations. The author has found these to be very motivating activities that provide students with a concrete and personal understanding of hypothesis testing.

"A Note on Illustrating the Central Limit Theorem" by Thomas W. Woolley, 89-90.

The article describes the use of the phone book to generate data for illustrating the Central Limit Theorem.
In class, students discuss the expected shape of the distribution of the last four digits of a telephone number. It can be argued that if the digits are produced randomly, they should form a uniform distribution of digits between 0 and 9 with an expected value of 4.5 and a standard deviation of approximately 2.872. Outside of class, students randomly select a page of the phone book, randomly select a telephone number from the page, record the last four digits of the telephone number, and compute the average of the four digits. Each student repeats this process until 20 sample means are generated. The sample means from all students are collected in class and entered into statistical software to produce a distribution and obtain summary statistics. The distribution is typically unimodal, normal in its shape, centered around 4.5, with a standard deviation of approximately 1.436. This concrete example allows students to empirically test the Central Limit Theorem.

Topics for Discussion from Current Newspapers and Journals

"Following Benford's Law, or Looking Out for No. 1" by Malcolm W. Browne. The New York Times, 4 August 1998, F4.

This article is based on "The First Digit Phenomenon" by Theodore P. Hill (American Scientist, July-August 1998, Vol. 86, pp. 358-363). It describes how Hill and other investigators have successfully applied a statistical phenomenon known as Benford's Law to detect problems ranging from fraud in accounting to bugs in computer output. The law is named for Dr. Frank Benford, a physicist at General Electric who identified it using a variety of datasets some sixty years ago. But in fact, as Hill's original article notes, the law had already been discovered in 1881 by the astronomer Simon Newcomb. Newcomb observed that tables of logarithms showed greater wear on the pages for lower leading digits.
Most people intuitively expect leading digits to be distributed uniformly, but the log table evidence suggests that everyday calculations involve relatively more numbers with lower leading digits. Newcomb theorized that the chance of leading digit d is given by the base 10 logarithm of (1 + 1/d) for d = 1,2,...,9. Thus the chance of a leading 1 is not one in nine, but rather log(2) = .301 -- nearly one in three! Benford's law has been observed to hold in a wide range of datasets, including the numbers on the front page of The New York Times, tables of molecular weights of compounds, and random samples from a day's stock quotations.

Dr. Mark Nigrini, an accounting consultant now at Southern Methodist University, wrote his Ph.D. dissertation on using Benford's Law to detect tax fraud. He recommended auditing those returns for which the distribution of digits failed to conform to Benford's distribution. In a test on data from Brooklyn, his method correctly flagged all cases in which fraud had previously been admitted. However, Nigrini points out that Benford's Law is not universally applicable. For example, analyses of corporate accounting data often turn up too many 24's, apparently because business travelers have to produce receipts for expenses of $25 or more. He also notes that the law won't help you pick lottery numbers. For example, in a "Pick Four" numbers game, lottery officials take great pains to ensure that the four digits are independently selected and are uniformly distributed on 0,1,...,9.

The Times article describes a classroom experiment conducted by Dr. Hill in his classes at Georgia Tech. For homework, Hill asks those students whose mother's maiden name begins with A through L to flip a coin 200 times and record the results, and the rest of the students to imagine the outcomes of 200 flips and write them down.
As the article points out, the odds are overwhelming that there will be a run of at least six consecutive heads or tails somewhere in a sequence of 200 real tosses. In his class experiment, Hill reports a high rate of success at detecting the imaginary sequences by simply flagging those that fail to contain a run of length six or greater. While this is not an example of Benford's Law, it is another situation in which people are surprised by the true probability distribution. Readers of JSE may be familiar with this coin tossing experiment from the activity "Streaky Behavior: Runs in Binomial Trials" in Activity-Based Statistics by Schaeffer, Gnanadesikan, Watkins, and Witmer (1996, Springer).

"Fate ... or Blind Chance" by Bruce Martin. The Washington Post, 9 September 1998, H1.

This article on coincidences is excerpted from Martin's article in the September-October issue of The Skeptical Inquirer. The article starts with the famous (to statisticians!) birthday problem, illustrated with birthdays and death days of US presidents. The problem is extended to treat the chance that at least two people in a random sample will have birthdays within one day (i.e., on the same day or on two adjacent days). In this formulation, only 14 people are required to have a better than even chance. The article then mentions some popularly reported examples of coincidences, such as the similarities between the Lincoln and Kennedy assassinations. Martin observes that, as far as we know, the decimal digits of the number pi behave as though they were random.

In the original Skeptical Inquirer article, Martin discusses the above examples, and also investigates the randomness of prices in the stock market, again using the digits from the expansion of pi.

"Deadly Disparities; Americans' Widening Gap in Incomes May be Narrowing our Lifespans" by James Lardner. The Washington Post, 16 August 1998, C1.
Since the 1970s, virtually all income gains in the US have gone to households in the top 20% of the income distribution -- the greatest inequality observed in any of the world's wealthy nations. Beyond the fairness issues, a growing body of research indicates that countries with more pronounced differences in incomes tend to experience shorter life expectancies and greater risks of chronic illness in all income groups. Moreover, the magnitude of these risks appears to be larger than the more widely publicized health risks associated with cigarettes or high-fat foods.

Richard Wilkinson, an economic historian at Sussex University, found that, among nations with gross domestic products of at least $5000 per capita, one nation could have twice the per capita income of another, yet still have a lower life expectancy. On the other hand, income equality emerged as a reliable predictor of health. This finding ties together a variety of international comparisons. For example, the greatest gains in British civilian life expectancy came during World War I and World War II, periods characterized by compression of incomes. In contrast, over the last ten years in Eastern Europe and the former Soviet Union, small segments of the population have had tremendous income gains, while living conditions for most people have deteriorated. These countries have actually experienced decreases in life expectancy.

Among developed nations, the US and Britain today have the largest income disparities and the lowest life expectancies. Japan has a 3.6 year edge over the US in life expectancy (79.8 years vs. 76.2 years) even though it has a lower rate of spending on health care. The difference is roughly equal to the gain the US would experience if heart disease were eliminated as a cause of death! The July 1998 issue of the American Journal of Public Health presents analogous data comparing US states, cities, and counties.
Research directed by John Lynch and George Kaplan of the University of Michigan found that mortality rates are more closely associated with measures of relative, rather than absolute, income. Thus the cities of Biloxi, Mississippi; Las Cruces, New Mexico; and Steubenville, Ohio have both high inequality and high mortality. By contrast, Allentown, Pennsylvania; Pittsfield, Massachusetts; and Milwaukee, Wisconsin share low inequality and low mortality.

"Driving While Black; A Statistician Proves that Prejudice Still Rules the Road" by John Lamberth. The Washington Post, 16 August 1998, C1.

Lamberth is a member of the psychology department of Temple University. In 1993, he was contacted by attorneys whose African-American clients had been arrested on the New Jersey Turnpike for possession of drugs. It turned out that 25 blacks had been arrested over a three-year period on the same portion of the turnpike, but not a single white. The attorneys wanted a statistician's opinion of the trend. Lamberth was a good choice. Over 25 years, his research on decision-making had led him to consider issues including jury selection and composition, and application of the death penalty. He was aware that blacks were underrepresented on juries and sentenced to death at greater rates than whites.

In this article, Lamberth describes the process of designing a study to investigate the highway arrest issue. He focused on four sites between Exits 1 and 3 of the Turnpike, covering one of the busiest segments of highway in the country. His first challenge was to define the "population" of the highway, so he could determine how many people traveling the turnpike in a given time period were black. He devised two surveys, one stationary and one "rolling." For the first, observers were located on the side of the road. Their job was to count the number of cars and the race of their occupants during randomly selected three-hour blocks of time over a two-week period.
From June 11 to June 24, 1993, his team carried out over 20 recording sessions, counting some 43,000 cars, 13.5% of which had one or more black occupants. For the "rolling survey," a public defender drove at a constant 60 miles per hour (5 miles per hour over the speed limit), counting cars that passed him as violators and cars that he passed as non-violators, noting the race of the drivers. In all, 2096 cars were counted, 98% of which were speeding and therefore subject to being stopped by police. Black drivers made up 15% of these violators.

Lamberth then obtained data from the New Jersey State Police and learned that 35% of drivers stopped on this part of the turnpike were black. He says, "In stark numbers, blacks were 4.85 times as likely to be stopped as were others." He did not obtain data on race of drivers searched after being stopped. However, over a three-year period, 73.2% of those arrested along the turnpike by troopers from the area's Moorestown barracks were black, "making them 16.5 times more likely to be arrested than others."

These findings led to a March 1996 ruling by New Jersey Superior Court Judge Robert E. Francis, who ruled that state police were effectively targeting blacks, violating their constitutional rights. Francis suppressed the use of any evidence gathered in the stops. Lamberth speculates that department drug policy explains police behavior in these situations. Testimony in the Superior Court case revealed that troopers' performance is considered deficient if they do not make enough arrests. Since police training targets minorities as likely drug dealers, the officers had an incentive to stop black drivers.

But when Lamberth obtained data from Maryland (similar data have not been available from other states), he found that about 28% of drivers searched in that state have contraband, regardless of their race. Why, then, is there a continued perception that blacks are more likely to carry drugs?
It turns out that, of 1000 searches in Maryland, 200 blacks were arrested, compared to only 80 non-blacks. But the problem is that the sample was biased: of those searched, 713 were black, and 287 were non-black.

"Excerpts from Ruling on Planned Use of Statistical Sampling in 2000 Census." The New York Times, 25 August 1998, A13.

Ruling on a lawsuit filed by the House of Representatives against the Commerce Department, a three-judge Federal panel says that plans to use sampling in the 2000 Census violate the Census Act. Since the Constitution requires an "actual enumeration," opponents of sampling have long argued that no statistical adjustment can be allowed. Significantly, the court did not rule on these constitutional issues. It more narrowly addressed whether sampling for the purpose of apportioning Representatives is allowed under Congress's 1976 amendments to sections 141(a) and 195 of the Census Act. The amended version of section 141(a) states:

The Secretary shall, in the year 1980 and every 10 years thereafter, take a decennial census of population ... in such form and content as he may determine, including the use of sampling procedures and special surveys.

The 1976 amendment to section 195 more directly addresses the apportionment issue:

Except for the determination of population for purposes of apportionment of Representatives in Congress among the several States, the Secretary shall, if he considers it feasible, authorize the use of the statistical method known as sampling in carrying out the provisions of this title.

The court ruled that these amendments must be considered together. Therefore, the case hinges on whether the exception stated in the amendment to section 195 meant "you cannot use sampling methods for purposes of apportionment" or "you do not have to use sampling methods." The court provided the following two examples of the use of the word except:

Except for Mary, all children at the party shall be served cake.
Except for my grandmother's wedding dress, you shall take the contents of my closet to the cleaners.

The court argues that the interpretation of "except" must be made in the context of the situation. In the first example, one could argue it would be all right if Mary were also served cake. But in the second example, the intention is more clearly that grandmother's delicate wedding dress should not be taken to the cleaners. The judges stated that "the apportionment of Congressional representatives among the states is the wedding dress in the closet..."

The Clinton administration appealed this ruling to the Supreme Court, which is scheduled to begin hearings on the matter on 30 November. A decision could come by March. This should be an interesting story to follow.

After the Federal court ruling, an excellent discussion of the issues surrounding the Census was presented on "Talk of the Nation" (National Public Radio, August 28, 1998, http://www.npr.org/ramfiles). The first hour of the program is entitled "Sampling and the 2000 Census." Guests are Harvey Choldin, sociologist and author of Looking for the Last Percent: The Controversy over Census Undercounts; statistician Stephen Fienberg, who has written a series of articles on the census for Chance; and Stephen Holmes, a New York Times correspondent. At the end of the hour, the group specifically addresses the need for statisticians to explain the issues surrounding sampling in a way that the public and Congress can understand. Fienberg describes the proposed Census adjustment in the context of the capture-recapture technique for estimating the number of fish in a lake. (The Activity-Based Statistics text mentioned previously devotes a chapter to a classroom experiment designed to illustrate the capture-recapture method.)

In a previous "Teaching Bits" (Vol. 6, No. 1), we described ASA President David Moore's response to a William Safire editorial on adjusting the Census.
Safire gave a laundry list of concerns about public opinion polling, and Moore properly took him to task for failing to address sampling issues in the specific context of the Census. However, some professional statisticians still worry about whether the current sampling proposals can provide sufficiently accurate estimates to improve the Census. For a good summary of these concerns, see "Sampling and Census 2000" by Morris Eaton, David A. Freedman, et al. (SIAM News, November 1998, 31(9), p. 1). "Prescription for War" by Richard Saltus. The Boston Globe, 21 September 1998, C1. At a Boston conference session on "Biology and War," psychologists Christian G. Mesquida and Neil I. Wiener of York University in Toronto presented a new theory about what triggers war: a society that is "bottom-heavy with young, unmarried and violence-prone males." This theory is based on an analysis of the relationship of population demographics to the occurrence of wars and rebellions over the last decade. Societies in which wars and rebellions had occurred tended to have a large population of unmarried males between the ages of 15 and 29. From the standpoint of evolutionary biology, it makes sense to ask whether war-like behavior confers an evolutionary advantage. The researchers explain that war is "a form of intrasexual male competition among groups, occasionally to obtain mates but more often to acquire the resources necessary to attract and retain mates." They point out that the argument makes sense as an explanation for offensive, but not defensive, wars. For example, the United States was reluctantly drawn into World War II, so there the theory applies to the young Nazis in Germany. Similarly, it applies to the Europeans who conquered the native populations of America, but not to the native peoples. In nearly half of the countries in Africa, young, unmarried males comprise more than 49% of the overall population. 
In the last 10 years, there have been at least 17 major civil wars in countries in Africa, along with several conflicts that crossed national borders. In contrast, Europe has few countries where the young, unmarried male population makes up even 35% of the total. In the last 10 years there has been only one major civil war, and that was in Yugoslavia, which has more than 42% young, unmarried males. "Ask Marilyn: Good News for Poor Spellers" by Marilyn vos Savant. Parade Magazine, 27 September 1998, p. 4. A letter from Judith Alexander of Chicago reads as follows: "A reader asked if you believe that spelling ability is a measure of education, intelligence or desire. I was fascinated by the survey you published in response. The implication of the questions is that you believe spelling ability may be related to personality. What were the results? I'm dying to know." The "biggest news," according to Marilyn, is that poor spelling has no relationship to general intelligence. On the other hand, she is sure that education, intelligence, or desire logically must have something to do with achieving excellence in spelling. But her theory is that, even if one has "the basics," personality traits can interfere with success at spelling. She bases her conclusions on a write-in poll, to which 42,603 of her readers responded (20,188 by postal mail and 22,415 by e-mail). Participants were first asked to provide a self-assessment of their spelling skills, on a scale from 1 to 100. They then ranked other personal traits on the same scale. For each respondent, Marilyn identified the quality or qualities that were ranked closest to spelling. She considers this quality to be most closely related to spelling ability. She calls her analytical process "self-normalization," explaining that matching up the ratings for each individual respondent overcomes the problem that respondents differ in how accurately they can assess their own abilities. 
The trait that she found most frequently linked to spelling in this analysis was "ability to follow instructions." Next was "ability to solve problems," followed by "rank as an organized person." The first two were related very strongly for strong spellers, but hardly related at all for weak spellers. Marilyn reports that only 6% of the weak spellers ranked their ability to follow instructions closest to their spelling ability, and only 5% ranked their ability to solve problems closest to their spelling ability. On the other hand, the relationship with organizational ability showed up at all spelling levels, with top spellers being the most organized, and weak spellers being the least organized. Marilyn says she asked for a ranking of leadership abilities in order to validate her methods. She did not believe this trait was related to spelling, and, indeed, leadership was linked least often to spelling in the data. Similarly, she reports that creativity appeared to be unrelated to spelling ability. "When Scientific Predictions Are So Good They're Bad" by William K. Stevens. The New York Times, 29 September 1998, F1. This article discusses various problems that can occur when the public is presented with predictions. A key problem is the tendency to rely on point estimates without taking into account the margin of error. For example, in spring 1997, when the Red River of the North was flooding in North Dakota, the National Weather Service forecast that the river would crest at 49 feet. Unfortunately, the river eventually crested at 54 feet. Those residents of Grand Forks who relied on the estimate of 49 feet later faced evacuation on short notice. In this case, the article argues, it would have been far better for the weather service to report an error statement along with its point estimate. The discussion over global warming illustrates our difficulties in dealing with predictions. 
Models used to predict the increase in the Earth's average temperature over the next century are necessarily based on many assumptions, each of which entails substantial uncertainty. Popular reports of the predictions place relatively little emphasis on the sizes of the possible errors, and the public may be misled by the implied precision. On the other hand, governments are seen as taking any uncertainty as a reason to avoid action on the issue. With other types of forecasting, people have learned from experience that the predictions are fallible, and have adjusted their behavior accordingly. Weather forecasts are a prime example. Similarly, the public has learned not to expect accurate prediction of the exact timing and magnitude of earthquakes, and seismologists focus instead on long-range forecasts. "Placebos Prove So Powerful Even Experts Are Surprised" by Sandra Blakeslee. The New York Times, 13 October 1998, D1. A recent study of a baldness remedy found that 86% of the men taking the treatment either maintained or increased the amount of hair on their heads -- but so did 42% of the placebo group. Dr. Irving Kirsch, a University of Connecticut psychologist, reports that placebos are 55-60% as effective as medications like aspirin and codeine for treating pain. But if some patients really do respond to placebos, might there not be a biological mechanism underlying the effect? Using new techniques of brain imagery, scientists are now discovering that patients' beliefs can indeed produce biological changes in cells and tissues. According to the article, some of the results "border on the miraculous." One explanation explored here is that the body's responses may be based more on what the brain expects to happen based on past experience, rather than on the information currently flowing to the brain. Thus a patient who expects a drug to make him better would have a positive response to a placebo. 
A related idea is that, by reducing stress, a placebo may allow the body to regain a natural state of health. In fact, a recent study showed that animals experiencing stress produce a valium-like substance in their brains, provided they have some control over the stress.
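The capture-recapture technique that Fienberg uses above to explain the proposed Census adjustment is easy to demonstrate in a few lines. Below is a minimal sketch of the Lincoln-Petersen estimator with made-up numbers (the function name and figures are mine, not from any of the articles):

```python
def lincoln_petersen(marked, caught, recaptured):
    """Estimate a population size N by capture-recapture.

    marked:     animals tagged and released in the first sample
    caught:     size of the second, independent sample
    recaptured: tagged animals seen again in the second sample

    If the samples are independent, recaptured/caught estimates the
    tagged fraction marked/N, so N is roughly marked * caught / recaptured.
    """
    return marked * caught / recaptured

# Hypothetical lake: tag 100 fish; later net 80 fish, 20 of them tagged.
print(lincoln_petersen(100, 80, 20))  # estimates a population of 400
```

The same two-sample logic underlies the proposed Census adjustment: the enumeration plays the role of the first "capture," and a follow-up survey plays the role of the second.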
inductive limit

An inductive limit is the same thing as a colimit. (Similarly, a projective limit is the same thing as a limit.) In this context, an inductive system is the same thing as a diagram, and an inductive cone is the same thing as a cocone.

Many authors restrict this terminology to colimits over directed sets (or filtered categories), especially the directed set $(\mathbb{N},\leq)$ of natural numbers; see directed colimit (or filtered colimit) for discussion of this case if you think that it may be what you want.

Revised on July 28, 2011 02:00:58 by Toby Bartels
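For the directed case mentioned above, the colimit has a familiar concrete description. The following display (my addition, stated for diagrams of sets, not part of the original entry) records it:

```latex
% Directed colimit of a system (X_i, f_{ij} : X_i \to X_j)_{i \leq j}
% indexed by a directed set I, computed in Set:
\varinjlim_{i \in I} X_i \;=\; \Bigl(\coprod_{i \in I} X_i\Bigr)\Big/\sim,
\qquad
x_i \sim x_j \iff \exists\, k \geq i, j \colon\; f_{ik}(x_i) = f_{jk}(x_j).
```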
Parity Theorem for Permutations - The Reduction Algorithm Rules and the Proof

In the demonstration program, the computer performs a shortening algorithm that is based on a simple idea. Since the process starts with all markers in their home boxes, performs a sequence of transpositions, and eventually returns every marker to its home box, the first marker moved is eventually going to be brought home. There is a potential for shortening the sequence of transpositions by eliminating the transposition that moves this first marker away and the transposition that moves it back home. To produce a shortening algorithm built on this idea, all one needs is to devise rules for modifying the sequence of transpositions to adjust for the elimination of the two transpositions we have identified. We will set forth some rules that meet this objective. Building on our hands-on experience with the demonstration program, let us state the rules for the algorithm in terms of the computer's response with the blue markers to the transpositions that the human user performs with the green markers.

1. When the human does the first swap, the computer does not do one. Instead, it makes note of the first marker the human moved, labels the box from which it came as the "First Box," and tags the other box involved in this swap, calling it the "Tagged Box". (The First Box will remain fixed throughout this procedure, but the box designated as the Tagged Box may change.)

2. The computer responds to subsequent swaps of green markers on the left by the human by doing swaps of its set of blue markers on the right, using the following rules:

a. If the human's green swap involves neither the First Box nor the Tagged Box, the computer does its swap between the same two boxes.

b. If the green swap involves the Tagged Box but not the First Box, the blue swap is between the same two boxes, but the computer also moves the tag, so as to keep it with the green disk from the First Box.

c. If the green swap involves the First Box but not the Tagged Box, the computer does a swap between the Tagged Box and the other box, not the First, that the human used.

d. If the green swap involves both the First Box and the current Tagged Box, the computer does not do any swap.

3. Once case d of step 2 has happened -- which it must, since the first green marker moved is always in the Tagged Box and the human is eventually going to bring it back home -- the computer does all subsequent swaps between the same boxes as the human.

If one follows the procedure described here, it will produce a modified truncated sequence of transpositions that has two fewer transpositions at the point that the original sequence returns the first marker moved to its home box. Since the modified truncated sequence is the same as the original one from that point on, it uses two fewer transpositions in the end when each has returned all of the markers to their home boxes. You may wish to work out the proof that this algorithm leads to the Reduction Lemma on your own, or you can follow this link for a proof.
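The rules above translate almost line for line into code. Here is a minimal sketch (the function name is mine) that applies them to a sequence of transpositions composing to the identity, producing an equivalent sequence that is two transpositions shorter:

```python
def shorten(seq):
    """Apply the First Box / Tagged Box rules to a sequence of
    transpositions (pairs of box indices) whose product is the identity,
    returning an equivalent sequence with two fewer transpositions."""
    first, tagged = seq[0]        # rule 1: skip the first swap, set up labels
    out = []
    collapsed = False             # becomes True once rule (d) has fired
    for x, y in seq[1:]:
        if collapsed:             # rule 3: copy everything after case (d)
            out.append((x, y))
        elif first in (x, y) and tagged in (x, y):
            collapsed = True      # rule (d): drop this swap entirely
        elif tagged in (x, y):    # rule (b): same swap, and the tag follows
            out.append((x, y))    #   the marker that came from the First Box
            tagged = y if x == tagged else x
        elif first in (x, y):     # rule (c): swap the Tagged Box with the
            other = y if x == first else x   # other box the human used
            out.append((tagged, other))
        else:                     # rule (a): same swap
            out.append((x, y))
    return out
```

For example, the identity-composing sequence `[(0,1), (1,2), (0,2), (1,2)]` shortens to `[(1,2), (1,2)]`, which still composes to the identity with two fewer transpositions.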
Super Minimalist Micro Calendar Super Minimalist Micro Calendar Reduced: Click this image to go to the link for the 2010 version. This calendar is about the size of a large postage stamp. The first column contains numbers for the months of the year. The middle gives the weekday on which the first falls. The last column gives the date of the first Sunday of that month. With this data, you can easily figure out the rest of the dates that you need to know. The first line is interpreted: January and April and July start on a Tuesday and their first Sunday falls on the 6th — so you know that the other Sundays are 13, 20, 27. Once you know the Sundays, you can get to the Mondays by adding 1, Tuesdays by adding 2, and so on. Only the last column is actually needed for figuring out the other dates; the middle column is given because people frequently want to know quickly what day of the week the first falls on. Computing dates takes only a little practice. If you need to know what day of the week the 24th of January falls on, you learn to break 24 down into three sevens plus three. The first is a Tuesday, so 1 plus 21 is a Tuesday. If the 22d is a Tuesday, the 24th must be a Thursday. Or: you know immediately from the last column that the 20th must be a Sunday (from the sequence 6, 13, 20, 27). Twenty plus four must be the same as Sunday plus four days, which is a Thursday. If this version is still too minimalist for your taste, you might prefer the "unreduced" format below. Minimalist Calendar: Click this image to go to the link for the 2010 version. Designed by C. Pavur at the Latin Teaching Materials at Saint Louis University website, January 4, 2008. For a Full Current Calendar (Offsite Link):
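The arithmetic described above (weekday of the 1st, plus the day of the month, wrapping every seven days) is easy to mechanize. A small sketch, using the calendar's 2008 data where January starts on a Tuesday:

```python
WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]

def weekday_of(day, weekday_of_first):
    """Weekday of a given day of the month, knowing the weekday of the 1st.

    Day 1 falls on weekday_of_first; each later day advances by one,
    wrapping every seven days -- the 'break into sevens' trick above.
    """
    start = WEEKDAYS.index(weekday_of_first)
    return WEEKDAYS[(start + day - 1) % 7]

# January 2008 begins on a Tuesday, so the 24th is a Thursday:
print(weekday_of(24, "Tuesday"))  # Thursday
```

This reproduces the worked example in the text: the 20th comes out a Sunday (from the 6, 13, 20, 27 sequence), and the 24th a Thursday.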
Software for Borel-Weil-Bott in positive characteristic?

I am interested in calculating cohomology of line bundles on flag varieties $G/B$ in positive characteristic. But I really just have a bunch of scattered examples. Does there exist some kind of software that will calculate this for me? For the most part I don't care about the representation structure of the cohomology modules, I just want to know dimensions. Also, I know there are various results on when this cohomology is just like the char. 0 situation, but they won't always apply to my examples. So I don't need general results, just an algorithm. I just know how to do one example using Macaulay2: irreducible homogeneous bundles on projective space (which are pushforwards of aforementioned line bundles). But I am also interested in things like type D, and homogeneous bundles on Grassmannians.

Tags: rt.representation-theory, ag.algebraic-geometry, mathematical-software, algebraic-groups

Adding a tag algebraic-groups would be useful here. – Jim Humphreys Oct 14 '11 at 20:22

2 Answers

This is a sort of negative-leaning answer to the question about existence of software for your purpose. There is quite a bit of history to the problem in prime characteristic, going back to isolated examples found in the 1970s by Mumford and his 1975 Ph.D. student W.L. Griffith Jr. (Cohomology of Flag Varieties in Characteristic p) which showed that the classical ideas could break down. The rank 2 example $G_2$ lends itself to picture drawing and has been looked at in considerable detail. See the recent updated preprint by Andersen and Kaneda here. Andersen's clever sheaf cohomology techniques (exploiting the Frobenius map) combined with my more speculative predictions tend to imply that the results depend heavily on Kazhdan-Lusztig theory for the affine Weyl group (of Langlands dual type).
Moreover, the non-vanishing of cohomology seems to involve the actual module structure, so dimensions appear only as a byproduct of the study of generic module filtrations crossing Weyl chamber walls. The algebraic group of type $G_2$ already indicates how systematic but complicated the results will be in general, so any computational approach must take this case into account. (The results for $A_2$ and $B_2$ worked out by Andersen following his 1977 MIT thesis On Schubert Varieties in G/B and Bott's Theorem are also subtle, but can't compete with the complexity of $G_2$ whose alcove geometry is richer.)

ADDED: The problem arose in the setting of algebraic geometry, as seen in the thesis work mentioned above. Seshadri wrote up his own version of the $SL_3$ case treated by Larry Griffith, in a typescript Cohomology of line bundles on $SL_3/B$ (Tata Institute, September 28, 1976). I learned about the problem from him the following spring at IAS and formulated my own tentative interpretation in a conference paper that summer. Andersen recovered Griffith's results in a general setting in his 1979 Inventiones paper here. In particular, an extra non-vanishing $H^1$ has a unique simple submodule of specified highest weight. But pinning down the dimension or formal character of this module takes more work, done first by Jantzen (before Kazhdan-Lusztig theory). There may be shortcuts in small cases, but a general algorithmic approach to the flag variety of $SL_3$ gets complicated.

I sort of suspected that what I want does not exist. But your answer is still great, and I think I can learn a lot from these references. Also, presumably your reference to Mumford's student is to the paper W. Griffith, "Cohomology of flag varieties in characteristic $p$". This gives the complete answer for $A_2$ (at least for when Bott's theorem fails) with a simple rule and I really like it.
– Steven Sam Oct 14 '11 at 21:39

I don't know a lot about this story, but the cohomology of a homogeneous line bundle on a Grassmannian should be equal to the cohomology of a line bundle on the flag variety in type A corresponding to a weight that vanishes on all but one coroot, which is a very special case. Perhaps there is an algorithm in that case? (There might be some helpful comments in Jantzen's Representations of Algebraic Groups as well.) – Chuck Hague Oct 17 '11 at 15:46

This may be in the scope of http://sagemath.org/ .
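In characteristic 0 — the baseline the question contrasts with — Borel-Weil-Bott is completely algorithmic. The following sketch for $GL_n$ in type A is my own illustration of that classical case, not the positive-characteristic computation being asked about (and sign conventions for which weight is "dominant" vary between sources):

```python
from math import prod

def bott(weight):
    """Characteristic-0 Borel-Weil-Bott for GL_n (type A).

    Given an integral weight (a tuple of n integers), return
    (i, dim H^i) for the unique non-vanishing cohomology group of the
    associated line bundle on the flag variety, or None if all
    cohomology vanishes.  Convention here: (a, 0) with a >= 0 has
    global sections, i.e. corresponds to O(a) on P^1 when n = 2.
    """
    n = len(weight)
    rho = [n - 1 - k for k in range(n)]          # rho = (n-1, ..., 1, 0)
    mu = [w + r for w, r in zip(weight, rho)]    # shift by rho
    if len(set(mu)) < n:                         # mu singular => all H^i = 0
        return None
    # Cohomological degree = number of pairs out of decreasing order.
    i = sum(1 for a in range(n) for b in range(a + 1, n) if mu[a] < mu[b])
    mu.sort(reverse=True)
    # Weyl dimension formula applied to the sorted (dominant) shifted weight.
    dim = prod(mu[a] - mu[b] for a in range(n) for b in range(a + 1, n)) \
        // prod(b - a for a in range(n) for b in range(a + 1, n))
    return i, dim

# On P^1 = SL_2/B: O(3) has H^0 of dimension 4; O(-2) has H^1 of dimension 1.
print(bott((3, 0)))   # (0, 4)
print(bott((0, 2)))   # (1, 1)
print(bott((0, 1)))   # None  (the singular case, O(-1))
```

Everything interesting in the answers above is precisely about how this clean picture fails in characteristic $p$.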
Word Problem Help #2

September 28th 2009, 04:59 PM #1

any help is GREAT!

1. The grass in a meadow grew equally thick and fast. 1 cow and 1 goat can eat it up in 90 weeks, 1 cow and 2 goats can do it in 60 weeks. 2 cows and 1 goat can do it in 45 weeks. How many weeks would it take 2 cows and 2 goats to crop the grass of the whole meadow?

2. Tom and Jerry left their houses simultaneously to visit each other. Walking at UNIFORM speeds, they passed each other at a point 5 kilometers from Tom's home without noticing. When they reached their respective destinations and found nobody home they turned around to go back, this time meeting each other at a point 3 km from Jerry's home. What was the ratio between the speed of Tom to that of Jerry?

Thanks for your help. If you could explain your answers that would be awesome!

Last edited by mr fantastic; September 28th 2009 at 10:21 PM. Reason: Questions moved to new thread.

October 1st 2009, 08:15 AM #2

Quote: any help is GREAT! 1. The grass in a meadow grew equally thick and fast. 1 cow and 1 goat can eat it up in 90 weeks, 1 cow and 2 goats can do it in 60 weeks. 2 cows and 1 goat can do it in 45 weeks. How many weeks would it take 2 cows and 2 goats to crop the grass of the whole meadow? Thanks for your help. If you could explain your answers that would be awesome!

Are we to take the rate of growth of the grass into account here? Because otherwise one can just say that 1 cow and 1 goat take twice as long as 2 cows and 2 goats, so 2 cows and 2 goats would take 45 weeks?
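The responder's question is the crux of the first problem: the three given numbers are only consistent if the grass keeps growing while it is eaten. One way to set it up (my own sketch, not from the thread):

```latex
% Let M = initial grass, r = weekly growth, and c, g = weekly eating
% rates of one cow and one goat.  The three given scenarios say:
M + 90r = 90(c+g), \qquad M + 60r = 60(c+2g), \qquad M + 45r = 45(2c+g).
% Subtracting these equations in pairs gives r = g and c = 2g,
% and substituting back gives M = 180g.  Two cows and two goats
% remove grass at net rate 2c + 2g - r = 5g, hence
t \;=\; \frac{M}{2c+2g-r} \;=\; \frac{180g}{5g} \;=\; 36 \ \text{weeks}.
```

So under this model the answer is 36 weeks, not 45 — the growing grass is exactly why doubling the animals more than halves the time.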
Quantum Tic-Tac-Toe Quantum Tic-Tac-Toe investigates the concept of quantum entanglement through a simple and fun game. It was created by Allan Goff around 2002. This is an original Flash version. See the rules below the game board. There is no computer player or multiplayer; you play both sides. As in normal Tic-Tac-Toe, the game is played on a 3-by-3 board, and each of two players takes turns placing pieces, trying to get 3 in a row. But in Quantum Tic-Tac-Toe, you place two "potential" moves at a time, in separate squares. Eventually, one of these will become a real (or classical) move, and the other will not. Potential moves are marked with the numbers of the turns they were played on. Each pair of potential moves is connected. Only classical moves count toward a win. The game continues with each player placing their two potential moves per turn, until a special condition comes about. Eventually, multiple pairs of connected, potential moves will form a closed circuit. This closed circuit represents only two possible sets of classical moves. Depending on which of the last pair of moves becomes "real," all the other squares that are involved in the circuit will necessarily go to player 1 or 2. The player who closes the circuit chooses which of their last two potential moves becomes real, and all the other potential moves that are part of the circuit are automatically converted into real moves, based on their choice. (Note that it looks like this may be a misinterpretation of the rules on my part, but it's how this version works.) The game ends when there are one or more lines of three pieces in a row of the same color, or when the board is full. Unlike in regular tic-tac-toe, it's possible for both players to get 3 in a row at once, or for one player to get two of them! There can also be a normal win (one player gets 3 in a row) or no win at all. Other than that, great. X 1,5 O 5,9 X 1,5 X marks the upper left corner and the center. 
O sets up a block by marking the center and the lower right corner. X misses the import of what O did, and proceeds to claim the corner and center by forcing a collapse. Sadly, his now-real marks in the corner and center are blocked by a real O in the opposite corner. Except that your version of the game will not allow X to complete his second move. He can click in the upper right corner and place one spooky mark, but the program will not recognize a click in the center to place the second spooky mark. X's second move, marking squares 1 and 5 again, is legal and should be allowed by this program. You need not post this comment, it's just a bug report...

Since it is possible for a single measurement to collapse the entire board and give classical tic-tac-toes to both players simultaneously, the rules declare that the player whose tic-tac-toe has the lower maximum subscript earns one point, and the player whose tic-tac-toe has the higher maximum subscript earns only one-half point. [Link to en.wikipedia.org] [Link to www.paradigmpuzzles.com, the inventor's site.] There is no rule preventing either player from entangling two squares, and then placing a later move in the same two squares and forcing a collapse.

Thank You :)
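The "closed circuit" condition in the rules above is just cycle detection in a graph whose vertices are squares and whose edges are entangled move-pairs. A minimal sketch of that check (my own code, not the game's actual implementation) using union-find:

```python
class EntanglementGraph:
    """Track entangled pairs of squares; report when a move closes a circuit."""

    def __init__(self, squares=9):
        self.parent = list(range(squares))   # each square starts alone

    def _find(self, a):
        while self.parent[a] != a:           # walk up to the root,
            self.parent[a] = self.parent[self.parent[a]]  # halving the path
            a = self.parent[a]
        return a

    def place(self, a, b):
        """Entangle squares a and b.  Return True if this closes a circuit,
        i.e. the board must now collapse to classical moves."""
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return True                      # already connected: a cycle forms
        self.parent[ra] = rb                 # otherwise merge the components
        return False

game = EntanglementGraph()
print(game.place(0, 4))  # False -- first spooky pair
print(game.place(4, 8))  # False
print(game.place(0, 8))  # True  -- closes the circuit, forcing a collapse
```

Note this sketch treats the degenerate case from the comments above (entangling the same two squares twice) correctly: the second pair closes a two-edge circuit immediately.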
4-Bit Computer

From WFFwiki

If you've ever wondered how electronic devices like computers can count, this article gives a simple introduction to binary and logic and shows how they are tied together with electronics to make both simple and complex computers. Please note: There are a couple of mistakes on the slides in the YouTube video which are shown corrected below. Thanks to reader [Veini] for taking the time to review the logic diagrams!

Fundamentals of binary counting

To begin with let's have a look at some fundamentals. Since digital computers can only represent two states, on and off or zero and one, there are only two numbers available; therefore they have to count in base 2, not base 10 as we would do. However, it's very similar: instead of ones, tens, hundreds and thousands, base 2 counts in ones, twos, fours, eights and so on. So, for example, 2 in base 10 is one-zero in base 2. When we add numbers in base ten we carry over any digits which are greater than 9 into the next magnitude of units, so nine plus one equals zero carry one, or ten. Binary works exactly the same, however you carry over if the result is greater than 1, so one plus one equals one-zero. This means that if you have two single-figure binary numbers and you add them together there are only 4 possible results. It is useful to represent this in what's known as a truth table. Here you can see the 4 possible values of the inputs A and B, and the four possible outputs represented by Sum and Carry. In order to represent the 'logic' required to get from the possible range of inputs to the desired outputs we use Boolean operations, or as they are more commonly called in electronics, logic gates.

Logic Gates

Here are the three basic types of logic gates which I've chosen because they are the simplest gates to make from transistors. You can make all other types of gates by combining these three. An AND gate outputs one only when both its inputs are one.
An OR gate outputs one when either input is one. Finally a NOT gate (or inverter as it is sometimes called) outputs the opposite of its input, so if the input is one the output is zero and vice-versa. So, going back to our truth table, let’s look at the logic required to get ‘Sum’ based on the inputs A and B. Here we want the logic to output one only when one input is one and not the other, this is known as an exclusive OR gate. We can do this by simply using two AND gates with NOT gates on opposing inputs. If either gate outputs a 1 the result is 1 via the final OR gate. The carry output is even simpler, we want the carry to be 1 if both A and B is one, so we use an AND gate. To get the whole truth table we simply add the two logic circuits together. This logic is called a ‘half-adder’ due to the fact that it is only capable of working on single bit numbers, since you cannot input the carry bit, you can’t cascade them together to work on larger binary numbers. To solve this we combine two half-adders together to make a full-adder. This logic takes A, B and a carry as input and outputs the sum and carry. If you followed along with the half adder it’s pretty easy to see how this works from the logic diagram. Now the simple full-adder logic circuits can be combined to allow bigger binary numbers to be added together. This picture shows a four bit adder, in fact, due to the way the carry bit ‘ripples’ down, this is known as a ripple carry adder. Since both the A and B inputs are now 4 bits we can add together 1111 and 1111 or 15 plus 15 in base 10 to get a five bit result. Building logic gates from transistors Now let’s take a quick look at how we build logic gates using transistors. First up is the NOT gate. Here if the input is 1 it causes the electricity to flow from the collector to the emitter (top to bottom). Since the electricity will always follow the path of least resistance the output will be zero. 
If the input is zero, the transistor prevents the flow from collector to emitter, so the electricity flows out of the output causing it to be one. Next up is the AND gate. This requires two transistors, the inputs are on the bases and only if both inputs are one can the electricity flow to the output, making it a one also. The OR gate is similar but has two possible paths for the electricity, so if one base or the other is one, electricity flows to the output.

Building a full-adder

Once you have these basic building blocks you can combine them together using the logic diagram for a full adder, which gives us this circuit diagram: Here you can see a picture of a completed full-adder, the A, B and carry inputs are on the left and the sum and carry outputs are on the right. By making more of these it is possible to build adders capable of dealing with bigger and bigger numbers.

Building a computer

So with all the theory out of the way, let's look at a real 4-bit computer built from discrete transistor gates. This circuit has 4 switches for each input (A and B) and a simple five LED output showing the result. Note that both the inputs and the output are little-endian meaning the smallest binary value is on the right, just like in base 10. You can clearly see the 4 full-adder circuits which perform the processing. The computer is made by combining 4 full adder circuits (as shown above) and some extra circuitry which drives the inputs to the adders and displays the output: And there you have it, how a simple transistor can be made to count. Whilst this 'computer' is a very basic one, you can easily see why modern processors contain hundreds of millions of transistors which enable modern computers to perform so much logic at such amazing speeds.

Build your own computer

If you're crazy enough to want to build your own computer from scratch (well, I was!) you can download the schematics and the PCB artwork used in this article from here.
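The gate-level construction walked through in this article is easy to mirror in software. A minimal sketch (my own naming) building a half adder, a full adder, and a 4-bit ripple-carry adder out of nothing but AND, OR and NOT, exactly as in the logic diagrams:

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a, b):
    # XOR built from AND/OR/NOT, as in the diagram:
    # sum = (a AND NOT b) OR (NOT a AND b); carry = a AND b
    s = OR(AND(a, NOT(b)), AND(NOT(a), b))
    return s, AND(a, b)

def full_adder(a, b, cin):
    # Two half adders plus an OR gate to merge the two carries.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, OR(c1, c2)

def ripple_carry_add(a_bits, b_bits):
    """Add two little-endian 4-bit lists; the carry 'ripples' upward."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]          # the fifth bit is the final carry out

# 1111 + 1111 = 15 + 15 = 30, i.e. 11110 in binary (shown little-endian):
print(ripple_carry_add([1, 1, 1, 1], [1, 1, 1, 1]))  # [0, 1, 1, 1, 1]
```

Each call to `full_adder` plays the role of one of the four transistor full-adder circuits on the board, and the `carry` variable is the wire connecting them.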
[SciPy-user] Caveat About integrate.odeint
Anne Archibald peridot.faceted@gmail....
Thu Oct 25 10:40:02 CDT 2007

On 25/10/2007, Lorenzo Isella <lorenzo.isella@gmail.com> wrote:
> I do not know if what I am going to write is really useful (maybe it
> is pretty obvious for everybody on this list).
> I have been using integrate.odeint for quite a while to solve some
> population equations.
> Then I made a trivial change (nothing leading to different physics or
> in general such as to justify any substantial difference with the
> previous results), and I woke up in a nightmare: precision errors,
> routine crashing etc...
> I think I know what happened: in my code I was using t [time], T(t)
> [time-dependent temperature], t_0 (initial time) and T_0 (initial
> temperature).
> For Python there is no possibility of confusion, but the underlying
> Fortran made a mess out of this...
> Something very trivial, but it took me a day and a half to debug this.
> Hope it was useful.

Do you have a small piece of demo code? This is very surprising, as
FORTRAN should never see the variable names. I can't replicate it in
spite of headache-inducing variable names:

In [17]: T = lambda t, T: T
In [18]: T0 = 1
In [19]: t = [0,1,2]
In [20]: scipy.integrate.odeint(T,T0,t)
array([[ 1.        ],
       [ 1.50000001],
       [ 3.00000001]])
Discover Redux

When I developed the dispersive discovery model earlier this year, I lacked direct evidence for the time-invariant evolution of the cumulative growth component. The derivation basically followed two stages: (1) a stochastic spatial sampling that generated a cumulative growth curve, and (2) an empirical observation as to how sampling size evolves with time, with the best fit assuming a power-law with time. So with the obvious data readily available and actually staring me in the face for some time, from none other than Mr. Hubbert himself (hat tip to Mr. McManus), I believe this partial result further substantiates the validity of the model. In effect, the stage-1 part of the derivation benefits from a "show your work" objective evaluation, which strengthens the confidence level of the final result. Lacking a better analogy, I would similarly feel queasy if I tried to explain why rain regularly occurs if I could not simultaneously demonstrate the role of evaporation in the weather cycle. And so it goes with the oil discovery life-cycle, and arguably any other complex behavior.

The basic parts of the derivation that we can substantiate involve the calculation in the figure below. The key terms include the quantity indicating cumulative footage, and the average cross-section for discovery for that particular cumulative footage. This represents Stage-1 of the calculation -- which I never verified with data before -- while the last lines labeled "Linear Growth" and "Parabolic Growth" provide examples of modeling the Stage-2 temporal evolution.

Since the results come out naturally in terms of cumulative discovery, it helps to integrate Hubbert's yearly discovery curves. So the figure below shows the cumulative fit, while the original numbers came from this data set. I did a least-squares fit to the curve that I eyeballed from the previous post and the discovery asymptote increased from my estimated 175 to 177.
I've found that generally accepted values for this USA discovery URR range up to 195 billion barrels in the 30 years since Hubbert published this data, which in my opinion indicates that the model has potential for good predictive power.

So at a subjective level, you can see that the cumulative really shows the model's strengths, both from the perspective of the generally good fit for a 2-parameter model (asymptotic value + cross section efficiency of discovery), but also in terms of the creeping reserve growth which does not flatten out as quickly as the exponential does. This slow apparent reserve growth matches empirical reality remarkably well. In contrast, the quality of Hubbert's exponential fit appears way off when plotted in the cumulative discovery profile, only crossing at a few points and reaching an asymptote well before the dispersive model does.

Just like we were taught in school: provide a hypothesis and then try to verify with data. Unlike the wing-nuts who believe that school only serves to Indoctrinate U. (Seriously, click on the link if you want to read my review of one of the worst documentaries in recent memory.)
RANDU (I1, I2, X)

Description: Computes a pseudorandom number as a single-precision value.

Class: Subroutine

Arguments:

I1, I2
  INTEGER(2) variables or array elements that contain the seed for computing the random number. These values are updated during the computation so that they contain the updated seed.

X
  A REAL(4) variable or array element where the computed random number is returned.

Results: The result is returned in X, which must be of type REAL(4). The result value is a pseudorandom number in the range 0.0 to 1.0.

The algorithm for computing the random number value is based on the values for I1 and I2. If I1=0 and I2=0, the generator base is set as follows:

X(n + 1) = 2**16 + 3

Otherwise, it is set as follows:

X(n + 1) = (2**16 + 3) * X(n) mod 2**32

The generator base X(n + 1) is stored in I1, I2.

Consider the following:

INTEGER(2) I, J
CALL RANDU (I, J, X)

If I and J are values 4 and 6, X stores the value 5.4932479E-04.
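The recurrence can be sketched in Python. Two details below are my assumptions rather than statements from the documentation: the 32-bit generator base is packed from the two INTEGER(2) seeds with I1 as the high word, and the scaling divides by 2**31 (the text above says mod 2**32, but the classic RANDU recurrence modulo 2**31 is what reproduces the documented example value):

```python
def randu_step(i1, i2):
    """One RANDU step: advance the seed and return (new_i1, new_i2, x).

    Assumes the generator base is packed as i1 * 2**16 + i2 and is scaled
    by 2**31, per the classic RANDU generator (an assumption; see text).
    """
    seed = (i1 << 16) | i2
    seed = (65539 * seed) % 2**31   # 65539 = 2**16 + 3
    return seed >> 16, seed & 0xFFFF, seed / 2**31

i1, i2, x = randu_step(4, 6)
print(f"{x:.7E}")  # 5.4932479E-04, matching the documented example
```

Under these assumptions the sketch reproduces the worked example (I=4, J=6 gives X = 5.4932479E-04) and hands back the updated seed halves, mirroring how RANDU stores the new generator base in I1, I2.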
Parameters and Statistics: Measures of Central Tendency

Statistical measures of central tendency or central location are numerical values that are indicative of the central point or the greatest frequency concerning a set of data. The most common measures of central location are the mean, median and mode.

The statistical mean of a set of observations is the average of the measurements in a set of data. The population mean and sample mean are defined as follows:

Given the set of data values x[1], x[2], .... x[N] from a finite population of size N, the population mean m is calculated as

m = (x[1] + x[2] + .... + x[N]) / N

Given the set of data values x[1], x[2], .... x[n] from a sample of size n, the sample mean is calculated as:

x-bar = (x[1] + x[2] + .... + x[n]) / n

The sample mean is often used as an estimator of the mean of the population from whence the sample was taken. In fact, the sample mean is statistically proven to be a most effective estimator for the population mean.

The median of a set of observations is that value that, when the observations are arranged in an ascending or descending order, satisfies the following condition:

1. If the number of observations is odd, the median is the middle value.
2. If the number of observations is even, the median is the average of the two middle values.

The median is the same as the 50th percentile of a set of data. It is often denoted x-tilde.

The mode of a set of observations is the specific value that occurs with the greatest frequency. There may be more than one mode in a set of observations, if there are several values that all occur with the greatest frequency. A mode may also not exist; this is true if all the observations occur with the same frequency.

Another measure of central location that is occasionally used is the midrange. It is computed as the average of the smallest and largest values in a set of data.

Example of Central Tendency

EX.
Given the following set of data:

1.2, 1.5, 2.6, 3.8, 2.4, 1.9, 3.5, 2.5, 2.4, 3.0

It can be sorted in ascending order:

1.2, 1.5, 1.9, 2.4, 2.4, 2.5, 2.6, 3.0, 3.5, 3.8

The mean, median and mode are computed as follows:

mean = (1 / 10) · (1.2 + 1.5 + 2.6 + 3.8 + 2.4 + 1.9 + 3.5 + 2.5 + 2.4 + 3.0) = 2.48

median = (2.4 + 2.5) / 2 = 2.45

The mode is 2.4, since it is the only value that occurs twice.

The midrange is (1.2 + 3.8) / 2 = 2.5.

Note that the mean, median and mode of this set of data are very close to each other. This suggests that the data is very symmetrically distributed.
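The worked example can be checked directly with Python's standard `statistics` module:

```python
import statistics

data = [1.2, 1.5, 2.6, 3.8, 2.4, 1.9, 3.5, 2.5, 2.4, 3.0]

mean = statistics.mean(data)            # (1/10) * sum of the values
median = statistics.median(data)        # average of the two middle sorted values
mode = statistics.mode(data)            # the most frequent value
midrange = (min(data) + max(data)) / 2  # average of smallest and largest

print(round(mean, 2), round(median, 2), mode, midrange)  # 2.48 2.45 2.4 2.5
```

The rounding only guards against floating-point noise; the values agree with the hand computation above.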
Stoughton, MA Algebra Tutor Find a Stoughton, MA Algebra Tutor ...I do well (99th percentile) on standardized tests in both math and English. I enjoy helping others to master the different types of questions on the SAT and the PSAT. I have several years part-time experience holding office hours and working in a tutorial office. 29 Subjects: including algebra 1, algebra 2, reading, writing ...I have tutored honor level college students in writing papers for many different subjects. As a current school teacher reading and writing are part of every classes curriculum. I have a degree in Earth Science, and astronomy is one of the core subjects in this degree. 22 Subjects: including algebra 1, physics, writing, study skills ...My background in baseball is extensive. From the age of six when I went to my first Red Sox game, I have had a passion for the "Olde Towne Team" and the fundamentals of baseball. I started playing Little League at the age of ten, moving through the ranks to high school ball. 8 Subjects: including algebra 1, English, SAT math, grammar ...Whether you want to solidify your knowledge and get ahead or get a fresh perspective if your are struggling, I am confident I can help you. I have the philosophy that anything can be understood if it is explained correctly. Teachers and professors can get caught up using too much jargon which can confuse students. 19 Subjects: including algebra 1, algebra 2, chemistry, Spanish ...How has my childhood play place gone from grassy field to forest? Most of us spend much too much time indoors for our own good. I've never taught chemistry in school though I am licensed to do so, but use it commonly as an important component of biology. 
15 Subjects: including algebra 1, English, chemistry, geometry
Heat loss through pipes excel

Hi there, I am trying to make a spreadsheet which calculates the heat loss through a pipe, and also the heat loss when insulated. I am using the resistive analogy to calculate this and I have it working; the only problem I am having is the heat transfer coefficient: how do I calculate this? I know the pipe diameter, thickness and K value, the temperatures inside and ambient, and the insulation thickness, K value and diameters. It is just for general use, to get an estimate and save time.
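For what it's worth, here is a sketch in Python of the series thermal-resistance calculation the question describes. All numbers are made-up illustrative values, and the convective coefficients `h_in` and `h_out` are exactly the inputs being asked about; for free convection in still air, handbooks commonly suggest trying values of order 5-25 W/m²K as a first estimate:

```python
import math

def pipe_heat_loss(t_inside, t_ambient, length, r_in, r_out, k_pipe,
                   h_in, h_out, r_ins=None, k_ins=None):
    """Heat loss (W) through a pipe wall using series thermal resistances.
    Optional insulation layer from r_out to r_ins with conductivity k_ins."""
    resistances = [
        1.0 / (h_in * 2 * math.pi * r_in * length),                # inside convection
        math.log(r_out / r_in) / (2 * math.pi * k_pipe * length),  # pipe wall
    ]
    r_surface = r_out
    if r_ins is not None:
        resistances.append(math.log(r_ins / r_out) / (2 * math.pi * k_ins * length))
        r_surface = r_ins
    resistances.append(1.0 / (h_out * 2 * math.pi * r_surface * length))  # outside convection
    return (t_inside - t_ambient) / sum(resistances)

# Illustrative numbers only: 80 C fluid, 20 C air, 1 m of steel pipe
q_bare = pipe_heat_loss(80, 20, 1.0, 0.025, 0.030, 50.0, h_in=1000.0, h_out=10.0)
q_insulated = pipe_heat_loss(80, 20, 1.0, 0.025, 0.030, 50.0, h_in=1000.0, h_out=10.0,
                             r_ins=0.060, k_ins=0.04)
print(round(q_bare), round(q_insulated))  # roughly 112 and 20 W with these guesses
```

The structure is the usual one: convection resistances 1/(h·2πrL) in series with cylindrical conduction resistances ln(r2/r1)/(2πkL), with the assumed h values being the uncertain part.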
How to translate the representation theory of semisimple to reductive groups?

I am aware of the following question: Definitions of Reductive and Semisimple Groups

So let me phrase a precise question: Is there a standard technique by which one can relate the unitary/smooth admissible representation theory of semisimple algebraic groups over a local field to the representation theory of reductive algebraic groups? E.g. many results for $GL(n,F)$ follow from results for $SL(n,F)$ modulo its center. Schur's lemma implies that the restriction of an irreducible representation to the center must be a character. Is there an algebraic semisimple group $G'$ associated to any reductive $G$, which plays the same role? E.g. such that $G'(F) \cdot Z(F)$ is cocompact in $G(F)$, where $Z$ is the center of $G$?

1 Answer

The derived group $G'$ of $G$ always works for your second question (i.e., $G'(F)Z(F)$ is closed and cocompact in $G(F)$). Indeed, by using local class field theory and Kneser-Bruhat-Tits we know that ${\rm{H}}^1(F,H)$ is finite for any connected reductive $F$-group $H$, so ${\rm{H}}^1(F,G')$ is finite. If $X \rightarrow Y$ is any smooth map of smooth $F$-schemes then $X(F) \rightarrow Y(F)$ is an $F$-analytic submersion, so it has open image. Thus, the image of $G(F)$ in the commutative $(G/G')(F)$ is open, and it has finite index since ${\rm{H}}^1(F,G')$ is finite. In other words, $G(F)/G'(F)$ is a finite-index open subgroup of $(G/G')(F)$, so $G(F)/G'(F)Z(F)$ is compact if and only if the map $Z(F) \rightarrow (G/G')(F)$ induced by the surjection of $F$-tori $Z \rightarrow G/G'$ has compact cokernel. Thus, it suffices to show that for any surjective map $T' \rightarrow T$ between $F$-tori (even inseparable), the map $T'(F) \rightarrow T(F)$ has closed image with compact cokernel.
The maximal compact subgroup of $T(F)$ is $$T(F)^1 = \cap_{\chi\in {\rm{X}}_F(T)} \ker |\!|\chi|\!|_F$$ where $\chi$ varies through the $F$-rational characters of $T$ and $|\!| \cdot |\!|_F$ is the normalized absolute value. In other words, $T(F)^1$ is the group of $t \in T(F)$ such that $\chi(t) \in O_F^{\times}$ for all such $\chi$. It is harmless to pass to quotients by maximal compact subgroups, which is to say that it is equivalent to show that the map $T'(F)/T'(F)^1 \rightarrow T(F)/T(F)^1$ has closed image with finite index. But $T(F)^1$ is always open in $T(F)$ with the discrete cokernel $T(F)/T(F)^1$ naturally isomorphic to the $F$-rational cocharacter group ${\rm{X}}_{\ast,F}(T)$, so we are reduced to proving that ${\rm{X}}_{\ast,F}(T') \rightarrow {\rm{X}}_{\ast,F}(T)$ has image with finite index. Since these cocharacter groups are finitely generated $\mathbf{Z}$-modules and any surjection of $F$-tori admits an $F$-rational section in the isogeny category, we're done.
Fishing for a presentation idea

My personal favourite, among the canonical "first proof" proofs, is the proof that every number has a prime factor (and, as a consequence, that there are infinitely many primes). Honestly, everyone's heard of the Pythagorean theorem, and a proof won't really "expand many people's horizons" as far as mathematics goes. I like introducing people to the prime number proof because most people have no conceptualization about how you would go about proving something about every whole number in existence, so the proof offers a taste of mathematics that they've probably never even imagined before.

Alternately, instead of giving a rigorous proof, you could take a discipline like abstract algebra and give a conceptual understanding of the "algebraic" approach to tackling a problem (i.e. stripping away extraneous detail and focusing on the barest structure of the thing). A very good example would be bracelet counting with Burnside's theorem. You could talk about how the whole problem basically reduces to finding collections of bracelets that can be obtained from each other by rotations and reflections (i.e. the orbits generated by the action of the dihedral group on the set of bracelets). Or, you could talk about how things like addition and multiplication in the reals can be considered as binary operations, and that once you see it that way you see all sorts of similarities between things like the integers, permutations, and symmetries of geometric figures (i.e. the group structure).
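The bracelet example can be made concrete with a brute-force rendering of Burnside's theorem: list every colouring, count how many are fixed by each rotation and reflection of the dihedral group, and average (a sketch for small n only, since it enumerates all colourings):

```python
from itertools import product

def count_bracelets(n, colors):
    """Count distinct bracelets of n beads in `colors` colours via Burnside:
    the answer is the average number of colourings fixed by each symmetry
    of the dihedral group (n rotations plus n reflections)."""
    def rotate(c, k):
        return c[k:] + c[:k]
    def reflect(c, k):
        # reflection of the n-gon: bead i goes to position (k - i) mod n
        return tuple(c[(k - i) % n] for i in range(n))
    fixed = 0
    for c in product(range(colors), repeat=n):
        for k in range(n):
            if rotate(c, k) == c:
                fixed += 1
            if reflect(c, k) == c:
                fixed += 1
    return fixed // (2 * n)

print(count_bracelets(4, 2))  # 6 distinct two-colour bracelets of four beads
```

Seeing a program "forget" the extraneous detail and work only with the group action is itself a nice illustration of the algebraic viewpoint described above.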
convert 500 g to kg

You asked: convert 500 g to kg
Testing alternative models

However, even if the model is incorrect, it may be that a correct model would still show a correlation between the carry laws and less crime. A study by Bartley and Cohen [4] sheds some light on this possibility. They reran the regressions with thousands of different models (formed by deleting variables from Lott's model and by adding a trend term). If the carry law is associated with a reduction in crime in all of these models then we might conclude that it is associated with a reduction in the correct model, without us having to identify which of the models is the correct one. Figures 1-4 of their paper show that the carry laws were not consistently associated with a crime reduction in any crime category: that is, there were some models where the law was associated with an increase for each crime category studied. I should note, however, that if we restrict things to just models that include a trend component, homicide and robbery show consistent reductions. For this reason, Bartley and Cohen argued that Lott's results should not be dismissed as unfounded.

Dezhbakhsh and Rubin [10,11] re-examined the data using a more general model that allowed the carry law to have different effects in each county and to affect other parameters in the model. With this model they found the carry law did not have any clear effect on rape or assault, that it was associated with a reduction in homicide in six out of 33 states, and with an increase in robbery in 13 out of 33 states. The evidence here is stronger for an increase than for a decrease.

Plassmann and Tideman [39] point out that Lott's analysis technique assumes that crime rates are normally distributed and that this is not even close to being true for low crime counties. When they made some plausible changes to the specification, the effects on murder vanished.
However, when they did their own analysis assuming that the murder rate was Poisson distributed, they found an even stronger effect (a 12% decrease). They also looked at the effects on each state and found a confusing pattern of results, with the effect varying from a statistically significant increase of 6.5% (Virginia) to a statistically significant decrease of 35% (Montana). While we would not expect the laws to have exactly the same effect in every state, it seems hard to see how the effects could be so radically different.

Duggan [12] points out another problem with Lott's analysis:

One problem with these regression estimates is that Lott and Mustard are implicitly assuming that these laws are varying at the county level, when in fact they are varying only at the state level.

The reason this is a problem is that you would expect crime rates in counties within the same state to be correlated. This problem does not bias the estimates of the law's effect, but causes the standard errors to be underestimated, so that some results may appear to be statistically significant when they are not. On page 278, note 3, Lott comments on this problem, but erroneously claims that including dummy variables for all counties solves the problem. This is clearly false. The dummy variables only account for fixed differences between counties and do not address the within-state correlations between counties. After adjustments to account for this problem, Duggan found that none of the coefficient estimates on the CCW variable remain statistically significant.

Lott's response to Duggan's paper was to repeat his false claim:

The correlation of the error terms across counties is picked up when one has county fixed effects included in the regression. He does not do the adjustment recognizing that the county fixed effects are already picking up what he wants to adjust for.
[25] Moody [35] noticed the same problem as Duggan:

Merging an aggregate variable with microlevel variables causes ordinary least squares formulas to severely overestimate the t-ratios associated with the aggregate variables. ...I reestimated the model using the original county-level data set but adjusted the standard errors for clustering within states. The results were somewhat different from the original Lott and Mustard findings. ...While shall-issue laws reduce violent crime in general in all models, the effects seem to be concentrated in robbery. Murder and rape are significantly reduced in only one version of the model.

In Lott's response to Moody [29] he still did not admit to making a mistake but rather stated that he ``had already discussed this issue''.

Tim Lambert
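The clustering point lends itself to a small simulation, with entirely synthetic numbers rather than the crime data: give counties within a state a shared state-level shock, assign the "law" at the state level with zero true effect, and a naive t-test that treats counties as independent will reject far more often than its nominal 5% level:

```python
import random
import statistics

def naive_rejection_rate(n_states=10, counties=20, reps=500, seed=1):
    """Fraction of replications in which a naive two-sample t-test (treating
    all counties as independent) calls a state-level dummy 'significant',
    even though its true effect is zero."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        treated, control = [], []
        for s in range(n_states):
            shock = rng.gauss(0, 1)  # shock shared by every county in the state
            group = treated if s < n_states // 2 else control
            group.extend(shock + rng.gauss(0, 1) for _ in range(counties))
        diff = statistics.mean(treated) - statistics.mean(control)
        pooled = (statistics.variance(treated) + statistics.variance(control)) / 2
        se = (pooled * (1 / len(treated) + 1 / len(control))) ** 0.5
        if abs(diff / se) > 1.96:
            rejections += 1
    return rejections / reps

print(naive_rejection_rate())  # far above the nominal 0.05
```

With these made-up parameters the naive test flags a nonexistent effect roughly half the time, which is the mechanism behind the inflated t-ratios Duggan and Moody describe; cluster-robust standard errors are the standard fix.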
Linear Differential Operator in Maple: Current Progress

Below is a summary of my understanding of creating the operator L in Maple. I've figured out how to apply this operator to an arbitrary function f(x); however, I'm currently trying to figure out how to convert the D operator into diff(f(x),x) when applied to f(x). I believe I have to use the built-in convert() function to accomplish this task. After the operators have been converted to diff() form, it should be possible to plug the equations into Maple's built-in differential equation solving functions.
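Pending the Maple `convert` step, the act of applying a constant-coefficient linear operator L = a2·D² + a1·D + a0 to an arbitrary f(x) can at least be cross-checked numerically. This Python sketch (my own, unrelated to Maple's internals) approximates the derivatives with central differences:

```python
import math

def apply_L(coeffs, f, x, h=1e-4):
    """Apply L = coeffs[0] + coeffs[1]*D + coeffs[2]*D^2 to f at x,
    using central finite differences for the derivatives."""
    d0 = f(x)
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return coeffs[0] * d0 + coeffs[1] * d1 + coeffs[2] * d2

# L = D^2 + 3D + 2 annihilates exp(-x), since (-1)^2 + 3*(-1) + 2 = 0
print(apply_L([2.0, 3.0, 1.0], lambda x: math.exp(-x), 0.5))  # ~0 up to truncation error
```

A check like this against known solutions is a handy sanity test for whatever the Maple worksheet produces once the D-to-diff conversion works.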
The n-Category Café The Mathematical Vocation Posted by David Corfield After a visit in 1939 to the Monastery of the Prophet Elijah on the Greek island of Santorini, the Oxford philosopher R. G. Collingwood entered into discussion with his students about the value of monastic life. It appears that the students were a little perplexed to find their prejudice that monks were “at worst idle, self-indulgent, and corrupt; at best selfishly wrapped up in a wrongheaded endeavour to save their souls by forsaking the world and cultivating a fugitive and cloistered virtue” clash with their admiration for …the atmosphere of earnest and cheerful devotion to a sacred calling, the dignity of the services and beauty of their music, the eager welcome and the loving hospitality, and above all the graces of character and mind which the life either generated in those who had adopted it or at least demanded of aspirants to it and thus focused, as it were, in the place where the life went on. (‘Monks and Morals’, Essays in Political Philosophy, Oxford, 1989) Collingwood then draws the students’ attention to a vocation they do value. Suppose a man devotes his life to the study of pure mathematics. Is he to be condemned for living on a selfish principle? Not, as my friends readily admitted, on the ground that pure mathematics cannot feed the hungry. Pure mathematics, apart from any consequences which may ultimately come of it, is pursued because it is thought worth pursing for its own sake. In order to judge its social utility, then, you must judge it not by these consequences but as an end in itself. What is more, you cannot judge the social utility of a mathematician by asking whether he publishes his results. 
Unless there is value in being a pure mathematician, there is no value in publishing works of pure mathematics; for the only positive result these works could have is to make more people into pure mathematicians; and a society which does not think it a good thing to have one pure mathematician among its members will hardly think it a good thing to have many. The social justification of pure mathematics as a career in any given society, then, is the fact that the society in question thinks pure mathematics worth studying: decides that the work of studying pure mathematics is one of the things which it wants to go on, and delegates this function, as somehow necessary for its own intellectual welfare, to a man or group of men who will undertake it. A test for this opinion is that the society in question should be grateful to the pure mathematician for doing his job, and proud of him for being so clever as to be able to do it; not that every one else should rush in to share his life, but that even if his neighbours feel no call to share it they should honour him for living as he does. The fact that they do so honour him is a proof that they want a life of that kind to be lived among them, and feel its achievements as a benefit to themselves. (pp. 145-146) Posted at September 28, 2009 4:47 PM UTC Re: The Mathematical Vocation I think the debate would have seemed much clearer in 1939 than it does now. In particular, the quote “Pure mathematics, apart form any consequences which may ultimately come of it, is pursued because it is thought worth pursing for its own sake.” presupposes that for things that were called pure mathematics as they were developed (ie, where there is a strong consensus, not borderline issues) to ever have “practical” consequences is rare. 
In a modern view, modern physics, computer science and mathematical modelling of all sorts, tends to bring in some results from various branches of mathematics that are called pure mathematics (eg, number theory, construction of computable reals, etc). So the argument that publishing pure mathematical research cannot be any factor in deciding the “utility” of a mathematician becomes very difficult to agree with. Of course the question still exists in a different from: how does one define social utility for working on something that is very unlikely to consequences that have practical consequences, and vanishingly likely to have direct practical consequences? Posted by: bane on September 28, 2009 5:32 PM | Permalink | Reply to this Re: The Mathematical Vocation This whole “pure mathematics is useless” thing is largely a pose that mathematicians like to adopt (for various psychological reasons I don’t dare to guess at), partly popularised by Hardy. As a result they like to greatly exaggerate the claims of uselessness. In particular they have a tendency to say “there is no use for X” when they mean “I don’t happen to know of any application of X because I don’t bother to look in on what my colleagues are doing, let alone the next department down the corridor”. For a while I worked in computational chemistry. I was amazed how much mathematics was being consumed by chemists - especially graph theory and algebraic topology, much of it the kind of pure mathematics that would have seemed the very height of uselessness back in 1939. Some of it was a weird kind of use. For example a lot of empirical results found by running linear regression on databases of various physical properties vs. graph-theoretical and algebraic-topologcial invariants of molecules. Nonetheless, it really opened my eyes. 
We have no idea what pure mathematics will turn out to be useful, but the true reason why societies tolerate and even pay for mathematicians is that in utilitarian terms the payoffs have been big. Posted by: Dan Piponi on September 28, 2009 6:25 PM | Permalink | Reply to this Re: The Mathematical Vocation This reminded me of a conversation I once had with a chemist who asked me why so much effort was spent on teaching undergraduates useless mathematics, such as calculus and probability theory, instead of teaching the very useful stuff – ie, group theory and algebraic topology! Posted by: peter on September 28, 2009 7:12 PM | Permalink | Reply to this Re: The Mathematical Vocation I was amazed how much mathematics was being consumed by chemists - especially graph theory and algebraic topology. So where does an algebraic topologist look to get hired into the chemistry biz? Posted by: John Armstrong on September 28, 2009 7:42 PM | Permalink | Reply to this Re: The Mathematical Vocation New Scientist! My wife spotted the ad for a job at a pharmaceutical company with the word ‘topology’ in the job description. But that was nearly 20 years ago now. Posted by: Dan Piponi on September 28, 2009 7:51 PM | Permalink | Reply to this Re: The Mathematical Vocation John A. wrote: So where does an algebraic topologist look to get hired into the chemistry biz? I don’t know much about this, but I’d guess that chemists either learn a bit of algebraic topology from books and papers like this or collaborate with mathematicians — not hire mathematicians to do Posted by: John Baez on September 29, 2009 1:58 AM | Permalink | Reply to this Re: The Mathematical Vocation This is sort of the problem. Once you’ve studied pure mathematics you’re pretty much useless for anything else in the eyes of employers. Posted by: John Armstrong on September 29, 2009 2:43 AM | Permalink | Reply to this Re: The Mathematical Vocation I find it’s more tricky than that. 
At least in the various areas of computing, lots of people think they want to employ people who have greater exposure to mathematics than they have. The problem tends to be that the people in immediate charge of you don’t “deep down” understand that figuring out new mathematics (in the sense of “here’s how we can apply this existing theory to your problem”) is time-consuming. So they expect you to produce “mathematical bon-mots” at a moment’s notice and are very unimpressed when you can’t, and that can somewhat colour their views of future hiring. Posted by: bane on September 29, 2009 12:01 PM | Permalink | Reply to this Re: The Mathematical Vocation This whole “pure mathematics is useless” thing is largely a pose I don’t think so. You don’t get people saying “I went into nursing because there’s a small but non-zero probability that at some point, decades or centuries in the future, somebody practising a profession somewhat related to mine might do something of benefit to mankind.” I don’t think that’s how vocations work. Maybe some people feel a vocation to study algebraic topology because they want to be of help to chemists, but not many, surely…. And that’s all that’s needed for Collingwood to make his point. the true reason why societies tolerate and even pay for mathematicians is that in utilitarian terms the payoffs have been big Is that the reason the Ptolemies supported the library of Alexandria? Surely societies support all manner of activities that are useless in utilitarian terms. Sometimes they even invent fake reasons why these activities must secretly be useful. (“These jewels are so beautiful that they must have magic powers!” “If you can predict the positions of the planets, you must be able to foretell the fate of kings!” “This economic model is so elegant that it must hold the secret of wealth, health and happiness!”) I don’t think everybody’s values are purely utilitarian.
I’m not sure if that is even philosophically coherent (it makes it seem as though everything derives its value from being useful for something else, but nothing is of value for itself…). Well, I don’t think you meant that exactly—but I do feel that the claim that everything that people support is useful seems like just as much of a pose as that people sometimes do (or support) things because they’re useless. Posted by: Tim Silverman on September 28, 2009 11:19 PM | Permalink | Reply to this Re: The Mathematical Vocation I certainly don’t think that everyone is a utilitarian about everything, after all we have art that is paid for by private and public institutions. But art is widely appreciated, which is why every major city has galleries and museums. Mathematics has a much smaller audience on the other hand, but it has such a long history of application success it would be foolish not to support it for utilitarian reasons. The Library of Alexandria was an extension of Ptolemaic imperialism. They wanted to be seen as the leaders of everything, including Greek culture. They had *the* copy of Homer. That gave them status and power. I’m not sure how much it had to do with love of learning or anything like that. But going back to the pose thing: I have met so many mathematicians who take such great delight in telling people how useless their work is that it seems plainly apparent to me that there is more pleasure from this telling than simply that of relaying a statement of fact. Individual mathematicians might not take any interest in the applications, but they are often there nonetheless. And even if a mathematician is working on solving problem X, and X seems completely useless, they may be honing useful technique Y to get there. Posted by: Dan Piponi on September 29, 2009 1:09 AM | Permalink | Reply to this Re: The Mathematical Vocation “They had *the* copy of Homer. 
That gave them status and power.” Before pronouncing a complete dissociation from love of learning, perhaps one might ask *why* it gave them status and power. Enough mathematicians have some feel for the tangled network of concerns linking their own preoccupation to the variety of human endeavor. In a typical cocktail-party conversation, it just takes less energy to be modest about the utilitarian weight of our own work. Posted by: Minhyong Kim on September 29, 2009 10:16 AM | Permalink | Reply to this Re: The Mathematical Vocation Tim said I don’t think everybody’s values are purely utilitarian. I’m not sure if that is even philosophically coherent (it makes it seem as though everything derives its value from being useful for something else, but nothing is of value for itself…). Exactly the point Collingwood goes on to make. The chain of means-end justification must end somewhere in an end valued for its own sake. I should think you mathematicians have much to fear about the development of a society which values mathematics solely as a means to external ends. Posted by: David Corfield on September 29, 2009 10:33 AM | Permalink | Reply to this Re: The Mathematical Vocation Why do you think of it as a chain? Posted by: Minhyong Kim on September 29, 2009 11:09 AM | Permalink | Reply to this Re: The Mathematical Vocation Why a chain? This is Collingwood’s argument against taking usefulness as the sole value of an action. Say I take an action A to be of value solely for the value of the action B it enables. But the value of B is then that of enabled action C, and so on without termination. Or if there is termination, and an action enables no new action to be taken, it is valueless according to this account. Posted by: David Corfield on September 29, 2009 11:49 AM | Permalink | Reply to this Re: The Mathematical Vocation But this is exactly the image that seems not to be realistic. 
Most people’s real conception of value, individual or social, is associated with at least a graph of actions rather than a chain with $A_i$ justifying $A_{i+1}$. Any given node may have high degree, and it seems quite natural to find cycles, as might occur in any eco-system. I don’t know if this helps at all with the general problem of utility and justification (of mathematics), but I think the chain idea is useless, even as a toy model. It’s obvious, for example, that nothing needs to be an end in itself. Posted by: Minhyong Kim on September 29, 2009 12:15 PM | Permalink | Reply to this Re: The Mathematical Vocation It’s obvious, for example, that nothing needs to be an end in itself. I’m puzzled by this. Do you mean obvious in a useless toy model, or obvious in real life? If the latter, does the “need” operator have scope inside or outside the existential? That it may be the case for any given individual, particular action, that it need not be an end in itself, seems obvious enough to me, but that there need be no actions at all which are valuable in themselves seems … well, not impossible, but pathological. it seems quite natural to find cycles They might well exist as a matter of fact, but I would consider them a pathology rather than a natural part of a value system. I don’t see how such a cycle could generate utility. It would simply be a waste of time. Everybody on the cycle would think they were doing something useful, but actually they wouldn’t be. Or have I misunderstood you? Posted by: Tim Silverman on September 29, 2009 1:31 PM | Permalink | Reply to this Re: The Mathematical Vocation Sorry. I’ll restate it in mathematics so that I don’t botch up the English again: A directed tree is either infinite or has a node with no incoming edges. But this is not necessarily true of a general directed graph. Here is an action graph that is quite simple really: A family whose members find great satisfaction in keeping each other happy. 
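The graph-of-actions picture above, as opposed to a chain, can be made concrete. Reading an edge as "this action is done for the sake of that end", a finite chain must terminate in a node with no outgoing edge, an end valued in itself, whereas a cycle has no such terminal node even though every action in it is motivated by another. A minimal sketch in Python, with all node names invented for illustration:

```python
def intrinsic_ends(edges):
    """Nodes with no outgoing 'for the sake of' edge, i.e. ends in themselves.
    edges: set of (action, end) pairs, read as 'action is done for end'."""
    nodes = {n for e in edges for n in e}
    motivated = {a for a, _ in edges}  # nodes done for the sake of something else
    return nodes - motivated

# A chain of justifications: each action is done for the sake of the next,
# terminating in an end valued in itself.
chain = {("purchase", "gift"), ("gift", "riding"), ("riding", "fun")}

# A cycle: each act of kindness is done for another's happiness, which in
# turn motivates a further act, so no node is a terminal end.
cycle = {("alice_acts", "bob_happy"), ("bob_happy", "bob_acts"),
         ("bob_acts", "alice_happy"), ("alice_happy", "alice_acts")}

print(intrinsic_ends(chain))  # {'fun'}
print(intrinsic_ends(cycle))  # set()
```

The point of the sketch is only that the "must terminate in an intrinsic end" conclusion is a theorem about finite chains (and trees), not about directed graphs in general.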
I don’t doubt that you can model it as (embedded in) some other tree, and that might be useful sometimes. By the way, in case someone finds the example ‘pathologically’ Confucian, I refer him/her to O. Henry’s famous story. Posted by: Minhyong Kim on September 29, 2009 2:29 PM | Permalink | Reply to this Re: The Mathematical Vocation I think we’re talking at cross-purposes. Consider the following chain: “What’s the point of you buying this bicycle?” “So I can give it to my son.” “What’s the point of giving a bicycle to your son?” “So he can ride it.” “What’s the point of your son riding a bicycle?” “So he can have fun.” “What’s the point of him having fun?” “Because Fun is GOOD!” A cyclic arrangement of value would replace that last statement with something like “To give me a reason to buy a bicycle.” This obviously doesn’t exclude reciprocal or cyclical arrangements of mutual benefit, but the benefit has to exist or the arrangement seems pointless: “I’m buying a bicycle for the sake of buying a bicycle, even though the bicycle is worthless and the act of purchasing it is pointless.” (This is much broader than just the utilitarian pursuit of pleasure: one can aim to create beautiful things because beauty is inherently good regardless of any pleasure it might give, etc. Then the beauty would be of value in itself, hence the end of the chain of value.) It would be possible (if unlikely) for a group of people all to derive all of their pleasure from the results of actions undertaken by others (so nobody would do anything to please themself). But the fact that people were getting pleasure would presumably be the point of the whole exercise. It would only be in that sense that anybody would be doing anything “for” anyone else—because of the expectation that that person would benefit in some way, and that that benefit would be of value in itself. (Again, pleasure need not be the point of the exercise—the goal could be a harmonious society or whatever.
But there would have to be an ultimate goal for any particular action, in order for it to have value.) Posted by: Tim Silverman on September 29, 2009 3:40 PM | Permalink | Reply to this Re: The Mathematical Vocation Yes, as (more or less) predicted, you’ve constructed a larger graph in which the cycle can be embedded with a node above the whole cycle. It’s not obvious to me that consistently going for such a view is particularly illuminating. Posted by: Minhyong Kim on September 29, 2009 3:53 PM | Permalink | Reply to this Re: The Mathematical Vocation I don’t think I’ve “constructed” a larger graph—I think that the other nodes are always there; and that sometimes ignoring them leads to confusion, sometimes not. However, I don’t think I distinguished adequately between actions and their consequences; perhaps that accounts for our disagreement. Against that, I think demands for utilitarian justification are partly a way to gloss over (or distract attention from) questions about the actual goals (e.g. to pre-empt certain kinds of answers to the question of what mathematics is actually “for”). I confess I need to think more about what you are saying, so this is very inadequate. Posted by: Tim Silverman on September 29, 2009 4:38 PM | Permalink | Reply to this Re: The Mathematical Vocation Going off at a slight tangent here: this is reminiscent of how Google page rank works. How do I value a web page? Well if I value the web pages that point to it, then that confers on it a certain value. Those in turn are conferred value from other web pages. Ultimately we end up with circularity. But that’s not a problem, the cycles just give an eigenvalue problem which we can solve. But with no reference to anything external from which to bootstrap, it’s surprising that it measures anything useful at all, and yet it does.
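The PageRank comparison above can be made concrete: each page's score is a damped, weighted sum of the scores of the pages linking to it, and despite the circularity the stationary scores are well defined, as the dominant eigenvector of the link matrix, which power iteration finds. A minimal sketch in Python; the link structure below is invented for illustration:

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration for PageRank.
    links: dict mapping each page to the list of pages it links to
    (every linked page must also appear as a key)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from the uniform distribution
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}  # damping term
        for p, outs in links.items():
            share = rank[p] / len(outs)  # p splits its value among its links
            for q in outs:
                new[q] += d * share
        rank = new
    return rank

# Circular valuation: a values b, b values c, c values a and b.
links = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
r = pagerank(links)
print(max(r, key=r.get))  # 'b' ends up most valued
```

The cycle a → b → c → a means no page's value is grounded in anything external, yet the iteration converges to a unique ranking, which is exactly the point being made.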
Posted by: Dan Piponi on September 29, 2009 6:12 PM | Permalink | Reply to this Re: The Mathematical Vocation There may be no direct reference to anything external, but of course the links are put there by people based on their own judgements of value. The links are used as a proxy for those judgements. Posted by: Mark Meckes on September 30, 2009 3:27 PM | Permalink | Reply to this Re: The Mathematical Vocation This reminds me of the motto: Work to live; Live to bike; Bike to work! On a more serious note: One earns one’s living as a mathematician because someone is paying for it (for whatever reasons, ranging from “the work is needed” to “there is a budget for it”), and one does good mathematics because one likes to do this (sometimes only in one’s spare time, due to the administrative load in the official hours). How much the two aspects correlate is probably very person-dependent. Posted by: Arnold Neumaier on September 29, 2009 7:28 PM | Permalink | Reply to this Re: The Mathematical Vocation Tim Silverman gave a chain of justifications ending platonically in the Good: “What’s the point of you buying this bicycle?” “So I can give it to my son.” “What’s the point of giving a bicycle to your son?” “So he can ride it.” “What’s the point of your son riding a bicycle?” “So he can have fun.” “What’s the point of him having fun?” “Because Fun is GOOD!” But I can also imagine something that goes like this: “What’s the point of you buying this bicycle?” “So I can give it to my son.” “What’s the point of giving a bicycle to your son?” “So he can ride it.” “What’s the point of your son riding a bicycle?” “So he gets exercise, and has something to do with other kids.” “What’s the point of his getting exercise and doing things with other kids?” “It helps him become happy and well-adjusted, build his coordination and strength, and develop social skills?” “What’s the point of being happy, well-adjusted, coordinated, strong, and socially adept?” “It’ll help him make friends, 
get a good job, and stay healthy for a long time.” “What’s the point of his having friends, having a good job, and being healthy?” “It’ll help him contribute to society and do a good job raising kids of his own… for example, by buying them bicycles.” And so on… I think the justifications can endlessly branch out into the future or loop around, and I think they actually do. But what if you ask me “What’s the point of this whole complicated web???” Well, I would have to admit that if the whole universe didn’t exist, nobody would be the slightest bit upset. But if you tried to remove any one part of the universe, the nearby parts would be affected. Posted by: John Baez on September 29, 2009 10:33 PM | Permalink | Reply to this Re: The Mathematical Vocation I thought you mathematicians were supposed to understand reductio ad absurdum. The point of Tim’s chain is not to explain how things are, but to show that there’s a contradiction in the claim that the value of an action lies solely in its consequences. He’s arguing that once you begin a chain of supposedly purely utilitarian actions, you will be forced to conclude it with an action valued intrinsically. This doesn’t mean he believes the value of actions has the form of a chain terminating in a single good action. He may well have an image close to the web you present, where intrinsic good occurs at many points in the web. Posted by: David Corfield on September 30, 2009 9:48 AM | Permalink | Reply to this Re: The Mathematical Vocation To use the word ‘solely’ is dangerous in any circumstance. Posted by: Minhyong Kim on September 30, 2009 12:06 PM | Permalink | Reply to this Re: The Mathematical Vocation But I guess the urge to find a single reason can be helpful in science, even if it’s occasionally misguided.
I vaguely remember from long ago two essays by Einstein, the first of which concluded with the proposal of a World Government as the *only* solution to contemporary ills. (My impression is he was substantially under the influence of Russell at the time.) He subsequently received a letter from some Soviet scientists spelling out the dangers of such an entity, especially from the perspective of weaker nations. He was clearly troubled by the points they made and wrote a rather rambling reply. I think it was a short while later that he wrote the essay concluding with ‘Only *socialism*…’. Posted by: Minhyong Kim on September 30, 2009 12:16 PM | Permalink | Reply to this Re: The Mathematical Vocation I suppose I should make a serious point as well. For example, in this portion of your Collingwood quote: ‘Pure mathematics, apart from any consequences which may ultimately come of it, is pursued because it is thought worth pursuing for its own sake. In order to judge its social utility, then, you must judge it not by these consequences but as an end in itself.’ he is clearly the one making the claim for a *sole criterion* that lies above others. Normally, it’s like my $B_{691}$ below, and I can’t see the point of insisting otherwise. I really think his argumentation comes from an exclusive focus on the tree model, perhaps stemming from the contemporary interest in the axiomatic tradition. Posted by: Minhyong Kim on September 30, 2009 12:28 PM | Permalink | Reply to this Re: The Mathematical Vocation he is clearly the one making the claim for a *sole criterion* that lies above others. I think you’d have to look much more closely at his writings before you could gain a clear sense of how he takes goods to be ordered. I read the passage you quote to mean just that mathematics is thought worth pursuing for its own sake, and to that extent its value should be judged accordingly.
But I see we have a copy of The first mate’s log of a voyage to Greece in the schooner yacht Fleur de Lys in 1939 in our library, so may be able to say more when I’ve read it. It appears that he also questions the worth of philosophy. Posted by: David Corfield on September 30, 2009 1:17 PM | Permalink | Reply to this Re: The Mathematical Vocation OK, we’ll see. I’ll make one ‘practical’ point to conclude my contribution to this discussion. If it comes to a question of giving justification for mathematics (or philosophy, for that matter) to the general public, we might do well to suggest, at least vaguely, a rich network (that could contain a $B_{691}$), perhaps by way of a few key examples. It seems more effective as well as more accurate than insisting on the overwhelming inevitability of intrinsic value and the stupidity of those who fail to recognize it. Posted by: Minhyong Kim on September 30, 2009 1:41 PM | Permalink | Reply to this Re: The Mathematical Vocation Is it really true that the point, the only point, of being happy, or having friends, or having a good job, is for the sake of something else? Don’t people actually think these things are good in themselves? Sure, they may also help you get other things which are also good, but is that what people are actually thinking when they feel glad that their children are happy, healthy and prosperous? Whenever people trot out these external reasons for things that seem obviously good in themselves, I always suspect these are fake reasons prompted by a utilitarian anxiety about simply saying something is good. Maybe I shouldn’t ascribe motives like this, but I just don’t think people really think like this as a rule, except when someone actually demands that they justify something that is, in reality, an end in itself. Suppose it turned out that being happy didn’t help you make friends, or that making friends didn’t help you get a job? Would being happy or having friends then become worthless?
Got to dash out now, or I’d say something more thought-out. Posted by: Tim Silverman on September 30, 2009 10:52 AM | Permalink | Reply to this Re: The Mathematical Vocation I, for one, had no intention of objecting to ‘good in itself.’ (Even though I was tempted to ask ‘What’s so good about fun?’) The list in response to ‘Why $A$?’ could certainly have had one edge going to $B_{691}=$good in itself with other edges leading off to infinity, loops, and what not. One might even ascribe the need to find a single overarching node of ‘The Good’ to the tradition of monotheism :-). I should take this opportunity to correct one extreme sentence from above, where I said the chain model is ‘useless.’ Posted by: Minhyong Kim on September 30, 2009 11:52 AM | Permalink | Reply to this Re: The Mathematical Vocation The notion of a hierarchical organisation of goods, arranged according to a single final end, is certainly there in Aristotle, and finds itself explicitly integrated with monotheism in Aquinas. Posted by: David Corfield on September 30, 2009 12:17 PM | Permalink | Reply to this Re: The Mathematical Vacation Also the notion of entelechy, literally, that which has its end in itself, sometimes translated as actuality or realization and sometimes glossed as perfection, depending on the sense of the word that one has in mind. Like certain pinball games I used to play where the payoff of game $G$ is nothing but another play at game $G$.
Posted by: Jon Awbrey on September 30, 2009 1:48 PM | Permalink | Reply to this Re: The Mathematical Vocation Here I feel I ought to observe that I don’t believe in a single overarching Form of the Good, and I would not at all represent intrinsic value as a single node, for the sake of which other things are valued. To forestall the obvious follow-up question with regard to the issue of comparing goods, as mentioned by David C below: I don’t in fact believe that these sorts of questions are resolvable in theory, although they may sometimes be so in practice. Posted by: Tim Silverman on September 30, 2009 11:59 PM | Permalink | Reply to this Re: The Mathematical Vocation Even though I’m contributing nothing at this point, I feel obligated to reply. The reference to an overarching good came up because of your sentence “there would have to be an ultimate goal for any particular action, in order for it to have value” (Sorry, I don’t know that trick of linking to an earlier point in a thread.) But perhaps you didn’t mean it in any absolute sense. In any case, if you don’t believe in such a thing, there’s no reason for the mere presence of intrinsic value to lead to termination of a ‘value chain.’ Nor is there reason for intrinsic value to be the exclusive or even most important motivation for anything. I note with dismay that my sentences are becoming progressively metaphysical. Posted by: Minhyong Kim on October 1, 2009 2:04 AM | Permalink | Reply to this Re: The Mathematical Vocation I should have added that $B_{691}$ could then lead to $C_1=$ (good in itself)’ $C_2=$ (good in itself)” $C_3=$ (good in itself)”’ The universe of intrinsic value is itself quite rich. Posted by: Minhyong Kim on September 30, 2009 11:58 AM | Permalink | Reply to this Re: The Mathematical Vocation The universe of intrinsic value is itself quite rich. The anti-utilitarians would be with you on this. The next question is how to represent this richness.
Have I accurately glimpsed your disapproval of the notion of an overarching good? One way to think about our (implicit) ordering is to see how we would behave in various circumstances. We may take research mathematics in our community to be a part of its flourishing, but we may be prepared to sacrifice it for the continuation of some other activity, such as the provision of public green spaces, if we were forced to choose. How would we give good reason for this choice? To read Aquinas weighing up goods, see questions 1-5 of Prima Secundae Partis of Summa Theologica. Posted by: David Corfield on September 30, 2009 2:25 PM | Permalink | Reply to this Re: The Mathematical Vocation Thanks for the Aquinas reference. The problem of adequate representation is obviously subtle in any serious inquiry. ‘Disapproval’ of overarching good seems a bit too strong for me, especially since I’m probably more religious than many other agnostics. However, too heavy an emphasis on intrinsic value does seem to lead to a certain rigidity of outlook, and perhaps inhibit sensible decisions in the kind of scenario you describe. Obviously, none of us are so partisan as to defend all parts of theoretical scholarship at all costs. So then it’s probably a good idea to have at hand some balanced pragmatism, even with regard to intrinsic values. On the other hand, people who pugnaciously insist on the intrinsic value of their own work can also be very interesting and valuable… Posted by: Minhyong Kim on September 30, 2009 6:52 PM | Permalink | Reply to this Re: The Mathematical Vocation I have this nasty habit of afterthoughts, but perhaps I should explain just a bit more. When asked ‘Why $A$?,’ a typical person’s considered response is `Because of $B_1, B_2, \ldots, B_n.$’ And so we take off backwards along the various edges. Why would you consider it pathological to find cycles here and there, or to have trouble finding an end?
Posted by: Minhyong Kim on September 29, 2009 2:47 PM | Permalink | Reply to this Re: The Mathematical Vocation This is just the kind of discussion they have in epistemology. How is belief in P justified? If through earlier premisses, how do they receive their justification in turn? Can there be an infinite regress? Is there some point where I must stop? If the latter, is that point a proposition or a state of the world? Coherentists take all justification to arise from the coherence of the network, including cycles of justification. Foundationalists take there to be justified foundations. Infinitists argue for a justification to emerge from infinitely long chains of inference. Hybrids include foundherentists. Our message crossed earlier, in case you didn’t notice this. Posted by: David Corfield on September 29, 2009 3:17 PM | Permalink | Reply to this Re: The Mathematical Vocation I suspected these issues were already discussed to death by you philosophers :-). Anyways, I do think the Confucian cycle mentioned above is a genuine one. Posted by: Minhyong Kim on September 29, 2009 3:27 PM | Permalink | Reply to this Re: The Mathematical Vocation By the way, in case you folks here are interested, we now have a London Number Theory Blog Posted by: Minhyong Kim on September 29, 2009 3:30 PM | Permalink | Reply to this Re: The Mathematical Vocation I’m interested. I’ve added it to the online resources entry. I like your Non-abelian fundamental groups for the public. One small step in making the public “grateful to the pure mathematician for doing his job, and proud of him for being so clever as to be able to do it”. I wonder if they could cope with a little more explanation of this sentence: Here, critical use is made of the space associated to an equation, in that the fundamental group weighs the totality of paths that one might attempt to traverse through it. 
Posted by: David Corfield on October 1, 2009 11:20 AM | Permalink | Reply to this Re: The Mathematical Vocation OK. I’ve added an appendix to the original document. Posted by: Minhyong Kim on October 6, 2009 11:28 PM | Permalink | Reply to this Re: The Mathematical Vocation Speaking of hybrids, fundherentists, 3-quids, here’s a Peircean musement on entelechy. Posted by: Jon Awbrey on September 30, 2009 2:16 PM | Permalink | Reply to this Re: The Mathematical Vocation But this is exactly the image that seems not to be realistic. Collingwood doesn’t need it to be realistic. He is simply taking it to be the image held by a certain kind of utilitarian who takes the value of an action to be solely that of the action it enables. His argument is to the effect that nobody who holds this image can do so consistently. If we accept his argument, the only move that utilitarian can make is to say that there are images which they may hold, other than the chain one, where we can understand how both (i) all value is value of consequence and (ii) actions do have values. Perhaps we could have some richer network of actions enabling other actions, none of which are valuable in themselves, but where value emerges from the complex pattern. I can’t say I can see how. Collingwood’s own view is far from this, where actions have value in themselves as well as in terms of the actions they enable. Not only do I work to earn money, or work because I hold to the rule that able-bodied people should work if possible, but I take there to be an intrinsic good to the activities of my job. Posted by: David Corfield on September 29, 2009 2:42 PM | Permalink | Reply to this Re: The Mathematical Vocation I have the same sort of instinctive reaction to the chain viewpoint as I do to Asimov’s Three Laws of Robotics: it’s absolutely logically fine, except it breaks completely as soon as you have any degree of uncertainty at all in your knowledge about the inputs. 
The world is full of things in all sorts of areas that weren’t envisaged as being of practical use when they were first “developed”, and even more things where the actual use is completely different from the envisaged use, and many things that were believed would be useful turned out not to be. So any attempts to figure out in advance what things may eventually be useful are of limited efficacy, and hence any value-system predicated on that will be hopelessly inaccurate. That’s not to say that the intuitive problem has gone away: how should one justify working on something which has a (provisional) estimated probability of being useful that’s smaller than the probability associated with some other endeavour? My personal feeling has always been partly that society should be doing many things that are not of direct utility, and partly about “aptitude”: a good surgeon is likely to do more good in the world than I’ll ever do as a computer scientist/mathematician, but I just don’t have the desire or aptitude to be a good surgeon. Posted by: bane on September 29, 2009 1:35 PM | Permalink | Reply to this Re: The Mathematical Vocation “The world is full of things in all sorts of areas that weren’t envisaged as being of practical use when they were first “developed”, and even more things where the actual use is completely different from the envisaged use” In fact, the truth is even stronger. For most hi-tech products, most ultimate applications are not what the inventors originally envisaged. This is such a well-known phenomenon that product engineers, marketers and marketing researchers regularly make use of it, involving so-called “lead-users” of technologies in the design and marketing of new products. The pioneer in this area was Eric von Hippel, of MIT. Posted by: peter on October 1, 2009 5:15 PM | Permalink | Reply to this Re: The Mathematical Vocation to a man or group of men who will undertake it. Sheesh.
Even in 1939 Collingwood should have had enough examples of female mathematicians. I’m usually pretty forgiving of “he” used for he/she, pre-1970 say, but this is just silly. Sorry to derail. Posted by: Allen Knutson on September 28, 2009 6:55 PM | Permalink | Reply to this Re: The Mathematical Vocation Indeed. I stumbled over that phrase and had to try reading it again. This does not have the feel of a ‘generic’ “he” but rather seems like an overt statement that pure mathematics, as a useless venture, is only taken up by males. Posted by: wren ng thornton on September 29, 2009 2:20 AM | Permalink | Reply to this Re: The Mathematical Vocation AK wrote: “Sheesh. Even in 1939 Collingwood should have had enough examples of female mathematicians.” I saw a list of famous women mathematicians and there were six, ending with Emmy Noether who died in 1935; only three, and in the last two hundred years, were accomplished pure mathematicians. This may be a prejudice but people think of pure mathematicians as exceptional rather than mediocre and that’s the bias I read into Collingwood. When he wrote this piece there was no living famous or highly competent woman pure mathematician. Likely this had something to do with the prejudice of male-dominated institutions of higher learning, but is that the sole cause? Women think differently than men and have different physical brain structure. I don’t think Collingwood meant an average man or group of male mathematicians elected to pursue pure mathematics, so then he wouldn’t mean an average woman or a group of women pure mathematicians. By far, there were many more accomplished male pure mathematicians at the time while women in the same class were rare, and one doesn’t generalize a rule from the exceptional female examples.
So perhaps he shared in the common academic male opinion of the potential of women for mathematics which was held at the time he wrote this; but based on the evidence available to him, what he wrote is true if one accepts the premise that he was describing a group that had demonstrated abstract accomplishments (nearly all men) rather than a rare individual female, and certainly not a group of women who had demonstrated accomplishments in the abstract area of pure mathematics.

Collingwood wrote: “to a man or group of men who will undertake it.” That portion of his remark is found in the sentence which talks about ‘pure mathematicians’. Your criticism uses the phrase “should have had enough examples of female mathematicians.” Female mathematicians are not necessarily pure mathematicians. So “enough examples”? There was not one living example of a gifted female pure mathematician at the time Collingwood expressed his opinion, and only 3 such in the two hundred years preceding his statement. That means he could have made such a statement using “he” based only on the evidence and without any prejudice of political correctness at all. Perhaps your hunch is right, blatant political incorrectness, but that is to be inferred from his era and not from what he wrote. Was he supposed to claim that potentially women should be included without substantial evidence, but on egalitarian grounds?! You are going uphill and against the evidence to assert that the discrepancy (there are more male genius pure mathematicians than female) is due solely to environmental factors.

“Women have proportionately smaller brains than do men, but apparently have the same general intelligence test scores. Thus, I have proposed that the sex difference in brain size relates to those intellectual abilities at which men excel. Women excel in verbal ability, perceptual speed, and motor coordination within personal space: men do better on various spatial tests and on tests of mathematical reasoning.
It may require more brain tissue to process spatial information.”

Posted by: Stephen Harris on September 29, 2009 5:54 AM | Permalink | Reply to this

Re: The Mathematical Vocation

I have to say that much as I’m irked by Collingwood’s apparent sexism in the quoted passage, I don’t find it surprising. This is pre-WW2 Oxbridge after all. Perhaps I’m too defeatist, but I find myself just taking it in stride en route to the more interesting points he makes.

Short rejoinder to Stephen Harris’ comment: I can’t work out if he is saying we should evaluate Collingwood in context, or that Collingwood was justified in his turn of phrase or supposed view. You do realise that the quote you give, which comes from a letter sent to the NYRB, is given a pretty good kicking by the author of the original piece? Which makes me wonder why you added it.

Posted by: yemon choi on September 29, 2009 8:43 AM | Permalink | Reply to this

Re: The Mathematical Vocation

Even taking into consideration the idea that ‘man’ may have been used to denote an individual of our species, rather than a male, as we do with ‘dog’ and ‘fox’, we may continue to detect a note of sexism here. And it’s wonderful that we’re all now indignant about the prejudices underlying such modes of expression from earlier times. But to feel surprise about these modes of expression can only result from a lack of historical knowledge. You only have to read a little to know that that was a common way of speaking back in the 1930s. And people continued to speak this way, and not just in Oxbridge. Listen to Feynman lecturing in 1964 and you hear him use similar expressions when he explains what the physicist does.

Posted by: David Corfield on September 29, 2009 10:13 AM | Permalink | Reply to this

Re: The Mathematical Vocation

I think it’s inefficient to get worked up over how people used to talk. In this case, suppose Collingwood had added an aside to justify choosing to say “he”.
Suppose he said most women are predisposed to become poor pure mathematicians, and then he predicted that no woman would win the Fields Medal in the 20th century. There is a difference between making a sexist or any “ist” remark and observing some true fact that sheds a poor light on some group. Often there are 4 winners of the Fields Medal. 30% of Math PhDs were earned by women in 2006, 70% by men. What are the chances that no woman will win a Fields Medal if they are just as gifted as their male counterparts, though fewer in number? I think the a priori odds of a woman winning the Fields Medal in 1998, or 2002, or 2006, or 2010, with 4 slots open, are much better than 20 to 1 that a woman should have won a prize. Because winners have until 40 to achieve the prize, it reduces the 20 to 1 odds somewhat. What is the reason no women have won the FM? Is it all sociological: old white men serving as FM judges; dolls instead of computer toys; teachers and counselors steering them all towards Home Economics classes? Or is there a physical reason? This has been a controversy for many years. For instance, women have a thicker bundle in their corpus callosum, and the effect is not well understood.

On another note, I agree with bane about the lack of prediction. There is no absolute ethical standard (not even evolution) which guides a society’s choice about the usefulness of pure mathematics. There is no way beforehand to determine what is ultimately best. So that leaves a culture with pragmatic choices, even about supporting the fine arts. We put a man on the moon in 1969 and were on track to send a man to Mars ten years later. Political expediency derailed that goal. I read that Australian Maths departments are under pressure to be consolidated into the Computer Depts.
Posted by: Stephen Harris on September 29, 2009 4:44 PM | Permalink | Reply to this

Re: The Mathematical Vocation

yemon choi wrote: “I can’t work out if he is saying we should evaluate Collingwood in context, or that Collingwood was justified in his turn of phrase or supposed view:” …

I pointed out that there was insufficient evidence to regard Collingwood as a sexist because he used the word “he”, which seemed to view women as inferior pure mathematicians. There is more to it than the simplistic noticing of “he”.

— “Minds and Bodies: philosophers and their ideas” by Colin McGinn, page 244. CM writes: “The true enemy of democracy is the anti-intellectual, the brain-washer, the prejudice-pumper, since she undermines what alone makes democracy workable. The forces of cretinization are, and have always been, the biggest threat to the success of democracy as a way of allocating political power: this is a fundamental conceptual truth, as well as a lamentable fact of history.”

———————

SH: Next McGinn quotes Collingwood (C); he evidently holds C in high regard. CM: “Collingwood identifies a deeper problem: [quotes C] ‘It is much easier for any kind of man known to me to doze off into daydreams which are the first and most seemingly innocent stage of craziness.’” …

SH: Did you notice that CM used the word “she” in “since she undermines” in a critical sentence? That is not evidence of sexism. Did you notice that C used the word “man” in a critical way? That indicates that he is using “he” and “man” in a generic way, not a sexist way. Around the 1980s, feminists made an issue over using “he” generically, and the rules were changed to what is now politically correct. Before then, you had to read what a person wrote to decide if they were sexist. Now if you read “Monks and Morals”, where he wrote the challenged “he”, you will see that Collingwood is championing erasing preconceived prejudices.
Finally, I quote both sides of an issue because I’m interested in presenting both sides of an argument. As in whether _men_ are predisposed to be better at mathematical reasoning. You, and others, might not know it, but this has been a controversial issue for some time; it is not simply a given that men and women have equal areas of expertise in their thinking. It happens that the brain is strongly influenced by hormones. They gave a number of women IQ tests. Then they injected them with testosterone and gave them another IQ test. Their areas of strength were altered by the testosterone: more like males, with higher scores in mathematical reasoning than in the first test they took. So this is not proof. It is the reason why it is ill-founded to assume that Collingwood made a sexist assumption by elevating males to better pure mathematician status; it might be factual on his part, rather than sexist.

But mainly, I think his remark is not sexist because using “he” was politically correct when he wrote that article. I think people jumped to the conclusion that because C used the word “he”, he was impugning women. I think you have to look at other things people wrote back in that era to decide if they had a sexist attitude or were just following the normal practice of the time. It looks to me like C deplored all kinds of stereotypical prejudice. Of course you can’t correct some fault that you are not aware of, such as a hidden (to oneself) prejudice. CM is quite outspoken. Also, C did not particularly associate with his Oxford colleagues, so there is no guilt by association. No, Oxford academics were typically sexist, so likely C was too.

Posted by: Stephen Harris on October 1, 2009 9:07 AM | Permalink | Reply to this

Re: The Mathematical Vocation

The latest trendy idea in public research funding in Britain is ‘impact’. When applying for funding you now have to fill in a large section on the social and economic impact of the research.
Responsibility for higher education and research funding was recently moved into the Department for Business, Innovation and Skills, headed up by our so-called Prince of Darkness Peter Mandelson (the Rt Hon Lord Mandelson, First Secretary of State) — a very powerful and, by most accounts, extremely intelligent, though much despised man. The word on the street is that the move to this emphasis on impact comes directly from his lordship.

Our much-hated national Research Assessment Exercise, an enormously bureaucratic procedure by which ‘research outputs’ are measured to establish levels of university funding, is about to be replaced by the Research Excellence Framework, in which a large weighting will be given to — yes, you guessed it — the social and economic impact of our research.

I was talking to a historian friend about this and she said that mathematicians would be alright because people like Einstein have lots of impact. (Actually she said “Ask the people of Hiroshima how much impact he had.”) I had to point out to her that the four or five-year window of the Research Excellence Framework would make it impossible to link nuclear technology with Einstein’s original ‘research outputs’.

Posted by: Simon Willerton on September 29, 2009 11:43 AM | Permalink | Reply to this

Re: The Mathematical Vocation

Oddly, I think Collingwood could be read as saying there is a light in which the movement towards an appreciation of ‘social impact’ could be seen as positive, hard though it may be to do so for a directive emanating from the Prince of Darkness. Can you say of the citizens of Sheffield that they are

…grateful to the pure mathematician for doing his job, and proud of him for being so clever as to be able to do it; not that every one else should rush in to share his life, but that even if his neighbours feel no call to share it they should honour him for living as he does.
The fact that they do so honour him is a proof that they want a life of that kind to be lived among them, and feel its achievements as a benefit to themselves.

Posted by: David Corfield on September 29, 2009 3:25 PM | Permalink | Reply to this

Re: The Mathematical Vocation

A conspiracy to queer the data seems to be a delightfully mischievous endeavor. Those of us in the colonies who want to help our colleagues across the pond can simply throw in a few random citations. We could even include a sentence in the introduction, “As is standard within this area, the authors point out that this work has nothing whatsoever to do with the \cite{FavBrit1,FavBrit2}.”

Posted by: Scott Carter on September 29, 2009 5:15 PM | Permalink | Reply to this

Re: The Mathematical Vocation

To read Tim Gowers wrestling with similar issues see The Importance of Mathematics, which is the subject of a post at God Plays Dice.

Posted by: David Corfield on September 29, 2009 5:13 PM | Permalink | Reply to this

Re: The Mathematical Vocation

I really enjoyed this paper; Gowers seems so grounded, including his choice not to introduce the word “fractal”, which might have proved to be a distraction. Thanks!

Posted by: Stephen Harris on September 30, 2009 3:46 AM | Permalink | Reply to this

Re: The Mathematical Vocation

The most useful thing about utilitarianism, at least to politicians, is that so few people stop to ask who decides what “useful” means.

Posted by: Gavin Wraith on September 29, 2009 9:53 PM | Permalink | Reply to this

Mathematical Mystery Tour

Somewhat in the same ballpark (cricket green), did anyone see the play “A Disappearing Number”, weaving its dramatic tapestry on the web of mutual musement that Hardy and Ramanujan spun around and between themselves?

Posted by: Jon Awbrey on September 30, 2009 1:04 AM | Permalink | Reply to this

Re: Mathematical Mystery Tour

I did, and was disappointed.
They took as their starting point the story of Ramanujan and Hardy, which had heaps of dramatic potential, and who were already fascinating characters in their own right, and danced most post-modernly and brazenly about several timeframes trying to dramatise the nature of inquiry itself, with characters it proved hard to care a jot about. Too much spectacle, and way too little of the real meat of drama: exegesis of character and motive.

Posted by: Mozibur Ullah on October 5, 2009 5:53 PM | Permalink | Reply to this
This web page relates to the second edition, ISBN 1-4020-0964-X (Hardback) and 1-4020-0966-6 (Paperback). A third edition has just been published by Springer.

In connection with the second edition, I am making available a set of files in Maple and Mathematica that facilitate the solution of boundary value problems in Elasticity. These can be accessed at the URL http://www-personal.umich.edu/~jbarber/elasticity/maple-and-mathematica2.html

If you want to explore this resource, I suggest you start by clicking on either Programming in Maple or Programming in Mathematica and then on to `Catalogue of Maple files' or `Catalogue of Mathematica files'. If you have never used these methods to solve problems, you will be surprised how effective they are. You will however need to have Mathematica or Maple installed on your computer system. Additional resources for Mathematica solutions of some elasticity problems can be found at http://documents.wolfram.com/applications/structural/. In particular, Chapters 3 and 4 of this resource apply to problems from Chapters 17 and 16 respectively of `Elasticity'.

In the first printing of the second edition, pages 30 and 300 were incorrectly left blank. The correct text for these pages, which contains problems 2.3 to 2.8 and problems 21.4, 21.5 respectively, can be downloaded at `Page 30' and `Page 300'. If you find any other errors in the book or the electronic files, please let me know at jbarber@umich.edu. You can download my most recent list of errata at `Errata'.

The second edition contains 223 end-of-chapter problems. These range from routine applications of the methods described in the chapter to quite challenging problems suitable for student projects. Most three-dimensional problems are only really practicable when using Mathematica or Maple. Problems 24.8 and 24.9 are more difficult than they look!
This is because the tractions on the spherical surface due to the point force(s) cannot be written in terms of a finite Fourier series in beta. This contrasts with the corresponding cylindrical problems 12.1, 12.2, 12.3. In fact, these tractions are still weakly (logarithmically) singular, though the dominant singularity associated with the point force has been removed. They can be removed by an infinite series of spherical harmonics, but the series will be rather slowly convergent. A more rapidly convergent solution can be obtained by first removing the logarithmic singularity.

A solution manual is available, containing detailed solutions to all the problems, in some cases involving further discussion of the material and contour plots of the stresses etc. Bona fide instructors should contact me at jbarber@umich.edu if they need the manual and I will send it out as zipped .pdf files. Please tell me which edition of the book you have.

An analytical tool using MatLab has been developed for determining the nature of the stress and displacement fields near a fairly general singular point in linear elasticity. This is based on the method outlined in Section 11.2. For more information, click here.

CHAPTER 16 TORSION OF A PRISMATIC BAR
CHAPTER 17 SHEAR OF A PRISMATIC BAR
CHAPTER 28 THE PENNY-SHAPED CRACK

Since the first edition of this book was published, there have been major improvements in symbolic mathematical languages such as Maple and Mathematica, and this has opened up the possibility of solving considerably more complex and hence interesting and realistic elasticity problems as classroom examples. It also enables the student to focus on the formulation of the problem (e.g. the appropriate governing equations and boundary conditions) rather than on the algebraic manipulations, with a consequent improvement in insight into the subject and in motivation.
During the past 10 years I have developed files in Maple and Mathematica to facilitate this process, notably electronic versions of the Tables in the present Chapters 19 and 20 and of the recurrence relations for generating spherical harmonics. One purpose of this new edition is to make this electronic material available to the reader through the Kluwer website www.elasticity.org. I hope that readers will make use of this resource and report back to me any aspects of the electronic material that could benefit from improvement or extension. Some hints about the use of this material are contained in Appendix A. Those who have never used Maple or Mathematica will find that it takes only a few hours of trial and error to learn how to write programs to solve boundary value problems in elasticity.

I have also taken the opportunity to include substantially more material in the second edition, notably three chapters on antiplane stress systems, including Saint-Venant torsion and bending, and an expanded section on three-dimensional problems in spherical and cylindrical coordinate systems, including axisymmetric torsion of bars of non-uniform circular cross-section. Finally, I have greatly expanded the number of end-of-chapter problems. Some of these problems are quite challenging; indeed several were the subject of substantial technical papers within the not too distant past, but they can all be solved in a few hours using Maple or Mathematica. A full set of solutions to these problems is in preparation and will be made available to bona fide instructors on request.

Back to J.R.Barber's homepage.
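As a rough, generic illustration of the kind of recurrence relation the electronic tables automate (this sketch is not taken from the book's Maple/Mathematica files), Bonnet's recursion generates the Legendre polynomials P_n(cos θ) that underlie the axisymmetric spherical harmonics:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) by Bonnet's recursion:
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    p_prev, p = 1.0, x  # P_0(x) and P_1(x)
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# Spot checks against closed forms
print(legendre(2, 0.5))  # P_2(x) = (3x^2 - 1)/2 -> -0.125
print(legendre(3, 1.0))  # P_n(1) = 1 for all n -> 1.0
```

The same three-term pattern, starting from the first two polynomials and marching upward, is what a computer-algebra table of harmonics encodes symbolically rather than numerically.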
Quick labels within figures

August 26, 2011 By Karl Broman

One of the coolest R packages I heard about at the useR! Conference: Toby Dylan Hocking's directlabels package for putting labels directly next to the relevant curves or point clouds in a figure.

I think I first learned about this idea from Andrew Gelman: that a separate legend requires a lot of back-and-forth glances, so it's better to put the labels right by the relevant bits. For example, rather than this: [figures omitted]

I've adopted this approach as much as possible, though it often requires a bit of work (and thought) to get the labels in just the right place. Here's the code I used for the first of those pictures. (It was relatively easy here.)

load(con <- url("http://www.biostat.wisc.edu/~kbroman/blog/mapexpansion.RData"))
plot(dat[,1], dat[,4], xlab=expression(paste("Generation ", F[k])),
     ylab="Map expansion", type="l", lwd=2, col="black", las=1,
     xaxs="i", yaxs="i", ylim=c(0, 7))
for(i in 2:3)
  lines(dat[,1], dat[,i], lwd=2, col=c("blue", "red")[i-1])
text(17, dat[18,-1]-0.1, paste(colnames(dat)[-1], "-way", sep=""),
     adj=c(0,1), col=c("blue","red","black"))

The aim of the directlabels package is to get this without effort. I need to switch to either lattice or ggplot2 (vs. base graphics). But I should be doing that anyway. I'll try lattice for this. I rearrange the data a bit, call xyplot and then use direct.label to make the actual plot, as follows. (Note the use of with and gl, which I just learned about from Richie Cotton.)

library(lattice)
library(directlabels)
dat2 <- with(dat, data.frame(gen=rep(gen,3),
                             mapexpansion=c(two, four, eight),
                             cross=gl(3, nrow(dat), labels=c("two","four","eight"))))
p <- xyplot(mapexpansion ~ gen, data=dat2, groups=cross, type="l", lwd=2)
direct.label(p)

For a final figure for publication, one will want to do some editing by hand, but for day-to-day graphics, this looks really useful.
The following is the “real” version of the above figure, from a paper under review, using a mixture of legend and direct labels. [figure omitted] Here’s another figure I’m quite proud of, from a paper nearing submission. [figure omitted]
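Setting the R specifics aside, the direct-labeling idea itself is simple to sketch in any plotting library. Here is a minimal, illustrative matplotlib version (not from the original post; the data are invented) that writes each series name at its curve's last point, in the curve's own colour, instead of drawing a legend:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# Invented data: three curves, loosely echoing the map-expansion example
x = np.linspace(1, 20, 50)
curves = {"two": np.log(x), "four": 1.5 * np.log(x), "eight": 2 * np.log(x)}

fig, ax = plt.subplots()
for name, y in curves.items():
    (line,) = ax.plot(x, y, lw=2)
    # Direct label: text just past the curve's final point, matching its colour
    ax.annotate(name, xy=(x[-1], y[-1]), xytext=(3, 0),
                textcoords="offset points", va="center",
                color=line.get_color())
ax.set_xlabel("Generation")
ax.set_ylabel("Map expansion")
fig.savefig("direct_labels.png")
```

The only real work is picking the anchor point; directlabels automates that choice (and collision avoidance), which is exactly what hand-rolled versions like this one lack.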
Re: st: AW: Keeping trailing zeros when formatting a decimal

From: Miranda Kim <mk@mrc.soton.ac.uk>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: AW: Keeping trailing zeros when formatting a decimal
Date: Thu, 01 Oct 2009 13:54:51 +0100

For the leading zero issue, I want it to display say "-0.22" rather than "-.22", and putting a 0 after the % doesn't seem to work:

. display %09.2g 0.5259

but I used:

subinstr(string(-0.22, "%9.2g"), ".", "0.",.)

which does the job. However I'm still stuck for the trailing zeros...

I use "g" rather than "f" because I am wanting to apply this to many different scales of decimal numbers, whether it be 0.000026789 or 0.23897, and just want to keep 2 significant figures. My little program goes as follows:

program def numformat, rclass
    args num
    if abs(`num') > 1 {
        return local num = string(round(`num', 0.01), "%9.2f")
    }
    if abs(`num') < 1 & abs(`num') >= 0.0001 {
        return local num = subinstr(string(`num', "%9.2g"), ".", "0.",.)
    }
    if abs(`num') < 0.0001 {
        return local num = "< 0.0001"
    }
end

Martin Weiss wrote:

Most of your problems seem to be due to the fact that you are using "g" instead of "f" in your formatting directives. See [U], 12.5 for more info. Leading zeroes can be induced by inserting a zero after the percentage sign. Also note that you do not need to use the -string()- function, as -display- is able to apply a formatting directive on its own, as seen in the last row:

di "`=string(-0.000029, "%9.2g")'"
di in red "`=string(-0.000029, "%09.2f")'"
di "`=string(-0.0000201, "%9.2g")'"
di in red "`=string(-0.0000201, "%7.6f")'"
di "`=string(-0.000029, "%9.1g")'"
di in red "`=string(-0.000029, "%09.1f")'"
di in red %09.1f -0.000029

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Miranda Kim
Sent: Thursday, 1 October 2009 13:33
To: statalist@hsphsun2.harvard.edu
Subject: st: Keeping trailing zeros when formatting a decimal

I would be very grateful for advice on the following basic formatting questions... To store a number as a string with a format showing 2 significant figures, I do the following, for example:

di "`=string(-0.000029, "%9.2g")'"

If the second significant figure is a zero, how can I make sure this is still displayed? The following produces:

. di "`=string(-0.0000201, "%9.2g")'"

when I want it to display "-0.000020"

Also, how can I make sure it displays the zero before the decimal point?

Also, why does

. di "`=string(-0.000029, "%9.1g")'"

not show only 1 significant figure?

Many thanks for your help.

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
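The underlying issue in this thread, that a general "%g"-style format drops trailing zeros, is not Stata-specific. As a hedged illustration (not Stata code, and not from the thread), one way to keep a fixed number of significant figures, including trailing zeros and the leading zero before the decimal point, is to compute the needed number of decimal places from the magnitude and then use a fixed-point format:

```python
import math

def sigfig(num, sig=2):
    """Format num with `sig` significant figures, keeping trailing
    zeros and a leading zero before the decimal point."""
    if num == 0:
        return "0." + "0" * (sig - 1)
    # Position of the leading significant digit (may be negative)
    exponent = math.floor(math.log10(abs(num)))
    decimals = max(sig - 1 - exponent, 0)
    return f"{num:.{decimals}f}"

print(sigfig(-0.0000201))  # -0.000020  (trailing zero kept)
print(sigfig(-0.22))       # -0.22      (leading zero shown)
print(sigfig(0.5259))      # 0.53
```

This mirrors what Miranda's numformat program is reaching for: a general format to find the magnitude, then a fixed format to pin down the displayed digits.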
Lokal präsentierbare Kategorien

Number 221 - University of Edinburgh, 1998. Cited by 12 (6 self).
"... Term rewriting systems are widely used throughout computer science as they provide an abstract model of computation while retaining a comparatively simple syntax and semantics. In order to reason within large term rewriting systems, structuring operations are used to build large term rewriting systems from smaller ones. Of particular interest is whether key properties are modular, that is, if the components of a structured term rewriting system satisfy a property, then does the term rewriting system as a whole? A body of literature addresses this problem, but most of the results and proofs depend on strong syntactic conditions and do not easily generalize. Although many specific modularity results are known, a coherent framework which explains the underlying principles behind these results is lacking. This thesis posits that part of the problem is the usual, concrete and syntax-oriented semantics of term rewriting systems, and that a semantics is needed which on the one hand elides unnecessary syntactic details but on the other hand still possesses enough expressive power to model the key concepts arising from ..."

1997. Cited by 9 (3 self).
"... A category may bear many monoidal structures, but (to within a unique isomorphism) only one structure of "category with finite products". To capture such distinctions, we consider on a 2-category those 2-monads for which algebra structure is essentially unique if it exists, giving a precise mathematical definition of "essentially unique" and investigating its consequences. We call such 2-monads property-like. We further consider the more restricted class of fully property-like 2-monads, consisting of those property-like 2-monads for which all 2-cells between (even lax) algebra morphisms are algebra 2-cells. The consideration of lax morphisms leads us to a new characterization of those monads, studied by Kock and Zoberlein, for which "structure is adjoint to unit", and which we now call lax-idempotent 2-monads: both these and their colax-idempotent duals are fully property-like. We end by showing that (at least for finitary 2-monads) the classes of property-likes, fully property-like ..."

Cited by 2 (2 self).
"... Abstract. Compact categories have lately seen renewed interest via applications to quantum physics. Being essentially finite-dimensional, they cannot accommodate (co)limit-based constructions. For example, they cannot capture protocols such as quantum key distribution, that rely on the law of large numbers. To overcome this limitation, we introduce the notion of a compactly accessible category, relying on the extra structure of a factorisation system. This notion allows for infinite dimension while retaining key properties of compact categories: the main technical result is that the choice-of-duals functor on the compact ..."
Feasterville Trevose Math Tutor Find a Feasterville Trevose Math Tutor ...At college level, he has tutored students from the Universities of Princeton, Oxford, Pennsylvania State, Drexel, Temple, Phoenix, and the College of New Jersey. Dr Peter offers assistance with algebra, pre-calculus, SAT, AP calculus, college calculus 1,2 and 3, GMAT and GRE. He is a retired Vice-President of an international Aerospace company. 10 Subjects: including algebra 1, algebra 2, calculus, prealgebra ...I am certified in Pennsylvania. I have successfully taught children to achieve high levels of success in all academic, social and emotional areas. I have taught many inclusion classes in my years of teaching in Philadelphia. 19 Subjects: including algebra 2, English, writing, linear algebra ...Tutoring this subject is a mighty and noble challenge and is rewarding. I try to make the student's entrance into algebra one that flows from their knowledge of arithmetic. I constantly go back to the principles of arithmetic and show the student how algebra flows from the knowledge of arithmetic he or she already has in hand. 11 Subjects: including prealgebra, algebra 2, algebra 1, reading ...Basically, physics requires thinking in precise detail about the interactions of matter and energy in specific situations, and reducing these thoughts to appropriate mathematical expressions. (There are a few essential "arbitrary" rules to learn, also, but not very many!) Precalculus is a catch-a... 23 Subjects: including algebra 1, algebra 2, ACT Math, calculus ...I received my bachelor's degree in biochemistry and molecular biology and graduated with honors, with the completion of a thesis project in the subject matter. While it has been a long time since I have been immersed with this coursework, I understand the biochemical mechanisms and with some per... 12 Subjects: including algebra 1, algebra 2, biology, chemistry
{"url":"http://www.purplemath.com/feasterville_trevose_pa_math_tutors.php","timestamp":"2014-04-20T07:01:10Z","content_type":null,"content_length":"24285","record_id":"<urn:uuid:e80bcb89-aeb6-49fd-a8a4-a05089c18516>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Greenbelt Prealgebra Tutor Find a Greenbelt Prealgebra Tutor ...Writing: During my previous job I was required to write multitudes of technical reports. I am very skilled with technical writing, and I can certainly help with analytic writing. I achieved a 750 on the SAT math section. 32 Subjects: including prealgebra, reading, algebra 2, calculus ...Additionally, I received a 90% or above on the following content tests: Pre-algebra, Algebra, Geometry, Algebra II, and Trigonometry. I would love to help you build your confidence in mathematics whether you are strong in math or if you may loathe it. Check out how I would help you by viewing actual math questions that I have answered for other students in the "Resources" tab. 5 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...Whether your favorite music is rock, country, Latin, jazz or even classical, guitar is one of the core instruments in many ensembles. I can help you pick out a guitar that fits the style of music you love/want to play (if you don't already have one), show you how to read music and tablature, as ... 49 Subjects: including prealgebra, chemistry, reading, physics ...I also have 7 plus years in piano lessons, 2 years in violin, and 3 plus years in choir and sight reading. I love music and I love teaching it and sharing it with others. I took piano lessons for at least 7 years, performed at conferences, and have taught students for about 2 years. 56 Subjects: including prealgebra, reading, English, writing ...My approach to tutoring is to utilize the knowledge the student already possess and then help guide them to the answer. Many students know much more then the think, but often need guidance to tap into their own knowledge bank. This style of teaching builds critical thinking skills so that the s... 16 Subjects: including prealgebra, chemistry, biology, algebra 1
{"url":"http://www.purplemath.com/greenbelt_prealgebra_tutors.php","timestamp":"2014-04-19T02:47:02Z","content_type":null,"content_length":"24147","record_id":"<urn:uuid:a8cdf6ed-5596-4157-802e-2178f9f0e4d4>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Can anybody explain how 'factoring a perfect square trinomial' works? please: 25w^2 - 20w + 4
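As an aside on the trinomial in the question: 25w^2 - 20w + 4 fits the perfect-square pattern a^2 w^2 - 2ab w + b^2 = (aw - b)^2 with a = 5 and b = 2 (check: 25 = 5^2, 4 = 2^2, and 20 = 2·5·2). A quick numeric sanity check in plain Python:

```python
# Check that 25w^2 - 20w + 4 == (5w - 2)^2 at several sample points.
# Pattern: a^2 w^2 - 2ab w + b^2 = (a w - b)^2 with a = 5, b = 2.

def trinomial(w):
    return 25 * w**2 - 20 * w + 4

def factored(w):
    return (5 * w - 2) ** 2

for w in [-3, -1, 0, 0.5, 1, 2, 10]:
    assert trinomial(w) == factored(w), w

print("25w^2 - 20w + 4 = (5w - 2)^2 checked at sample points")
```

Two polynomials of degree 2 that agree at three or more points are identical, so checking a handful of values is already conclusive here.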
{"url":"http://openstudy.com/updates/511bdd83e4b06821731ad846","timestamp":"2014-04-16T10:20:53Z","content_type":null,"content_length":"35137","record_id":"<urn:uuid:c9e788c9-620f-4b44-8875-600dbf3108b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Energy Based Seismic Design and Evaluation Procedures for Reinforced Concrete Bridge Columns Mander, J., Energy Based Seismic Design and Evaluation Procedures for Reinforced Concrete Bridge Columns . in Research Accomplishments, 1986-1994: The National Center for Earthquake Engineering Research, pages 207-216. (Buffalo : National Center for Earthquake Engineering Research, September 1994) This project is concerned with the experimental determination and computational micromodeling of energy absorption or cyclic fatigue capacity of reinforced concrete bridge piers. The results are used with a new smooth hysteretic rule to generate seismic energy demand spectra. By comparing the ratio of energy capacity to demand, inferences of column damageability can be made. This seismic capacity analysis of bridge piers is developed starting from the basic principles of micromechanics. Constitutive models that predict the hysteretic behavior of mild and high strength steel reinforcing bars are dealt with in detail; stability, degradation and consistency of cyclic behavior is explained and an energy based low cycle fatigue model is proposed. A hysteretic model for confined and unconfined concrete subjected to either cyclic tension and/or compression is advanced. A column analysis program, UB-COLA, is developed to predict the behavior of columns when subjected to large inelastic cyclic deformation. The axial, flexure and shear deformations are modeled and all of the various failure modes such as longitudinal bar or transverse hoop fracture, or concrete crushing and bar buckling are captured. Flexural deformations are modeled using a fiber element routine. Shear deformations are modeled by developing a new Cyclic-Inelastic-Strut and Tie (CIST) model. This computational model was validated through experiments on bridge piers typical of construction in the eastern and central United States. 
These experimental studies included cyclic loading tests on models of entire piers (at reduced scales of 25 to 33%) as well as full size subassemblage tests on column-to-cap connections retrieved from prototype bridge structures. A smooth rule-based hysteretic model was developed to simulate bridge pier behavior. Using this model, nonlinear dynamic analyses were conducted to assess seismic energy demand and the associated inelastic energy based fatigue spectra. Seismic analysis and design recommendations regarding the assessment of fatigue energy are made based on the nonlinear dynamic analysis. A methodology for the evaluation of existing bridge structures is proposed; this incorporates traditional strength and ductility aspects and fatigue energy demand. The relevance of fatigue aspects for the seismic design of new bridge structures is also demonstrated. It is shown that the present code use of force reduction factors that are independent of natural period, are unconservative for short period stiff structures. Recommendations are made for force reduction factors to be used in fatigue resistant seismic design. Back to table of contents for this report This research is being performed to develop seismic evaluation procedures for bridge columns using energy based analyses that implicitly account for the duration effects of earthquakes. The evaluation first assesses the hysteretic energy absorption capacity of bridge columns and then compares this with the demand imposed by a critical earthquake. A dual experimental-theoretical approach is adopted whereby the energy absorption capacity of typical bridge columns is determined by laboratory testing. The test results are used to validate computational models that are based on the hysteretic micromechanical behavior of confined and unconfined concrete and reinforcing steel. 
From hysteretic macro models that are validated by the experiments as well as the associated analytical micromodeling predictions, the hysteretic energy and cyclic loading demand is determined for historically recorded earthquakes. This research task is part of NCEER's Highway Project. Task numbers are 91-3412 and 10693-E-5.2. Traditional (ATC-6-2, 1983) and more recent (Chai, Priestley and Seible, 1991 and NCEER, 1993) seismic evaluation procedures compare displacement ductility capacity with demand. If for a given peak ground acceleration the capacity exceeds the demand, the structure is considered to be able to survive the entire earthquake; however, no recognition is made in these analyses regarding the length or energy content of earthquake ground motions. This research has focused on using energy as the principal means of determining the damage potential of bridge columns. The proposed energy-based damage evaluation procedure is a three-step process as follows: (a) evaluate the hysteretic energy and cyclic loading capacity, (b) evaluate the seismic energy demand, and (c) compare the energy capacity/demand ratio; if this is less than unity, incipient failure is expected and the bridge pier should ideally be retrofitted. In what follows in this paper are the results of research associated with the determination of hysteretic energy capacity and demand, with a particular emphasis on non-ductile bridges in the eastern and central United States. Evaluation of the Hysteretic Energy Capacity of Bridge Piers This phase of the research has used a dual experimental-analytical approach for the determination of hysteretic energy capacity. Several reduced scale model bridge piers, as well as full scale subassemblages, have been tested in the laboratory under quasistatic reversed cyclic loading.
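The three-step procedure above reduces to a simple ratio check once the capacity and demand energies are in hand. A minimal sketch (the numeric values below are illustrative placeholders, not figures from the report; in practice the capacity would come from an analysis such as UB-COLA and the demand from inelastic response spectra):

```python
# Energy-based damage screening: compare hysteretic energy capacity
# against seismic energy demand. A capacity/demand ratio below 1.0
# flags incipient failure, i.e. the pier should ideally be retrofitted.
# All numbers here are assumed for illustration only.

def needs_retrofit(energy_capacity_kNm, energy_demand_kNm):
    """Return (capacity/demand ratio, retrofit flag) for one bridge pier."""
    ratio = energy_capacity_kNm / energy_demand_kNm
    return ratio, ratio < 1.0

piers = {"Pier A": (850.0, 620.0),   # (capacity, demand) in kN*m, assumed
         "Pier B": (410.0, 560.0)}

for name, (cap, dem) in piers.items():
    ratio, flag = needs_retrofit(cap, dem)
    status = "retrofit recommended" if flag else "adequate"
    print(f"{name}: capacity/demand = {ratio:.2f} -> {status}")
```

The point of the energy formulation is that, unlike a pure ductility check, both sides of this ratio implicitly carry the duration and number of cycles of the ground motion.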
These experimental studies have been used to validate computer programs that have been developed to derive the hysteretic energy capacity using the principles of micromechanics. Experimental Studies A 25% scale model of a shear-critical bridge pier with twin 36 inch square columns was constructed and tested under cyclic loading. The model was based on the Jewett-Holmwood Road bridge crossing of the east branch of Cazenovia Creek at East Aurora, New York. This specimen had 1.0% longitudinal steel and only nominal hoops (#3 @ 12 inch crs in the prototype). The gravity axial load on the columns was P = 0.05 f'[c]A[g]. The prototype bridge piers were demolished as part of an Erie County bridge rehabilitation scheme. The opportunity was therefore taken to retrieve a column to cap beam connection and test this full-size specimen under reversed cyclic lateral load in the University at Buffalo seismic laboratory. Figure 1 presents the experimental specimens together with the normalized test results. It is evident that the experimental results demonstrate reasonably good ductile behavior despite poor transverse steel detailing. A second set of model and prototype pier studies were undertaken on a bridge that had a typical three circular column frame pier bent. The prototype bridge, located at Niagara Falls, New York, was a two-span overpass. The central pier had three 33 inch diameter columns 20 feet long. Each column possessed a longitudinal steel volume of 0.019 with #4 circular hoops at 12 inch centers and carried an average gravity axial load of 0.03 f'[c]A[g]. Prototype and model specimen details are presented along with the experimental results in figure 2. Again, test results show that in spite of poor (non-ductile) detailing, there is a fair degree of ductile response. 
Damage was mostly located in the connections (beam column joints) of these two piers. These prototype test results provide added confidence to the above-mentioned reinforced concrete model studies and basically confirm that well-conducted experiments on reduced scale physical models down to 25% in size show little difference when compared to their full-size companions. Analytical Studies The seismic capacity of the aforementioned piers was evaluated using existing (ATC-6-2, 1983; Chai, Priestley and Seible, 1991; NCEER, 1993) recommendations. The first of these approaches, ATC 6-2, dates back to research work done in the 1970's following the 1971 San Fernando earthquake (ATC 6-2, 1983). The focus of that approach is on the flexural and shear ductility of columns, with no recognition of possible column-to-cap beam joint vulnerability. Due to the very conservative assumptions made based on the paucity of test results at the time, shear brittle columns are generally predicted. Thus, by using these recommendations engineers may be tempted to either demolish or retrofit a bridge pier for the wrong reasons. The second recommended seismic evaluation procedure is based on more recent work by Priestley and his coworkers at the University of California, San Diego (Chai, Priestley and Seible, 1991). Recommendations have now been made for assessing joint strength and ductility capacity. These tend to agree quite well with the aforementioned experiments. New shear recommendations have also been developed, but these are still based on displacement ductility amplitude. This research project has returned to the fundamentals of micromechanics in an attempt to predict the energy absorption capacity of bridge columns. Bridge pier failure is defined as that state in which the columns are unable to sustain the gravity load of the superstructure, that is, the onset of collapse.
Incipient pier collapse may occur when: (a) the longitudinal bars fracture due to low cycle fatigue; (b) the transverse hoops fracture, thus leaving the column unconfined; (c) the lateral capacity is reduced to zero due to either shear strength deterioration, P-delta effects, or both. For columns with a moderate-to-high axial load intensity, failure modes (b) and (c) generally prevail. Most bridge columns, however, have low levels of axial load, thus either failure modes (a) or (c) occur depending on the reinforcing steel detailing. The analytical portion of this study is concerned with modeling the energy absorption (fatigue) capacity of reinforced concrete bridge columns using a cyclic dynamic Fiber Element computational model. The complete analysis methodology for bridge column capacity is developed starting from the basic principles of micromechanics. The hysteretic behavior of ordinary mild steel, as well as high strength threadbar prestressing reinforcement, was dealt with in detail by modeling cyclic stress-strain behavior and accounting for stability, degradation and consistency of cyclic behavior. An energy-based, universally-applicable low cycle fatigue model for such reinforcing steels is proposed. This new damage modeling approach using energy obviates the need for cycle counting and its use is therefore attractive for random loading situations. A hysteretic model for confined and unconfined concrete subjected to both tension or compression cyclic loading was advanced. Such sophisticated modeling was found necessary to enable the assessment of inelastic shear deformations under cyclic loading. This concrete stress-strain model is an enhanced version of the well-known model of Mander, et al (Mander, Priestley and Park, 1988). The model has been enhanced to predict the behavior of high strength concrete, and is also capable of simulating gradual crack closure under cyclic loading. 
A fiber element based column analysis computer program, UB-COLA, was developed for the purpose of accurately predicting the behavior of reinforced concrete columns subjected to inelastic cyclic deformations. The axial, flexural and shear cyclic behavior are modeled, as well as the low cycle fatigue properties of reinforcing and high strength prestressing steel bars. Fracture of transverse confining steel is modeled using the energy balance theory of Mander, et al (1988). For assessing inelastic shear deformations under reversed cyclic loading, a Cyclic Inelastic Strut-Tie (CIST) model was developed which uses the new concrete model. Figure 3 shows the essence of how shear and flexural deformations are captured in the analysis. First, the column is divided into cracked and uncracked zones, figure 3(a). Next, for shear computations, the cracked zone is modeled with equivalent strut and tie elements, figure 3(b). Using energy methods and the cyclic constitutive relations for concrete and steel, the truss deformations (figure 3(c)) are computed on an incremental step-by-step shear force control basis. The resulting total shear deformations are added to the inelastic plus elastic flexural deformations, which are computed using the fiber element analysis and moment area theory (figure 3(d)). The program proved to be reliable in predicting the failure mode of either low axial load (low cycle fatigue of longitudinal reinforcement) or high axial load columns (fracture of confining reinforcement and crushing of concrete). For shear critical columns, the cyclic inelastic behavior is accurately simulated through the CIST modeling technique. An example of the predictive capabilities of the program UB-COLA is presented in figure 4 in which the prototype specimen previously shown in figure 1 is analytically modeled. Careful instrumentation of the laboratory experiments enabled a decomposition to be made of the flexure and shear components of total column displacements.
It is evident that the modeling procedure is capable of capturing well both of these inelastic components of behavior. Hysteretic Energy Demand In order to assess the hysteretic energy and cyclic loading demand of reinforced concrete bridge piers, reliable hysteretic models that are representative of real bridge behavior are necessary. Therefore, a rule-based smooth hysteretic model was developed that is capable of capturing the behavior of bridge piers. The model parameters are determined automatically by using a system identification routine in conjunction with either (a) real experimental data from large scale laboratory tests, or (b) results generated from the reversed cyclic loading Fiber Element analysis computer program UB-COLA. An SDOF inelastic dynamic time-history analysis program was developed for using the new rule-based smooth model as well as more traditional hysteretic models such as the piece-wise linear Takeda model. Spectral results were produced by using the smooth model and an example of all the spectra generated for one earthquake is shown in figure 5. The smooth model was calibrated with the full-size bridge column experimental data to determine global parameters to simulate structural force-deformation behavior. The calibration is summarized in figure 6. The cyclic loading demand results from several analyses are summarized in figure 7, and the effective dynamic magnification of displacement response is depicted in figure 8. A complete methodology of seismic evaluation of existing bridge structures is proposed which incorporates the traditional strength and ductility aspects plus the fatigue energy demand. The relevance of fatigue aspects for the seismic design of new bridge structures is also demonstrated.
Theoretical predictions using the new CIST-Fiber Element modeling techniques were validated by a combination of reduced scale and full size experiments on the reversed cyclic loading behavior of bridge piers. It is shown that the present code use of force reduction factors that are independent of natural period are unconservative for short period stiff structures. Recommendations are made for force reduction factors to be used in fatigue resistant seismic design. Personnel and Institutions This work has been undertaken by Professor John Mander, of the University at Buffalo, and his graduate students: F.D. Panthaki, M.T. Chaudhary, S.M. Waheed and G.A. Chang. The following people have also collaborated on this project: Dr. S.S. Chen, University at Buffalo, with associated field work and the retrieval of the prototype specimens; Dr. P. Gergely, Cornell University, in critiquing the work; Dr. A.M. Reinhorn, University at Buffalo, on damage modeling of concrete elements; Dr. M. Saiidi, University of Nevada, Reno, on experimental testing; Dr. J. Kulicki of Modjeski and Masters, Inc. on providing data on typical bridge piers in the eastern and central U.S.; Erie County Department of Public Works on field testing and retrieval of prototype specimens; and Region 5, NYSDOT, on field testing and retrieval of prototype specimens. John Mander, Stuart Chen and Andrei Reinhorn (University at Buffalo); Peter Gergely (Cornell University); M. Saiid Saiidi (University of Nevada, Reno); John Kulicki (Modjeski and Masters, Inc.). Technical References ATC 6-2, "Seismic Retrofitting Guidelines for Highway Bridges", Applied Technology Council, 1983. Chai, Y.H., Priestley, M.J.N. and Seible, F., "Seismic Retrofit of Circular Bridge Columns for Enhanced Flexural Performance", ACI Structural Journal, Vol. 88, No. 5, Sept.-Oct. 1991, pp.
Mander, J.B., Priestley, M.J.N., and Park, R., "Theoretical Stress-Strain Model For Confined Concrete", Journal of Structural Engineering, ASCE, Vol. 114, No. 8, 1988, pp. 1804-1826. NCEER Report to FHWA and U.S. Congress, "Seismic Retrofitting Manual for Highway Bridges", prepared by the National Center for Earthquake Engineering Research, State University of New York at Buffalo, New York, Nov. 1993. Chang, G.A. and Mander, J.B. (1994) "Seismic Energy Based Fatigue Damage Analysis of Bridge Columns: Part I, Evaluation of Capacity", Technical Report NCEER-94-0006, National Center for Earthquake Engineering Research, Buffalo, New York, March 14, 1994. Chang, G.A. and Mander, J.B. (1994) "Seismic Energy Based Fatigue Damage Analysis of Bridge Columns: Part II, Evaluation of Demand", Technical Report NCEER-94-0013, National Center for Earthquake Engineering Research, Buffalo, New York, June 1, 1994. Mander, J.B., Panthaki, F.D., and Chaudhary, M.T. (1992) "Evaluation of Seismic Vulnerability of Highway Bridges in the Eastern United States", Lifeline Earthquake Engineering in the Central and Eastern U.S., ASCE Technical Council on Lifeline Earthquake Engineering, No. 5, pp. 72-86. Mander, J.B., Waheed, S.M., Chaudhary, M.T.A., and Chen, S.S. (1993) "Seismic Performance of Shear-Critical Reinforced Concrete Bridge Piers," Technical Report NCEER-93-0010, National Center for Earthquake Engineering Research, Buffalo, New York, May 12, 1993.
{"url":"https://mceer.buffalo.edu/publications/resaccom/94-SP02/rsconvert.asp?f=rsa21_en.html","timestamp":"2014-04-19T17:08:24Z","content_type":null,"content_length":"28176","record_id":"<urn:uuid:d5897a43-5698-47fb-b246-b4346261109f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove that F satisfies all field axioms by method of direct verification

1. The problem statement, all variables and given/known data
Consider the collection F of all real numbers of the form x + y√2, where x and y are rational numbers. Prove (by direct verification) that F satisfies all the field axioms (just like R) under the usual addition and multiplication.

2. Relevant equations
Field axioms: There exist two binary operations, called addition + and multiplication ∗, such that the following hold:
1) commutativity: x + y = y + x, xy = yx
2) associativity: x + (y + z) = (x + y) + z, x(yz) = (xy)z
3) distributivity: x(y + z) = xy + xz
4) existence of 0 and 1 such that x + 0 = x, 1 · x = x
5) existence of negatives: for every x there exists y such that x + y = 0
6) existence of reciprocals: for every x ≠ 0 there exists y such that xy = 1

3. The attempt at a solution
I just want to make sure I did this right. To prove that the collection F (all real numbers of the form x + y√2 where x and y are rational) satisfies all of the field axioms by direct verification, would you just do something like: suppose m, n, o belong to F; then m = x₁ + y₁√2, etc., and then write out m + (n + o) = x₁ + y₁√2 + ... until you return to (m + n) + o? And then proceed to do so for all the axioms mentioned above? It's supposed to be really trivial, right?
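As a sanity check on the closure and reciprocal axioms (the ones that need real work, since commutativity, associativity and distributivity are inherited from ℝ), one can represent an element x + y√2 by the rational pair (x, y) and verify with exact arithmetic that sums, products and reciprocals stay in F. The key identity is 1/(x + y√2) = (x − y√2)/(x² − 2y²), where the denominator is nonzero for (x, y) ≠ (0, 0) because √2 is irrational. A small sketch (not a proof, just a spot check on sample elements):

```python
from fractions import Fraction as Q

# An element x + y*sqrt(2) of F is stored as the rational pair (x, y).

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    # (x1 + y1 s)(x2 + y2 s) = (x1 x2 + 2 y1 y2) + (x1 y2 + y1 x2) s,  s = sqrt(2)
    return (a[0] * b[0] + 2 * a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def inv(a):
    # 1/(x + y s) = (x - y s) / (x^2 - 2 y^2); nonzero denominator since
    # x^2 = 2 y^2 with x, y rational forces x = y = 0 (sqrt(2) is irrational).
    d = a[0] ** 2 - 2 * a[1] ** 2
    return (a[0] / d, -a[1] / d)

m = (Q(3), Q(-2))      # 3 - 2*sqrt(2)
n = (Q(1, 2), Q(5))    # 1/2 + 5*sqrt(2)

assert add(m, n) == (Q(7, 2), Q(3))          # closure under addition
assert mul(m, inv(m)) == (Q(1), Q(0))        # reciprocal axiom: m * m^-1 = 1
print("closure and reciprocals verified on sample elements of F")
```

Because the coordinates are Fractions, every equality above is exact rather than floating-point approximate, which mirrors the "direct verification" the exercise asks for.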
{"url":"http://www.physicsforums.com/showthread.php?t=433684","timestamp":"2014-04-19T02:25:45Z","content_type":null,"content_length":"35077","record_id":"<urn:uuid:94af237e-04b6-4d2e-88a3-6b6ccd333008>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Sharpsburg, GA ACT Tutor Find a Sharpsburg, GA ACT Tutor I am a highly-qualified state certified teacher. I have taught for four years in a public school setting and am now looking to extend my skills into private tutoring. I was a Science major at the University of Georgia, then decided I wanted to teach and received my teaching certificate. 11 Subjects: including ACT Math, chemistry, physics, biology ...I have been teaching private voice lessons since the Spring of 2012. I have been taking ballet classes at Newnan School of Dance since 2000, with the exception of the time I spent away in college. During my time at Mercer University, I danced in the student-led dance organization Mercer Dance University. 25 Subjects: including ACT Math, reading, calculus, statistics ...I currently teach special needs students in an elementary education environment. My Masters degree is in Early Elementary Education and I am also certified in Reading-Middle Grades. I have taught Reading and Phonics at the Elementary and Middle School levels. 48 Subjects: including ACT Math, English, reading, writing I have Masters Degree in International Business Management and First Degree in Mathematics and Statistics. With teaching, consulting and banking job experience spanning over a period of 19 years. I derive satisfaction in career counseling, knowledge impartation and mentoring. 18 Subjects: including ACT Math, calculus, geometry, accounting ...I have taught Advanced Placement Literature for the past eight consecutive years while also teaching remediation classes for reluctant readers, students who fail the Georgia High School Writing and Graduation Tests, and students identified as likely to fail End of Course Tests in English subject ... 
15 Subjects: including ACT Math, English, Spanish, grammar
{"url":"http://www.purplemath.com/Sharpsburg_GA_ACT_tutors.php","timestamp":"2014-04-19T17:38:25Z","content_type":null,"content_length":"23748","record_id":"<urn:uuid:a810f9a3-e7ac-4c73-84e1-dccd889cc621>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
NAG Library Routine Document

1 Purpose

G11AAF computes $\chi^2$ statistics for a two-way contingency table. For a $2\times 2$ table with a small number of observations exact probabilities are computed.

2 Specification

SUBROUTINE G11AAF (NROW, NCOL, NOBS, LDNOBS, EXPT, CHIST, PROB, CHI, G, DF, IFAIL)
INTEGER NROW, NCOL, NOBS(LDNOBS,NCOL), LDNOBS, IFAIL
REAL (KIND=nag_wp) EXPT(LDNOBS,NCOL), CHIST(LDNOBS,NCOL), PROB, CHI, G, DF

3 Description

For a set of observations classified by two variables, with $r$ and $c$ levels respectively, a two-way table of frequencies with $r$ rows and $c$ columns can be computed.

$$\begin{array}{ccccc} n_{11} & n_{12} & \dots & n_{1c} & n_{1.}\\ n_{21} & n_{22} & \dots & n_{2c} & n_{2.}\\ \vdots & \vdots & & \vdots & \vdots\\ n_{r1} & n_{r2} & \dots & n_{rc} & n_{r.}\\ n_{.1} & n_{.2} & \dots & n_{.c} & n \end{array}$$

To measure the association between the two classification variables two statistics that can be used are the Pearson $\chi^2$ statistic, $\sum_{i=1}^{r}\sum_{j=1}^{c}\frac{(n_{ij}-f_{ij})^2}{f_{ij}}$, and the likelihood ratio test statistic, $2\sum_{i=1}^{r}\sum_{j=1}^{c} n_{ij}\log(n_{ij}/f_{ij})$, where the $f_{ij}$ are the fitted values from the model that assumes the effects due to the classification variables are additive, i.e., there is no association. These values are the expected cell frequencies and are given by $f_{ij} = n_{i.}n_{.j}/n$. Under the hypothesis of no association between the two classification variables, both these statistics have, approximately, a $\chi^2$-distribution with $(r-1)(c-1)$ degrees of freedom. This distribution is arrived at under the assumption that the expected cell frequencies, $f_{ij}$, are not too small. For a discussion of this point see Everitt (1977). He concludes by saying, '... in the majority of cases the chi-square criterion may be used for tables with expectations in excess of 0.5 in the smallest cell'. In the case of the $2\times 2$ table, i.e., $r=c=2$, the $\chi^2$ approximation can be improved by using Yates' continuity correction factor, which decreases the absolute value of $(n_{ij}-f_{ij})$ by $\tfrac12$.
For tables with a small value of $n$ the exact probabilities from Fisher's test are computed; these are based on the hypergeometric distribution. A two tail probability is computed as $\min(1, 2p_u, 2p_l)$, where $p_u$ and $p_l$ are the upper and lower one-tail probabilities from the hypergeometric distribution.

4 References

Everitt B S (1977) The Analysis of Contingency Tables Chapman and Hall
Kendall M G and Stuart A (1973) The Advanced Theory of Statistics (Volume 2) (3rd Edition) Griffin

5 Parameters

1: NROW – INTEGER Input
On entry: $r$, the number of rows in the contingency table.
Constraint: ${\mathbf{NROW}}\ge 2$.

2: NCOL – INTEGER Input
On entry: $c$, the number of columns in the contingency table.
Constraint: ${\mathbf{NCOL}}\ge 2$.

3: NOBS(LDNOBS,NCOL) – INTEGER array Input
On entry: the contingency table; ${\mathbf{NOBS}}(i,j)$ must contain $n_{ij}$, for $i=1,2,\dots,r$ and $j=1,2,\dots,c$.
Constraint: ${\mathbf{NOBS}}(i,j)\ge 0$, for $i=1,2,\dots,r$ and $j=1,2,\dots,c$.

4: LDNOBS – INTEGER Input
On entry: the first dimension of the arrays as declared in the (sub)program from which G11AAF is called.
Constraint: ${\mathbf{LDNOBS}}\ge{\mathbf{NROW}}$.

5: EXPT(LDNOBS,NCOL) – REAL (KIND=nag_wp) array Output
On exit: the table of expected values; ${\mathbf{EXPT}}(i,j)$ contains $f_{ij}$, for $i=1,2,\dots,r$ and $j=1,2,\dots,c$.

6: CHIST(LDNOBS,NCOL) – REAL (KIND=nag_wp) array Output
On exit: the table of $\chi^2$ contributions; ${\mathbf{CHIST}}(i,j)$ contains $(n_{ij}-f_{ij})^2/f_{ij}$, for $i=1,2,\dots,r$ and $j=1,2,\dots,c$.
7: PROB – REAL (KIND=nag_wp) Output
On exit: if $n\le 40$, PROB contains the two tail significance level for Fisher's exact test, otherwise it contains the significance level from the Pearson $\chi^2$ statistic.

8: CHI – REAL (KIND=nag_wp) Output
On exit: the Pearson $\chi^2$ statistic.

9: G – REAL (KIND=nag_wp) Output
On exit: the likelihood ratio test statistic.

10: DF – REAL (KIND=nag_wp) Output
On exit: the degrees of freedom for the statistics.

11: IFAIL – INTEGER Input/Output
On entry: IFAIL must be set to $0$, $-1$ or $1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, because for this routine the values of the output parameters may be useful even if ${\mathbf{IFAIL}}\ne 0$ on exit, the recommended value is $-1$. When the value $-1$ or $1$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}=0$ unless the routine detects an error or a warning has been flagged (see Section 6).

6 Error Indicators and Warnings

If on entry ${\mathbf{IFAIL}}=0$ or $-1$, explanatory error messages are output on the current error message unit.
Note: G11AAF may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the routine:

${\mathbf{IFAIL}}=1$: On entry, ${\mathbf{NROW}}<2$, or ${\mathbf{NCOL}}<2$, or ${\mathbf{LDNOBS}}<{\mathbf{NROW}}$.

${\mathbf{IFAIL}}=2$: On entry, a value in ${\mathbf{NOBS}}<0$, or all values in NOBS are zero.

${\mathbf{IFAIL}}=3$: On entry, a $2\times 2$ table has a row or column with both values $0$.

${\mathbf{IFAIL}}=4$: At least one cell has expected frequency, $f_{ij}$, $\le 0.5$. The $\chi^2$ approximation may be poor.

7 Accuracy

The exact probabilities for Fisher's exact test are computed from the hypergeometric distribution; their accuracy is that of the routine used to compute the hypergeometric probabilities.

8 Further Comments

The routine allows for the automatic amalgamation of rows and columns.
In most circumstances this is not recommended; see Everitt (1977). Multidimensional contingency tables can be analysed using log-linear models fitted by the generalized linear modelling routines.

9 Example

The data below, taken from Everitt (1977), is from patients with brain tumours. The row classification variable is the site of the tumour: frontal lobes, temporal lobes and other cerebral areas. The column classification variable is the type of tumour: benign, malignant and other cerebral tumours.

23  9  6  38
21  4  3  28
34 24 17  75
78 37 26 141

The fourth column and fourth row hold the marginal totals. The data is read in and the statistics computed and printed.

9.1 Program Text
9.2 Program Data
9.3 Program Results
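As a cross-check on the quantities defined above, here is a short Python sketch (my own illustration, not part of the NAG example programs) that computes the expected frequencies ${f}_{ij}$, the Pearson ${\chi }^{2}$, the likelihood ratio statistic and the degrees of freedom for the example table:

```python
from math import log

# Observed 3x3 brain-tumour table from the example (marginal totals omitted)
nobs = [[23, 9, 6],
        [21, 4, 3],
        [34, 24, 17]]

r, c = len(nobs), len(nobs[0])
row_tot = [sum(nobs[i]) for i in range(r)]
col_tot = [sum(nobs[i][j] for i in range(r)) for j in range(c)]
n = sum(row_tot)

# Expected frequencies: f_ij = (row total * column total) / n
expt = [[row_tot[i] * col_tot[j] / n for j in range(c)] for i in range(r)]

# Pearson chi-squared: sum over cells of (n_ij - f_ij)^2 / f_ij
chi = sum((nobs[i][j] - expt[i][j]) ** 2 / expt[i][j]
          for i in range(r) for j in range(c))

# Likelihood ratio statistic: G = 2 * sum of n_ij * log(n_ij / f_ij)
g = 2 * sum(nobs[i][j] * log(nobs[i][j] / expt[i][j])
            for i in range(r) for j in range(c) if nobs[i][j] > 0)

# Degrees of freedom for an (unamalgamated) r x c table
df = (r - 1) * (c - 1)

print(round(chi, 4), round(g, 4), df)  # chi-squared is about 7.84 on 4 degrees of freedom
```

The Pearson statistic comes out near 7.84 on 4 degrees of freedom, in line with the per-cell contribution formula given for CHIST.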
molar mass, atm, pressure, find mass. HELP

A. 125 g
B. 258 g
C. 421 g
D. 582 g
E. 864 g

I take it that these are the choices for answers to the question? If so, it looks to me like the correct answer corresponding to the initial values given in the problem statement is not in the list. What final value for the grams of N2 did you calculate? Is it possible that the starting values were modified to 'present a new problem'?
Pushing Complex Structure Forward

Let $p: E\to B$ be a covering map of $C^\infty$ manifolds, where $E$ has a complex structure. There are many cases when we want to know whether $B$ has a complex structure (which is obviously unique) making $p$ an analytic map, for example in the construction of families of elliptic curves. The difficulty is that given a small open set $U\subset B$, and pulling back the covering map to $$p|_{p^{-1}(U)}: \bigsqcup_\alpha U_\alpha\to U,$$ the $U_\alpha$ may induce incompatible complex structures on $U$. The easiest example of this phenomenon is the covering map $\mathbb{CP}^1\simeq S^2\to \mathbb{RP}^2$, where $\mathbb{RP}^2$ does not admit any complex structure, as it is not orientable. This case is not too badly behaved, however--in particular, given $U_\alpha, U_\beta\subset \mathbb{CP}^1$ over some $U\subset \mathbb{RP}^2$, the transition map $U_\alpha\to U\to U_\beta$ seems to me to be antiholomorphic. So I have two questions about this general situation:

1) Is there an example of a covering map $p: E\to B$ of $C^\infty$ manifolds with $E$ complex, such that $B$ admits some complex structure, but none making $p$ analytic?

2) Given a covering map $p: E\to B$ with $E$ complex, is there an algebra-topological obstruction to the existence of a complex structure on $B$ making $p$ analytic?

complex-geometry at.algebraic-topology dg.differential-geometry

2 Answers

For 1): take a double covering $E\to B$, where $E$ and $B$ are compact oriented surfaces of genus 3 and 2 respectively, and give $E$ a structure of Riemann surface with trivial automorphism group.

About 2): well, in the example above you see that you can deform a complex structure with no compatible complex structure on $B$ into one with a compatible structure. An algebraic-topological obstruction should be discrete, so it seems to me that the example suggests that there isn't one.
+1 for (1); I guess by algebra-topological I'm including things like the cohomology of the sheaf of holomorphic functions on $E$, which should be more rigid. Or I'd be OK with obstructions to deforming the complex structure on $E$ to one inducing a complex structure on $B$. – Daniel Litt Jan 7 '11 at 20:18

This seems to be a question about holomorphicity of diffeomorphisms in a given complex structure. Replace your covering map $E \to B$ by its Galois closure (= frame bundle) $X \to B$. Now by construction $X \to B$ is a covering space which is Galois with Galois group $G$ (= group of self bijections of a fixed fiber of $E \to B$). Since $X \to B$ factors through $E$, every complex structure on $E$ will induce a complex structure on $X$, and a complex structure on $B$ makes $E \to B$ holomorphic if and only if it makes $X \to B$ holomorphic. But the latter question is just the question of whether all elements of $G$ which act as diffeomorphisms of $X$ will preserve the complex structure. Some of them preserve it automatically, e.g. the elements of the subgroup $H \subset G$ for which $E = X/H$. But for the rest it is an actual condition. If all those diffeomorphisms preserve your complex structure, then the quotient exists as a complex manifold. If one of them doesn't, then you are out of luck. I don't think you can get more concrete obstructions.
Perimeter - Concept

Perimeter is the sum of the sides of a polygon. It is a distance and therefore is a one-dimensional property. The perimeter of a circle is called its circumference, and can be found using the formula Circumference = 2(pi)(radius). For a square, the perimeter = 4(side length), and for a rectangle, perimeter = 2(length of the width) + 2(length of the height).

A term that we use in geometry that you've definitely seen sometime in 6th, 7th, or 8th grade, sometime before Geometry, is perimeter. The perimeter is just the sum of the lengths of the sides of a polygon. So if we talk about a square, where all the sides are congruent, the perimeter is calculated by saying four times S, where S is your side length. If you have a rectangle, where you have opposite sides congruent, you're going to have two bases. So we are going to say two times B, where B is one of your bases, plus two of your heights, or two times H. In a circle we don't call it perimeter, we call it circumference. So the circumference is essentially the perimeter of a circle. And there you only need to know your radius. That is going to be equal to two times pi times your radius. So to calculate the perimeter of any polygon, just add up the sides. Specifically, for a square you can use the shortcut of four times the side length, for a rectangle you can use two times the base plus two times the height, and for a circle you can use two times pi times the radius.
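The three shortcuts above can be written as one-line functions; this is just an illustration of the arithmetic:

```python
import math

def perimeter_square(s):
    # P = 4s, all four sides congruent
    return 4 * s

def perimeter_rectangle(b, h):
    # P = 2b + 2h, opposite sides congruent
    return 2 * b + 2 * h

def circumference(r):
    # C = 2 * pi * r, the "perimeter" of a circle
    return 2 * math.pi * r

print(perimeter_square(5))         # 20
print(perimeter_rectangle(3, 4))   # 14
print(round(circumference(2), 3))  # 12.566
```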
Archives of the Caml mailing list > Message from Andrej Bauer Unexpected restriction in "let rec" expressions Date: -- (:) From: Andrej Bauer <Andrej.Bauer@f...> Subject: Re: [Caml-list] Unexpected restriction in "let rec" expressions Loup Vaillant wrote: > loop :: ((a,c) -> (b,c)) -> a -> b > loop f a = b > where (b,c) = f (a,c) You said the above was a "so-called fixpoint operator". To see in what sense this really is a fixpoint operator consider the type: ((a * c) -> (b * c)) -> a -> b (1) It is equivalent (under currying-uncurrying) to (a -> (c -> b) * (c -> c)) -> a -> b (2) We could write down a term of this type if we had a way of going from type c -> c to type c. More precisely, consider any term fix : (c -> c) -> c, where the name "fix" suggests that we will plug in a fix-point operator at the end of the day. Before reading on, you should try to write down a term of type (2), given that we have fix. I will bet that your brain will produce the same solution as described below. We can get a term of type (2) by defining let loop' f x = let (g, h) = f x in g (fix h) Converting from (2) back to (1) gives us an equivalent term let loop f x = let f' y = (fun z -> fst (f (y, z))), (fun z -> snd (f (y, z))) in loop' f' x or by beta-reducing: let loop f x = fst (f (x, fix (fun z -> snd (f (x, z))))) You are now free to plug in whatever you wish for fix, but presumably you would like fix to compute fixed points. This may be somewhat troublesome in an eager language, especially if c is not a function type. In fact, we may recover fix from loop as follows: let fix' f = loop (fun (_, z) -> (z, f z)) () To see that fix' is the same as fix, we just beta-eta reduce: fix' f = loop (fun (_, z) -> (z, f z)) () = fst (fix (fun z -> f z), f (fix (fun z -> f z))) = fix (fun z -> f z) = fix f Indeed, loop is a generalized fixpoint operator. But I think the nice picture drawn by Nicolas Pouillard is worth a thousand lambda terms. Best regards, P.S. 
Can someone think of anything else other than a fixpoint operator that we could use in place of fix to get an interesting program (maybe for special cases of c, such as c = int -> int)?
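For readers who want to run the construction, here is a transliteration into Python (my own sketch, not from the original message). It is faithful only for the special case where c is a function type such as int -> int, since the fixpoint is taken lazily by wrapping each unfolding in an extra lambda:

```python
def fix(f):
    # lazy fixpoint, valid when the fixed point is itself a function
    return lambda *args: f(fix(f))(*args)

def loop(f, x):
    # loop f x = fst (f (x, fix (fun z -> snd (f (x, z)))))
    z = fix(lambda zz: f((x, zz))[1])
    return f((x, z))[0]

def fix_prime(g):
    # fix' f = loop (fun (_, z) -> (z, f z)) (), recovering fix from loop
    return loop(lambda p: (p[1], g(p[1])), None)

fact = fix_prime(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```

As in the beta-eta reduction above, fix_prime behaves exactly like fix on function-typed arguments.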
April 8th 2006, 09:15 PM

A high speed passenger test train travels from Columbia station to Penn station in exactly 8 hours. The distance traveled in miles from Columbia at any given time in hours is given by: s(t) = -

a) How many miles has the train traveled 2 hours into the trip?
b) What is the distance in miles from Columbia station to Penn station?
c) What is the average speed of the train (AROC) between 2 and 5 hours?
d) What is the velocity (IROC) of the train at exactly 2.5 hours into the trip?
e) What is the maximum velocity attained by the train? Please justify your answer.

Thank you for all your help!!
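The formula for s(t) was cut off above, so here is a sketch with a made-up position function, s(t) = -t^3 + 12t^2, chosen only to illustrate what each part of the question is asking:

```python
def s(t):
    # hypothetical position function; the original s(t) did not survive
    return -t**3 + 12 * t**2

def v(t):
    # velocity v(t) = s'(t) = -3t^2 + 24t, differentiated by hand
    return -3 * t**2 + 24 * t

miles_at_2 = s(2)               # a) distance covered after 2 hours
total_distance = s(8)           # b) Columbia to Penn, the full 8-hour trip
aroc = (s(5) - s(2)) / (5 - 2)  # c) average rate of change on [2, 5]
iroc = v(2.5)                   # d) instantaneous rate of change at t = 2.5
t_peak = 24 / 6                 # e) solve v'(t) = -6t + 24 = 0  =>  t = 4
v_max = v(t_peak)               # maximum velocity (v' changes sign + to -)

print(miles_at_2, total_distance, aroc, iroc, v_max)
# 40 256 45.0 41.25 48.0
```

With the real s(t) in hand, the same five lines answer parts a) through e).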
Philosophy of Paraconsistency & Associated Logics General resources on the web Stanford Encyclopedia of Philosophy: Dialetheism by Graham Priest (last revision Oct 3, 2008) Paraconsistent Logic by Graham Priest & Koji Tanaka (last revision Mar 20, 2009) Inconsistent Mathematics by Chris Mortensen (last revision Jul 31, 2008) Many-Valued Logic by Siegfried Gottwald (last revision Nov 17, 2004) Relevance Logic by Edwin Mares (last revision Jan 2, 2006) Substructural Logics by Greg Restall (last revision Jan 16, 2008) Internet Encyclopedia of Philosophy: Wikipedia, the free encyclopedia: Graham Priest Newton da Costa Paraconsistent logic Principle of explosion the philosophy of paraconsistency CLE e-Prints of the Centre for Logic (CLE/UNICAMP) - Editors Universal Logic: Logics as Structures (Jean-Yves Béziau) Geometry.Net - Mathematical_Logic: Paraconsistent Logics Inconsistent images (Mortensen) Paraconsistent Newsletters (check here for news: conferences, publications, links). Link defunct; to subscribe email Jean-Yves Béziau. Conferences & Journals The First World Congress on Paraconsistency, Wednesday 30 July Saturday 2 August 1997 II World Congress on Paraconsistency - May 08-12, 2000 WCP 3 - III world congress on paraconsistency, 28-31 July 2003 5th World Congress on Paraconsistency, 13-17 February 2014, Kolkata, India (& earlier conferences) First World Congress and School on Universal Logic Second World Congress and School on Universal Logic, Xi'An, China, August 16-22, 2007 Sorites. Digital Journal of Analytical Philosophy. Editor: Lorenzo Peña. Logical Studies Journal, no. 2 (1999) Special Issue on Paraconsistent Logic and Paraconsistency The future of paraconsistent logic Jean-Yves Béziau Reviews, The Bulletin of Symbolic Logic Volume 9, Number 3, Sept. 2003. Graham Priest's web site Jean-Yves Béziau From here, click on 'jybhomepage', then on whichever page you want to view, e.g. papers. 
Arnon Avron - Online Available Papers Articles of particular interest Beall, J.C. A Priestly Recipe for Explosive Curry, Logical Studies Journal, no. 7, 2001. Benado, M.E. Orellana; Bobenrieth, Andrés; Verdugo, Carlos. Metaphilosophical Pluralism and Paraconsistency: From Orientative to Multi-level Pluralism. ABSTRACT: In a famous passage, Kant claimed that controversy and the lack of agreement in metaphysics here understood as philosophy as a whole was a scandal. Attempting to motivate his critique of pure reason, a project aimed at both ending the scandal and setting philosophy on the secure path of science, Kant endorsed the view that for as long as disagreement reigned sovereign in philosophy, there would be little to be learned from it as a science. The success of philosophy begins when controversy ends and culminates when the discipline itself as it has been known disappears. On the other hand, particularly in the second half of the twentieth century, many have despaired of the very possibility of philosophy constituting the search for truth, that is to say, a cognitive human activity, and constituting thus a source of knowledge. This paper seeks to sketch a research program that is motivated by an intuition that opposes both of these views. Béziau, Jean-Yves. Adventures in the Paraconsistent Jungle, CLE e-Prints, Vol. 4(1), 2004 (Section Logic). Béziau, Jean-Yves. From Paraconsistent Logic to Universal Logic, Sorites, Issue #12, May 2001. Béziau, Jean-Yves. The Future of Paraconsistent Logic, in Logical Studies Journal, no. 2 (1999). Béziau, Jean-Yves; Sarenac, Darko. Possible Worlds: A Fashionable Nonsense?. 2001. Béziau, Jean-Yves. S5 is a Paraconsistent Logic and so is First-Order Classical Logic, in Logical Studies Journal, no. 9, 2002. Bremer, Manuel. "The Logic of Truth in Paraconsistent Internal Realism," Studia Philosophica Estonica, vol. 1, no.1, 2008, pp. 76-83. Special Issue "Truth" (Part I), edited by Daniel Cohnitz. Bremer, Manuel. 
"Why and How to Be a Dialetheist," Studia Philosophica Estonica, vol. 1, no.2, 2008, pp. 208-227. Special Issue "Truth" (Part II), edited by Daniel Cohnitz.

Brunner, Andreas B.M.; Carnielli, Walter A. Anti-Intuitionism and Paraconsistency, CLE e-Prints, Vol. 3 (1), 2003. ABSTRACT: This paper aims to help to elucidate some questions on the duality between the intuitionistic and the paraconsistent paradigms of thought, proposing some new classes of anti-intuitionistic propositional logics and investigating their relationships with the original intuitionistic logics. It is shown here that anti-intuitionistic logics are paraconsistent, and in particular we develop a first anti-intuitionistic hierarchy starting with Johansson's dual calculus and ending up with Goedel's three-valued dual calculus, showing that no calculus of this hierarchy allows the introduction of an internal implication symbol. Comparing these anti-intuitionistic logics with well-known paraconsistent calculi, we prove that they do not coincide with any of these. On the other hand, by dualizing the hierarchy of the paracomplete (or maximal weakly intuitionistic) many-valued logics [logical symbols] we show that the anti-intuitionistic hierarchy [logical symbols] obtained from [logical symbols] does coincide with the hierarchy of the many-valued paraconsistent logics [logical symbols]. Fundamental properties of our method are investigated, and we also discuss some questions on the duality between the intuitionistic and the paraconsistent paradigms, including the problem of self-duality. We argue that questions of duality quite naturally require refutative systems (which we call elenctic systems) as well as the usual demonstrative systems (which we call deictic systems), and multiple-conclusion logics are used as an appropriate environment to deal with them.

Carnielli, Walter; Coniglio, Marcelo E.; Marcos, João. "Logics of Formal Inconsistency," CLE e-Prints, Vol. 5 (1), 2005.

da Costa, Newton C.
A.; Krause, Décio. Complementarity and Paraconsistency. ABSTRACT: Bohr's Principle of Complementarity is controversial and there has been much dispute over its precise meaning. Here, without trying to provide a detailed exegesis of Bohr's ideas, we take a very plausible interpretation of what may be understood by a theory which encompasses complementarity in a definite sense, which we term C-theories. The underlying logic of such theories is a kind of logic which has been termed 'paraclassical', obtained from classical logic by a suitable modification of the notion of deduction. Roughly speaking, C-theories are non-trivial theories which may have physically incompatible theorems (and, in particular, contradictory theorems). So, their underlying logic is a kind of paraconsistent logic.

da Costa, Newton C. A.; Krause, Décio. The Logic of Complementarity. August 30, 2003. ABSTRACT: This paper is the sequel of a previous one where we have introduced a paraconsistent logic termed 'paraclassical logic' to deal with complementary propositions [17]. Here, we enlarge upon the discussion by considering certain 'meaning principles', which sanction either some restrictions of classical procedures or the utilization of certain classical incompatible schemes in the domain of the physical theories. Here, the term 'classical' refers to classical physics. Some general comments on the logical basis of a scientific theory are also put in between the text, motivated by the discussion of complementarity.

da Costa, Newton C. A.; Krause, Décio. Remarks on the Applications of Paraconsistent Logic to Physics. ABSTRACT: In this paper we make some general remarks on the use of non-classical logics, in particular paraconsistent logic, in the foundational analysis of physical theories. As a case-study, we present a reconstruction of P.-D. Février's logic of complementarity as a strict three-valued logic and also a paraconsistent version of it. At the end, we sketch our own approach to complementarity, which is based on a paraconsistent logic termed 'paraclassical logic'.

da Costa, Newton C. A.; Krause, Décio; Bueno, Otávio. Paraconsistent Logics and Paraconsistency: Technical and Philosophical Developments. CLE e-Prints, Vol. 4 (3), 2004. May 19th, 2004.

da Costa, Newton C. A.; Krause, Décio; Bueno, Otávio. Paraconsistent Logics and Paraconsistency. October 13, 2005. Also in Handbook of the Philosophy of Science, vol. 5.

Decker, Hendrik. A Case for Paraconsistent Logic as a Foundation of Future Information Systems. ABSTRACT: Logic links philosophy with computer science and is the acknowledged foundation of information systems. Since the large scale proliferation of the internet and the world wide web, however, a rush of new technologies is avalanching, in many cases without much consideration of a solid foundation that would be up to par with the rigor of the traditional logic fundament. Philosophy may help to question established foundations, especially in times of technological breakthroughs that seem to override such foundations. In particular, the intolerance associated with the consistency requirements of classical logic begs the question of its legitimacy, in the face of ubiquitous inconsistency in virtually all information systems of sizable extent. Based on that, we propose to overcome classical logic foundations by adopting paraconsistency as a foundational concept for future information systems engineering (ISE).

Faust, Don. Conflict without Contradiction: Noncontradiction as a Scientific Modus Operandi. Presented at the Twentieth World Congress of Philosophy, Boston, Massachusetts, August 10-15, 1998. ABSTRACT: We explicate the view that our ignorance of the nature of the real world R, more so than a lack of ingenuity or sufficient time to have deduced the truth from what is so far known, accounts for the inadequacies of our theories of truth and systems of logic. Because of these inadequacies, advocacy of substantial correctness of such theories and systems is certainly not right and should be replaced with a perspective of Explorationism, which is the broadest possible investigation of potential theories and systems along with the realization that all such theories and systems are partial and tentative. For example, the position of classical logic is clearly untenable from the perspective of explorationism. Due to ignorance regarding R and, consequently, the partial and evidential nature of our knowledge about R, an explorationist foundational logical framework should contain machinery which goes beyond that of classical logic in the direction of allowing for the handling of confirmatory and refutatory evidential knowledge.
Rahman, Shahid; Van Bendegem, Jean Paul. The Dialogical Dynamics of Adaptive Paraconsistency. ABSTRACT: The dialogical approach to paraconsistency as developed by Rahman and Carnielli ([1]), Rahman and Roetti ([2]) and Rahman ([3], [4] and [5]) suggests a way of studying the dynamic process of arguing with inconsistencies. In his paper on Paraconsistency and Dialogue Logic ([6]) Van Bendegem suggests that an adaptive version of paraconsistency is the natural way of capturing the inherent dynamics of dialogues. The aim of this paper is to develop a formulation of dialogical paraconsistent logic in the spirit of an adaptive approach and which explores the possibility of eliminating inconsistencies by means of logical preference strategies. Rauser, Randal. "Is the Trinity a True Contradiction?" Quodlibet Journal, Volume 4 Number 4, November 2002. (groan!) Restall, Greg. Paraconsistency Everywhere. May 9, 2002. Tanaka, Koji. Three Schools of Paraconsistency. The Australasian Journal of Logic, vol. 1, July 1, 2003. ABSTRACT: A logic is said to be paraconsistent if it does not allow everything to follow from contradictory premises. There are several approaches to paraconsistency. This paper is concerned with several philosophical positions on paraconsistency. In particular, it concerns three schools of paraconsistency: Australian, Belgian and Brazilian. The Belgian and Brazilian schools have raised some objections to the dialetheism of the Australian school. I argue that the Australian school of paraconsistency need not be closed down on the basis of the Belgian and Brazilian schools objections. In the appendix of the paper, I also argue that the Brazilian school s view of logic is not coherent. Tuziak, Roman. Popper and Paraconsistency. Karl Popper 2002 Centenary Congress, Vienna, 3-7 July 2002. ABSTRACT: Paraconsistent logic was introduced in order to provide the framework for inconsistent but nontrivial theories. It was initiated by J. 
Lukasiewicz (1910) in Poland and, independently, by N. A. Vasilev (1911-13) in Russia, but only in 1948 the first paraconsistent formal system was designed. Since then thousands of papers have been published in this field. Paraconsistency became one of the fastest growing branches of logic, due to its fruitful applications to computer science, information theory, and artificial intelligence. K. R. Popper touched on the problem in his paper What is Dialectic? (1940). Although only mentioned, his basic idea of the possibility of a formal system of such a logic was fresh and original. Another attempt of exploring the logic of contradiction, this time as a dual to intuitionistic logic, was made by Popper in his paper On the Theory of Deduction I and II (1948). The same idea was formalized by N. D. Goodman (1981) and developed by D. Miller (1993) under a label Logic for Falsificationists . Popper`s contribution to the subject of paraconsistent logic has not been properly recognized so far. Since Lukasiewicz`s and Vasilev`s works were still not translated into any West European languages in the 1940s, he should be undoubtedly regarded as an independent forerunner of paraconsistency. On the other hand, it seems tempting to adapt some of Popper`s other ideas for the theory of paraconsistent logic (the way it was done with Vasilev`s very general concepts) and, especially, for the theory of artificial intelligence. Ursic, Marko. Paraconsistency and Dialectics as Coincidentia Oppositorum in the Philosophy of Nicholas of Cusa. Woods, John. "Dialectical Considerations on the Logic of Contradiction: Part I," Logic Journal of IGPL 2005 13(2): 231-260. See abstract. Woods, John. Dogmatism and Dialethism: Reflections on Remarks of Sorenson and Armour-Garb. Woods, John. Paradox and Paraconsistency: Conflict Resolution in the Abstract Sciences, excerpt from chapter 1, pp. 1-20. Zelený, Jindrich. 
Paraconsistency and Dialectical Consistency [corrected from original, which appeared in From the Logical Point of View (Prague), Vol. 1, 1994, pp. 35 51]. Recent Publications of Technical or Philosophical Interest Online Analyti, A.; Antoniou, G.; Damásio, C. V.; Wagner, G. "Negation and Negative Information in the W3C Resource Description Framework", Annals of Mathematics, Computing & Teleinformatics, vol 1, no 2, 2004, pp 25-34. Batens, Diderik; Meheus, Joke; Provijn, Dagmar. "An Adaptive Characterization of Signed Systems for Paraconsistent Reasoning", Pre-print, January 11, 2006. Béziau, Jean-Yves. "The Paraconsistent Logic Z: A Possible Solution to Jaskowski's Problem", Logic and Logical Philosophy, Vol.15 (2006): 99-111. McGinnis, Casey. Paraconsistency and Deontic Logic: Formal Systems for Reasoning with Normative Conflicts. PhD Thesis, University of Minnesota, 2006. Marcos, João. "Modality and Paraconsistency," in In M. Bilkova and L. Behounek, editors, The Logica Yearbook 2004 (Prague: Filosofia, 2005), pp. 213-222. Marcos, João. "Nearly every normal modal logic is paranormal," Logique et Analyse, 48:279-300, 2005. Preprint. Priest, Graham. "60% Proof: Lakatos, Proof, and Paraconsistency," Pre-print. January 30, 2006. Shapiro, Stewart. "Lakatos and Logic: Comments of Graham Priest's "60% proof: Lakatos, proof and paraconsistency"", Pre-print, 2006. Sorites, no. 17, October 2006. Latest Paraconsistent Newsletter, Fall 2006. Guides to Philosophical Logic Bibliography on Adaptive Logics: Applications Bibliography on Fuzziness and the Sorites Paradox, updated: Nov. 23 1994 Computational Linguistics Offprint Library, Bibliography (1999) DoCIS: Documents in Computing and Information Science Pathways to Philosophical Logic and the Philosophy of Logic EpistemeLinks.com: Logic and Philosophy of Logic Graham Priest's Inclosure Schema Graham Priest, Paraconsistent Logic, and Philosophy, Or, Logic and Reality by R. 
Dumain Graham Priest vs Erwin Marquit on Contradiction by R. Dumain “What is the Relationship Between Logic and Reality?” by R. Dumain "On the Dialectics of Metamathematics" (Excerpts) by Peter Vardy "Wittgensteinian Foundations of Non-Fregean Logic" by Boguslaw Wolniewicz Wittgenstein and Dialectic: An Annotated Bibliography Reflexivity & Situatedness Study Guide Home Page | Site Map | What's New | Coming Attractions | Book News Bibliography | Mini-Bibliographies | Study Guides | Special Sections My Writings | Other Authors' Texts | Philosophical Quotations Blogs | Images & Sounds | External Links CONTACT Ralph Dumain Uploaded 27 July 2005 Last update 8 February 2014 Previous update 31 March 2009 ©2005-2014 Ralph Dumain
Rearranging Differential Equation
January 18th 2011, 03:28 PM

Hi, I was going over my notes and didn't understand one of the steps made in a calculation; I was hoping someone could explain it to me. I'm trying to find an analytic solution for an ODE:

dY/dX + X*Y = X

The integrating factor is I = e^((X^2)/2)

Multiplying the equation with the integrating factor:

I*(dY/dX) + X*I*Y = X*I

This next step I don't follow, where it gets simplified to:

(d/dX)*(Y*e^((X^2)/2)) = X*(e^((X^2)/2))

The following lines after are:

YI = I
Y = 1 + C*I

Any help would be appreciated, thank you.

January 18th 2011, 03:49 PM

The step you're wondering about is the heart of the integrating factor procedure. Suppose you had the DE $\dfrac{dy}{dx}+P(x)\,y=Q(x),$ and you multiply through by the integrating factor $e^{\int P(x)\,dx}.$ Then you get $e^{\int P(x)\,dx}\dfrac{dy}{dx}+e^{\int P(x)\,dx}P(x)\,y=Q(x)e^{\int P(x)\,dx}.$ The whole point of the procedure is that the LHS is now a total derivative. Indeed, if we examine $\dfrac{d}{dx}\left[e^{\int P(x)\,dx} y\right]=\left(e^{\int P(x)\,dx}\right)\left(\dfrac{dy}{dx}\right)+y\left(\dfrac{d}{dx}e^{\int P(x)\,dx}\right)=e^{\int P(x)\,dx}\,\dfrac{dy}{dx}+ye^{\int P(x)\,dx}\,P(x),$ which is just our new LHS after we multiplied through by the integrating factor. Someone was clever enough to discover that the integrating factor was indeed $e^{\int P(x)\,dx},$ and after that, it became a procedure. Does that clear things up a bit for you, perhaps?

January 18th 2011, 03:55 PM
January 18th 2011, 03:56 PM

Yes, thank you. If I didn't know this procedure, is there a way of getting from I*(dY/dX) + X*I*Y = X*I to YI = I? Or would you suggest it's too complicated and I just memorise how this procedure works? Thank you both, I think pickslides just answered my 2nd question.

January 18th 2011, 03:58 PM

I say memorise it. It comes in very handy and enhances your understanding of calculus.
January 18th 2011, 04:17 PM
Prove It

I'd say, as well as memorising it (i.e. that after you have multiplied by the Integrating Factor the LHS reduces to $\displaystyle \frac{d}{dx}(I\,y)$), get good at recognising Product Rule expansions, i.e. that $\displaystyle u\,\frac{dv}{dx} + v\,\frac{du}{dx} = \frac{d}{dx}(u\,v)$.

January 19th 2011, 04:01 AM

Not having gotten much further, I'm stuck on how this next stage works:

y*I = I
y = 1 + C*e^-((x^2)/2)

I am assuming he has now integrated the RHS?

January 19th 2011, 12:12 PM

Given $\displaystyle I= e^{\frac{x^2}{2}}$

$\displaystyle Iy'+Ixy = xI$

Note $\displaystyle I'=Ix$ then

$\displaystyle (Iy)' = xI$

$\displaystyle Iy = \int xI~dx$

$\displaystyle e^{\frac{x^2}{2}}y = \int xe^{\frac{x^2}{2}}~dx$

Then by substitution $\displaystyle u = \frac{x^2}{2}$ integrate the right hand side. Final step is to divide both sides by $\displaystyle e^{\frac{x^2}{2}}$
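A quick numerical sanity check (my own sketch) that the final answer y = 1 + C*e^(-x^2/2) really satisfies dy/dx + x*y = x for any constant C:

```python
import math

def y(x, C=2.0):
    # candidate solution from the thread: y = 1 + C * exp(-x^2 / 2)
    return 1.0 + C * math.exp(-x**2 / 2)

def residual(x, C=2.0, h=1e-6):
    # central-difference check that dy/dx + x*y - x is (numerically) zero
    dydx = (y(x + h, C) - y(x - h, C)) / (2 * h)
    return dydx + x * y(x, C) - x

worst = max(abs(residual(x)) for x in (-2.0, -0.5, 0.3, 1.7))
print(worst < 1e-6)  # True
```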
{"url":"http://mathhelpforum.com/differential-equations/168715-rearranging-differential-equation-print.html","timestamp":"2014-04-17T19:38:26Z","content_type":null,"content_length":"12820","record_id":"<urn:uuid:3beaa0f2-079c-434b-a7ab-d0efbf2eb104>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyticity of Log(z) June 15th 2011, 09:44 PM #1 Senior Member Apr 2010 Analyticity of Log(z) The question Where is the function $Log(z^2 - 1)$ analytic? My attempt The branch cut is on the negative real axis, so: $z^2 - 1 = (x^2 - y^2 - 1) + i(2xy)$ $x^2 - y^2 - 1 \le 0$ $xy = 0$ Therefore, not analytic when $x = \pm 1$ Is this correct? Thank you. Re: Analyticity of Log(z) The principal value $\log:\textrm{Re}\;t>0\to \mathbb{C}$ is analytic. Then, $\textrm{Re}\;(z^2-1)>0\Leftrightarrow x^2-y^2-1>0$. Re: Analyticity of Log(z) The principal value is defined everywhere except at $z=0$ and is discontinuous on the negative real axis. Hence it can't be analytic at these points. The Cauchy-Riemann condition tells us that the function is analytic everywhere else. Re: Analyticity of Log(z) The complex function is composed of two terms... $\ln (z^{2}-1) = \ln (z-1) + \ln (z+1)$ (1) As ojones said, the first term is analytic everywhere except at $z=1$, the second term everywhere except at $z=-1$. The points $z=-1$ and $z=1$ are then the two 'branch points' of the function. Kind regards Re: Analyticity of Log(z) Some authors, see for example Elementary Theory of Analytic Functions of One or Several Complex Variables by Henri Cartan, define the principal determination of $\log z$ on $\textrm{Re}\; z >0$. Re: Analyticity of Log(z) The general question of the analyticity of the function $\ln z$ in the whole complex plane with the only exception of $z=0$ has been discussed here... Kind regards Re: Analyticity of Log(z) Yes, the analyticity of $\log z$ does not change through the years. Re: Analyticity of Log(z) Oops! I was talking about $\text{Log}(z)$, not $\text{Log}(z^2-1)$. I think for the latter we have to avoid points where $z^2-1$ lies on the negative real axis or is zero. 
Re: Analyticity of Log(z) Quote: The general question of the analyticity of the function $\ln z$ in the whole complex plane with the only exception of $z=0$ has been discussed here... Kind regards So you're saying that holomorphicity and analyticity don't necessarily coincide?
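The last reply's criterion can be probed numerically. Working out where $z^2-1$ lands on $(-\infty, 0]$ gives the real segment $-1 \le x \le 1$ together with the whole imaginary axis, which is more than the $x = \pm 1$ proposed in the question. A small check (names are mine; `cmath.log` is the principal branch with its cut on the negative real axis):

```python
import cmath

def f(z):
    # principal branch Log(z^2 - 1)
    return cmath.log(z * z - 1)

eps = 1e-9
# across the real segment [-1, 1] (here at x = 0.5, approaching y = 0 from above/below)
jump_segment = abs(f(0.5 + eps * 1j) - f(0.5 - eps * 1j))
# across the imaginary axis (here at y = 2, approaching x = 0 from the right/left)
jump_axis = abs(f(eps + 2j) - f(-eps + 2j))
# away from the cut (real z with |z| > 1) the function varies smoothly
smooth_gap = abs(f(2.0 + eps) - f(2.0 - eps))
```

Both jumps come out near $2\pi$ (the branch jump of Log across its cut), while `smooth_gap` is on the order of `eps`.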
{"url":"http://mathhelpforum.com/differential-geometry/183117-analyticity-log-z.html","timestamp":"2014-04-18T16:25:48Z","content_type":null,"content_length":"66973","record_id":"<urn:uuid:deefdab9-d0ae-4733-a0bd-8491872d9be9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
complexity of dominating sets of regular graphs
I believe it is just an easy question, but I have not found the answer: Is the optimization / decision problem DOMINATING SET NP-complete when restricted to regular graphs? Where can I find a proof of that? Thank you, MO users.
co.combinatorics graph-theory reference-request
Dominating set is NP-complete, and even APX-complete, for cubic graphs. dx.doi.org/10.1016/S0304-3975(98)00158-3 – Andrew D. King Feb 7 '13 at 18:32
Thank you very much, Andrew D. King. I also found that it is NP-complete even for planar 4-regular graphs. – Martin Manrique Feb 10 '13 at 15:55
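Since the problem is NP-complete already on cubic graphs, nothing better than exponential search is known in general, but the decision problem itself is short to state in code. A brute-force sketch (helper names are mine, not from the cited paper):

```python
from itertools import combinations

def is_dominating(adj, S):
    # S dominates the graph if every vertex is in S or has a neighbour in S
    return all(v in S or any(u in S for u in adj[v]) for v in adj)

def domination_number(adj):
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        if any(is_dominating(adj, set(S)) for S in combinations(vertices, k)):
            return k

def cycle(n):
    # the n-cycle is 2-regular; its domination number is ceil(n/3)
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

# K4 is a cubic (3-regular) graph; any single vertex dominates it
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
```

For example, `domination_number(cycle(6))` is 2 and `domination_number(k4)` is 1.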
{"url":"http://mathoverflow.net/questions/121090/complexity-of-dominating-sets-of-regular-graphs","timestamp":"2014-04-16T16:58:19Z","content_type":null,"content_length":"51215","record_id":"<urn:uuid:82f54665-cbe2-4750-bf8a-d456c82e6d1d>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximate method of integration of laminar boundary layer in incompressible fluid
Loitsianskii, L. G.
A method is given for the approximate solution of the equations of the two-dimensional laminar boundary layer in an incompressible fluid. The method is based on the use of a system of equations of successive moments that is easily solved for simple supplementary assumptions. The solution obtained is given in closed form by simple formulas and is claimed to be no less accurate than the complicated solutions previously obtained, which were based on the use of special classes of flows.
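The abstract does not reproduce Loitsianskii's moment equations, so the sketch below is not the report's method. It shows the flavour of the same momentum-integral family using Thwaites' correlation (an assumption on my part): for a flat plate with constant edge velocity U it predicts a momentum thickness theta of about 0.671*sqrt(nu*x/U), within roughly 1% of the exact Blasius coefficient 0.664.

```python
import math

def thwaites_theta(U, nu, xs):
    """Momentum thickness from Thwaites' momentum-integral correlation:
    theta^2 = 0.45 * nu / U(x)^6 * integral_0^x U(s)^5 ds
    (trapezoidal rule for the integral along the grid xs)."""
    thetas, acc = [], 0.0
    for i, x in enumerate(xs):
        if i > 0:
            dx = x - xs[i - 1]
            acc += 0.5 * (U(xs[i - 1])**5 + U(x)**5) * dx
        thetas.append(math.sqrt(0.45 * nu / U(x)**6 * acc))
    return thetas

nu = 1.5e-5                              # kinematic viscosity, air-like (assumed)
xs = [i * 0.01 for i in range(101)]      # plate from x = 0 to x = 1 m
thetas = thwaites_theta(lambda x: 10.0, nu, xs)   # flat plate: U = const

# coefficient c in theta = c * sqrt(nu*x/U); the exact Blasius value is 0.664
c = thetas[-1] / math.sqrt(nu * xs[-1] / 10.0)
```

For constant U the integral is exact and `c` is sqrt(0.45), about 0.6708.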
{"url":"http://naca.central.cranfield.ac.uk/report.php?NID=4705","timestamp":"2014-04-19T01:49:13Z","content_type":null,"content_length":"1774","record_id":"<urn:uuid:9479528a-fb1f-41e7-98bc-55a5805feba0>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer experiments in harmonic analysis
Barany, Michael (2009) Computer experiments in harmonic analysis. In: [2009] SPSP 2009: Society for Philosophy of Science in Practice (Minnesota, June 18-20, 2009).
It is conventionally understood that computers play a rather limited role in theoretical mathematics. While computation is indispensable in applied mathematics and the theory of computing and algorithms is rich and thriving, one does not, even today, expect to find computers in theoretical mathematics settings beyond the theory of computing. Where computers are used, by those studying combinatorics, algebra, number theory, or dynamical systems, the computer most often assumes the role of an automated and speedy theoretician, performing manipulations and checking cases in a way assumed to be possible for human theoreticians, if only they had the time, the memory, and the precision. Automated proofs have become standard tools in mathematical logic, and it is often expected that proofs be published in a computer-checkable format. It is not surprising, then, that most philosophical work on computers in theoretical mathematics has been on computers' roles as supplementary mathematicians. Donald MacKenzie's 2001 book Mechanizing Proof demonstrates the rich potential for social and historical studies to complement the substantial analytic debate in this area of philosophy. But what of computers in theoretical mathematics behaving as computers, and not as mere mechanized mathematicians? Very little role is commonly assumed for computers working as supplements to mathematicians, rather than as supplementary mathematicians themselves. Accordingly, very little philosophy has attempted to grapple with theoretical mathematics in which computers play an essential but essentially non-theoretical role. My presentation will draw on work I conducted as a researcher in harmonic analysis on fractals at Cornell University. 
I will analyze the explicit and implicit conceptual apparatus employed in my and my fellow researchers' use of computers in the theoretical study of second order differential equations, such as those for sound and heat flow, on various fractal analogues of the Sierpinski gasket. Such gaskets are easy to visualize in very crude approximation in a low number of dimensions. As one increases the complexity of the gasket or the refinement of one's analysis, visualization and precise computation become impossible, and soon computers are unable to produce even approximate data to model differential equations in these situations. We thus had to carefully choose analytic approaches and methods to make our theoretical mathematics amenable to computer simulation. In my case, studying the transformation of the gaskets as they are expanded into increasingly high dimensions, computer simulation eventually required that the problem be reimagined entirely in terms of interlinked systems of parameters. This computer-approximation-driven theoretical orientation shaped my mathematical intuitions toward the problem and guided my fellow researchers and me in both theoretical and computational directions. We discovered both that computer approximation could be incredibly powerful as an aid to intuition, and that it can be incredibly difficult to transfer computer-oriented mathematics back into the purely theoretical standards of our area of specialty. I will address the philosophical implications of computer-driven theoretical mathematics, asking how computer experiments can shape both the content and standards of theoretical sciences. Export/Citation: EndNote | BibTeX | Dublin Core | ASCII/Text Citation (Chicago) | HTML Citation | OpenURL Social Networking: Share | Actions (login required) Document Downloads
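The abstract notes that such gaskets are easy to visualize in crude approximation. For readers who want a concrete picture, a standard way to generate one (a generic "chaos game" sketch of my own, not material from the talk) is:

```python
import random

def sierpinski_points(n, seed=0):
    """Chaos game for the Sierpinski gasket: repeatedly jump halfway
    toward a randomly chosen corner of the triangle."""
    rng = random.Random(seed)
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.25, 0.25          # any starting point inside the triangle
    pts = []
    for _ in range(n):
        cx, cy = rng.choice(corners)
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return pts

def in_middle_hole(p, tol=1e-9):
    # the open central triangle removed at the first subdivision step,
    # with vertices (0.5, 0), (0.25, 0.5), (0.75, 0.5)
    x, y = p
    return (tol < y < 0.5 - tol
            and 0.5 - y / 2 + tol < x < 0.5 + y / 2 - tol)

pts = sierpinski_points(5000)
stray = sum(in_middle_hole(p) for p in pts)   # should be 0
```

Every halving jump lands the point in one of the three corner sub-triangles, so no iterate falls into the central hole; plotting `pts` shows the familiar gasket.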
{"url":"http://philsci-archive.pitt.edu/4715/","timestamp":"2014-04-20T14:31:21Z","content_type":null,"content_length":"36222","record_id":"<urn:uuid:ac5a7366-e1b3-4fb8-8d48-8388c8c3e435>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5016baade4b04dfc808ae179","timestamp":"2014-04-21T05:04:55Z","content_type":null,"content_length":"160032","record_id":"<urn:uuid:8b7cbbeb-aa15-49ab-a125-24a4271a9963>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
The Colony Prealgebra Tutor Find a The Colony Prealgebra Tutor ...I am very patient, reliable, and responsible. I have been told that I can explain well. I currently teach Math at a university and a community college. 20 Subjects: including prealgebra, calculus, statistics, geometry ...My research at SMU focused on genetic pathways. On one of my projects I had to construct a very specific fly strain, a fly strain that didn't already exist. Through the use of classic genetics and selection, I was able to design the fly. 30 Subjects: including prealgebra, reading, chemistry, English ...A general education in K-6 includes all major subjects - math, English, reading, science, social studies, and spelling. I am a certified elementary teacher for grades 1-8 with 20 years of experience in the classroom. I will take your child from the level he/she is on and lead them on an exciting journey to the next level. 8 Subjects: including prealgebra, algebra 1, grammar, vocabulary Hello, out there! I am a honors High School graduate looking to help tutor kids over the summer. I specialize in teaching 6th grade through 10th grade math. 6 Subjects: including prealgebra, chemistry, geometry, algebra 1 ...When I was in England studying for my M.S. degree in chemical engineering, I was a teaching assistant to 5 classes of about 25 undergraduate students, each in chemistry laboratory (Specifics include general chemistry, organic chemistry 1 and organic chemistry 2). I was also a teaching assistant ... 22 Subjects: including prealgebra, chemistry, calculus, physics
{"url":"http://www.purplemath.com/The_Colony_Prealgebra_tutors.php","timestamp":"2014-04-16T07:18:47Z","content_type":null,"content_length":"23734","record_id":"<urn:uuid:ced7eff3-912e-426b-803d-e832cab9cc2d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
graph the function; identify the domain and range; and compare the graph with the graph of y = 1/x y =... - Homework Help - eNotes.com graph the function; identify the domain and range; and compare the graph with the graph of y = 1/x y = -10/x Supposing that you need to evaluate the domain and range of the function `y = -10/x` , you should remember the definitions of domain and range of a function. The domain of function needs to contain all x values that make the function valid. The function` y = -10/x` is not valid if `x = 0` , hence, you need to reject the value x = 0 from domain. Since all x values are real numbers, then you may write the domain such that: `R - {0}.` The range of the function is the set that contains all values of function obtained using the elements from domain, hence, the range is also `R - {0}.` Comapring the graph `y = -10/x` to `y = 1/x` yields that the graph y = 1/x passes through two transformation: a reflection to y axis and a vertical expansion by the factor `k = -1/10` . Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/graph-function-identify-domain-range-compare-graph-440513","timestamp":"2014-04-18T01:06:26Z","content_type":null,"content_length":"26437","record_id":"<urn:uuid:44714fff-dcea-4d6d-b132-7ca98b166539>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about sheaf on A Mind for Madness Recall last time we talked about how we can form the sheaf of Witt vectors over a variety ${X}$ that is defined over an algebraically closed field ${k}$ of characteristic ${p}$. The sections of the structure sheaf form rings and we can take ${W_n}$ of those rings. The functoriality of ${W_n}$ gives us that this is a sheaf that we denote ${\mathcal{W}_n}$. For today we’ll be define ${\Lambda}$ to be ${W(k)}$. Recall that we also noted that ${H^q(X, \mathcal{W}_n)}$ makes sense and is a ${\Lambda}$-module annihilated by ${p^n\Lambda}$ (recall that we noted that Frobenius followed by the shift operator is the same as multiplying by ${p}$, and since Frobenius is surjective, multiplying by ${p}$ is just replacing the first entry by ${0}$ and shifting, so multiplying by ${p^n}$ is the same as shifting over ${n}$ entries and putting ${0}$‘s in, since the action is component-wise, ${p^n\Lambda}$ is just multiplying by ${0}$ everywhere and hence annihilates the module). In fact, all of our old operators ${F}$, ${V}$, and ${R}$ still act on ${H^q(X, \mathcal{W}_n)}$. They are easily seen to satisfy the formulas ${F(\lambda w)=F(\lambda)F(w)}$, ${V(\lambda w)=F^{-1}(\ lambda)V(w)}$, and ${R(\lambda w)=\lambda R(w)}$ for ${\lambda\in \Lambda}$. Just by using basic cohomological facts we can get a bunch of standard properties of ${H^q(X, \mathcal{W}_n)}$. 
We won’t write them all down, but the two most interesting (of the very basic) ones are that if ${X}$ is projective then ${H^q(X, \mathcal{W}_n)}$ is a finite ${\Lambda}$-module, and from the short exact sequence we looked at last time ${0\rightarrow \mathcal{O}_X\rightarrow \mathcal{W}_n \rightarrow \mathcal{W}_{n-1}\rightarrow 0}$, we can take the long exact sequence associated to it to get ${\ cdots \rightarrow H^q(X, \mathcal{O}_X)\rightarrow H^q(X, \mathcal{W}_n)\rightarrow H^q(X, \mathcal{W}_{n-1})\rightarrow \cdots}$ If you’re like me, you might be interested in studying Calabi-Yau manifolds in positive characteristic. If you’re not like me, then you might just be interested in positive characteristic K3 surfaces, either way these cohomology groups give some very good information as we’ll see later, and for a Calabi-Yau’s (including K3′s) we have ${H^i(X, \mathcal{O}_X)=0}$ for ${i=1, \ldots , n-1}$ where ${n}$ is the dimension of ${X}$. Using this long exact sequence, we can extrapolate that for Calabi-Yau’s we get ${H^i(X, \mathcal{W}_n)=0}$ for all ${n>0}$ and ${i=1, \ldots, n-1}$. In particular, we get that ${H^1(X, \mathcal{W})=0}$ for ${X}$ a K3 surface where we just define ${H^q(X, \mathcal{W})=\lim H^q(X, \mathcal{W}_n)}$ in the usual way. Sheaf of Witt Vectors I was going to go on to prove a bunch of purely algebraic properties of the Witt vectors, but honestly this is probably only interesting to you if you are a pure algebraist. From that point of view, this ring we’ve constructed should be really cool. We already have the ring of ${p}$-adic integers, and clearly ${W_{p^\infty}}$ directly generalizes it. They have some nice ring theoretic properties, especially ${W_{p^\infty}(k)}$ where ${k}$ is a perfect field of characteristic ${p}$. Unfortunately it would take awhile to go through and prove these things, and it would just be tedious algebra. Let’s actually see why algebraic geometers and number theorists care about the Witt vectors. 
First, we’ll need a few algebraic facts that we haven’t talked about. For today, we’re going to fix a prime ${p}$ and we have an ${\mathbf{important}}$ notational change: when I write ${W (A)}$ I mean ${W_{p^\infty}(A)}$, which means I’ll also write ${(a_0, a_1, \ldots)}$ when I mean ${(a_{p^0}, a_{p^1}, \ldots)}$ and I’ll write ${W_n(A)}$ when I mean ${W_{p^n}(A)}$. This shouldn’t cause confusion as it is really just a different way of thinking about the same thing, and it is good to get used to since this is the typical way they appear in the literature (on the topics I’ll be There is a cool application by thinking about these functors as representable by group schemes or ring schemes, but we’ll delay that for now in order to think about cohomology of varieties in characteristic ${p}$ and hopefully relate it back to de Rham stuff from a month or so ago. In addition to the fixed ${p}$, we will assume that ${A}$ is a commutative ring with ${1}$ and of characteristic ${p}$. We have a shift operator ${V: W_n(A)\rightarrow W_{n+1}(A)}$ that is given on elements by ${(a_0, \ldots, a_{n-1})\mapsto (0, a_0, \ldots, a_{n-1})}$. The V stands for Verschiebung which is German for “shift”. Note that this map is additive, but is not a ring map. We have the restriction map ${R: W_{n+1}(A)\rightarrow W_n(A)}$ given by ${(a_0, \ldots, a_n)\mapsto (a_0, \ldots, a_{n-1})}$. This one is a ring map as was mentioned last time. Lastly, we have the Frobenius endomorphism ${F: W_n(A)\rightarrow W_n(A)}$ given by ${(a_0, \ldots , a_{n-1})\mapsto (a_0^p, \ldots, a_{n-1}^p)}$. This is also a ring map, but only because of our necessary assumption that ${A}$ is of characteristic ${p}$. Just by brute force checking on elements we see a few relations between these operations, namely that ${V(x)y=V(x F(R(y)))}$ and ${RVF=FRV=RFV=p}$ the multiplication by ${p}$ map. Now on to the algebraic geometry part of all of this. 
Suppose ${X}$ is a variety defined over an algebraically closed field of characteristic ${p}$, say ${k}$. Then we can form the sheaf of Witt vectors on ${X}$ as follows. Notice that all the stalks of the structure sheaf ${\mathcal{O}_x}$ are local rings of characteristic ${p}$, so it makes sense to define the Witt rings ${W_n(\mathcal{O} _x)}$ for any postive ${n}$. Now just form the natural sheaf ${\mathcal{W}_n}$ that has as its stalks ${(\mathcal{W}_{n})_x=W_n(\mathcal{O}_x)}$. Note that forgetting ring structure and thinking only as a sheaf of sets we have that ${\mathcal{W}_n}$ is just ${\mathcal{O}^n}$, and when ${n=1}$ it is actually isomorphic as a sheaf of rings. For larger ${n}$ the addition and multiplication is defined in that strange way, so we no longer get an isomorphism of rings. Using our earlier operations and the isomorphism for ${n=1}$, we can use the following sequences to extract information. When ${n\geq m}$ we have the exact sequence ${0\rightarrow \mathcal{W}_m\stackrel{V}{\rightarrow} \mathcal{W}_n\stackrel{R}{\rightarrow}\mathcal{W}_{n-m}\rightarrow 0}$. If we take ${m=1}$, then we get the sequence ${0\rightarrow \mathcal{O}_X\rightarrow \mathcal{W}_n\rightarrow \mathcal{W}_{n-1}\rightarrow 0}$. This will be useful later when trying to convert cohomological facts about ${\ mathcal{O}_X}$ to ${\mathcal{W}}$. We could also define ${H^q(X, \mathcal{W}_n)}$ as sheaf cohomology because we can think of ${\mathcal{W}_n}$ just as a sheaf of abelian groups. Let ${\Lambda=W(k)}$, then since ${\mathcal{W}_n}$ are ${\Lambda}$-modules annihilated by ${p^n\Lambda}$, we get that ${H^q(X, \mathcal{W}_n)}$ are also ${\Lambda}$-modules annihilated by ${p^n\Lambda}$. Next time we’ll talk about some other fundamental properties of the cohomology of these sheaves. 
There was talk about schemes in the comments of my last post, so after reviewing what I’ve already posted about, I decided I may as well package it all up nicely in a brief post so that I’m allowed to use the term freely from now on. First, recall the sheaf structure we already have. For any ring, R, we have the associated topological space $Spec(R)$ and the sheaf of rings $\mathcal{O}$. Then the stalk for $p\in Spec(R)$ is $\ mathcal{O}_p\cong R_p$. Also, $\mathcal{O}(D(f))\cong R_f$ for any $f\in R$. Let’s extrapolate what was the important structure here. We really have a topological space and a sheaf of rings on it. We call this a ringed space. Morphism in this category are a pair $(f, g): (X, \mathcal{O}_X)\to (Y, \mathcal{O}_Y)$, where $f:X\to Y$ is continuous, and the sheaf structure is preserved, i.e. $g: \mathcal{O}_Y\to f_*\mathcal{O}_X$ is a map of sheaves of rings on Y. A ringed space is called a locally ringed space if each stalk is a local ring. I’m not sure how technical I should be about the definition of a local homomorphism. Essentially, we want to preserve localness on the homomorphisms induced on the stalks by the sheaf homomorphism. So a homomorphism is local if the preimage of the maximal ideal in one go to the maximal ideal in the other. So without proof I’ll just state that a homomorphism of rings $\phi : A\to B$ induces a natural morphism of locally ringed spaces (contravariantly), and conversely, given A and B, any morphism of locally ringed spaces $Spec(B)\to Spec(A)$ is induced by a ring hom $A\to B$. The first statement essentially follows from laying down definitions, but it is not trivial. The second one requires some more thought. Now we define a scheme. An affine scheme is a locally ringed space that is isomorphic to the spectrum of some ring. A scheme is a locally ringed space in which every point has an open neighborhood $U$ such that $(U, \mathcal{O}_X\Big|_U)$ is an affine scheme. Morphisms are in the locally ringed sense. 
The easiest example would be a field, where the topological space is a point and the structure sheaf is the field back again. If we step the dimension up by one (and require the field to be algebraically closed for sake of example), then $Spec (k[x])\cong \mathbb{A}_k^1$ I may or may not return to elaborate. I sort of want to consolidate the algebra I’ve learned this quarter through a series of posts before doing anything else along the algebraic geometry side of The Structure Sheaf of a Variety Alright, so I’m still taking this really round about way to the Nullstellensatz, but someday I’ll get there. For those of you that know about sheaves, some of the things I’ve been talking about should be looking vaguely familiar. We haven’t fully gotten there yet, but that is what today is about. I won’t explicitly define what a general sheaf is, but of course there is always wikipedia or a textbook if you really want to know. Let’s think back to what we had before. We define what we called $k[V]$ the coordinate ring on the algebraic set $V$. So now we do the natural thing, we look at the field of fractions of $k[V]$ which we will denote $k(V)$. You should say, “Wait a minute!” at this point, since we might have some “zero denominators.” So let’s hold off on actually defining this until we’ve built the way to work around the problem. So as a set, $f \in k(V)$ is something of the form $f=g/h$, where $g, h \in k[V]$. So it is a fraction of polynomials, or a rational function. The problem is that it is not defined at zeros of $h$. Luckily, zeros of polynomials are all we’ve been studying and talking about for awhile. Call $f \in k(V)$ regular at a point $P \in V$ if there is a representation $f=g/h$ such that $h(P) eq 0$. In fact for any $h \in k[V]$ we can define a set corresponding to where it can be in the denominator, i.e. $V_h=\{P \in V : h(P) eq 0\}$. 
Note that this is just the principal open set we defined earlier for the Zariski topology, but now it seems to have vital use. Let’s now define the local ring of V at P to be $\mathcal{O}_{V, P}=\{f \in k(V) : \ f \ regular \ at \ P\}$. Clearly this is a subring of $k(V)$. The not as obvious fact is that it is actually local. If you want to check, the unique maximal ideal is the set of elements of the form f/g where $f(P)=0$ and $g(P) eq 0$. So now some things are shaping up, since we have an object defined for sets and have a ring of functions at a point. What would really be exciting is if this construction which seemed ad hoc by taking everything in the field of fractions and throwing out things that don’t work, actually turned out to be a nice localization of the ring. Define the ideal $\overline{M}_P=\{ f \in k[V] : f(P)=0\}$. So this is technically what we were calling $\overline{I({P})}$ before. (The line meaning that we aren’t in $k [x_1, \ldots, x_n]$ anymore, we’re in $k[V]=k[x_1, ldots , x_n]/I(V)$. So this is is a maximal ideal and hence prime, so we can localize at it. Exactly what we were hoping for actually does happen, i.e. $k[V]_{\overline{M}_P}=\mathcal{O}_{V, P}$. In words, the localization of the coordinate ring at $\overline{M}_P$. Now for any open set $U \subset V$ we define $\mathcal{O}(U)=\{ f \in k(V) : f \ regular \ on \ U\}$. And for convenience $\mathcal{O}_V(\emptyset)={0}$. So not only is $\mathcal{O}_V(U)$ a ring, it is a k-algebra. This set of rings with the restrictions we defined last time form the structure sheaf $\mathcal{O}_V$, and the local ring $\mathcal{O}_{V, P}$ is the stalk of the sheaf at P with the elements as the germ of functions at P. So I’ll leave you with a nice way to rephrase some older posts: we should now think of $k[V]=\mathcal{O}(V)$, and $\mathcal{O}(V_h)=k[V][h^{-1}]=k[V]_h$. Severely edited: Sorry, some weird bug took out every backslash of this post rendering it incomprehensible. 
I’m really glad I decided to glance at it randomly. by hilbertthm90 1 Comment A closer look at Spec Let’s think about what is going on in a different way. So now let’s think of $f \in R$ elements of the ring as functions with domain $Spec(R)$. We define the value of the function at a point in our space $f(P)$ to be the residue class in $R/P$. This looks weird at first, since the image space depends on the point that you are evaluating the function. Before worrying about that too much, let’s see if we can get this notion to match up with what we did yesterday. We have the nice property that $f(P)=0$ if and only if $f \in P$. (Remember that even though we think of f as a function, it is really an element of the ring). Define for any subset of the ring S the zero set: $Z(S)=\{P\in Spec(R): f(P)=0, \forall f \in S\}$. Now from what I just noted in the previous paragraph, we get that these are just precisely the elements of $Spec(R)$ that contain S, i.e. the closed sets of the Zariski topology. Thus we can define our basis for the Zariski topology to be the collection of $D(f)=Spec(R)\setminus Z(f)$. We also will want what is “an inverse” to the zero set. We want the ideal that vanishes on a subset of Spec. So given $Y\subset Spec(R)$, define $I(Y)=\{f \in R : f(P)=0, \forall P\in Y\}$. Now this isn’t really an inverse, but we get close in the following sense: If $J\subset R$ is an ideal, then $\displaystyle I(Z(J))=\sqrt{J}$. Taking the ideal of the zero set is the radical of the ideal. And the radical has two equivalent definitions: $\displaystyle \sqrt {J}=\cap_{P\in Spec(R), P\supset J} P=\{a\in R : \exists n\in \mathbb{N}, a^n\in J\}$. If we take the ideal and zero set in the other order we get that $Z(I(Y))=\overline{Y}$ : the closure in the Zariski topology. We can abstract one step further and put a sheaf on $D(f)$. Note that for any $f\in R$ we have that $\{1, f, f^2, \ldots\}$ is a multiplicative set, so we can localize at it. 
Since I haven’t talked at all about sheaves, I’m not sure if I want to go any further with this, so maybe I’ll do some more examples next time and possibly start to scratch this surface.
{"url":"http://hilbertthm90.wordpress.com/tag/sheaf/","timestamp":"2014-04-17T01:10:17Z","content_type":null,"content_length":"95688","record_id":"<urn:uuid:fc911858-f19f-4256-893e-b684c29d7153>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
D03PCF General system of parabolic PDEs, method of lines, finite differences, one space variable D03PDF General system of parabolic PDEs, method of lines, Chebyshev C^0 collocation, one space variable D03PEF General system of first-order PDEs, method of lines, Keller box discretisation, one space variable D03PFF General system of convection-diffusion PDEs with source terms in conservative form, method of lines, upwind scheme using numerical flux function based on Riemann solver, one space variable D03PHF General system of parabolic PDEs, coupled DAEs, method of lines, finite differences, one space variable D03PJF General system of parabolic PDEs, coupled DAEs, method of lines, Chebyshev C^0 collocation, one space variable D03PKF General system of first-order PDEs, coupled DAEs, method of lines, Keller box discretisation, one space variable D03PLF General system of convection-diffusion PDEs with source terms in conservative form, coupled DAEs, method of lines, upwind scheme using numerical flux function based on Riemann solver, one space variable D03PPF General system of parabolic PDEs, coupled DAEs, method of lines, finite differences, remeshing, one space variable D03PRF General system of first-order PDEs, coupled DAEs, method of lines, Keller box discretisation, remeshing, one space variable D03PSF General system of convection-diffusion PDEs with source terms in conservative form, coupled DAEs, method of lines, upwind scheme using numerical flux function based on Riemann solver, remeshing, one space variable D03PYF PDEs, spatial interpolation with D03PDF or D03PJF D03PZF PDEs, spatial interpolation with D03PCF, D03PEF, D03PFF, D03PHF, D03PKF, D03PLF, D03PPF, D03PRF or D03PSF D03RAF General system of second-order PDEs, method of lines, finite differences, remeshing, two space variables, rectangular region D03RBF General system of second-order PDEs, method of lines, finite differences, remeshing, two space variables, rectilinear region
{"url":"http://www.nag.com/numeric/fl/manual20/html/indexes/kwic/pdes.html","timestamp":"2014-04-20T15:13:24Z","content_type":null,"content_length":"10387","record_id":"<urn:uuid:1590266e-d735-44bc-986a-c280f165f74a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve the following ODE ..#2

Solve the following equation:

I did not get it .. not separable, not homogeneous, not exact, not linear, not Bernoulli. So I tried to use the method of determining an integrating factor, but this failed with me:

$\dfrac{1}{N} \left( \dfrac{\partial M}{\partial y} - \dfrac{ \partial N }{ \partial x} \right)$ is not a function of $x$ only ..

$\dfrac{1}{M} \left( \dfrac{\partial M}{\partial y} - \dfrac{ \partial N }{ \partial x} \right)$ is not a function of $y$ only ..

any ideas??

It has an integrating factor. Compute

$\dfrac{1}{M} \left( \dfrac{\partial N}{\partial y} - \dfrac{ \partial M }{ \partial x} \right) = \dfrac{(-2x-4)-(2x+4y+4)}{2xy+2y^2+4y} = \dfrac{-4x-4y-8}{2xy+2y^2+4y} = \dfrac{-2x-2y-4}{xy+y^2+2y} = \dfrac{-2[(x+y)+2]}{y[(x+y)+2]} = -\dfrac{2}{y},$

which is a function of $y$ only. Can you finish up from here?

In my notes & book, it's only:

$\dfrac{1}{N} \left( \dfrac{\partial M}{\partial y} - \dfrac{ \partial N }{ \partial x} \right)$, and it should be a function of $x$ only;

$\dfrac{1}{M} \left( \dfrac{\partial M}{\partial y} - \dfrac{ \partial N }{ \partial x} \right)$, and it should be a function of $y$ only.

There is nothing about:

$\dfrac{1}{M} \left( \dfrac{\partial N}{\partial y} - \dfrac{ \partial M }{ \partial x} \right)$

Does anyone know whether the latter is valid too? If so, can you tell me all the expressions?

Those are the correct expressions:

$\dfrac{1}{N} \left( \dfrac{\partial M}{\partial y} - \dfrac{ \partial N }{ \partial x} \right)$, and it should be a function of $x$ only;

$\dfrac{1}{M} \left( \dfrac{ \partial N }{ \partial x}-\dfrac{\partial M}{\partial y} \right)$, and it should be a function of $y$ only.

Sorry, I had just copied your LaTeX and switched the M and N, and forgot to switch the partial dy and dx when correcting the error. The rest of the work is correct though.
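The integrating-factor claim can be verified numerically. The original equation was posted as an image and is not reproduced in the thread, so the coefficient functions below are a hypothetical reconstruction read off from the partial derivatives that do appear; the finite-difference check confirms that $(N_x - M_y)/M$ depends on $y$ alone and that multiplying through by $\mu = y^{-2}$ makes the equation exact.

```python
# Hypothetical reconstruction: the thread's partials are consistent with
#   M = 2xy + 2y^2 + 4y   and   N = -x^2 - 4x  (up to a function of y).
def M(x, y): return 2*x*y + 2*y**2 + 4*y
def N(x, y): return -x**2 - 4*x

def d(f, var, x, y, h=1e-6):
    """Central finite-difference partial derivative of f(x, y)."""
    return ((f(x + h, y) - f(x - h, y)) / (2*h) if var == 'x'
            else (f(x, y + h) - f(x, y - h)) / (2*h))

x0, y0 = 1.3, 0.7
ratio = (d(N, 'x', x0, y0) - d(M, 'y', x0, y0)) / M(x0, y0)   # should be -2/y

def mu(x, y): return y**-2   # integrating factor exp(int -2/y dy) = 1/y^2

# After multiplying by mu, d(mu*M)/dy == d(mu*N)/dx, i.e. the equation is exact:
gap = (d(lambda x, y: mu(x, y) * M(x, y), 'y', x0, y0)
       - d(lambda x, y: mu(x, y) * N(x, y), 'x', x0, y0))
```

Because the ratio comes out as $-2/y$ regardless of $x$, the standard formula gives the integrating factor $\mu(y) = e^{\int -2/y\,dy} = y^{-2}$.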
{"url":"http://mathhelpforum.com/differential-equations/158624-solve-following-ode-2-a.html","timestamp":"2014-04-21T10:44:51Z","content_type":null,"content_length":"47641","record_id":"<urn:uuid:5c9732bf-19ac-491d-958b-a7b0ac6bfa3f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
In most statistical approaches to pattern recognition in remote sensing, it is assumed that the probability density function of each pattern class can be approximated by a Gaussian probability density function. However, this assumption is not always appropriate in practice. The exact shape of a class probability density function is given by the original histogram, and if the shape of the histogram differs greatly from the Gaussian function, the classification results may include large errors. There is therefore no need to insist on the Gaussian probability density function as the only representation of class histograms. In other words, if there are other functions that can approximate the original histograms more accurately than the Gaussian function can, we would do better to adopt one of those functions as the representation of a pattern class histogram. From this point of view, a probability density function was constructed by adding another parameter to the Gaussian function, so that it can approximate histograms more flexibly while still including the Gaussian function itself as a special case. The expanded function used here is a non-symmetric Gaussian function that has two independent standard deviations, one for each side of the mode, so that it can approximate the asymmetry of class histograms.

In this paper some characteristics of the non-symmetric Gaussian probability density function were studied. The fit to the original histogram was then examined by a chi-square test and compared with that of the conventional symmetric Gaussian function. The comparison between the symmetric and non-symmetric functions was also carried out on the results of a test run.
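The "expanded" density described in the abstract is closely related to what is now often called the split (two-piece) normal distribution. The paper's exact parameterization is not reproduced here, so the sketch below uses one common normalization; it keeps the density continuous at the mode, integrates to one, and reduces to the ordinary Gaussian when the two standard deviations coincide.

```python
import math

def split_normal_pdf(x, mode, s1, s2):
    """Two-piece Gaussian: standard deviation s1 left of the mode, s2 right of it.

    A single shared normalizing constant keeps the density continuous at
    the mode and makes it integrate to one over the real line.
    """
    c = 2.0 / (math.sqrt(2.0 * math.pi) * (s1 + s2))
    s = s1 if x < mode else s2
    return c * math.exp(-((x - mode) ** 2) / (2.0 * s * s))

# With s1 == s2 this is the usual normal density at its mode, 1/sqrt(2*pi):
symmetric = split_normal_pdf(0.0, 0.0, 1.0, 1.0)
```

With s2 larger than s1 the density has a longer right tail, which is the kind of histogram asymmetry the abstract is concerned with.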
{"url":"http://docs.lib.purdue.edu/lars_symp/477/","timestamp":"2014-04-20T09:03:36Z","content_type":null,"content_length":"21310","record_id":"<urn:uuid:edb857e4-16b9-4232-822f-c3dc9caed412>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
RPI Mathematical Sciences Research: Approximation Theory

Approximation Theory

Approximation theory is a branch of mathematics that strives to understand the fundamental limits in optimally representing different signal types. Signals here may mean:

● A database of digital audio signals.
● A collection of digital mammograms.
● Solutions of a class of integral equations.
● Triangulated compact surfaces.

Researchers typically model these signals mathematically based on their intrinsic smoothness or oscillatory characteristics.

Approximation Theory at Rensselaer

Those studying approximation theory analyze and design various multiresolution techniques that have provable, optimal properties for these models. Such optimal representations are key ingredients in successful data compression, estimation, and computer-aided geometric design. Researchers use a range of tools, including:

● Mathematical analysis (Littlewood-Paley theory).
● Fast numerical algorithms.
● Information theory.
● Algebraic and differential geometry.
● Spline and subdivision theory.
● Modern wavelet theory.
● Harmonic analysis.

Current Projects

Projects include the design and analysis of various multiresolution techniques.

Faculty Researcher

Harry McLaughlin
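As a generic illustration of the multiresolution idea mentioned above (this is not code from the research group), one level of the Haar wavelet transform splits a signal into a coarse average part and a detail part; discarding small detail coefficients is the germ of wavelet-based compression.

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages and differences."""
    assert len(signal) % 2 == 0
    coarse = [(signal[2*i] + signal[2*i + 1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i + 1]) / 2 for i in range(len(signal) // 2)]
    return coarse, detail

def haar_unstep(coarse, detail):
    """Exact inverse of haar_step."""
    out = []
    for c, d in zip(coarse, detail):
        out.extend([c + d, c - d])
    return out

sig = [4.0, 6.0, 10.0, 12.0]
coarse, detail = haar_step(sig)   # coarse = [5.0, 11.0], detail = [-1.0, -1.0]
```

Applying `haar_step` recursively to the coarse part yields the multilevel decomposition used in practice.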
{"url":"http://www.rpi.edu/dept/math/ms_research/approximation.html","timestamp":"2014-04-20T07:08:27Z","content_type":null,"content_length":"10373","record_id":"<urn:uuid:6919f9cd-c184-4bf0-bc32-f1155fd1517e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
West Hollywood Algebra Tutor

Find a West Hollywood Algebra Tutor

...I don't want to rub it in anybody's face or anything, but not only did I get a perfect SAT score AND get nominated as a Presidential Scholar, but I've also coached hundreds of students to an average point increase of 463 points! You can find students of mine on the campuses of every Ivy League univ...
26 Subjects: including algebra 1, algebra 2, reading, English

...My services are particularly valuable for those of you who need tutoring in more than one subject. If you want success, your search stops here, with me. See what those who have tried me have to say...
26 Subjects: including algebra 2, algebra 1, reading, chemistry

Hi, my name is Saba. I graduated from UCLA with a BA in English in 2012. I am working towards receiving my master's in Psychology with an emphasis in School Counseling at Phillips Graduate Institute.
34 Subjects: including algebra 2, algebra 1, English, reading

...The subjects that I teach are mostly math and science. My hobbies and interests include foreign languages, music (piano/guitar), body building, nutrition and overall fitness. I love to teach and I receive the greatest satisfaction when you receive the results that you want.
15 Subjects: including algebra 2, algebra 1, calculus, chemistry

...I enjoy not only the mechanics of solving math problems, but also the philosophy of mathematics and how math applies to physics and engineering. My passion for mathematics is contagious, and I'm thrilled when my former students further pursue mathematics in education or profession. In addition to...
12 Subjects: including algebra 2, algebra 1, calculus, geometry
{"url":"http://www.purplemath.com/west_hollywood_algebra_tutors.php","timestamp":"2014-04-19T09:36:22Z","content_type":null,"content_length":"23900","record_id":"<urn:uuid:71a4aca8-e571-4adb-95fa-9fb714b0e362>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
We are surrounded by great engineering architectures and mechanical devices, which are at rest in the frame of reference of Earth. A large part of engineering creations are static objects. On the other hand, we also seek equilibrium of moving objects, like that of a floating ship, an airplane cruising at high speed, and other such moving mechanical devices. In both cases, static or dynamic, external forces and torques are zero. An equilibrium in motion is said to be a "dynamic equilibrium". Similarly, an equilibrium at rest is said to be a "static equilibrium". From this, it is clear that static equilibrium requires additional conditions to be fulfilled: the velocity of the center of mass and the angular velocity must both be zero,

⇒ v_C = 0

⇒ ω = 0
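Both kinds of equilibrium share the balance of external forces and torques. A minimal numerical check of that shared balance, for a hypothetical uniform beam resting on supports at its two ends (all numbers invented):

```python
# Uniform beam of weight W and length L on supports at its two ends.
# Force balance:                    R1 + R2 - W = 0
# Torque balance about left end:    R2*L - W*(L/2) = 0
W, L = 100.0, 2.0
R2 = W * (L / 2) / L        # from the torque equation -> W/2
R1 = W - R2                 # from the force equation  -> W/2

net_force = R1 + R2 - W
net_torque = R2 * L - W * (L / 2)
```

Both residuals vanish, so the beam satisfies the equilibrium conditions; since it is also at rest (v_C = 0 and ω = 0), this equilibrium is static rather than dynamic.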
{"url":"http://cnx.org/content/m14870/1.1/","timestamp":"2014-04-18T18:53:43Z","content_type":null,"content_length":"86975","record_id":"<urn:uuid:a272a576-26a5-45e4-b56a-295a2fae3f93>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Hoffman Estates Science Tutor

Find a Hoffman Estates Science Tutor

I graduated, with highest distinction, from the University of Illinois at Chicago, where I received a Bachelor of Arts with a focus on the teaching of English literature. I am certified to teach English in the state of Illinois and have experience with Chicago Public Schools. I have additional tut...
39 Subjects: including ACT Science, writing, philosophy, reading

...I have completed additional extensive coursework related to business, including Financial Accounting, additional legal and finance classes, Intermediate Microeconomics, and forecasting econometrics. I have learned discounted cash flow and comparative analysis valuation methods through additional...
57 Subjects: including biostatistics, ESL/ESOL, philosophy, GED

...In addition to teaching my kids at home, I have completed curriculum projects for the Henry Ford learning institute and developed learning curriculum for elementary math and science as well as test prep for math and science students. I love to teach and develop a way to meet each student's needs wh...
19 Subjects: including biology, chemistry, prealgebra, reading

...I'm currently working on my B.S. in Biology. I've had a number of advanced biology classes, including two semesters of anatomy. I have worked in the medical field for 6 years, which allows me to relate various topics to the real world.
33 Subjects: including anatomy, elementary (k-6th), Microsoft Excel, psychology

...As someone who has immersed himself in this world, I truly believe that if a student thinks about the topic in the proper way, then they will most certainly understand it in a new way. Thank you very much for your consideration, and I hope to hear from you soon! In college, I supplemented my Hist...
12 Subjects: including archaeology, reading, writing, grammar
{"url":"http://www.purplemath.com/hoffman_estates_il_science_tutors.php","timestamp":"2014-04-17T11:08:29Z","content_type":null,"content_length":"24236","record_id":"<urn:uuid:78362ab2-d424-41ca-b796-8f3a0c3439b3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Mrs Trimmer's String Copyright © University of Cambridge. All rights reserved. This was an engaging problem. It encouraged you to think about factors and multiples and showed how these principles may be applied to real-life situations! It also prompted you to think about different kinds of shapes. Mrs. Heffernan's students from Paton School, Shrewsbury, Massachusetts, USA targeted this problem using the "chunking" method of division. There are twenty four students, and one group of three students makes a triangle. So, to find out the number of triangles they can make, we need to know how many groups of three there are. Caitlin, from Deer Hill School worked out the correct answer, also noting that no-one should be left out. Nadia from Wimbledon High School and Ryan from Rhu Primary both sent excellent solutions to this problem. Nadia says: Well for the triangle one the answer is eight because there are $24$ children and you need three children for each triangle so you divide $24$ by $3$ and you get $8$. Then for the question that asks you what different shapes could be made the answer is a square, a rectangle, a diamond and many more. It might be better not to use the word "diamond" in maths - can you think of the mathematical names of shapes that we might think look like a diamond? Ryan suggests that parallelograms are possible too. What other four-sided shapes could we make? Carolyn, from Pigeon Mountain Primary School suggested some four-sided shapes, including parallelograms, and rhombuses. Priya and Ashleigh, from Penrhos College also mentioned a trapezium for another four-sided shape. Mrs. Heffernan's students pointed out that a rectangle is actually a type of parallelogram; it is a special version, because all of the angles are $90^{\circ}$. Similarly a square is a special type of rhombus, with right angles. The problem then asks about the numbers of other shapes that the class can make. Again, Mrs. 
Heffernan's class used the same method ("chunking") to find the answers. Nadia, and Ryan also submitted correct solutions. Ryan explains: They can make six four-sided shapes ($24\div 4=6$) , four hexagons ($24\div 6=4$) and three octagons ($24\div 8=3$). To make five pentagons Mrs. Trimmer can help. Like Ryan, Nadia also thought that Mrs Trimmer could join in - that's a good idea. Sarah from Greenlands Secondary School also suggested this, as did Matthew from Stambridge. Caitlin, and Margaret from Deer Hill School, and Ryan and Trystan also sent in correct solutions, with great reasoning. Kieran from Newman Primary School, and Ebony from Gordon Primary suggested that one child could hold two corners of a pentagon, which would be another good way around the difficulty. Priya and Ashleigh suggested that four children could be left out so that four pentagons could be made. Can you think of any other ways that the class could make shapes with the string? What if some people (or even all of them!) held two pieces of string, one in each hand? What shapes could they make? How many? Well done, and thank you for your solutions.
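The "chunking" divisions above all come down to dividing 24 by the number of corners each shape needs; a short check of the shapes discussed in the solutions:

```python
children = 24
corners = {"triangle": 3, "quadrilateral": 4, "pentagon": 5,
           "hexagon": 6, "octagon": 8}

results = {}
for shape, k in corners.items():
    shapes, left_over = divmod(children, k)   # whole shapes, children left out
    results[shape] = (shapes, left_over)
```

Only the pentagon leaves a remainder (four pentagons with four children left over), which is why the solutions suggest Mrs Trimmer joining in, or a child holding two corners at once.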
{"url":"http://nrich.maths.org/2907/solution?nomenu=1","timestamp":"2014-04-17T07:09:34Z","content_type":null,"content_length":"6235","record_id":"<urn:uuid:2201d1bb-58f2-4c76-9a63-27bd19f93f41>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Assignment Models of the Distribution of Earnings

by Michael Sattinger

University at Albany, State University of New York

Journal of Economic Literature

Updated: October 25th, 2012

The author is indebted to Ricardo Barros, James Heckman, Kajal Lahiri, David Lam, Thad Mirer, Lawrence Raffalovich, T. Paul Schultz, and anonymous referees for comments on earlier drafts. Errors and opinions are the responsibility of the author.

I. Introduction

Relative wages are changing. Over the last decade or so, earnings of high school graduates have declined relative to college graduates, and earnings of young adults have declined relative to older adults; as a result, the distribution of earnings has become more unequal.[1] These relative changes are hard to explain in the context of models where the return to education is fixed by the long-run supply behavior of individuals, or in which the productivity and earnings of individuals are the result of their education and experience, independent of the availability of jobs in the economy.

[1] Frank Levy and Richard J. Murnane (1992) analyze recent trends in the distribution of earnings. Using Current Population Survey data, they show that earnings of workers with 16 years schooling have increased relative to earnings of workers with 12 years schooling between 1979 and 1987, and that (for workers with 12 years schooling) earnings of workers aged 45-54 have increased relative to workers aged 25-34 during the same time period (Table 7). Levy and Murnane (1992, Table 4) also report results of several authors showing that earnings inequality for all earners and for males increased between 1979 and 1987, using various inequality measures.
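The "various inequality measures" mentioned in the footnote include, for example, the Gini coefficient. A minimal sketch of one standard formula (the sample earnings below are invented):

```python
def gini(values):
    """Gini coefficient via the sorted-values formula
    G = 2*sum(i * x_i) / (n * sum(x)) - (n + 1)/n, with i = 1..n over sorted x."""
    v = sorted(values)
    n = len(v)
    weighted = sum((i + 1) * x for i, x in enumerate(v))
    return 2.0 * weighted / (n * sum(v)) - (n + 1) / n

perfectly_equal = gini([10, 10, 10, 10])   # 0.0: everyone earns the same
skewed = gini([1, 2, 3, 34])               # 0.625: one person earns most
```

A rising Gini coefficient over time is one way the increase in earnings inequality between 1979 and 1987 is summarized in that literature.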
While changes in the industrial and occupational mix of the economy are routinely incorporated into ad hoc explanations of shifts in the distribution of earnings, they are absent from most formal models of the distribution. This paper reviews models that explain the distribution of earnings as arising from the market economy's solution to the problem of assigning workers to jobs. The amount a worker can contribute to production typically depends on which job the worker performs. This occurs because jobs require many different tasks, and human performances at those tasks are extremely diverse; because industrial sectors use different technologies that rely on different combinations of human skills; or because jobs vary in the amounts of resources combined with labor. The economy's output as a whole then depends on how workers are assigned to jobs, i.e., which worker performs which job.

The existence of an assignment problem implies that workers face a choice in their job or sector. Their earnings are not determined by their performance in one sector alone: if they do poorly at one job or sector, they can choose another. Choice of job or sector creates an intermediate step between individuals' characteristics and their earnings. The observed relationship is constructed from worker choices.

Income or utility maximization guides workers to choose particular jobs over others. Higher wages for workers with some characteristics then play an allocative role in the economy rather than simply being rewards for the possession of particular characteristics. Workers found in a given sector are not randomly drawn from the population as a whole. Instead, workers' locations in sectors or jobs are based on the criterion that their choices maximize their income or utility.

The models discussed here are characterized by the presence of an assignment problem, together with the consequences of worker choice and nonrandom selection.
Despite outward differences, the models discussed in Section III have in common that they specify the jobs or sectors available to workers, the relevant differences among workers, the technology relating worker and job characteristics to output, and the mechanism that assigns workers to jobs. These models generally proceed by first describing the assignment problem present in the economy. Then one can derive the wage differentials that are consistent with an equilibrium assignment of workers to jobs. The equilibrium wage differentials are those that yield equality between amounts of labor supplied and demanded in each submarket of labor.

By providing a general equilibrium framework for studying inequality, assignment models reveal a rigorous route by which demand factors influence inequality and correctly specify the relation between the distribution of individual characteristics and inequality. The earnings function is no longer a directly observable relationship but instead is the equilibrium outcome to the solution of the assignment problem.

Explicit consideration of the economy's assignment problem provides a unity to seemingly separate topics. Wage differentials, occupational choice, organization of hierarchies, unequal skill prices and self-selection bias are topics that have been studied by themselves but which arise as consequences of the assignment problem. The existence of many labor market phenomena, such as search, mobility, hierarchy tournaments, unemployment, and specialized labor markets, can be motivated as labor market responses to the problem of assigning workers to jobs.

Although not generally recognized as a subcategory of income distribution theories, assignment models have a fairly long history. They can be said to begin with Jan Tinbergen's model (1951) with continuous distributions of workers and jobs and A. D. Roy's sectoral model (1951) with workers choosing between two or more occupations.
These models differ in a number of ways but share the feature that the distribution of earnings can be explained through the assignment problem. Empirical modeling of the distribution of earnings requires the econometric specification of worker alternatives, even though only the chosen sector or job is observed. This generates a set of econometric problems that have been addressed in applications of Roy's and Tinbergen's models.

Probably most economists would agree to the basic premises underlying assignment models, that both supply and demand are relevant and that individual performances vary from job to job. But there may be some disagreement regarding the implications of those premises for the conduct of research on the distribution of earnings. This survey emphasizes the implications of assignment models for the earnings functions, the human capital approach and the decomposition of inequality.

A. Dog Bone Economy

Many distribution theories achieve results by ignoring or trivializing the assignment problem. This leads to misinterpretation of empirical relations such as the earnings function. As an example of what can go wrong when the assignment problem is ignored, consider the following dog bone economy. The agents in this economy are n dogs kept in a pen. These dogs vary by weight, teeth, muscles, and tenacity, all observable. At the beginning of the day, a dump truck arrives with n bones, differing in size. The bones are dumped in a neighboring area. The Hicksian Day begins when the gate opens and the dogs go after the bones. At this point a nontatonnement allocation process begins in which each dog can only hold onto one bone, losing the bone to any dog able to take it away. Equilibrium arises when each dog has a bone that is not wanted by any dog that could take it away, and when each dog prefers its own bone to any bone that it could take away from another dog.
Hierarchical ordering of dogs and bones would eliminate cycling and guarantee the existence of equilibrium, but this assumption is unnecessary for the story.

After the dust has settled, an economist appears on the scene and collects data on the dogs and their bones. Bones can be rated by their value. The economist then runs a regression of the bone values as a function of dog characteristics and finds a strong relationship (let us say the R² is 0.80). This relationship is an earnings function, with the value of bones as earnings. Pleased with the results, the economist uses the estimated earnings function to predict the distribution of earnings the following day, when the dump truck will bring a new load of bones. For each dog, the economist can predict the dog's bone value on the basis of the dog's characteristics (weight, teeth, muscles, tenacity). From the distribution of dog characteristics, the economist tries to infer the distribution of bone values.
We can explain why one dog got one of the bones and why another dog got a different bone but we cannot draw any conclusions about the causal or technological relation between dog characteristics and bones. The most important feature of this story is the illusion created by the success of the regression of bone values against dog characteristics. The nature of the earnings function is not apparent, and the influence of the distributions of bones and dogs on the earnings function is in- visible. The dog bone economy presents an extreme case in that the bones are exogenously determined by what the dump truck brings rather than on any characteristics of the dogs themselves. Yet the results of this economy conform to the economist's prior beliefs about the determinants of the bone distribution. The bone "earnings function" is consis- tent with a model in which a dog's charac- teristics determine the bone size it "earns." The existence of an assignment problem lying behind the empirically ob- served relationship is completely invisi- ble. This invisibility provides an example of the fallacy of composition. In thinking about the distribution of earnings, it would seem natural to begin with the explanation of a single individual's earn- ings. Given the economy, including the rewards for education, training, and other characteristics, this individual's earnings will depend only on his or her own characteristics. With the observed relationship between the individual's earnings and his or her characteristics, it would be possible to predict a change in earnings from any change in the indi- vidual's characteris tics. In aggregating individual earnings to get the distribu- tion of earnings, however, the economy, including returns to education and train- ing, cannot be taken as given. The econo- my's rewards for various characteristics are endogenously determined and must themselves be explained by any distribu- tion theory. 
In particular, the consequences for the earnings distribution of a change in the distribution of worker characteristics cannot in general be pre- dicted from the change for a single worker. What constitutes a theory of the individual's earnings cannot automatically be extended to a theory of the distri- bution of earnings. A first requirement of an earnings dis- tribution theory is therefore to avoid the fallacy of composition involved in going from the earnings function to the distri- bution of earnings, or else to specify the conditions under which it is legitimate to do so. It is unnecessary to use an as- signment model to avoid the fallacy of composition. But by specifying the deter- minants of the earnings function, assign- ment models accurately represent the in- teraction between supply and demand elements in shaping the distribution of earnings. B. Relation to Other Approaches Assignment models are closely related to other approaches to the study of inequality. They are consistent with structuralist theories in sociology, in which wage structures influence the wages associated with particular jobs (Mark Granovetter 1981; Arne L. Kalleberg and Ivar Berg 1987). As in assign- ment models, earnings depend on the characteristics of both the worker and the job. However, the structuralist theories do not assume competitive access to jobs. A major question then concerns how workers are matched to jobs (Aage B. S~rensenand Kalleberg 1981). Noncom- petitive access to jobs, for example through rationing or segmentation, pro- vides a route through which institutional structures can influence the distribution of earnings. Lester Thurow (1975) devel- ops a similar model in which the wage rate is determined mainly by the job. This leads to an assignment in which workers queue for jobs based on train- ability. Sherwin Rosen (1974) develops a model of the determination of implicit prices of product characteristics. 
The re- sulting relationship between price and product characteristics, called an hedonic price function, is an envelope of buyer and seller offer curves. As in assignment models, this price function assigns con- sumers to producers in a market with heterogeneous products. The earnings function generated by the assignment problem (e.g., in Tinbergen's model) is essentially an hedonic price function in a labor market context-an hedonic wage function. Studies of hedonic wage func- tions are mainly directed towards esti- mating compensating wage differentials for job characteristics such as risk (Robert E. B. Lucas 197713; Robert S. Smith 1979; Rosen 1986a). Ronald Ehrenberg and Smith (1991) provide an accessible exposition of how compensating wage dif- ferentials for risk (pp. 266-74) and educa- tion (pp. 314-18) are determined using an hedonic model. An alternative expression for assigning workers to jobs is matching. Boyan Jova- novic (1979) develops a model in which the output from a specific worker-job match is distributed as a random variable that is initially unknown to the employer or worker. The model is used to explain turnover as information about productiv- ity is revealed during job tenure. The matching literature is primarily concerned with ex post differences in the outputs obtained from worker-firm matches, whereas the assignment models discussed in this article emphasize ex ante differences among workers and firms. As productivities are not explicitly related to ex ante characteristics of work- ers or jobs in matching models, the ap- proach is less useful in explaining the dis- tribution of earnings, although it has been applied to the question of wage growth over a worker's career (Jacob Mincer and Jovanovic 1981), turnover, and unemployment (Jovanovic 1984) and returns to on-the-job training (John M. Barron, Dan A. Black, and Mark A. Loewenstein 1989). 
Arguments about mismatches in the labor market, either with respect to loca- tion or skills, are based on simplified forms of assignment models (John D. Kasarda 1988; Levy and Murnane 1992, Section VII. B, review mismatch models). Technological change, in particular the advance of information-based industries, has shifted the skill requirements of jobs. At the same time, entering workers are failing to acquire these skills, leading to a mismatch between supplies and de- mands. The mismatch arguments implic- itly regard skill requirements and sup- plies as unresponsive to economic incentives, at least in the short run, so that planning and intervention are neces- sary. In assignment models, these sup- plies and demands are not rigidly deter- mined but respond to wage differentials. Steeper wage differentials would then re- solve the mismatches that have been ob- served and forecast. Assignment models tend to be highly abstract and mathematical, often using simplifying and unrealistic assumptions about workers and jobs to achieve analyt- ical results. They have not so far gener- ated a set of easily identifiable questions which can be answered by accessible em- pirical procedures. Further, they may di- vert attention from issues of household composition, income transfers, discrimi- nation and social problems that have a more direct impact on poverty and in- equality. However, they point the way to the steps necessary to incorporate de- mand and job choice in empirical models of earnings. The next section discusses the extent of the assignment problem in the econ- omy and the way that decisions of work- ers and employers generate assignment patterns. Section I11 presents three basic types of assignment models, depending on whether characteristics of workers and jobs are continuous or discrete. These types are the linear programming opti- mal assignment problem, the differential rents model, and Roy's sectoral model. 
Section IV compares these models with regard to the choices available to workers, wage determination, self-selection, and comparative advantage. Section V considers implications of assignment models for the decompositions that are used to study the distribution of earnings. These decompositions include analyzing the distribution by industrial or occupational sector, use of an earnings function, and human capital models. The conclusions in Section VI review the relations between assignment problems, self-selection, and comparative advantage. The section indicates the most important extensions of assignment models and explanations for changes in wage differentials as well as relevant research questions.

II. The Economy's Assignment Problem

A. Existence

What would the economy be like without an assignment problem? With only a single, observable skill, a worker would be able to get the same wage no matter which job he or she took. No specific training, education, diversity in skills or preferences would limit in any way the jobs that one would seek. Finding a job would be reduced to locating a firm with a vacancy.2 Firms would be indifferent as to which workers they employed. Hiring would be reduced to the trivial problem of taking the first worker that came along. Unemployment would only arise if the number of workers exceeded the number of jobs. Wage differences among workers could arise, but all labor could be expressed in terms of the amount of an average or standard worker it was equivalent to. Professors at universities could be replaced by a sufficient number of high school graduates, presumably all lecturing at the same time. This very article could have been written by anybody, perhaps in less time.

2 Job search by itself does not imply that an assignment problem exists. It is conceivable that there is only one skill, with marginal products proportional to the skill, but that this skill is imperfectly observed. Workers would search for employers who rated their skills more highly, and firms would search for workers whose skills were underrated. Changes in employment would affect the distribution of earnings but (for a given level of unemployment) not output.

But of course the economy does have an assignment problem. The size and importance of the assignment problem can be seen from the resources expended to solve it. Unemployment imposes large costs through forgone production, nonpecuniary costs, and uncertain incomes. Much of this unemployment arises from workers seeking jobs better than those readily available at the lowest wages, at least when depression conditions are absent. Firms spend substantial amounts through personnel departments in advertising positions and interviewing candidates. After employment, firms collect information about workers to facilitate later assignment within the firm through internal labor markets. Quits and layoffs by agents seeking better matches impose losses of specific training. Expenditures on screening and signals may arise because of the advantage of some assignments over others; they may also interfere with efficient assignments. The formation of specialized labor markets may arise in order to reduce the costs of assignment. Occupational segregation and segmentation, by distorting the assignment, impose efficiency losses on the economy as well as inequities in the treatment of individuals and groups.

Much empirical work supports the existence of an assignment problem. Joop Hartog (1985, 1986a, 1986b, 1988) estimates earning functions that include both individual and job characteristics, using data from the Netherlands and elsewhere. Hartog compares three models. Some versions of human capital models suggest that only individual characteristics should affect earnings; job characteristics are the major determinants in segmented labor market theories.
In assignment models, both sets of variables would be significant (in the absence of an exact correspondence between individual and job characteristics). Hartog finds that both individual and job characteristics affect earnings. Further, there are significant interactions between them, supporting the existence of an assignment problem.3 Hartog (1977, 1980, 1981b) and R. E. B. Lucas (1974) identify significant ways in which jobs differ. Sattinger (1978) establishes the existence of comparative advantage among individuals using data on mechanical aptitude tests taken by secondary school students. The ratios of performance of pairs of individuals are computed for four tasks. For each pair, these ratios are then ordered from highest to lowest. In the absence of any systematic comparative advantage, one ordering will be as likely as any other. Using a chi-square goodness-of-fit test, the hypothesis of no systematic comparative advantage is rejected. Heckman and Guilherme Sedlacek (1985), in estimating extensions of Roy's model, show that differentials for education and experience are larger in manufacturing than in nonmanufacturing. Also, in a later paper (1990), they reject a simpler model with no assignment problem in which worker earnings would be the same in all market sectors.

Unequal wage structures among economic sectors provide indirect evidence of an assignment problem. Heckman and Jose Scheinkman (1987) establish that worker characteristics receive unequal rewards in different sectors of the economy, so that workers face a nontrivial choice problem. William T. Dickens and Lawrence F. Katz (1987) and Alan B. Krueger and Lawrence H. Summers

3 As a specific example of interactions, Hartog (1985) estimates an earnings function with dummy variables for each combination of job level (or required education) and worker education. He tests and rejects a specification in which education and job level contribute independently to earnings.
Educational differentials therefore depend upon the job.

(1988) also conclude that wage structures vary among industrial sectors.

Like other major allocative problems of the economy (such as what, how, and for whom), the assignment problem is not apparent to individual agents who are simply solving their own utility or profit maximizing problems. Employed and unemployed workers in an economy engage in job search, eliciting job offers until they find a satisfactory one. Employers typically interview a number of candidates for a job, seeking the most appropriate candidate. But out of these activities arises an assignment of workers to jobs. An assignment of workers to jobs can be defined as a listing of each worker together with the job he or she performs.4 The next section examines how the decisions of individual agents solve the assignment problem facing the economy.

B. Comparative Advantage

Now consider the reasons why some assignments occur instead of others. One reason is comparative advantage.5 Consider a fixed-proportions technology in which employers need to have a fixed set of tasks performed to yield a given level of production. Suppose workers do not have preferences for some tasks over others. Each job is associated with a particular task. Let aij be the number of

4 The analysis of this paper takes jobs as given. Rosen (1978) considers the subproblem of how employers arrange into jobs the tasks that they need performed.

5 Application of comparative advantage to the analysis of labor markets is commonly attributed to Roy. Roy does not analyze his model in terms of comparative advantage but comments (1951, p. 145), "It should be apparent that the analysis attempted in this article bears some sort of affinity to the theory of comparative advantages. A situation has been examined in which individuals' comparative advantages in various activities differ widely."
Sattinger (1975) applies comparative advantage to the study of the distribution of earnings and Rosen (1978) develops a general analysis of comparative advantage in labor markets.

times that worker i can perform job j's task per period. If

(1) a11/a21 > a12/a22,

then worker 1 is said to have a comparative advantage at job 1 and worker 2 has a comparative advantage at job 2 (note that if (1) holds, then a22/a12 > a21/a11). Comparative advantage determines the assignment in a market system with this technology as follows. Suppose that in equilibrium the wage rate prevailing for worker i is wi. The employer offering job j will seek to minimize the cost of getting the job's tasks performed, taking the wage rate as given.6 Using worker i, the cost would be wi/aij. Employer j will prefer to hire worker 1 instead of 2 whenever w2/a2j > w1/a1j, or

(2) w1/w2 < a1j/a2j.

From (1), it follows that we would never observe employer 2 hiring worker 1 when employer 1 hires worker 2: it is impossible for w1/w2 to be simultaneously greater than a11/a21 and less than a12/a22. Depending on the wage rates, it is possible that both employers would prefer worker 1, or that both prefer worker 2. But the only assignment in which the employers prefer different workers is when employer 1 prefers worker 1 and employer 2 prefers worker 2. With this technology, the equilibrium assignment must be consistent with the comparative advantage relations as given by (1).

6 Alternatively, fees could be offered for the performance of tasks, and workers could maximize their incomes. The resulting equilibrium wage rates for workers would still satisfy (2) and (3). In models in which an inexact assignment occurs (because of imperfect information) or in disequilibrium, the wage for a worker may depend on both the worker and job characteristics.

This example also shows how knowledge of the equilibrium assignment can explain wage differences. Suppose in equilibrium worker 1 is observed in job 1 while worker 2 is in job 2.
Then the ratio of wages for the two workers must lie between the workers' trade-offs at the first job (i.e., the ratio of their performances) and their trade-offs at the second job:

(3) a12/a22 ≤ w1/w2 ≤ a11/a21.

In this way, ratios of performances in the two jobs set limits within which the wage differential must fall.

The term "comparative advantage" is used in different ways by various authors. As defined using (1), comparative advantage arises whenever ratios of outputs for two workers are not identically equal in every job. Comparative advantage then establishes the existence of an assignment problem, but one would need to know the direction of the inequality in (1) to determine which particular assignment comes about. An alternative relation is absolute advantage, which arises when a worker is better at a job than other workers. In terms of the outputs in (1), worker 1 has an absolute advantage at job j compared to worker 2 if a1j > a2j. If each worker has an absolute advantage at his or her own job, compared to any other worker and that worker's job, then comparative advantage must also be present in the sense defined in (1).7

7 While in simple economies it is possible that absolute advantage determines assignment (with each worker employed at the job at which he or she is best), this becomes unreasonable in large economies. For example, if there are one million workers, there would need to be at least one million different jobs, and in each job a worker would need to be better than nearly a million other workers. Even so, only for a very special set of wages would each worker choose the job at which he or she was best. However, Glenn MacDonald and James T. Markusen (1985) describe a technology with two activities in which absolute advantage (in the form of absolute skill levels) results in assignments that are not completely determined by comparative advantage, as in the scale of operations effect in the following section.

The
significance of comparative advantage is that a worker can still get a job even though he or she is worse at all jobs than other workers, i.e., even though absolute advantage is absent for that worker. Some economists find it useful to restrict comparative advantage to the case where absolute advantage is absent for some worker; this will be referred to as the standard comparative advantage case.8

C. Scale of Operations Effect

Some economists may believe that comparative advantage is the only production principle underlying the assignment of workers to jobs, but this is incorrect. As a counterexample, consider an economy in which a job is associated with the use of a particular machine that can be used by only one person at a time. Suppose the possible values of output (price times quantity) obtained per hour from the two workers at two jobs are as follows:

              Job 1    Job 2
  Worker 1     $35      $20
  Worker 2     $20      $10

Here, worker 1 has a comparative advantage at the second job and worker 2 has a comparative advantage at the first job, because $35/$20 < $20/$10. However, the maximum value of output, $45, is obtained when worker 1 is employed at job 1 and worker 2 is employed at job 2. In an eight hour day with this assignment, output would be $280 at job 1 and $80 at job 2. Suppose we tried to reallocate labor according to comparative advantage.

8 This restricted case is consistent with the example of comparative advantage worked out by David Ricardo in the context of trade (1951, p. 135). In that example, Portugal has an absolute advantage over England in the production of both wine and cloth. But there are still gains from trade because England has a comparative advantage in the production of cloth. The importance of comparative advantage is that it explains trade even when one country has an absolute advantage at both goods. If each country had an absolute advantage at one good, trade would be obvious.
Suppose we put worker 2 in job 1 for eight hours and worker 1 four hours at job 1 and four hours at job 2. If this were possible, it would yield a net increase of $20 from job 1. But it would require twelve hours of labor in job 1 during an eight hour day, which is ruled out by the assumption that only one worker at a time can be employed at a job. A worker occupies a job (or the machine associated with the job), preventing the reassignments indicated by comparative advantage.

The reason comparative advantage does not indicate the optimal assignment in this case is that earnings from a job are no longer proportional to physical output at the job. With cooperating factors of production (either explicit in the form of a machine or implicit via a scarcity in the jobs available), an opportunity cost for the cooperating factor must be subtracted from the value of output to yield the earnings.9 Ronald H. Tuck (1954, p. 1) describes the resulting problem facing the economy as one of

. . . assigning each individual member of the economy to work of an appropriate level of responsibility, and of doing this in such a way that the best possible use is made of the available human talent and experience.10

9 Some authors determine comparative advantage on the basis of earnings in different jobs or at different educational levels rather than on the basis of physical output. With cooperating factors, this approach is ambiguous.

10 Tuck (1954) explains the distribution of firm sizes in terms of the distribution of productive resources among entrepreneurs. Robert E. Lucas, Jr. (1978) and Walter Oi (1983) also develop theories of the size distribution of firms that involve the assignment of resources to heterogeneous entrepreneurs.

Basically, more resources (in the form of more capital, labor or greater responsibility) are allocated to workers with greater abilities because the resources
In turn, with greater resources, output is more sensitive to the abilities of workers, raising wage differentials for workers with greater abilities. The principle affecting the distribution of earnings has been developed in a num- ber of contexts. Thomas Mayer (1960) and Melvin Reder (1968) use the term "scale of operations effect" in their mod- els of the distribution of earnings (see also discussions by Reder (1969, pp. 219- 23) and Sattinger (1980, pp. 32-35)). Mayer uses the term to describe the po- tential value of output, while Reder uses it for the value of resources under a per- son's control. Rosen (1981) applies the scale of operations effect to the incomes of superstars. George Akerlof (1981) de- velops an analogy between jobs and dam sites to explain why some workers might be unemployable. A productive dam that does not fully utilize a dam site may not be chosen because it prevents more productive dams from being used at the site. The dam site carries an opportunity cost that must be subtracted from a dam's output to determine whether it is suit- able for the site. Stephen J. Spurr (1987) discusses how the scale of operations ef- fect results in larger claims being assigned to lawyers of higher quality. The scale of operations effect is also related to theories of compensation within hierarchies developed by Herbert A. Simon (1957) and Harold Lydall(1959; 1968, pp. 125-29). In a hierarchical model developed by Guillermo A. Calvo and Stanislaw Wellisz (1979), the effect of a supervisor shirking is that workers under the supervisor also shirk. This in- creases the sensitivity of the firm's output to workers' abilities as supervisors and leads a firm to place more able workers at higher levels in the hierarchy. In Calvo and Wellisz' model, the scale of opera- tions effect provides a link between assignment models and efficiency wage models, in which firms pay above market wages in order to influence workers' productivities. 
Differences in efficiency wages can then be explained in terms of differences in the scale of operations rather than differences in costs of monitoring workers.11

11 Rosen (1982), Michael Waldman (1984b), David Grubb (1985) and Peter F. Kostiuk (1990) develop additional models that relate versions of the scale of operations effect to assignment and earnings within hierarchies.

With the scale of operations effect, the wage ratio for the two workers will not lie between the ratios of outputs as in the comparative advantage case because of the presence of opportunity costs from the use of a machine or the filling of a position or job. Consider now how wages are determined for workers in the context of a model in which the cooperating factor is capital, in the form of heterogeneous units called machines. Assume only one worker at a time can be combined with a machine. Let pj be the price of a unit of output from machine j, and let aij be the output produced per period by worker i at machine j. Let wi be the wage rate for worker i. The owner of machine j takes the wage as given and chooses the worker that maximizes the residual pj aij - wi instead of the output values pj aij appearing in the example at the beginning of this section. If the owner of machine 1 is observed to choose worker 1 while the owner of machine 2 chooses worker 2, p1 a11 - w1 ≥ p1 a21 - w2 and p2 a12 - w1 ≤ p2 a22 - w2. Therefore:

(4) p2(a12 - a22) ≤ w1 - w2 ≤ p1(a11 - a21).

The difference in wages must lie between the difference in the value of output produced by the two workers on machine 1, and the corresponding difference on machine 2. The assignment of worker 1 to machine 1 and worker 2 to machine 2 can come about only if p2(a12 - a22) ≤ p1(a11 - a21). If p2(a12 - a22) > p1(a11 - a21), only the opposite assignment could be observed in equilibrium (i.e., worker 1 at machine 2 and worker 2 at machine 1). Alternatively, one could begin by assuming workers choose machines. Let rj be the rental cost for machine j.
Then worker i chooses j to maximize pj aij - rj. Again, if worker 1 is observed to choose machine 1 while worker 2 chooses machine 2, p1 a11 - r1 ≥ p2 a12 - r2 and p1 a21 - r1 ≤ p2 a22 - r2. Therefore:

(5) p1 a21 - p2 a22 ≤ r1 - r2 ≤ p1 a11 - p2 a12.

In this way, differences in wages and rents are determined symmetrically by the problem of assigning workers to jobs.

The technologies leading to the comparative advantage and scale of operations cases are very different. In the comparative advantage case discussed above, the tasks from different jobs are needed in fixed proportions and cannot be substituted for each other, but a particular job could be filled by an indefinite number of workers. Opposite circumstances hold in the scale of operations case. Output from different jobs is simply added together, so that there is perfect substitutability among the outputs of different jobs. However, with only one worker per job, more workers cannot be added to a job to make up for low output levels. Clearly, there are many potential technologies relating worker and job characteristics to aggregate production. In these cases, the optimal assignment would not be determined by simple bilateral comparisons as in (1) and (2).

Along a given wage offer curve, the lowest wage acceptable to a worker, w0i, occurs when the effort requirement equals the effort capability of the worker, i.e., gi = hj. If the effort requirement is higher or lower than gi, the worker must receive a higher wage in order to achieve the same level of utility. Higher values of w0i yield higher wage offer curves and higher levels of utility, so that the worker chooses hj to maximize wi - a(gi - hj)^2. With this assumption regarding preferences, workers with higher effort capabilities will always end up in jobs with higher effort requirements. This type of assumption (in which workers and jobs are matched on the basis of distance between characteristics) is useful in generating hierarchical assignments in other contexts, for example marriage (Gary S.
Becker 1973; Lam 1988). The assumption in Tinbergen's model can be contrasted12 with one in which workers all uniformly prefer jobs with higher values of some characteristic (or else all prefer lower values). For example, in the compensating wage literature, all workers may dislike a particular job feature such as riskiness, noise or distance to work but have different valuations of those characteristics. The unequal valuations lead to an assignment of workers to jobs.

Wage differentials in Tinbergen's model also differ in an important regard from wage differentials in the comparative advantage and scale of operations cases. If the distribution of worker characteristics exactly matches the distribution of job characteristics (so that hj = gi if worker i gets job j), wage differences would be eliminated. Further, if workers end up in jobs with effort requirements below their capabilities, wages will need to be a decreasing function of capabilities in order to induce workers to take the jobs. This result would not arise with comparative advantage or the scale of operations effect as long as the worker characteristic contributes to production.

12 Tinbergen (1956) extends his model to multidimensional worker and job characteristics. He further considers a generalization in which the production side of the economy can be incorporated into the determination of the wage function (1956, pp. 170-71). In this case, wage differentials combine productivity differences and compensating wage differentials. In related work, Tinbergen develops a normative theory of income distribution in which a tax function is found that maximizes social welfare (1970); estimates an empirical model with discrete categories of labor distinguished by educational level (1975a, 1977); and estimates elasticities of substitution among educational levels as a means of explaining educational differentials (1972, 1974, 1975b).

Journal of Economic Literature, Vol. XXXI (June 1993)
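Tinbergen's matching-on-distance assumption can be illustrated with a small numeric sketch. The linear wage schedule w(h) = b·h and all parameter values below are assumptions for illustration, not taken from the text; the point is only that a worker with capability g who maximizes w(h) - a(g - h)^2 chooses an effort requirement that rises with g, so the assignment is hierarchical.

```python
# Sketch of Tinbergen-style sorting: a worker with effort capability g
# chooses a job's effort requirement h to maximize w(h) - a*(g - h)**2.
# Assumption (not from the article): a linear wage schedule w(h) = b*h.

def chosen_requirement(g, a=1.0, b=0.5, grid=None):
    """Return the h on a grid that maximizes utility for capability g."""
    if grid is None:
        grid = [i / 100.0 for i in range(0, 501)]  # h in [0, 5]
    return max(grid, key=lambda h: b * h - a * (g - h) ** 2)

capabilities = [0.5, 1.0, 2.0, 3.0]
choices = [chosen_requirement(g) for g in capabilities]
print(choices)  # [0.75, 1.25, 2.25, 3.25]: increasing in g
# Analytically, h* = g + b/(2a) = g + 0.25 here, so workers with
# higher capabilities always take jobs with higher requirements.
```

The monotone relation between g and the chosen h is the hierarchical assignment described in the text; it survives for any increasing wage schedule with the quadratic mismatch cost.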
This section has shown how the profit or utility maximizing decisions of workers or employers generate an assignment of workers to jobs. The aggregate assignment problem will typically be invisible to individual agents, but their decisions may lead to a pattern of assignment that prevails throughout the economy. The next step is to examine how the problem of assigning workers to jobs generates wage differentials and the distribution of earnings among workers.

III. Alternative Assignment Models

The three assignment models developed in this section seem very different. The linear programming optimal assignment problem is a model of the conditions for an efficient assignment. The differential rents model explains wage differentials. Roy's model explains self-selection into occupations. The point common to all three models is that they explicitly formulate the assignment problem that must be solved in the economy. This problem enters as an intermediate step in the connection between worker characteristics and earnings. Because they are all linked by the explicit presence of an assignment problem, all three models exhibit common phenomena (such as conditions for an efficient assignment, wage differentials that depend on job assignments, and self-selection effects), although in different forms and with different emphasis.

A major difference in the models considered here is in their description of worker and job characteristics. In the linear programming optimal assignment problem, workers and job characteristics take discrete values, and in the differential rents model they are continuously distributed. In Roy's model, jobs are discrete (in the form of sectors), while worker characteristics are continuously distributed, so that many workers will end up in the same sector. These differences in modeling account for the outward differences in results.

The starting point will be the linear programming optimal assignment problem.
This problem provides a very general model with which to analyze the economy's assignment of workers to jobs, and its results have many features that are common to all assignment models.13 No restrictive hierarchical assumptions are made regarding workers or jobs. That is, there are no explicit parameters describing workers that would allow one to rank them with regard to skills. Further, no continuity assumptions are made regarding distributions of workers and jobs. Each worker's wage depends in a complex way on the outputs obtained from alternative assignments rather than on the marginal increase in output obtained by using more labor or slightly different labor. On the other hand, the linear programming assignment problem imposes some restrictive assumptions: there are equal numbers of workers and jobs, and they must be combined in fixed proportions, with one worker per job. By altering conditions in the model, one can generate the differential rents model that will be discussed in Section III.B, or Roy's sectoral model that will be discussed in Section III.C. This procedure will facilitate a comparison of various models.

13 Dale Mortensen (1988) reviews matching problems related to the assignment problem. In this literature, matches are formed through the voluntary actions of agents rather than as the solution to an aggregate maximization problem. David Gale and Lloyd Shapley (1962) analyze equilibrium in a matching market and present an adjustment process that would lead to a stable market structure based on preferences of agents on both sides of the market. Shapley and Martin Shubik (1972) and Becker (1973) analyze the same problem when one agent in the match can compensate the other for forming the match, for example through the wage in a labor contract. Alvin Roth and Marilda A. Oliveira Sotomayor (1990) review game-theoretic analysis of two-sided matching problems.

A. Discrete Workers and Jobs: The Linear Programming Optimal Assignment Problem

Tjalling C. Koopmans and Martin Beckmann (1957) consider a linear programming optimal assignment problem in which economic activities are assigned to locations. The dual prices in the solution of the assignment problem then correspond to market determined profits and land rents. By changing the context of the assignment problem, one can consider how wages and machine rents (or profits associated with a job) are determined.

A linear programming optimal assignment problem arises as follows. Suppose there are n workers and n machines (with each machine corresponding to a job), and let aij be the value of output obtained by worker i at machine j. The problem is to find the assignment, with one worker per machine, that maximizes the sum of output values. The assignment problem is a special case of the general linear programming problem of maximizing a linear objective function subject to inequality constraints. As part of the simplex method of solving this problem there are simplex multipliers or dual prices associated with each worker and machine. Let wi be the dual price for worker i and let rj be the dual price for machine j. These dual prices have the following properties. If worker i is assigned to machine j in the optimal solution, then wi + rj = aij; otherwise wi + rj ≥ aij. With the optimal solution, the 2n prices exhaust the product. If workers obtain their income by renting machines at the prices given by the rj's, then (ruling out ties) they would be led to select the machines they are assigned to in the optimal solution. The maximum income of worker i would be wi. Similarly, if machine owners hire workers at the factor prices wi, they would choose the workers assigned to their machines in the optimal solution, and the maximum income for the owner of machine j
The dual prices wi and rj distribute income in such a way that the assignment problem is solved through the income maximizing behavior of indi- vidual agents. These dual prices perform as market prices and could arise from a competitive solution.14 In the solution of the optimal assign- ment problem, there is no expression showing the relationship between dual prices and any explicit characteristics of workers or machines, a relationship that would be analogous to an earnings func- tion. However, it is possible to apply fac- tor analysis to the matrix A formed from the outputs agin order to infer character- l4 Gerald Thompson (1979) investigates the rela- tion between the prices generated by auctioning or bidding and the dual prices of the assignment prob- lem. istics of workers and machines.15 With this factorization, outputs from matches can be represented as: where R is the rank of the matrix formed from the outputs aij, pik is the amount of the k-th latent property of worker i, qjkis the amount of the k-th latent prop- erty of machine j, and Ak is the weight for the k-th property. With this factoriza- tion, the k-th property of workers inter- acts only with the k-th property of ma- chines in the determination of outputs. l6 Suppose in the optimal assignment that worker i is matched with machine j and that worker c is matched with ma- chine d. Then from the condition that the owner of machine d would not prefer worker i, aid -wi 5 a,d -w,, and from the condition that the owner of machine j would not prefer worker c, aCj-w, I aij -wi. Combining these inequalities and using (7) yields: The inequalities in (8) show the upper and lower limits for the wage differences between worker i and worker j. The lim- its depend on the differences between l5See Sattinger (1984). Factor analysis refers to the factorization of matrices, i.e., representing a ma- trix as the product of other matrices. 
In the term factor prices, factor refers to factors of production such as labor or land.

16 It is possible that R is less than the number of workers or machines, in which case the factorization represents the complete data more compactly in terms of the R underlying properties of machines or workers. Alternatively, one could use the factorization to approximate the data using fewer than R properties. Arrange the λk in decreasing value, and set bij = λ1 pi1 qj1 + . . . + λS piS qjS. Then the matrix B = (bij) is the best rank S estimator of A, in the sense that among all rank S matrices with the same numbers of rows and columns, B minimizes the sum of the squares of differences between the entries of A and the entries of B. The main purpose of the factorization used here, however, is that it permits one to relate differences in wages to worker and machine properties.

the latent properties of the two workers, i.e., pik - pck appears on both sides of (8). But the limits also depend on the machine properties qdk and qjk, which enter as weights, increasing the importance of some worker properties and decreasing the importance of others. The effect of worker properties on wages therefore depends on which jobs are performed in equilibrium. This result illustrates one of the central points about assignment models: a change in either the workers or jobs in the economy alters the assignment and the wage differentials that are observed.

The determination of limits for machine rents is exactly symmetric to the determination of wage limits:

(9) Σk λk pck (qjk - qdk) ≤ rj - rd ≤ Σk λk pik (qjk - qdk).

In this expression, worker properties enter as weights for the importance of various machine properties.

The dual prices from the solution of the optimal assignment problem exhibit two forms of indeterminacy. Because agents choose partners on the basis of relative rewards, it is possible to shift all wages up by a given amount and all rents down by the same amount (or else all wages down and all rents up). In (8) and (9),
In (8) and (9), limits are placed only on differences between wages or rents. In this model, the problem of assigning workers to machines determines relative wages and machine rents but not their absolute levels. The absolute levels of wages and rents are determined outside the assignment problem, perhaps by the availability of idle machines or workers. A second indeterminacy arises because individual wages and rents can increase or decrease within the limits in (8) and (9) while still leading to the same assignment. With continuous distributions of workers and jobs, as in the differential rents model of the following section, this indeterminacy disappears because the bounds for wage differences approach each other in the limit. The particular wages and rents that arise depend on the adjustment process and institutions that lead the economy to equilibrium.[17]

B. Continuous Distributions of Workers and Jobs: The Differential Rents Model

In Ricardo's analysis of rent (1951, p. 70), the difference in rents for two nearly similar tracts of land with unequal fertility will equal the difference in output on the two tracts, holding labor and capital constant. The absolute level of rents can be calculated from the condition that no rent is paid on marginal land for which cultivation yields only enough to pay for the capital and labor used. These principles can also be applied in the labor market. The wage differential associated with a particular worker characteristic can be calculated from the increase in output from changing that characteristic, holding everything else the same. While land is heterogeneous in Ricardo's differential rents model, though, both labor and jobs (or capital) are heterogeneous in the labor market. The wage differential therefore depends on the assignment of workers to jobs.
The differential rents model (Sattinger 1979, 1980) arises when the output in the optimal assignment problem depends on a single explicit characteristic of the worker and a single explicit characteristic of the job.[18] Under certain conditions, a hierarchical assignment arises in which more skilled workers perform jobs with greater resources. Hierarchical models of this type are interesting because they direct attention to an important feature of market systems, the tendency to reinforce and exaggerate differences among workers. With heterogeneous jobs, more skilled workers (who would perhaps have gotten higher earnings anyway) have their earnings boosted by being assigned to jobs with more capital, responsibility, or subordinates.

By imposing some conditions on the model considered in the previous section, it is possible to obtain the differential rents model discussed in this section.[19] Suppose that each job is associated with a unit of capital, called a machine, and suppose that each machine can be described by a single characteristic, its size, which measures the amount of resources or capital associated with the job. Let a_ij = f(g_i, k_j), where g_i is a measure of worker i's skill (alternatively, capability, education, or ability), k_j is a measure of the size of machine j, and production f(g,k) is an increasing function of g and k and has continuous first and second order derivatives. (It is not necessary for f(g,k) to take the same functional form as the linear factorization in (7).) Now suppose that the numbers of workers and machines increase indefinitely so that the values of g and k cover intervals. Let G(x) be the proportion of workers with skill levels less than or equal to x, and let K(x) be the proportion of machine sizes that are less than or equal to x.[20] In this economy, aggregate output is obtained by summing the production from each match of a worker with a machine.

[17] Vincent Crawford and Elsie Knoer (1981) and Alexander Kelso and Crawford (1982) analyze an adjustment mechanism in which employers make offers and workers then accept or reject them. Then the resulting solution is best from the point of view of the employers. Alvin Roth (1984, 1985) shows that the best allocation for one side of the market is the worst for the other.

[18] The assumption that workers can be described by a single skill or ability is counterfactual. Individuals have extremely diverse abilities at various tasks, and these abilities are only partially correlated. A measure of averages like an IQ may be stable but it will be a poor predictor of an individual's performance at any given task. Unfortunately, the simplifying assumption of a single worker characteristic may be confused with arguments about the existence of IQ, which are irrelevant to the issues considered here. Tinbergen (1956) constructs a model with continuous distributions of workers and jobs in which workers are described by multiple characteristics.

[19] Tinbergen's models (1951, 1956) and Sattinger's models of comparative advantage (1975) and compensating wage differences (1977) also assume continuous distributions of workers and jobs but cannot be derived from the same optimal assignment problem discussed in III.A.

Journal of Economic Literature, Vol. XXXI (June 1993)
In the absence of preferences, the efficient assignment will be the one that maximizes this aggregate production.

Consider now how the production function f(g,k) together with the distributions G(x) and K(x) determine the relationship between wages and the skill level g. Let this relationship be represented by w(g). The owner of a machine of size k* will attempt to maximize the profits obtained from that machine. If the owner hires a worker of skill g, profits will be given by f(g, k*) − w(g). To decide whether this skill level maximizes profits, the owner would compare the increase in production from using a worker of greater skill with the increase in wages. If the increase in production is greater, the owner would choose a higher skill level. If the increase in production is lower than the wage increase, the employer would choose a less skilled worker. The owner has found the right skill level when the increase in production equals the increase in wages. Formally, maximization of profits for the machine owner implies the first order condition

w′(g) = ∂f(g, k*)/∂g,   (10)

where w′(g) = dw/dg. The term w′(g) is simply the wage differential, the increase in wages from a given increase in the worker's skill level. The term ∂f(g, k*)/∂g is the increase in output from using a worker of a higher skill level, holding machine size constant at k*. This method of calculating wage differentials is similar to Ricardo's calculation of differential rents. Also, (10) is analogous to the familiar competitive labor market condition that the wage equals the marginal revenue product, only with an increment in skill replacing an increment in the number of workers.

[20] In this model, the distribution of machine sizes is taken as given. Akerlof (1969) considers a model in which capital is allocated to workers. Some workers are then structurally unemployed because their output will not cover the cost of capital.

The first order condition (10) does not by itself determine the wage function w(g).
In this economy, the effect of an increase in the worker's skill level, and the size of the wage differential, depend on which job the worker performs. For each value of g at which we wish to calculate the wage differential w′(g), we would need to know the size of the machine k* of the employer who hires that labor. This information is contained in the economy's assignment of workers to jobs.

Usually, to find the general equilibrium of an economy, one must determine simultaneously the prices and quantities that satisfy the equilibrium condition. In the context of an assignment model, this means finding both the wage function w(g) and the assignment at the same time. As employers choose workers on the basis of the wage function w(g), this would be analytically very difficult in the general case. However, a number of simplifying assumptions make it possible to determine the assignment without first knowing the wage function. First, in the time period under consideration, the distribution of jobs or machines does not depend on the wage function w(g). The number of jobs does not increase or decrease in response to a high or low profit. Because workers and jobs are each described by only one variable, the production function f(g,k) may be such that only a simple hierarchical assignment can arise, i.e., one in which more skilled workers are employed at jobs with larger machines (an alternative would be that more skilled workers are employed at jobs with smaller machines).

The procedure for determining equilibrium is as follows. First, a tentative assignment is assumed (based on what one would expect from the technology). Then this assignment is used to derive the wage function w(g). Finally, the tentative assignment together with w(g) are checked to see whether they satisfy the second order conditions and whether any other assignment could arise.
In the model developed here, the tentative assignment is that more skilled workers will be employed at jobs with larger machines. With this assumption, the top n jobs will go to the top n workers. The n-th worker, in order of decreasing skill g, will be employed at the n-th machine, in order of decreasing machine size. The number of workers with skill greater than or equal to some level g is 1 − G(g). Similarly, the number of jobs with machine size greater than or equal to k is 1 − K(k). Setting these two amounts equal yields a relationship k(g) which describes the machine size for the job assigned to a worker of skill g under the tentative assumption. Suppose, for example, that g0 is such that thirty percent of workers have skill levels greater than g0. Then thirty percent of jobs will have machine sizes greater than k(g0).

With the assignment determined, it is now possible to use (10) to find the slope of the wage differential w′(g). Suppose we are interested in the slope of the wage differential at skill level g0. From the tentative assignment, k(g0) is the machine size for the owner who chooses to hire the worker with skill level g0. From (10), the slope w′(g0) equals the partial derivative of production with respect to skill, calculated at k* = k(g0).[21] Formally,

w′(g0) = ∂f(g0, k*)/∂g, evaluated at k* = k(g0).   (11)

This expression corresponds to the limits for wage differences in the optimal assignment problem in (8). With continuous distributions and only one characteristic for workers, it shows very simply that the wage differential for a worker with skill level g0 depends on the assignment of workers to jobs. One needs to know the machine size k(g0) assigned to that worker in order to calculate the wage differential from (11).

One feature of this model is that machine rents are determined simultaneously with the wage function w(g). Let r(k) be the rent for a machine of size k.
[21] It is important to realize that the partial derivative on the right-hand side in (10) is taken treating k* as a constant. The value of the partial derivative is then found by substituting k(g0) for k*. It would be incorrect to substitute k(g) for k* and then take the derivative, since the condition (10) is derived for an employer with a fixed machine size k*.

The machine rent is given by the residual obtained by subtracting the wage from production: r(k) = f(g,k) − w(g). This factor price is treated as a rent instead of profits, as it could be determined in a manner exactly symmetric to the wage.

The validity of the tentative assumption can now be checked. The employer's second order condition for profit (or rent) maximization is that profits should be a concave function of the skill level g, i.e.,

∂²f(g, k*)/∂g² − w″(g) ≤ 0, for k* = k(g),   (12)

where w″(g) = d²w/dg². It can be shown that this condition holds if the mixed partial derivative ∂²f(g,k)/∂g∂k is positive.[22] In turn, a positive mixed partial derivative arises when the effect of skill on production is greater at larger machines, i.e., ∂f/∂g is an increasing function of k.

The expression in (11) yields the slope of the wage function w(g). One must integrate with respect to g to obtain the wage function itself. The resulting expression includes a constant of integration, an arbitrary parameter that determines the absolute level of all wages. The labor market process in which employers choose workers determines only relative wages (i.e., the wages of one worker in relation to wages of another worker) and not their absolute level. For example, in this model all wages could be shifted up by one dollar and the first order condition in (10) would continue to be satisfied. Because of the fixed proportions technology, in which one worker can only be used in combination with one machine, the marginal products of workers and machines are not defined.
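The two steps just described — matching quantiles to obtain k(g), then integrating (11) — can be sketched numerically. Everything specific below is an assumption chosen for illustration (a Cobb-Douglas f(g,k) = √(gk) and lognormal skill and machine-size distributions); the argument in the text does not depend on these choices.

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative assumptions: f(g,k) = g**0.5 * k**0.5, skills and machine
# sizes lognormal with sigma_g = 0.5 and sigma_k = 1.0 (machine sizes
# more unequally distributed than skills).
sigma_g, sigma_k = 0.5, 1.0
G = lognorm(s=sigma_g)   # skill distribution G
K = lognorm(s=sigma_k)   # machine-size distribution K

def k_of_g(g):
    # Hierarchical matching: 1 - G(g) = 1 - K(k), so k = K^{-1}(G(g)).
    return K.ppf(G.cdf(g))

def f_g(g, k):
    # Partial derivative of f(g,k) = sqrt(g*k) with respect to g.
    return 0.5 * g**-0.5 * k**0.5

# Slope of the wage function from (11): w'(g) = f_g(g, k(g)).
grid = np.linspace(0.2, 5.0, 400)
slope = f_g(grid, k_of_g(grid))

# Integrate w'(g) (trapezoid rule) to get w(g) up to a constant of integration.
w = np.concatenate([[0.0],
                    np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(grid))])

print(np.all(np.diff(slope) > 0))  # True: w'(g) rising, so w(g) is convex here
```

With these parameters k(g) = g², so more skilled workers get disproportionately larger machines and the wage function is convex, the reinforcement effect described above.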
The share of output between workers and employers must therefore be determined by other phenomena.

In the model developed here, reserve prices of labor and capital determine absolute levels of wages and rents.[23] The reserve price of labor, p_L, is the minimum amount that workers must receive in order to be willing to work. If wages are below p_L, workers choose to remain idle or engage in some other activity rather than work. Similarly, owners of machines must receive p_K or else they will withhold their machines from production.

As with Ricardo's differential rents, the absolute levels of the wage and rent functions are determined by the conditions that hold for the last or marginal match. As one moves down the list of workers in order of decreasing skill, the machine size assigned to that worker in equilibrium declines, along with the level of production from the match, f(g,k). In one possible outcome, the level of production declines to the sum of the reserve wage and reserve rent, p_L + p_K, while there are still workers with lower skill levels and machines of smaller size. Suppose the skill level when this occurs is g_m and the corresponding machine size is k_m = k(g_m). If the wage w(g_m) were greater than p_L, unemployed workers would bid the wage down to p_L. If the wage were less than p_L, then the rent r(k_m) would be greater than p_K. Employers with idle machines would offer higher wages and accept lower rents until the wage again equaled p_L. The outcome of this adjustment process is that w(g_m) = p_L and r(k_m) = p_K.

[22] From (11), w″(g) = ∂²f(g,k)/∂g² + [∂²f(g,k)/∂g∂k](dk/dg) for k = k(g), so rearranging yields

w″(g) − ∂²f(g,k)/∂g² = [∂²f(g,k)/∂g∂k](dk/dg).

The right side of this expression should be positive for the employer's second order condition for profit maximization to be satisfied. If ∂²f(g,k)/∂g∂k is positive for all workers and machines, then the tentative assignment with dk/dg > 0 satisfies the employer's second order condition. Also, the second order condition would not hold with the reverse assignment, in which dk/dg < 0.
These conditions are sufficient to determine the absolute levels of the wage and rent functions.[24] The conditions guarantee that output equals the sum of the wage and machine rent for the last or marginal match. It can then be shown that wages and rents exhaust the product for the nonmarginal, more productive matches.[25]

[23] Reserve prices of labor could be represented in the linear programming optimal assignment problem by the presence of extra "null machines" for which output equals labor's reserve price. A surplus of such machines would force their rents to zero. Machine reserve prices could be represented in a similar manner.

[24] Two other possible cases could arise. First, suppose that there are more workers than machines, and that production f(g,k) is sufficient so that all machines can be used (i.e., f(g,k) is always greater than the sum of the reserve prices, p_L + p_K). Suppose the smallest machine size is k_m and the corresponding worker's skill level is g_m. Then unemployed workers bid the lowest wage rate down to p_L, so w(g_m) = p_L. The owner of machine size k_m gets the residual, so r(k_m) = f(g_m, k_m) − p_L. These two conditions determine the absolute levels of w(g) and r(k). In the other case, there are more machines than workers. At the last match, the machine owner gets p_K while the worker gets the residual.

[25] For any level of skill g0, the total differential of output is

df = [∂f(g,k)/∂g] dg0 + [∂f(g,k)/∂k](dk/dg) dg0, evaluated at k = k(g0).

Using (11) and the analogous expression for r(k),

df = w′(g0) dg0 + r′(k(g0))(dk/dg) dg0,

so that the change in output equals the change in factor payments. With output equal to factor payments for the marginal matches, output will also be exhausted for matches involving higher values of g0. Some other technical questions related to the differential rents model are whether the equilibrium exists, is unique and is efficient, in the sense of maximizing production net of reserve prices of labor and capital. Existence and uniqueness are established by construction: given the production function and distributions of workers and machines, the equilibrium is actually found, including the assignment k(g) and the wage function w(g). The efficiency of the assignment can be established from the assumption that production from a match is an increasing function of skill and machine size and that the mixed partial derivative is positive. The proof proceeds by supposing that another assignment maximizes production net of reserve prices. Then because the assignment is different, two workers and their machines can be found such that the more skilled worker is using the smaller machine. By switching machines for these two workers, even greater production can be obtained, contradicting the assumption that the alternative assignment maximizes production net of reserve prices. The contradiction proves that the hierarchical assignment is efficient.

By making assumptions regarding the functional forms for production and the distribution of workers and machine sizes, it is possible to draw specific conclusions regarding the shape of the wage function and earnings inequality. For example, suppose f(g,k) takes the Cobb-Douglas form g^α k^β, and suppose skills and machine sizes are lognormally distributed with variances of logarithms σ_g² and σ_k², respectively. Then using (11) the wage function w(g) takes the form

w(g) = A g^((ασ_g + βσ_k)/σ_g) + C_1,   (13)

where A is a constant and C_1 is the constant of integration obtained when w′(g) is integrated. This function will be convex, linear, or concave depending on whether (ασ_g + βσ_k)/σ_g is greater than, equal to, or less than one. If α + β = 1 and if σ_k > σ_g (i.e., machine sizes are more unequally distributed than skills), then w(g) will be convex.[26] The quantity w(g) − C_1 will be lognormally distributed with variance of logarithms ασ_g + βσ_k, a linear combination of the inequalities in skill and machine size distributions.
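The distributional claim can be verified by simulation: draw lognormal skills, apply a wage function of the form in (13), and check that the standard deviation of log(w − C_1) is approximately ασ_g + βσ_k. The parameter values below are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: alpha + beta = 1 and machine sizes more
# unequally distributed than skills, so the wage function is convex.
alpha, beta = 0.4, 0.6
sigma_g, sigma_k = 0.5, 1.0
A, C1 = 2.0, 3.0
theta = (alpha * sigma_g + beta * sigma_k) / sigma_g   # exponent in (13)

g = rng.lognormal(mean=0.0, sigma=sigma_g, size=200_000)  # skills
w = A * g**theta + C1                                     # wage function (13)

# log(w - C1) = log A + theta * log g, so its standard deviation is
# theta * sigma_g = alpha*sigma_g + beta*sigma_k = 0.8 here.
print(round(np.log(w - C1).std(), 2))  # approximately 0.8
```

The additive constant C_1 drops out of log(w − C_1), which is why a larger C_1 compresses relative wage inequality while leaving the lognormal shape of w − C_1 untouched.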
If both workers and machines are unemployed, then (from the condition that labor and capital factor prices equal their reserve prices in the marginal match), C_1 can be calculated as

C_1 = p_L − A g_m^((ασ_g + βσ_k)/σ_g).   (14)

This amount could be positive or negative. A larger value of C_1 reduces earnings inequality. In this way, the separate influences of the production function, the distributions of skills and machine sizes, and the reserve prices can be found. As mentioned in the introduction, these influences will be unapparent. Only the wage function w(g) will be observed, so that wages will appear to depend only on g.

The differential rents model and related hierarchical models explain why the distribution of earnings differs in shape from the distribution of abilities. A. C. Pigou (1952, p. 650) raises the question (now known as Pigou's paradox) of why the distribution of earnings is positively skewed when abilities are symmetrically distributed. This paradox presumes that earnings ought to be proportional to some single-dimensional measure of abilities. It can be resolved by recognizing that workers are engaged in many different activities: there is no single measure of ability that determines a worker's earnings. In the context of the differential rents model, one obtains a different distribution of abilities among workers depending on which machine is used. If every worker used the same type of machine, the distribution of earnings would take the same shape as the distribution of abilities, defined by worker outputs.[27] With unequal machine sizes, however, workers with greater skill levels are assigned to larger machines.

[26] The wage function derived by Tinbergen (1951, 1956, 1970) displays the same flexibility. The slope of the wage function and whether it is concave or convex can be related to the distributions of worker and job characteristics. John Pettengill (1980) uses a similar methodology to examine the effects of unionization on earnings inequality.
This boosts their earnings above what they would be if everyone used the same machine. Because of the positive mixed partial derivative ∂²f/∂g∂k, differentials for skill (i.e., ∂f/∂g) will be greater for more skilled workers. The distribution of earnings will not resemble the distribution of outputs at any one machine and instead will be positively skewed relative to any such distribution.

The model presented here incorporates some important elements of the economy. These include an explicit assignment problem and the role of cooperating factors, in this case capital as represented by separate machines. The main point is the expression for the wage differential in (11). But there are also several shortcomings of the approach. It is essentially a short-run model, taking the distributions of workers and jobs as given.[28] It assumes a very restrictive production technology in which only one worker can be combined with a machine. More generally, one would expect a production function in which there could be variable numbers of workers combined with the capital. The model relies heavily on calculus and on continuity assumptions that allow one to work with derivatives. Most importantly, because of the absence of any stochastic element, the model predicts an exact correspondence between skill and machine size, which is inconsistent with observed assignments.

[27] Each worker's earnings would be the value of output minus the machine rent, which would be the same for all workers. The distribution of earnings would be the same as the distribution of output values but shifted to the left by the amount of the machine rent. These conclusions can be derived directly by setting σ_k = 0 in (13) and (14).

C. Discrete Jobs or Occupations: Roy's Sectoral Model

The model developed in this section (commonly called Roy's model) differs from the previous section's model in that workers choose among only a few jobs or occupations instead of a continuum of jobs.
Rather than each job being filled by only one worker, a subset of all workers can be found in a given job. Because of worker self-selection of jobs, the distribution of workers in a given job will differ systematically from the labor force as a whole.[29] These effects of worker choice or self-selection make Roy's model particularly interesting to econometricians dealing with self-selectivity issues.

In the basic two-sector model developed by Roy, members of a simple economy must choose between catching rabbits or fishing for trout. A worker's income in a sector is proportional to the number of rabbits or trout caught. Unlike the model developed in the previous section, there is no restriction on the number of workers in a job or occupation. Workers can move from one job to another depending on the price of trout in terms of rabbits. The model is very rich in yielding a wide range of outcomes depending upon the relation between abilities in the two sectors. The prediction of observable distributions renders the model more applicable to empirical work than the previous section's model.

Roy's model can be represented as a special case of the linear programming optimal assignment problem discussed in Section III.A. With two sectors, the output value entries a_ij for worker i would be the same for all jobs in a given sector. In order for all output to go to the worker, with no subtraction of rent for the machine or job, it is necessary for the opportunity cost of filling a given job to be zero. This can be accomplished by assuming that there are more jobs in each sector than there are workers. ("Null workers," with zero outputs in each sector, can be added so that the number of workers equals the number of jobs.) Then the rent for each job will be zero and the wage rate for a worker choosing a job would equal the value of output. The matrix of outputs (a_ij) would have rank 2. This demonstrates an important difference between Roy's and the differential rents models: with no scarcity of jobs in either sector, taking a job in either sector entails no opportunity cost, so that assignment is based on comparative advantage.

In Roy's model, there is no simple expression for the wage differential as in (11), showing the relevance of the assignment. However, as in that model, a worker's income is not a simple function of a skill measure but depends on which job the worker performs (i.e., rabbits or trout). In both models, aggregate output depends on the assignment of workers to jobs, and the assignment problem is solved by the income-maximizing decisions of agents. A shift in demand (e.g., an increase in the price of trout in terms of rabbits) can influence the distribution of earnings by increasing the relative earnings of trout fishers and leading some workers to move from catching rabbits to fishing for trout.

This section presents a graphical treatment of Roy's model.[30] Suppose there are two sectors, rabbits and trout.

[28] The assumption of rigid distributions of workers and jobs is the same for both mismatch arguments and differential rents models. However, unlike mismatch arguments, there are no excess supplies or demands in differential rents models, as these are eliminated by the wage differentials.

[29] In an earlier article, Roy (1950) attempts to show that abilities combined multiplicatively, so that outputs and hence incomes are lognormally distributed. He uses data from manual occupations such as chocolate packing to test this hypothesis. The 1951 article arises from the realization that workers ". . . engaged in a particular occupation tend to be selected in a purposive manner from the working population as a whole" (1951, p. 135). Then the workers found in a specific occupation would not have the same distribution of abilities as the population as a whole would have in that occupation.
Suppose that if the entire population chose one of the sectors, the distribution of outputs (in terms of rabbits or trout caught) would be lognormal. Let σ1² and σ2² be the variances of logarithms of outputs in the rabbit and trout sectors, respectively, and let ρ be the correlation between the logarithms of a worker's outputs in the two sectors. Without loss of generality, suppose that amounts are more unequally distributed in the second sector (trout), so that σ1² < σ2². This corresponds to Roy's assumption that trout are more difficult to catch than rabbits. Three basic cases arise depending on the correlation between the two sectors.

i. Case of σ1/σ2 ≤ ρ ≤ σ2/σ1.[31]

This is the standard comparative advantage case. In this case, outputs are highly correlated, so that workers with higher levels of output in one occupation are also very likely to have higher levels of output in the other sector.

[30] Heckman and Sedlacek (1985, 1990), James Heckman and Bo Honore (1990), and G. S. Maddala (1977, 1983) develop statistical versions of Roy's model. According to comments made to this author by Michael Farrell, Roy had developed a complete statistical model as the basis for his conclusions but did not include it in the 1951 paper. This is consistent with the detailed conclusions he reaches.

[31] Cases can also be described in terms of the covariance, given by σ12 = ρσ1σ2. In Case i, σ1² ≤ σ12 ≤ σ2², while in Case ii σ12 ≤ 0, and in Case iii 0 ≤ σ12 ≤ σ1² ≤ σ2².

The effect of selection on the distribution of outputs in the two sectors and on the distribution of income can be seen in Figures 1 through 5.

[Figure 1. Contour Plot of Density of Sector Performances; axes: log rabbits (horizontal), log trout (vertical)]

Figure 1 shows a contour plot of the distribution of worker performances in the two sectors. In this figure, it is assumed that the variance of logarithms of rabbits caught by the population, σ1², is 1, while the variance of logarithms of trout, σ2², is 4.
The means of the logarithms of rabbits and trout are both 4.[32] Also, the correlation between sector outputs is assumed to be 0.75. The points on a given contour line in Figure 1 correspond to combinations of rabbits and trout such that the density of workers is the same.

Assume that the price of a rabbit is 1.2 while the price of a trout is 1, so that one rabbit is worth 1.2 trout. A worker chooses to hunt rabbits whenever 1.2 times his or her rabbit catch is greater than one times his or her trout catch. The 45-degree line sloping upward from the logarithm of the price ratio, 0.182, on the vertical axis shows all combinations of rabbits and trout that yield the same income. Any worker with a combination of rabbits and trout below this line would make greater income hunting rabbits. Any worker with a combination above this line would choose to fish for trout.

[Figure 2. Proportion of Workers Choosing to Hunt Rabbits; axis: log rabbits]

Figures 2 through 4 show how this assignment mechanism affects the distributions of workers observed hunting rabbits and fishing for trout. Figure 2 shows, from among workers who can catch a particular number of rabbits, the proportion that choose to hunt rabbits. In this case, the proportion hunting rabbits declines as the number of rabbits increases. The mean logarithm of rabbits caught among those choosing to hunt rabbits will therefore be less than the population mean of 4.

[32] Although the logarithmic means are equal to 4 for both trout and rabbit skills, the means themselves are unequal. The mean of a lognormal distribution is given by e^(μ + σ²/2), where μ and σ² are the mean and variance of logarithms (John Aitchison and J. A. C. Brown 1957). The means for rabbit and trout skills are thus 90 and 403.
Workers who can catch many rabbits are likely to have a comparative advantage at trout fishing, and therefore choose that sector. A better rabbit hunter would get only a small income advantage from his or her superior catch because the inequality in number of rabbits caught is relatively small. The numbers of trout caught are more unequally distributed, so an above average performance in that occupation will yield a much higher income.

[Figure 3. Rabbit Hunting Abilities Among All Workers and Among Rabbit Hunters; axes: log rabbits (horizontal), density of workers (vertical)]

In Figure 3, the upper curve shows the distribution of rabbits caught by all workers (the vertical axis is the density of workers who catch a given number of rabbits). This upper curve is a lognormal distribution, so that the logarithm of rabbits is normally distributed with mean 4 and variance 1. The lower curve shows the density of workers by rabbits caught for those choosing the rabbit sector (this curve is not normalized; the area under the curve is 0.55, equal to the proportion of all workers choosing the rabbit sector).[33] At higher numbers of rabbits, workers are less likely to choose the rabbit sector because their income may be larger in the trout sector. Workers catching low numbers of rabbits, however, are likely to choose the rabbit sector.

[33] This density is obtained by multiplying (for each number of rabbits caught) the proportion choosing the rabbit sector, given in Figure 2, times the density of all workers who can catch a given number of rabbits, given by the upper curve in Figure 3. Let n(x1, x2; σ1, σ2, ρ) be the joint probability density function for the bivariate lognormal distribution, where x1 is the number of rabbits, x2 is the number of trout, and σ1, σ2 and ρ are the standard deviations for rabbits and trout and the correlation, respectively. Those who choose rabbits are workers for whom x2 ≤ 1.2x1, so that at x1 the height of the lower curve in Figure 3 is given by

∫ from 0 to 1.2x1 of n(x1, x2; σ1, σ2, ρ) dx2.

[Figure 4. Trout Fishing Abilities Among All Workers and Among Trout Fishers; axes: log trout (horizontal), density of workers (vertical)]

Figure 4 makes the same comparison with respect to the number of trout caught. The upper curve shows the distribution of trout caught by all workers. The logarithms of trout are normally distributed with mean 4 and variance 4. The lower curve shows the density of workers by trout caught for workers choosing the trout sector. Nearly all those with high trout catches choose the trout sector, while those with low trout catches select the rabbit sector.

The distribution of income in the economy can be found by combining the lower curves in Figures 3 and 4, setting 1.2 trout equal to one rabbit and expressing income in rabbits. This is done in Figure 5. The upper curve is the distribution of incomes in the economy, obtained by summing the densities of workers by income for the two sectors. The lower curve on the left arises from the rabbit sector, while the lower curve on the right arises from the trout sector. This figure shows that the upper tail of the income distribution comes from workers in the trout sector, while the lower tail comes from workers in the rabbit sector. There is, however, substantial overlap in incomes from the two sectors: the assignment of workers to sectors is not entirely hierarchical, in the sense that some workers in the trout sector earn less than some workers in the rabbit sector, even though workers tend to have a comparative advantage in their chosen sector.

ii. Case of ρ < 0.

This case arises when performances in the two sectors are negatively correlated, i.e., the better rabbit hunters tend to be the worse trout fishers. Those with worse performances in an occupation are more likely to choose the other occupation to earn their living.
In this case, the assignment is roughly described by absolute advantage, which arises when workers in an occupation are better at that occupation than workers choosing the other occupation. Workers in an occupation tend to have higher outputs in that occupation than workers choosing the other occupation, although there will be exceptions. Worker choices lead to a simple assignment pattern: each occupation tends to be filled with the best workers in that occupation. The workers with the highest incomes will tend to be those with extreme performances, good and bad, rather than those with average or above average performances in both occupations.

Figure 6, corresponding to Figure 5, shows the aggregate and sectoral distributions of income in the case where the correlation between sector performances is -0.5, everything else the same. The mean logarithm of income is 4.97 and the variance of logarithms is 1.31. Compared to case i, there are virtually no workers with logarithms of income below 2. As in Figure 5, though, the upper tail is dominated by workers in the high variance trout sector.

iii. Case of 0 ≤ ρ < σ1/σ2

This intermediate case arises when outputs in the two occupations are positively correlated but not as much as in the standard comparative advantage case in i above. Workers with better performances in the first sector are more likely to choose that sector, even though they also tend to be slightly better in the second sector. The importance of this case is that a positive correlation between sector performances does not necessarily generate the standard comparative advantage case.

Figure 7, corresponding to Figures 5 and 6, shows the aggregate and sectoral distributions of income for this case, assuming ρ = 0.25. In this case, the mean logarithm of income is 4.71 and the variance of logarithms is 1.76.

Figure 7. Aggregate and Sectoral Distributions of Income, Case iii with 0 ≤ ρ < σ1/σ2
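The case i figures quoted above (a 0.55 share hunting rabbits, mean log income of roughly 4.48, variance of logs of roughly 2.03) can be reproduced by a small Monte Carlo sketch. The parameters are the ones stated in the text (log skills N(4, 1) and N(4, 4), correlation 0.75, a rabbit worth 1.2 trout); the code itself is an illustration, not the article's own computation:

```python
import math
import random

random.seed(0)
N = 200_000
RHO = 0.75                   # correlation between log skills (case i)
LOG_PRICE = math.log(1.2)    # one rabbit is worth 1.2 trout

share_rabbit = 0
log_incomes = []
for _ in range(N):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = 4.0 + 1.0 * z1                                        # log rabbits ~ N(4, 1)
    x2 = 4.0 + 2.0 * (RHO * z1 + math.sqrt(1 - RHO**2) * z2)   # log trout ~ N(4, 4)
    if x1 + LOG_PRICE >= x2:   # rabbit income 1.2*e^x1 beats trout income e^x2
        share_rabbit += 1
    # income expressed in rabbits: max(e^x1, e^x2 / 1.2)
    log_incomes.append(max(x1, x2 - LOG_PRICE))

share = share_rabbit / N
mean_log = sum(log_incomes) / N
var_log = sum((y - mean_log) ** 2 for y in log_incomes) / N
print(f"share hunting rabbits: {share:.3f}")
print(f"mean log income: {mean_log:.2f}, variance of logs: {var_log:.2f}")
```

With a large sample the simulated share comes out near 0.55 and the income moments near the 4.48 and 2.03 reported for Figure 5.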
Comparison of Figures 5 through 7 for the three cases reveals a number of common features. In all three cases, the upper tail is dominated by workers in the high variance sector, trout. This effect stands out clearly because the variances in the two sectors were arbitrarily chosen to be so far apart. The aggregate distribution of income takes the same general shape in all three cases, with the largest inequality (as measured by the variance of logarithms) in the case where ρ is the highest. The lower tail is dominated by workers in the low variance rabbit sector. However, in the case with ρ < 0, some low income workers are also in the trout sector. Despite the assumption that a trout is worth less than a rabbit and that mean logarithms of performances are the same, average incomes are higher in the trout sector. The higher average income arises because of the high incomes going to workers in the upper tail of the trout ability distribution. The unequal variance between the sectors appears to play at least as strong a role as correlation between sector performances in shaping the distribution of income.

The listing of cases in this section shows that a variety of outcomes is possible depending on the correlation ρ. In particular, the standard comparative advantage case in i is not inevitable and is a special case of Roy's model.

Roy's model can also be used to illustrate how demand can influence the distribution of earnings and the division of workers between sectors. Table 1 shows the effects of changing the price of trout in terms of rabbits in case i. As the price of trout goes up, the proportion of workers selecting the rabbit sector declines, mean earnings increase and the variance of logs increases. Workers originally in the trout sector find their earnings boosted by the price increase, relative to workers in the rabbit sector, who have lower earnings on average.
The effects can be seen in Figure 8, showing the distribution of earnings for two prices of trout, 0.5 and 1. The lower tail is unaffected because it arises from workers who stay in the rabbit sector. However, the upper tail is shifted to the right as workers from the trout sector, who account for the upper tail, experience an increase in earnings from the higher trout price. In this case, the increase in the price of trout raises earnings inequality as measured by the variance of logarithms of earnings.

Table 1 (columns: Price of Trout in Terms of Rabbits; Proportion Hunting Rabbits; Mean Earnings; Variance of Logarithms; Mean Rabbit Skill; Mean Trout Skill)

Figure 8. Shift in Earnings Distribution from Change in Price of Trout

As the price of trout doubles from 0.5 to 1.0, average earnings of workers in the trout sector will not double. The reason sector earnings are not proportional to prices is that a nonrandom selection of workers moves from the rabbit sector to the trout sector in response to a trout price increase. This is demonstrated for Case i in Figures 9 and 10, which show worker movements in response to an increase in the price of trout from 0.83 rabbits (corresponding to 1.2 trout per rabbit) to 1.0 rabbits. This price change would be represented in Figure 1 by a shift in the "Equal Incomes" line downward from a vertical intercept of log(1.2/1) to log(1/1), through the origin. Figure 9 shows the proportion of workers leaving the rabbit sector as a function of the number of rabbits they can catch. As shown, workers with greater rabbit skills are more likely to leave that sector, thereby lowering the average skill level in the rabbit sector. Figure 10 shows how the movers compare with the workers already in the trout sector. The ratio of the number of entrants to current workers in the trout sector is greater at lower numbers of trout caught.
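The non-proportional response of sector earnings to a price change can be sketched in a short simulation. The parameters are those of case i above, and the price doubling from 0.5 to 1.0 rabbits per trout follows the Figure 8 comparison; the code is an illustrative sketch under those assumptions, not the article's computation:

```python
import math
import random

random.seed(1)
N = 200_000
RHO = 0.75

# Draw a population of (log rabbit skill, log trout skill) pairs, case i parameters.
workers = []
for _ in range(N):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = 4.0 + z1                                              # log rabbits ~ N(4, 1)
    x2 = 4.0 + 2.0 * (RHO * z1 + math.sqrt(1 - RHO**2) * z2)   # log trout ~ N(4, 4)
    workers.append((x1, x2))

def trout_sector_mean_earnings(q):
    """Mean earnings (in rabbits) of workers who choose trout when a trout is worth q rabbits."""
    earnings = [q * math.exp(x2) for x1, x2 in workers if q * math.exp(x2) > math.exp(x1)]
    return sum(earnings) / len(earnings)

low, high = trout_sector_mean_earnings(0.5), trout_sector_mean_earnings(1.0)
ratio = high / low
print(f"mean trout-sector earnings at q=0.5: {low:.1f}, at q=1.0: {high:.1f}")
print(f"ratio: {ratio:.2f} (less than the price ratio of 2)")
```

Doubling the price pulls relatively low-skill entrants into the trout sector, so mean trout-sector earnings rise by a factor well below 2: the aggregation bias discussed next.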
After the price change, average skill levels in the trout sector will be lower. The average wage in the trout sector increases less than the price of trout. The response of average skill levels to changes in rabbit or trout prices demonstrates an important feature of Roy's model, the aggregation bias that arises because of movements between sectors (Heckman and Sedlacek 1985, pp. 1107-10). Changes in average wage rates do not accurately reflect changes in the wage rates for workers with given skill levels.

Figure 9. Proportions of Workers Leaving Rabbit Sector

Figure 10. Ratios of New Entrants to Current Trout Fishers

Heckman and Sedlacek (1985) estimate an empirical version of Roy's model using Current Population Survey data from 1968 to 1981. They assume that workers choose between two sectors, manufacturing and nonmanufacturing, justifying this division because manufacturing has been the focus of so much previous empirical work. This model is rejected by two test criteria that they propose. If wages are the product of tasks and task prices, and if the relation between tasks and skills does not change over time, the coefficients of skills in estimates of the logarithms of wages in successive cross-sections should be the same. This is referred to as the proportionality hypothesis and fails to hold for the estimated model. The second test is whether the residuals in the loglinear wage equation are distributed normally as assumed, and this is rejected using a chi-square goodness-of-fit test.

In response, Heckman and Sedlacek consider a multi-sector generalization but reject it because of the expense of estimating such models. Instead they extend Roy's model in four other ways. First, individuals are assumed to maximize utility instead of money incomes. Second, earnings are decomposed into hourly wage rates and hours of work that are freely chosen.
Third, Heckman and Sedlacek develop a general non-normal model for the distribution of residuals which has Roy's lognormal assumption as a special case. Fourth, individuals are assumed to have a nonmarket or household production sector as an alternative to market work.³⁴

³⁴ In a later article, Heckman and Sedlacek (1990) try to determine which extensions to Roy's model are most important in improving its goodness of fit. They find that the existence of a nonmarket sector is more important than allowing for departures from lognormality.

With the assumption of utility maximization, preferences influence assignment and earnings in the extended model by leading workers to choose sectors that do not necessarily maximize earnings. In this way, the distribution of utilities, which generates sectoral choice, can differ from the distribution of task performances, which generates the earnings distribution.

One cautionary note concerning preferences in Heckman and Sedlacek's model is that their presence may be necessary to mimic sector-specific training. In the model, workers at each point in time are assumed to choose between the manufacturing, nonmanufacturing and nonmarket sectors. It seems likely that workers who have experience and training in a sector would have their productivity raised in that sector relative to what it would be in another sector. The training would be specific to the sector, just as specific training raises productivity only in the firm that gives it. An experienced worker in a sector could expect to get less in the other sector than indicated by the estimated task-skills relationship. The worker would then be much less likely to switch sectors than on the basis of predicted earnings alone. A suitable distribution of preferences would correct for the absence of assumed sector specific training.

Preferences could also be present in lieu of search by workers.
With sectoral choice made through search (and with many sectors), workers do not usually end up in the sector that absolutely maximizes their earnings. In the absence of assumptions that workers find sectors through search, the distribution of preferences could substitute for the random wage outcomes of search.

Heckman and Sedlacek find that education has twice as strong an effect in manufacturing as in nonmanufacturing. Wages grow much more rapidly with work experience in manufacturing than in nonmanufacturing. These results show that it would be incorrect to apply one earnings function to all workers independent of sector.³⁵ The model leaves unexplained the shapes of the wage functions in the two sectors. What aspect of the manufacturing sector causes its wage function to differ from the wage function in the nonmanufacturing sector? The wage functions in each sector could themselves be hedonic wage functions, arising from assignment problems within each sector. It seems unlikely that all workers in the manufacturing sector are perfect substitutes for each other, as required for the efficiency units assumption. Also, it is not clear that workers choose manufacturing versus nonmanufacturing sectors instead of occupations.

The variance of the error term in nonmanufacturing is greater than in manufacturing. Heckman and Sedlacek find this to be consistent with greater heterogeneity in the industries classified as nonmanufacturing.

³⁵ Heckman and Sedlacek (1990, p. S353) explicitly test whether the observed wage distribution could be explained by a model with only a single production market, so that workers only decide whether to work or not and get the same wage no matter where they work. Such a model is rejected.

Because preferences and not earnings determine sectoral choice, the results of the extended model do not conform exactly to one of the cases discussed earlier in this section.
In particular, the correlation ρ is not identified in this model because of the presence of unobservables. However, education and experience positively affect both manufacturing and nonmanufacturing tasks, so performances in the two market sectors would be positively correlated in the absence of other skill related variables.

Heckman and Sedlacek estimate the effect of self-selection on inequality in Roy's model by comparing the observed earnings inequality with the level that would arise if workers chose sectors randomly. They find that self-selection decreases the variance of logarithms of wages within each sector, moves the mean wages in the two sectors closer together, and reduces the variance of logarithms of wages in the economy by 11.6 percent.³⁶

³⁶ James Heckman and Bo Honore (1990) prove that in the Roy model, self-selection reduces inequality compared to random assignment to sectors.

In a later article, Heckman and Sedlacek (1990) examine extensions to the Roy model in more detail. Heckman and Honore (1990) analyze statistical properties and the empirical content of Roy's model. Although the statistical analysis used in all of these models appears to be very difficult, the work establishes Roy's model and its extensions as a practical way to apply assignment models empirically.

IV. Comparisons and Extensions

A. Choices

The three models discussed in the previous section share a number of features in common. First, of course, is the existence of multiple sectors. Workers therefore face a choice of sectors, or else employers in each sector face a choice of workers. From the point of view of the economy as a whole, the existence of multiple sectors entails an assignment problem.

In the simplest case, where workers are described by a single characteristic, multiple sectors arise because the output in some jobs is more sensitive to that characteristic than others.
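The Heckman-Honore result mentioned above, that self-selection reduces inequality relative to random assignment to sectors, can be illustrated in the rabbit-trout parameterization. The parameter values (log skills N(4, 1) and N(4, 4), correlation 0.75, a rabbit worth 1.2 trout) are the ones used in the earlier example; the code is only an illustrative sketch of the comparison:

```python
import math
import random

random.seed(2)
N = 200_000
RHO = 0.75
LOG_PRICE = math.log(1.2)

choice_logs, random_logs = [], []
for _ in range(N):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = 4.0 + z1                                              # log rabbits ~ N(4, 1)
    x2 = 4.0 + 2.0 * (RHO * z1 + math.sqrt(1 - RHO**2) * z2)   # log trout ~ N(4, 4)
    # log income (in rabbits) under self-selection: best of the two sectors
    choice_logs.append(max(x1, x2 - LOG_PRICE))
    # log income when the sector is instead assigned by a coin flip
    random_logs.append(x1 if random.random() < 0.5 else x2 - LOG_PRICE)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

v_choice, v_random = variance(choice_logs), variance(random_logs)
print(f"variance of logs, self-selection: {v_choice:.2f}; random assignment: {v_random:.2f}")
```

Under these parameters the variance of logs with self-selection stays near the 2.03 of case i, while random assignment produces a noticeably larger variance.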
In the differential rents model, the effect of worker skill on output, ∂f/∂g, is an increasing function of the size of the machine. As in other scale of operations models, an efficient assignment requires that more skilled workers be assigned to jobs with more resources, either capital or responsibility. Alternatively, jobs may differ by a single parameter of difficulty, which measures the sensitivity of the job to the worker skill. This sensitivity to worker skill is also present in Roy's model. If performances in the two sectors were perfectly correlated (so that one could predict performance in one sector from the performance in the other sector), workers would still face a choice between rabbits and trout. With unequal variances of performances, better rabbit catchers would choose to fish for trout as that sector is more sensitive to their skill levels.

Even if jobs did not differ in their sensitivity to worker skills, multiple sectors would arise because of the great variety of tasks performed in the production of goods and services and the diversity of human performances at those tasks. Multiple sectors provide workers with choices. Both the existence of choice and the features of those choices affect the distribution of earnings. The discussion of Roy's model in Section III.C. emphasizes the features of those choices, such as the unequal variances and correlation between the two occupational performances. An alternative way to view Roy's model is by comparing its outcome with the outcome of a single sector model, in which the distribution of outputs would be identical in shape to the distribution of earnings. Suppose the second sector has the same distribution of outputs as the first sector (equal variances and zero correlation). Then the addition of a second sector provides workers with a choice and alters the distribution of earnings from its shape in the single sector case.
It is clear that the addition of more sectors would alter the distribution further. But the effect of additional choice is not apparent in Roy's model as in the alternative cases considered in Section III.C., the number of sectors stays the same at two. Benoit Mandelbrot (1962) and Hendrik S. Houthakker (1974) develop models that partly explain how choice among many sectors, in the absence of unequal variances, affects the distribution of earnings.

Mandelbrot seeks to show how several occupations can have different exponents for Pareto upper tails. As a simplified version of Mandelbrot's model, suppose that workers have vectors of aptitudes given by (y1, . . . , yn). Assume each aptitude follows a Pareto distribution, with probability (y/y0)^(−α) that income is greater than or equal to y, where y0 is the lowest aptitude and α is the same for all aptitudes.³⁷ If the sector's offers are proportional to only a single aptitude, then the offers will follow the Pareto distribution. Accepted offers will differ from the Pareto distribution because lower offers are likely to be rejected. But the higher offers are more likely to be accepted, so in the upper tail accepted offers will again resemble the Pareto distribution.

³⁷ Instead of the Pareto distribution, Mandelbrot adopts a somewhat weaker assumption regarding aptitudes. They are assumed to follow the weak law of Pareto, so they asymptotically resemble the Pareto distribution in the upper tail.

Now suppose a sector weights two aptitudes heavily. To be specific, suppose a firm offers a wage equal to Aθ whenever the lower of the two aptitudes is θ, where A is some constant.³⁸ Multiplying the two cumulative distribution functions together, the likelihood that an offer equal to or exceeding Aθ is made is (θ/y0)^(−2α). The distribution of wages for this occupation will asymptotically resemble a Pareto distribution but with parameter 2α instead of α. Similarly, a sector for which the offer depends on c aptitudes being simultaneously large will have a wage distribution which in the upper tail will resemble a Pareto distribution with parameter cα. In this way different sectors can have wage distributions which are asymptotically Paretian but with different parameters.

³⁸ This functional form for the offer would arise if the worker's productivity is a fixed-proportions function of the worker's aptitudes. For example, if a job required a worker to perform two tasks simultaneously, the productivity would be proportional to the minimum of the two aptitudes. Mandelbrot derives his results under much more general conditions and assumes that offers are a nonhomogeneous form of the independent aptitude factors (1962, p. 61).

In this economy, the very high offers go to workers in sectors that weight a single aptitude highly. Sectors requiring two or more aptitudes do not make many high offers. The workers getting the highest wages are those who are extremely good at a single skill that is crucial to a sector rather than workers who have a high average of aptitudes. The distribution of offers with the lowest Pareto coefficient (corresponding to the greatest inequality) will come to dominate the upper tail of the entire earnings distribution, which will then resemble a Pareto distribution with coefficient α.

Houthakker's model is similar to Mandelbrot's in that a worker has a vector of n occupational aptitudes and chooses the occupation that yields the highest income. Houthakker shows how the distribution of earnings can be derived from the joint distribution of aptitudes. The individual has a vector of occupational aptitudes (y1, y2, . . . , yn) which varies randomly with cumulative density function F(y1, y2, . . . , yn). An individual selects the occupation that maximizes income, z. If the prices of aptitudes are each unity, the cumulative density function of incomes will be given by F(z, z, . . . , z), the probability that each aptitude will be less than or equal to z. The resulting distribution of income will differ in form from the distributions of individual aptitudes, and Houthakker illustrates this general result with examples using the bivariate exponential and bivariate Pareto distributions.

Figure 5. Aggregate and Sectoral Distributions of Income

The aggregate income distribution in Figure 5 tends to be more skewed to the right than a lognormal distribution as the right tail, from the trout sector, is more elongated than the left tail. The mean logarithm of income is 4.48, greater than the mean would be if there were no choice and all workers had to stay in one sector. Overall, income inequality, as measured by the variance of logarithms, is 2.03, greater than the population inequality in rabbit catches, σ1² = 1, but less than the population inequality in trout catches, σ2² = 4. The distribution of income resembles neither the distribution of abilities in catching rabbits nor the distribution of abilities in catching trout. As illustrated, it will tend to be more highly skewed than either of the ability distributions.

In this case, workers are assigned to sectors on the basis of comparative advantage. Workers who do well in a sector (i.e., rabbits) do not necessarily select that sector; instead they may select the other sector because they have a comparative advantage in it. Workers may select a sector (rabbits) even though they do badly in it because they have a comparative advantage in that sector. There is an implicit ranking of the sectors in that trout fishing is the sector at which better workers tend to have a comparative advantage.

Figure 6. Aggregate and Sectoral Distributions of Income, Case ii with ρ < 0
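Mandelbrot's thinner-tail result, that an offer proportional to the minimum of two independent Pareto aptitudes has tail parameter 2α rather than α, can be checked by simulation. The choices α = 2 and y0 = 1 below are arbitrary illustrative values, and the inverse-CDF sampler is a standard device rather than anything from the article:

```python
import random

random.seed(3)
ALPHA, Y0, N = 2.0, 1.0, 200_000

def pareto_draw():
    """One draw with P(Y >= t) = (t / Y0) ** (-ALPHA) for t >= Y0 (inverse-CDF method)."""
    u = 1.0 - random.random()   # uniform on (0, 1], avoids dividing by zero
    return Y0 * u ** (-1.0 / ALPHA)

singles = [pareto_draw() for _ in range(N)]
minima = [min(pareto_draw(), pareto_draw()) for _ in range(N)]

t = 2.0
p_single = sum(y >= t for y in singles) / N   # theory: (t/Y0) ** (-ALPHA)   = 0.25
p_min = sum(y >= t for y in minima) / N       # theory: (t/Y0) ** (-2*ALPHA) = 0.0625
print(f"P(Y >= 2): {p_single:.4f};  P(min >= 2): {p_min:.4f}")
```

The empirical survival probability of the minimum matches the squared survival probability of a single aptitude, i.e., a Pareto tail with exponent 2α.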
B. Wage Differentials

In a single competitive labor market, the wage rate is determined by the familiar condition that quantity supplied equals quantity demanded. The supply and demand curves for a single labor market, however, do not show their dependence on substitution of labor or jobs from other labor markets. With many closely related labor markets, as in assignment models, the demand curve for a particular type of labor is determined by the cost of hiring alternative types of labor.

Wage determination in assignment models generally takes the form of conditions imposed on wage differentials. These conditions are expressed differently in the three assignment models that have been considered but are all essentially generated by trade-offs in production that arise from varying skill levels. The conditions are shown in (3) for the comparative advantage case and (4) for the scale of operations case. The role of trade-offs in production is clearest in the differential rents model. In (11), the wage differential equals the effect of an increase in the skill level g on output, holding the size of machine constant at the level corresponding to the equilibrium assignment. In the optimal assignment problem, the limits in (8) are the differences in the two workers' outputs at machines d and j. The role of trade-offs in production is less clear in Roy's two-sector model as there are no wages associated with workers. However, by defining workers' wages as equal to their incomes, it is possible to derive an analogous result.³⁹

An important conclusion arising from the determination of wage differentials in all three assignment models is that prices or values of worker characteristics are not uniform across the economy. Finis Welch (1969) raises the issue of uniform skill prices and develops a model in which production costs depend on aggregate combinations of skills, so that skill prices can be equalized across sectors.
Rosen (1983) and Heckman and Scheinkman (1987) describe conditions under which uniform skill pricing will not arise. Assignment models provide a direct explanation of unequal skill prices in an economy. For example, in Roy's model, suppose the two skills are rabbit hunting and trout fishing. If a worker chooses to fish for trout, the price of the rabbit skill is zero and the price of the trout skill is given by the trout price. A worker catching rabbits similarly receives a zero reward for any trout fishing skill. In the differential rents model, the value of an increment in the worker skill g depends on the size of the machine assigned to that worker in equilibrium. In the optimal assignment problem, the value placed on differences in worker characteristics in (8) depends on the characteristics of workers' machines. Unequal wage structures in sectors of the economy are therefore a direct outcome of the existence of an assignment problem.

³⁹ Let pj be the price of output in sector j and let aij be the output of worker i in sector j. Let wij = pj aij, the amount that worker i would get if he or she chose sector j. If worker 1 chooses sector 1, then w11 = p1 a11 and w12 = p2 a12 ≤ w11. If worker 2 chooses sector 2, then w22 = p2 a22 and w21 = p1 a21 ≤ w22. Then

p1 a11 / (p1 a21) = w11/w21 ≥ w11/w22 ≥ w12/w22 = p2 a12 / (p2 a22),

so that

a11/a21 ≥ a12/a22.

This is the same relation as in (3) in Section II.B., even though Roy's model uses a different technology and assumes workers choose sectors instead of employers choosing workers.

C. Self-selection

In all three of the assignment models considered here, self-selection is the mechanism that is used to bring about the assignment. Workers select a sector or job, and thereby assign themselves to it, when it offers them greater income or utility than any other sector. This selection criterion results in a distribution of performances within the sector that differs systematically from the distribution for the population as a whole.
These selection corrections have been worked out for the two-sector Roy model but not for the general multisector case. Self-selection is not a necessary feature of assignment models, however, and is only one of a few mechanisms that may operate in the economy to assign workers to jobs. Self-selection requires that a worker have complete information about potential earnings or utility in each sector. This is reasonable when there are only two sectors. But the assumption of complete information becomes unreasonable when there are many sectors with little guide to the worker as to which one is most suitable. Once one abandons full information and self-selection, the information structure plays a role in determining the feasible assignment mechanism, the resulting assignment and wage differences.

Studies of worker behavior suggest that workers engage in search to find jobs. In the standard search model, workers need to know only the distribution of wage offers among jobs and not the wage corresponding to each job. A worker selects a reservation wage and accepts the first job offer with a wage that equals or exceeds it. The selection criterion under search would appear to be much simpler than under self-selection. A worker observed to be in a sector (or job) gets a wage that exceeds the reservation wage but that does not necessarily exceed the wage in every other sector (or job). Then the income or utility in a given sector needs to satisfy only one bilateral comparison, i.e., with the worker's reservation income or utility.⁴⁰

An alternative approach is to assume that in each industry or sector, workers make a choice between that sector and a composite sector consisting of all other sectors. Maddala (1983, pp. 275-78) considers additional methods that rely on multiple binary-choice rules or on applications of order statistics.
With search present, the process of assigning workers to jobs itself generates some additional sources of inequality (Sattinger (1985) analyzes the distributional consequences of job search). Workers suffer unequal amounts of unemployment in their search, contributing to inequality in earnings and welfare. Workers with identical characteristics may get unequal wage rates because of the random outcomes of search, further contributing to inequality. Workers can influence their expected wage rates and likelihood of unemployment through their choice of reservation wages, providing an alternative source of inequality. On the other hand, search alters the assignment in such a way that higher skilled workers end up, on average, with fewer resources than in an exact, self-selection assignment, possibly reducing their earnings and overall inequality.

⁴⁰ The degree of simplification depends on how the reservation wage is determined. If it is exogenous (an unknown function of the worker's characteristics, the same for all workers, plus a random element), then the simplification is straightforward. If the reservation wage is determined endogenously, as the solution to the worker's optimizing problem, then the econometrics could be as complicated as the self-selection procedures.

A further consequence of departing from the mechanism of full information self-selection is that there will be a demand for information about worker or job characteristics. MacDonald (1980) develops a model of person-specific information in which there are two types of workers and two types of firms. The type of a worker cannot be directly observed, but workers can invest in generating information about their own types. Neither worker type has an absolute advantage at both types of firms, so any information leads to improved worker-firm matches.
The value of information in the labor market leads firms to offer higher wages to workers who can provide information about their types. All workers then receive a return to information investment. In later articles, MacDonald (1982a, 1982b) considers how information enters the production function through the assignment. A continuum of tasks is assigned to workers of two types based on the quality of information, and output rises when the quality of information increases.

Models of signaling, filters, and screening (Kenneth Arrow 1973; A. Michael Spence 1973) show that worker investment in information about themselves could simply give them a competitive advantage in the labor market. Workers get a private return to informational investments that exceeds the social returns. With an assignment problem in the economy, however, informational investment can yield a social return that equals or exceeds the private return, even though it does not change worker productivity (e.g., Arrow's model with two types of jobs; 1973, p. 202). In MacDonald's model (1980), there are no externalities from information about worker types as in signaling or screening models so that the social returns equal private returns. Waldman (1984a) and Joan E. Ricart i Costa (1988) develop models in which the worker's assignment itself acts as a signal to other firms, leading to possible inefficient assignments.

Investment in information generates a pattern of mobility over time as well as effects on the life-cycle earnings profile. Hartog (1981a) develops a two-period model in which wages in the first period are based on signals and in the second period on capabilities. He shows that dispersion in signal classes increases over time, and more capable individuals experience higher earnings. The structure of information also affects the amount of earnings inequality.
Michael Rothschild and Joseph Stiglitz (1982) develop a model in which a worker's output is maximized when placed in a job level corresponding to the worker's ability. However, the worker's ability depends on both observed and unobserved characteristics, so the job placement depends on the worker's expected ability level. A given observed characteristic is then related both directly to production and indirectly, through its correlation with unobserved characteristics. Firms may be unable to distinguish direct and indirect effects. When few characteristics are observable, expected ability levels do not vary greatly, and workers are placed in narrowly varying job levels. When many characteristics are observable, expected ability levels vary greatly and workers are placed in widely varying job levels. With more observable characteristics, both the expected wage and the variance of wages are higher. Tournaments may be regarded as a mechanism of assigning workers to hierarchical levels in a context in which worker abilities are revealed through competition (Edward Lazear and Rosen 1981; Rosen 1986b). As performances depend on effort, large prizes are required for workers in the top ranks to maintain incentives to compete. Labor markets provide another mechanism for assigning workers to jobs. Instead of one big labor market, submarkets arise based on observable characteristics of workers and jobs. Without an assignment problem (at least spatially), the existence of submarkets would serve no economic purpose. Screening, job market signaling, dual labor markets, and occupational segregation may determine assignment through restricted access to jobs for some workers, resulting in assignments that are less efficient than possible given the constraints of costly and incomplete information (Dickens and Kevin Lang 1985; Insan Tunali 1988; T. Magnac 1991).
Comparative advantage is not necessarily present in the three models considered here, and it does not necessarily determine the assignment. In the linear programming optimal assignment problem, comparative advantage will be absent if the matrix of output values (a_ij) has rank one. Then each column will be a scalar multiple of any other column. The ratios of output values for any two workers, a_1j/a_2j, will be the same no matter which machine they use. Despite the absence of comparative advantage, an optimal assignment will arise in which more productive workers are assigned to more productive machines, according to the scale of operations effect. If the matrix has rank two or more, then comparative advantage must arise, but it will not be the only determinant of the assignment. In the differential rents model, comparative advantage will be absent whenever f(g,k) is multiplicatively separable (i.e., it can be written as a function of g times a function of k). For example, suppose f(g,k) = g^α k^β as in the Cobb-Douglas production function. Then the ratio of output values for workers 1 and 2 will be (g_1^α k^β)/(g_2^α k^β) = (g_1/g_2)^α, an amount that does not depend on the machine size. Comparative advantage will therefore be absent. The optimal assignment will require larger machines to be combined with more skilled workers as in the scale of operations effect. If f(g,k) is not separable (e.g., as in the constant elasticity of substitution production function with elasticity unequal to one), then comparative advantage will be present. Only in Roy's model does comparative advantage determine the assignment of workers to jobs. Because the value of output in each sector equals the worker's earnings, worker self-selection leads to an assignment that is consistent with comparative advantage in the sense defined in (1). With this general definition, no restrictions are placed on the correlation between performances in one sector and in the other.
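The rank-one case can be illustrated with a small hypothetical matrix of output values (the numbers are chosen only for exposition), in which each worker's output scales by the same factor across machines:

```latex
% Hypothetical rank-one output matrix: a_{ij} = w_i m_j.
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
  = \begin{pmatrix} 2 & 4 \\ 3 & 6 \end{pmatrix},
\qquad
\frac{a_{1j}}{a_{2j}} = \frac{2}{3} \quad \text{for } j = 1,2 .
\]
% Neither worker has a comparative advantage at either machine, yet the
% scale of operations effect still matches the more productive worker
% (row 2) with the more productive machine (column 2).
```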
The standard comparative advantage case, in which absolute advantage is absent (case i in Section III.C), is only one of three possible cases. The basic reason comparative advantage determines assignment in Roy's model but not in the others is that cooperating factors such as capital play no significant role. In the linear programming optimal assignment problem and the differential rents model, the value of output is divided between labor and the employer so that wages are no longer proportional to output. In Roy's model, however, the value of output goes entirely to the worker. The absence of any role for cooperating factors of production is not an inherent feature of Roy's model. Capital can be incorporated into Roy's model as follows. Suppose that within a sector, workers perform tasks that are an input together with capital. Suppose output in a sector is given by

Q = Q(n, K, T), (15)

Journal of Economic Literature, Vol. XXXI (June 1993)

where Q is total output per period in the sector, n is the number of workers, K is the amount of capital, and T is the total number of tasks performed. Assume Q has continuous first and second order derivatives. If n does not appear as an argument of Q in (15), then the marginal product of a worker will be t_i ∂Q/∂T, where t_i is the number of tasks performed by the worker (this is essentially Heckman and Sedlacek's assumption (1985, p. 1080) in their derivation of Roy's model). Potential earnings in a sector will continue to be proportional to tasks as in Roy's standard model. However, suppose n appears in (15) and suppose further that Q has constant returns to scale in all three arguments. Then

Q = n h(p, y), (16)

where p = K/n = capital per worker, y = T/n = average tasks per worker, and h(p,y) = Q(1, K/n, T/n) = output per worker. Then the marginal product of a worker who performs t_i tasks per period is

MP_i = h(p,y) − p h_p − y h_y + t_i h_y, (17)

where MP_i is the marginal product, h_p = ∂h/∂p and h_y = ∂h/∂y.
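Equation (17) follows by differentiating (16) with respect to worker i's participation: adding worker i raises n by one and T by t_i tasks, so

```latex
\[
MP_i \;=\; \frac{\partial Q}{\partial n} + t_i \frac{\partial Q}{\partial T}
\;=\; \bigl[\, h(p,y) - p\,h_p - y\,h_y \,\bigr] + t_i\,h_y ,
\]
% since Q = n h(K/n, T/n) gives
%   dQ/dn = h - p h_p - y h_y   and   dQ/dT = h_y .
```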
Unless h(p,y) is homogeneous of degree one, the intercept in (17) will be positive or negative, so that the marginal product is no longer proportional to the number of tasks performed.41 It can be shown that inequality in marginal products and wages in a sector will be greater or less than inequality in tasks performed, depending on whether the intercept is negative or positive. Depending on the functional form for h(p,y), a change in either p or y will alter the relation between wages and tasks within a sector. Movement of workers from one sector to another, with no movement of capital, raises the capital to labor ratio in the sector they move from and lowers it in the sector they move into. These changes alter relative wages between and within sectors. In this way, capital can be incorporated into Roy's model. This results in a much more complex model that would be more difficult to estimate econometrically. But this extension is necessary in order to investigate how growth, capital accumulation, and business cycles affect the distribution of earnings and why wage structures vary from sector to sector.

41 If h(p,y) is homogeneous of degree one, h(p,y) = p h_p + y h_y by Euler's Theorem, so that MP_i = t_i h_y and potential earnings in a sector are proportional to tasks performed. This reduces to the case where Q is a function of K and T only, as Q = n h(p,y) = h(np, ny) = h(K, T).

V. Strategies in Studying the Distribution of Earnings

In analyzing any complex research question, a standard approach is to decompose the problem by breaking it up into smaller questions that can be more easily explained. Approaches to the distribution of earnings can be understood in terms of the decompositions used to analyze it. These decompositions include breaking the economy up into sectors, use of an earnings function, or perfectly elastic supply or demand curves.
Assignment models demonstrate the invalidity of the ceteris paribus assumptions which lie behind these decompositions. This section discusses the decompositions used in various approaches, the problems revealed by analyzing the economy's assignment problem, the solutions suggested by existing models, and strategies for further work.

A. Sectoral Decompositions

A seemingly natural way to study the distribution of earnings is to break the population down into subgroups based on demographic, occupational, or industrial categories. At any point in time, one can then study earnings inequality in terms of differences within and between groups. With this disaggregation or decomposition, one would expect to be able to calculate the consequences for inequality of a change within a sector, e.g., from the number in that sector or the distribution of earnings within that sector. However, this calculation requires the ceteris paribus assumption that the composition of the other sector remains the same, and Roy's model shows directly why this assumption does not hold. In Roy's model, a change in the number of workers, mean earnings, or variance of logarithms of earnings within a sector does not occur in isolation. Shifts of workers from one sector to another occur because of changes in the relative prices of output in the two sectors. When the price of output of one of the sectors increases, a nonrandom selection of workers in the second sector move into that sector. This movement alters the means and distributions in both sectors. Figures 9 and 10 and Table 1 in Section III.C demonstrate the consequences of changes in relative sector prices. In Table 1, as the price of trout in terms of rabbits goes up from 0.83 to 1.00, the proportion hunting rabbits declines from 0.551 to 0.5. If one used this result to predict the effects of the price change on inequality, though, one would miss the self-selection effects of the shift on the composition of workers within sectors.
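The selection mechanics can be sketched with a small simulation. The lognormal skill parameters and the independence of the two skills below are illustrative assumptions only (Roy's rabbit-trout example uses correlated skills), but the sketch shows how a change in the relative price reassigns workers and shifts the composition of each sector:

```python
import math
import random

random.seed(0)

def simulate(price_trout, n=100_000):
    """Assign each worker to the sector offering the higher earnings.

    Skills are independent lognormals (an illustrative assumption);
    the rabbit price is normalized to 1.
    """
    rabbit_skills, trout_skills = [], []
    for _ in range(n):
        r = math.exp(random.gauss(0.0, 0.5))  # rabbit-hunting skill
        t = math.exp(random.gauss(0.0, 0.5))  # trout-fishing skill
        if r >= price_trout * t:              # self-selection rule
            rabbit_skills.append(r)
        else:
            trout_skills.append(t)
    frac_rabbit = len(rabbit_skills) / n
    mean_r = sum(rabbit_skills) / len(rabbit_skills)
    mean_t = sum(trout_skills) / len(trout_skills)
    return frac_rabbit, mean_r, mean_t

for p in (0.83, 1.00):
    frac, mean_r, mean_t = simulate(p)
    print(f"price={p}: share hunting rabbits={frac:.3f}, "
          f"mean rabbit skill={mean_r:.3f}, mean trout skill={mean_t:.3f}")
```

As the trout price rises, workers on the margin switch sectors, and the within-sector mean skills move with them; neither sector's distribution can be held fixed while the other changes.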
As shown, both mean rabbit skills and mean trout skills decline, and other changes occur within the sectors. Thus the number or distribution in one sector cannot be taken as given as the other sector changes. From the assignment perspective, the source of the change in distribution lies in the reassignment of workers from one sector to another in response to sector price changes, rather than in the separate changes within each sector. By incorporating the selection decision, Roy's model allows one to predict the consequences of changes in sector prices. Roy's model is appropriate any time the population is divided into two groups based on individual choice. For example, consider the decision to participate in the labor market. This context provided much of the early work on self-selection corrections (Reuben Gronau 1974; H. Gregg Lewis 1974; Heckman 1974, 1979), and Heckman and Sedlacek estimate models with a nonmarket or household sector (1985, 1990). The decision to participate in the labor market divides the population into two sectors, the paid labor market and the nonmarket or household sector. A decomposition in which one looked at only the paid labor force would be misleading. From the perspective of Roy's model, workers in the paid labor market are a selection of all potential workers. As the task price in the paid labor market goes up, a selection of individuals will move from the nonmarket to the paid labor market sector. Changes in average wages will not be proportional to changes in task prices, and average worker productivity will be affected, depending on the parameters of the task distributions. An empirical version of Roy's model can be used to examine these and other effects which occur along with changes in labor force participation.
Measures of earnings inequality may be used to compare alternative distributions among a fixed population but may inaccurately indicate changes in inequality when the number of earners grows or contracts. For example, an increase in the paid labor market task price could draw in predominantly low wage workers, making them better off while raising measured earnings inequality. The appropriate correction is to include workers outside the paid labor market in the measure of inequality, so that movement of a worker in or out of the labor force would not by itself cause changes in inequality. Heckman and Honore (1990, Theorem 6, p. 1135) derive results relating inequality in one of the sectors in Roy's model to overall inequality under the assumption of lognormality. A related application arises in studying the effect of development on inequality. One sector in Roy's model would be the market (perhaps urban) sector and the other a nonmarket sector (perhaps rural, agricultural, or subsistence). Development generates a higher task price for labor producing the market good relative to the nonmarket good. The effects on the selection of workers in the market sector could then be derived and related to average earnings, productivity, and observed inequality as development proceeds. International trade provides another potential empirical application of Roy's model. A classic question in trade theory, reflected in the Heckscher-Ohlin and Stolper-Samuelson theorems, is the effect of trade on factor payments and income distribution. Roy's model could be applied to this question by assuming that within a country, workers (or producing units) are divided between export and import-competing sectors. Results from Roy's model could then be used to examine effects of terms of trade on production in the two sectors, average earnings and productivity in each sector, and the overall distribution of earnings. In addition to their own work, Maddala (1983, p.
289) and Heckman and Honore (1990, p. 1121) discuss further applications of Roy's self-selection corrections, which include labor force participation, returns to education, retirement, union wage differentials, migration, occupational choice, movement between regions, and marital status. More recently, George Borjas (1990) examines the effects of self-selection on the skill composition of immigrants. Charles Brown (1990) analyzes the operation of self-selection in army retention. Also, Gary Solon (1988) and Robert Gibbons and Katz (1992) consider the earnings of industry changers, Lazear (1986) examines choice of piece rate versus salary pay structures as a consequence of sorting between firms, and Tunali (1988) and Magnac (1991) extend Roy's model to examine segmented labor markets. Further applications of Roy's model are possible if it can be extended to more than two sectors. A current research question concerns the reasons why wages differ among industrial sectors or establishments. Levy and Murnane (1992, Section VI.C) review this literature in relation to the distribution of earnings, and James D. Montgomery (1991), Lang (1991), and Sattinger (1991) analyze capital intensity as a source of wage differences. Other possible reasons include efficiency wages, unobserved abilities, union threats, and involuntary unemployment. A first step in analyzing the relation between wages and industrial characteristics is to correct for sectoral selection biases within industries. However, the econometrics of Roy's model with self-selection would appear to place a barrier of only two or three sectors that can be estimated, as a worker's income or utility in a sector must exceed the income or utility in every other sector. The extension to many sectors would appear to be possible if workers are assumed to engage in search instead of self-selection to find jobs, as discussed in Section IV.C.

B.
Earnings Function

A second approach to decomposing the distribution of earnings is to express separately the prices and quantities of worker characteristics that contribute to earnings. This approach uses an earnings function to describe the prices for worker characteristics, i.e., the relation between worker characteristics and earnings. This earnings function can be combined with the distribution of worker characteristics to generate the distribution of earnings. As discussed in the introduction, this approach neglects that what is exogenous to the determination of an individual's earnings is endogenous to the determination of the distribution of earnings. The approach therefore involves a form of the fallacy of composition. The problems that arise from using an earnings function can be demonstrated using the following simple model. Suppose at any one point in time that workers differ by a single characteristic x and that the logarithm of earnings is related to x by the following earnings function:

ln y_i = a + b ln x_i + e_i, (18)

where e_i is a mean zero random error term uncorrelated with x_i. From the perspective of supply and demand models, it seems reasonable to suppose that the return to the skill variable x depends on both demand and supply variables. The demand for x, according to the capital-skill complementarity hypothesis, would depend on the economy's capital to labor ratio p. Suppose therefore that the coefficient b depends positively on the capital to labor ratio p and negatively on the average population value of the worker characteristic x. Now taking variances on both sides of (18),

Var(ln y) = b^2 Var(ln x) + Var(e). (19)

This relationship will hold tautologically whenever (18) holds. If a single worker experiences a change in his or her own characteristic from x_i to x_i′, the expected logarithm of earnings for that worker would change from a + b ln x_i to a + b ln x_i′. But suppose all worker characteristics increase by 10 percent.
Then (19) will incorrectly predict the consequences of this change for earnings inequality, as measured by the variance of the logarithms of earnings. Var(ln x) will increase by 1.1^2 = 1.21, but Var(ln y) will not increase by 1.21 b^2: as b depends negatively on the average worker characteristic, b^2 will decline. Further, if one attempts to estimate (19) directly (using for example time series data), the estimated coefficient of Var(ln x) will not equal b^2; it will instead confound changes in Var(ln x) with changes in b^2.42 Use of an earnings function obscures the influence of demand on the distribution of earnings. Demand variables such as the capital to labor ratio appear to play no role in the determination of individual earnings in (18), but that is an illusion. The influence of the aggregate capital to labor ratio on earnings would be invisible in any single period empirical estimation of (18), as it would be the same for each observation. Then in the expression for earnings inequality in (19), demand variables do not appear explicitly, suggesting that earnings inequality does not depend on them. But while the coefficient b can be regarded as a constant in a single period estimate of (18), it will vary from one period to another in (19). With an assignment problem present in the economy, the earnings function is no longer a direct relationship arising from the contributions of worker characteristics to production. The assignment problem introduces an intermediate step between worker characteristics and earnings. The observed earnings function is generated from the supply and demand decisions of workers and firms.
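The point that b is endogenous can be sketched numerically. The functional form b = ρ/x̄ below is a purely hypothetical assumption, as is every parameter value; the sketch only shows that a fixed-b prediction for Var(ln y) fails once b responds to the average characteristic:

```python
import math
import random

random.seed(1)

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def b_of(mean_x, rho=2.0):
    # Assumed equilibrium relation: the return b falls as the average
    # characteristic rises (hypothetical functional form).
    return rho / mean_x

def log_earnings(xs, a=0.0, noise_sd=0.1):
    b = b_of(sum(xs) / len(xs))
    return [a + b * math.log(x) + random.gauss(0.0, noise_sd) for x in xs]

xs = [math.exp(random.gauss(1.0, 0.3)) for _ in range(50_000)]
var_before = variance(log_earnings(xs))

xs_up = [1.1 * x for x in xs]          # all characteristics rise 10 percent
var_after = variance(log_earnings(xs_up))

print(f"Var(ln y) before: {var_before:.4f}, after: {var_after:.4f}")
```

Holding b fixed, (19) predicts inequality cannot fall after the 10 percent increase; with b endogenous, Var(ln y) declines.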
The hedonic wage and price literature develops econometric procedures for estimating how the wage varies with worker attributes and for the identification of supply and demand functions for worker characteristics (Rosen 1974; Dennis Epple 1987; Timothy Bartik 1987). An immediate application of this approach in the area of the distribution of earnings is compensating wage differentials for job characteristics (R. E. B. Lucas 1977b; Smith 1979; Greg J. Duncan and Bertil Holmlund 1983; John H. Goddeeris 1988; Mark Killingsworth 1986). In Killingsworth's analysis, workers have heterogeneous preferences for a given job characteristic, so that the compensating wage differential will depend on the distribution of worker preferences. Killingsworth applies the analysis to differentials between white and blue collar labor. Causes of compensating wage differentials are relevant to issues of segmentation, discrimination, and comparable worth (Killingsworth 1987). Pettengill (1980) applies related procedures to the question of how labor unions affect skill differentials and inequality. Unionized firms adjust to higher negotiated wages by employing higher quality workers. The effect of unionization is then to shift demands for workers to higher quality levels, leading to greater skill differentials and inequality in the economy. Pettengill also extends the analysis to credentialism, discrimination, absenteeism, cyclical changes in productivity and wages, and minimum wages.

42 In approaches derived from a solution to the assignment problem, the coefficient b in (18) is endogenously determined, and the influence of the distribution of individual characteristics, Var(ln x) in (19), on earnings inequality is correctly specified (for example in the earnings function solved by Tinbergen 1956, p. 168).
A major reason for interest in the earnings function is that it describes the earnings that a worker with a given set of characteristics can obtain in the labor market. With an assignment problem, however, there will no longer exist a single expected wage associated with a given set of worker characteristics, as implied by the traditional earnings function. Instead, the worker will face a distribution of potential wages and job characteristics from alternative jobs or sectors. The purpose of describing the alternatives facing workers would be better served by estimating the wage offer distributions for workers with given sets of characteristics.43 If job characteristics such as risk or satisfaction vary from sector to sector, the joint distribution of wages and job characteristics should be estimated. Tinbergen (1975a, 1977) and Hartog (1986a, 1986b, 1988) estimate models in which earnings depend on both worker and job characteristics. Such models are relevant to questions of overeducation (Mun C. Tsang and Henry M. Levin 1985; Russell W. Rumberger 1987; Nachum Sicherman 1991) and mismatches. An important outcome from such models is that the return to education varies according to the job placement of the worker, e.g., whether the job requires more or less education than the worker has.

C. Human Capital Models

Human capital models of the distribution of earnings also rely on decompositions. These models structure the determination of earnings in such a way that the influences of supply and demand can be separated. The decompositions used are inconsistent with the existence of an assignment problem but are not essential features of human capital models of individual behavior (as distinct from models of the distribution of earnings). Robert J. Willis and Rosen's (1979) intertemporal extension of Roy's model shows that the human capital and assignment models can be combined.
In the model developed by Mincer (1958, 1974), workers choose a level of schooling based on the maximization of the present discounted value of lifetime earnings. With continuous discounting, the earnings function generated by this assumption in the long run is

ln Y_s = ln Y_0 + rs, (20)

where Y_s is the yearly income with s years of schooling beyond the minimum, Y_0 is yearly income with the minimum schooling level, r is the discount rate, and s is the number of years of schooling beyond the minimum. If Y_s and s satisfy the relation in (20), the present discounted values will be equalized for each level of schooling. At each level of schooling, there is in the long run a perfectly elastic supply of labor at the yearly income determined by (20). If the yearly income for a given level of schooling yields a higher present discounted value than other levels, new workers will choose that schooling level. The amount of labor will continue to increase until the yearly income is pushed down to a level which yields the same present discounted value as all other schooling levels. With horizontal supply curves, the location of demand curves cannot influence the yearly incomes of workers in the long run. Under the assumptions of Mincer's model, the equilibrium earnings function is invariant to changes in the demands for labor. The coefficient of the schooling variable in (20) is the discount rate and does not depend on demand variables. However, in this model, the distribution of earnings depends both on the yearly incomes for workers at each schooling level and the numbers at each schooling level. With a horizontal supply curve for each level of schooling, the number of workers with that level depends on the location of the demand curve. Although demand does not enter explicitly anywhere in the model, it plays a central role in determining the distribution of earnings. This human capital model provides a simple decomposition of the determination of the earnings distribution. Worker supply behavior completely determines the earnings function, which remains the same in the long run as long as the discount rate is the same. Demands for workers do not influence this earnings function. With horizontal supply curves, the locations of demand curves completely determine the numbers of workers at each schooling level. This decomposition is possible because of the absence of any assignment problem. All workers are identical, so any worker can obtain any schooling level. In the long run each worker is indifferent as to which schooling level to obtain. If instead workers had preferences for some schooling levels or if they faced different costs of obtaining schooling, the decomposition would break down. The supply of workers to a given schooling level would no longer be horizontal. As the demand for workers at a particular schooling level increases, the yearly income would need to be higher to compensate for cost and utility differences. Shifts in demand would then alter the earnings function. The terms in the earnings function (20) can be reinterpreted to yield a very different model, one that yields an explicit decomposition of earnings inequality. From a worker's point of view, a rate of return to an investment in schooling can always be calculated. In Mincer's model, this rate of return is exactly equal to the discount rate in the long run equilibrium.

43 Christopher J. Flinn and Heckman (1982) and Nicholas Kiefer and George Neumann (1979a, 1979b) estimate wage offer distributions facing workers engaged in search. 44 Mark Blaug (1976), Lucas (1977a), Rosen (1977), Sattinger (1980), and Willis (1986) discuss alternative interpretations of human capital earnings models.
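The equal-present-value condition behind (20) can be sketched as follows: a worker with s extra years of schooling forgoes earnings for s years, so with an infinite horizon and continuous discounting at rate r,

```latex
\[
\int_{s}^{\infty} Y_s\, e^{-rt}\,dt \;=\; \int_{0}^{\infty} Y_0\, e^{-rt}\,dt
\;\Longrightarrow\;
\frac{Y_s\, e^{-rs}}{r} = \frac{Y_0}{r}
\;\Longrightarrow\;
\ln Y_s = \ln Y_0 + rs .
\]
% Equalizing present discounted values across schooling levels
% yields the long-run earnings function in equation (20).
```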
Now suppose that the rate of return is instead a random variable.45 The earnings function can then be written as

ln Y_si = ln Y_0 + r_i s_i, (21)

where Y_si is the yearly income for worker i, r_i is the average rate of return to schooling for worker i, and s_i is the number of years of schooling beyond the minimum for worker i. Taking variances,

Var(ln Y) = r̄^2 Var(s) + s̄^2 Var(r) + Var(r)Var(s), (22)

where Var(ln Y) is the variance of logarithms of earnings, and r̄ and s̄ are the average rate of return and level of schooling for all workers. This procedure appears to provide a neat decomposition of earnings inequality into rate of return and schooling sources, but unless restrictive assumptions hold, the decomposition has no predictive content. Willis and Rosen (1979) develop a model that can be used to show why the existence of an assignment problem interferes with the use of the decomposition in (22). In their development, the assignment problem is to allocate individuals to schooling levels on the basis of tastes, talents, expectations, and parental wealth. Individuals base their decision on human capital considerations, but because of heterogeneous tastes and talents they are not indifferent among schooling levels. Rates of return calculated from observed relationships between schooling and wages will not reflect the rates of return facing individuals. Willis and Rosen argue that individuals who stop with a high school education earn more than college educated individuals would if they had stopped with a high school education. On average, then, individuals have an absolute advantage in terms of earnings at the schooling level they choose: they earn more than the rest of the population would at that level.46 In response to a wage increase for a particular schooling level, only a subset of individuals switch to that level.

45 Lucas (1977a) analyzes different interpretations of the interest rate in human capital models. Mincer (1974, p. 27), Becker and Barry Chiswick (1966), and Chiswick and Mincer (1972) use human capital models of the earnings function in which the rate of return to schooling is a random variable.
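The variance expression in (22) follows from (21) if r and s are assumed independent, using the variance of a product of independent random variables:

```latex
\[
\operatorname{Var}(\ln Y) = \operatorname{Var}(rs)
= \bar r^{\,2}\operatorname{Var}(s)
+ \bar s^{\,2}\operatorname{Var}(r)
+ \operatorname{Var}(r)\operatorname{Var}(s),
\]
% since for independent r and s,
%   Var(rs) = E[r^2] E[s^2] - (E[r] E[s])^2
%           = (Var r + \bar r^2)(Var s + \bar s^2) - \bar r^2 \bar s^2 .
```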
The supply of individuals to a schooling level is therefore upward sloping rather than perfectly elastic as in Mincer's model. The wages and rates of return for workers at a schooling level depend on both the supply of and demand for workers with that amount of schooling as well as observed and unobserved worker skills. Now consider what happens when the distribution of schooling shifts. We may suppose that changes in parental wealth or tastes lead more workers to get a college education. Moving down the demand curve for college-educated labor, the rate of return for college educated labor declines. The change in the distribution of schooling causes a change in the distribution of rates of return.47 This connection between schooling and rates of return makes (22) unusable. To keep things simple, suppose s̄ goes up while Var(s) stays the same. The expression in (22) would predict that earnings inequality would go up by 2 s̄ Var(r) times the increase in s̄. This prediction would be incorrect, as r̄ and Var(r) also change. The ceteris paribus assumption that is needed to use comparative statics does not hold. Instead of earnings inequality increasing as workers get more education, it may decline if the rate of return falls enough.48

46 According to Willis and Rosen (1979), this result argues against a large positive correlation between a worker's productivity with a high school education and with a college education, ruling out simple rankings of individuals. This corresponds to cases ii and iii in the Roy model (III.C) and is inconsistent with the one-dimensional ability or skill models discussed in III.B. In addition to absolute advantage, comparative advantage is also present in the sense defined in (1).

47 To be caused by shifts in the distribution of schooling, the recent increases in the returns to schooling (noted in Section I) would require that fewer workers choose to get a college education.
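The failure of the ceteris paribus prediction can be sketched with hypothetical numbers (all parameter values below are illustrative assumptions, not estimates):

```python
def var_ln_y(r_bar, s_bar, var_r, var_s):
    """Decomposition (22): Var(ln Y) = r̄²Var(s) + s̄²Var(r) + Var(r)Var(s)."""
    return r_bar ** 2 * var_s + s_bar ** 2 * var_r + var_r * var_s

base = var_ln_y(r_bar=0.10, s_bar=12.0, var_r=0.001, var_s=4.0)

# Naive comparative statics: raise average schooling, hold the rate of
# return distribution fixed.
naive = var_ln_y(r_bar=0.10, s_bar=13.0, var_r=0.001, var_s=4.0)

# Equilibrium response: more schooling drives the return down.
adjusted = var_ln_y(r_bar=0.08, s_bar=13.0, var_r=0.0008, var_s=4.0)

print(f"base={base:.3f}  naive={naive:.3f}  adjusted={adjusted:.3f}")
```

The naive calculation predicts rising inequality; once r̄ and Var(r) respond to the shift in schooling, measured inequality can fall instead.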
Early assignment models trivialize human capital issues by assuming that worker characteristics such as schooling levels are exogenously determined. Early human capital models trivialize assignment issues by assuming that workers all face the same investment returns independent of sector. But assignment and human capital models are not inherently competing theories of the distribution of earnings. In both, behavior is motivated by wealth maximization (or, in extensions, utility maximization). In both, workers are assigned to sectors or educational levels on the basis of their own choices rather than rationing.

48 This difficulty can be avoided by supposing that a worker's rate of return does not depend on the sector or job chosen. Each worker's labor could be expressed as a multiple of some standard worker's labor, independent of occupation or job. This is known as the efficiency units assumption, as all labor can be expressed in terms of a common measure (see discussions by Rosen 1977; Sattinger 1980, pp. 15-20; and Paul Taubman 1975, pp. 3-6). It is equivalent to the absence of any assignment problem. With wages proportional to a worker's efficiency units, employers are indifferent as to which labor to use in a job. For example, an employer is indifferent between hiring a worker with two efficiency units at a wage rate w and hiring two workers of one efficiency unit apiece at a wage rate of w/2 each. This assumption would insure that the distribution of rates of return is unaffected by shifts in the distribution of schooling, so that the standard comparative static analysis can be applied to (22). However, the efficiency units assumption seems an unreasonable restriction in models designed to explain the economic consequences of human diversity.

The
advantage of viewing investment in education as an assignment to schooling levels is that it more accurately describes the choices available to individuals. As in Willis and Rosen's analysis (1979), a model that specifies the unobserved alternatives for workers allows one to examine the issue of ability bias and estimate the real return to individuals for investment in schooling.

If the problems of extending Roy's model to many sectors can be solved, then an empirical general equilibrium method of studying the distribution of earnings can be developed in which the conditions in each sector can be considered separately. However, it is difficult to describe how to formulate such an empirical, multi-sector model without actually doing so.

D. Assignment Models

In their structuring of the determination of earnings, assignment models also rely on a decomposition of the distribution of earnings. The existence of an assignment problem introduces an intermediate decision stage between workers' characteristics and their earnings. In many of the theoretical models, the assignment problem can be solved first and then used to determine the wage differentials. This occurs, for example, in Tinbergen's model, the differential rents model, and the linear programming optimal assignment problem. In general, however, assignment and the determination of earnings occur simultaneously. In empirical studies, the information that would be needed to solve the economy's assignment problem is unavailable. The current assignment can be observed but not the alternatives facing individual agents. The relevant characteristics of workers and firms may not even be observable. Complete specification of the assignment problem, prior to the determination of earnings, is therefore unrealistic in empirical estimates.

Tinbergen's empirical study of the distribution of earnings structures the problem using a supply and demand approach (1975a, pp. 29-30).
In a single labor market, the structural supply and demand equations can be solved to yield two reduced form equations, one for the price (wage) and the other for the quantity (employment). These reduced form equations are functions of both supply and demand factors. Extending this approach to multiple labor markets, the earnings function and measures of inequality should be regarded as reduced form equations that depend on the distribution of both supply and demand factors. This approach does not incorporate self-selection corrections but carries the simple, practical injunction that both supply and demand factors need to be included in empirical estimates of the earnings function or inequality measures. Tinbergen (1975a, 1977) applies an empirical decomposition of the determination of earnings in which the labor market is broken down into compartments based on discrete values of some worker characteristic such as education. In this approach, the separate effects of supply and demand (in the form of job requirements) can be estimated. Tinbergen's separate estimates of labor market compartments suggest a very pragmatic, disaggregated, partial equilibrium approach to studying the distribution of earnings. First, one isolates a sector or segment of the economy of interest (for example, wholesale and retail trade, which employs many younger workers at low wages). Then one analyzes the market in terms of supply and demand, fully specifying the alternatives that are available to employers and workers (in order to account for self-selection phenomena) and institutions in the labor market. This approach is often followed in practice but without reference to the income distribution developments that lie behind it.

ln Ysi = ln Y0 + ri si, (21)

where Ysi is the yearly income for worker i, ri is the average rate of return to schooling for worker i, and si is the number of years of schooling beyond the minimum for worker i.
In this model, the variables ri and si are determined exogenously, so that a separate model is needed to explain a worker's level of schooling and rate of return (see Becker 1975, pp. 94-117, for a theory of investment behavior that explains the rate of return and schooling level). So far, the earnings function in (21) describes the generation of a worker's earnings, taking the economy as given. The next step is to extend the earnings function in (21) to an explanation of earnings inequality by taking variances on both sides of (21). Under the assumption that the rate of return ri and level of schooling si are independently distributed,

Var(ln Ys) = r̄²Var(s) + s̄²Var(r) + Var(r)Var(s). (22)

A. The Importance of Choice

This survey has examined three potential reasons why the existence of an assignment problem affects the distribution of earnings. The first reason is comparative advantage, which is measured by bilateral comparisons of output of two workers at two jobs. In technologies with cooperating factors of production, as in the linear programming optimal assignment problem and differential rents model (Sections III.A and III.B), comparative advantage does not necessarily determine the assignment; instead, the scale of operations effect may influence the assignment. In technologies where comparative advantage does determine the assignment, as in Roy's model, it is consistent with all cases, in the trivial sense that the ratio of outputs for two workers varies from sector to sector. The standard comparative advantage case, in which there is a large positive correlation between sectoral outputs, yields predictable consequences for the distribution of earnings but does not necessarily arise in Roy's model. In short, saying there is comparative advantage tells us very little about the distribution of earnings other than that it doesn't resemble the distribution of abilities in any one sector.

Self-selection is another phenomenon that is often closely associated with assignment models.
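The independence assumption behind the variance decomposition in (22) can be checked numerically. The sketch below verifies, for two arbitrary assumed discrete distributions of r and s, the exact identity Var(rs) = r̄²Var(s) + s̄²Var(r) + Var(r)Var(s):

```python
import itertools

# Numerical check (illustrative): with ln Y = ln Y0 + r*s and r, s
# independent, Var(r*s) = rbar^2 Var(s) + sbar^2 Var(r) + Var(r)Var(s)
# holds exactly. Both discrete distributions below are assumed examples.
r_dist = [(0.05, 0.3), (0.08, 0.4), (0.12, 0.3)]  # (rate of return, probability)
s_dist = [(0.0, 0.2), (4.0, 0.5), (8.0, 0.3)]     # (years of schooling, probability)

def mean(dist):
    return sum(x * p for x, p in dist)

def var(dist):
    m = mean(dist)
    return sum(p * (x - m) ** 2 for x, p in dist)

# Exact distribution of r*s under independence (product measure).
joint = [(r * s, pr * ps)
         for (r, pr), (s, ps) in itertools.product(r_dist, s_dist)]

lhs = var(joint)
rhs = (mean(r_dist) ** 2 * var(s_dist)
       + mean(s_dist) ** 2 * var(r_dist)
       + var(r_dist) * var(s_dist))
print(abs(lhs - rhs) < 1e-12)  # True
```

The identity is what breaks down once r and s are linked through the assignment: when r responds to shifts in the distribution of s, the cross terms no longer vanish and (22) cannot be used for comparative statics.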
Although all three models in Section III use self-selection as a means of choosing among sectors or jobs, assignment models do not require the self-selection mechanism. Instead, workers could engage in search to find the sector that maximizes their income or utility. This is particularly appropriate when there are many sectors or jobs to choose from, making the assumption of full information about jobs less reasonable.

The major point common to all assignment models is the existence of choice among jobs, occupations, or sectors for a worker. With choice, a worker's earnings or utility are not determined by performance within a single area of endeavor. Instead, the worker can avoid the consequences of a bad performance in one sector by choosing another sector. Comparative advantage is significant in describing relationships among opportunities in different sectors. Self-selection describes how decisions may be made. But the underlying feature is the variability of worker output among sectors or jobs. This variability arises from the different sensitivity of jobs to worker abilities (i.e., the difficulty of jobs), the large dispersion in tasks performed in different jobs throughout the economy, and the diversity and lack of correlation among individuals' performances of those tasks. Variability of output from worker-firm matches generates a problem in choosing jobs or sectors, from the worker's point of view, and a problem of assigning workers to jobs, from the perspective of the economy as a whole.

B. Extensions

Through the work of Heckman and Sedlacek (1985, 1990), Roy's model provides the most promising route to estimate assignment models. They find that two extensions of Roy's model are essential to explain the distribution of wages in the U.S. economy. First, workers choose sectors on the basis of utility maximization instead of income maximization.
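The self-selection mechanism can be sketched with a minimal Roy-style simulation; the lognormal skill distributions and task prices below are assumptions for illustration, not estimates. It shows that workers observed in a sector are a non-random sample, so the earnings distribution does not mirror the ability distribution in any one sector:

```python
import random
import statistics

random.seed(0)

# Minimal Roy-style sketch. Skill distributions and task prices are assumed.
N = 20000
price1, price2 = 1.0, 1.0  # task prices in the two sectors
workers = [(random.lognormvariate(0.0, 0.6), random.lognormvariate(0.2, 0.3))
           for _ in range(N)]

# Each worker self-selects into the sector paying more.
earnings = [max(price1 * a1, price2 * a2) for a1, a2 in workers]
sector1_skills = [a1 for a1, a2 in workers if price1 * a1 >= price2 * a2]

# Self-selection: workers observed in sector 1 are not a random sample of
# sector-1 skills, so their mean skill exceeds the population mean.
pop_mean = statistics.mean(a1 for a1, _ in workers)
sel_mean = statistics.mean(sector1_skills)
print(sel_mean > pop_mean)  # True
```

Replacing the max operator with a search rule (accept the first offer above a reservation value) changes the mechanism but not the underlying point: choice among sectors breaks the link between any single ability distribution and the distribution of earnings.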
Journal of Economic Literature, Vol. XXXI (June 1993)

Workers have sector-specific preferences that alter the assignment of workers to jobs relative to what one would expect on the basis of income maximization. Second, Heckman and Sedlacek find that nonparticipation in the labor market is an important choice facing workers. Workers participating in either the manufacturing or nonmanufacturing sectors are therefore a selection from all potential workers in the population.

This article suggests that two additional extensions are important. First, the interactions between capital and demands for labor should be adequately specified and estimated. In Heckman and Sedlacek's models, changes in capital intensity or energy prices affect demands for workers and the division of labor through task prices. With their assumptions, earnings are proportional to tasks, which serve as measures of sector-specific abilities. Then wage differentials for worker characteristics within a sector would not change over time. However, if capital intensity affects marginal products as in (17), earnings will not be proportional to tasks. Then changes in capital per worker or tasks per worker could influence the demand for workers in more complicated ways. Incorporation of the role of capital will provide a link between observable conditions of demand and the distribution of earnings.

The second extension concerns the number of sectors. With self-selection, the econometrics appear to place a barrier of only two or three sectors that can be estimated because a worker's income or utility in a sector must exceed the income or utility in every other sector. The extension to many sectors would appear to be possible if workers are assumed to engage in search instead of self-selection to find jobs. Then the income or utility in a given sector needs only to satisfy one bilateral comparison, i.e., with the worker's reservation income or utility.
With many sectors, the relation between sector characteristics and demands for workers can be examined. Characteristics of industries lead firms to seek different mixes of workers. In the context of a search model, firms must pay higher wages to gain more acceptances from workers with desired characteristics. Such wage premia are consistent with absence of monopsony when search assigns workers to jobs. Wage differences between industries can persist without all workers going to the higher wage industry. With search assigning workers to jobs, the extension of assignment models to industrial sectors can be used to explain interindustry differences in wage structures.

C. Relation to Other Distribution Theories

The distribution theory arising from assignment models is distinct from established theories. In perfectly competitive neoclassical models, wages in a labor market are determined from intersecting supply and demand curves, where the demand curve is derived from firm production functions. In equilibrium, wages equal the marginal revenue product, the product price times the marginal product of labor. The expression for the wage differential in the differential rents model, (11), provides a comparable condition. In neoclassical models, wages are found by varying the quantity of labor, whereas in assignment models, the wage differential is found by varying the assignment (i.e., changing the type of labor used in a particular job). In cases where workers and capital are combined in fixed proportions (the differential rents model and the linear programming assignment problem), the marginal product of labor will not be defined. Then the wage differentials will be consistent with alternative absolute levels of wages and profits. This indeterminacy is absent in neoclassical models but instead characterizes classical models, in which factor prices are determined exogenously.
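The fixed-proportions indeterminacy can be illustrated with a two-worker, two-machine example (all output numbers are assumed for illustration): the efficient assignment and the no-blocking conditions pin down bounds on the wage differential, while the absolute split between wages and machine rents remains free.

```python
# Two workers, two machines, fixed proportions; output numbers are assumed.
# output[(i, j)] is worker i's output on machine j.
output = {(1, 1): 10.0, (1, 2): 14.0,
          (2, 1): 12.0, (2, 2): 20.0}

# Efficient assignment: worker 2, better on machine 2, gets machine 2.
assert output[(1, 1)] + output[(2, 2)] > output[(1, 2)] + output[(2, 1)]

MATCHED = [(1, 1), (2, 2)]

def stable(wages, rents):
    """Matched pairs exhaust output; no unmatched pair could pay both more."""
    exact = all(abs(wages[i] + rents[j] - output[(i, j)]) < 1e-9
                for i, j in MATCHED)
    no_block = all(wages[i] + rents[j] >= output[(i, j)] - 1e-9
                   for i in (1, 2) for j in (1, 2) if (i, j) not in MATCHED)
    return exact and no_block

# Two price systems with the same wage differential (4) but different
# absolute levels of wages versus rents: both are stable.
print(stable({1: 4.0, 2: 8.0}, {1: 6.0, 2: 12.0}))   # True
print(stable({1: 6.0, 2: 10.0}, {1: 4.0, 2: 10.0}))  # True
```

In this example the stability conditions only bound the differential (here between 2 and 6), and shifting value between wages and rents preserves stability, the indeterminacy the text attributes to fixed-proportions technologies.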
Classical models, however, are usually concerned with factor shares and do not explain relative wage rates.

The major difference between assignment and human capital models is in the interpretation of the earnings function. In the earnings function in Mincer's model, the coefficient of schooling is the discount rate. This coefficient would remain unchanged in response to changes in the distributions of jobs or workers. In the alternative approach, as developed by Becker and Chiswick, the coefficient of schooling is a rate of return which does not explicitly depend on supply or demand variables. In these models, auxiliary assumptions combined with the human capital model of investment behavior allow one to decompose the distribution of earnings. The existence of an assignment problem is inconsistent with these auxiliary assumptions. In assignment models, the earnings function is interpreted as an hedonic wage function, a reduced form relationship instead of a structural relationship. A change in the distribution of either workers or jobs would lead to a new equilibrium, with a new coefficient of schooling. With an assignment problem, the earnings function cannot be used to predict the consequences of changes in jobs or workers. There is no reason, however, why the human capital model of individual behavior cannot be combined with sectoral choice or set in the context of an assignment problem, as in Willis and Rosen's model.

Because abilities enter so importantly, assignment models may appear to be akin to theories that relate the distribution of earnings to the distribution of abilities. This link is perhaps fostered by the simplifying assumption in some models of a single parameter describing workers. However, the abilities considered in assignment models are the outputs or performances in various jobs, not IQ's or other measures of innate ability.
Further, the point of assignment models is just the opposite of ability models: the distribution of abilities cannot by itself explain the distribution of earnings. In particular, using one task to identify abilities, the distributions of earnings and abilities will not have the same shape. Assignment models emphasize the diversity of human performances from one task to another and the roles of choice and demand in placing higher or lower values on particular abilities. Earnings inequality depends both on differences among workers and the extent to which the economy exaggerates or moderates those differences through the assignment of workers to different tasks.

D. Relative Wage Changes

Assignment models provide several explanations for why relative wages change. In the linear programming assignment problem, a shift in jobs would produce changes in wage differentials. In (8), if new jobs have machine properties which are greater for those jobs that have higher values of the X's, then wage differentials would increase. However, this model is too abstract to relate its results to observed changes in skill and age differentials.

The differential rents model of Section III.B provides a realistic explanation of how differentials can change over time. In (13), the wage function derived using specific assumptions about functional forms can be expressed by w(g) = Ag^((σg + βσk)/σg) + Cw, where A is a constant and Cw is a constant determined by the reserve prices of workers and machines. The exponent of the skill variable g in this expression is an increasing function of σk, which measures the inequality in the distribution of capital among jobs. As capital becomes more unequally distributed among jobs, the wage function w(g) becomes more concave. With more capital per worker among the most skilled workers, their wage differentials increase. With less capital per worker among the less skilled, their wage differentials decline.
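A quick simulation illustrates how the dispersion of log(w - Cw) rises with the inequality of capital among jobs. The functional form is treated here as an assumption, and all parameter values are illustrative:

```python
import math
import random
import statistics

random.seed(1)

# Sketch of the wage function (treated as an assumption):
#   w(g) = A * g ** ((sg + b * sk) / sg) + Cw,
# where sg and sk stand in for the dispersion of log skill g and of log
# capital k. All parameter values below are illustrative.
A, Cw, b, sg = 1.0, 0.5, 1.0, 0.4

def wage(g, sk):
    return A * g ** ((sg + b * sk) / sg) + Cw

skills = [random.lognormvariate(0.0, sg) for _ in range(50000)]

def log_dispersion(sk):
    """Std. dev. of log(w - Cw); in the population this equals sg + b*sk."""
    return statistics.stdev(math.log(wage(g, sk) - Cw) for g in skills)

# Greater inequality of capital among jobs (larger sk) raises inequality
# of the wage quantity w - Cw.
print(log_dispersion(0.1) < log_dispersion(0.3))  # True
```

The exponent (sg + b*sk)/sg multiplies log skill, so the dispersion of log(w - Cw) works out to sg + b*sk exactly in the population, matching the comparative static in the text: inequality is increasing in the capital-inequality parameter.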
The quantity w - Cw is lognormally distributed with variance of logarithms σg + βσk, and this measure of inequality is an increasing function of σk.49

Using an extended version of Roy's model, Heckman and Sedlacek (1985, p. 1107) estimate that the price of the manufacturing task declined 22 percent from 1976 to 1980, while the price of the nonmanufacturing task rose 21 percent in the same time period. In their analysis, this produces a movement of workers from manufacturing to nonmanufacturing, which is the sector with the lower education and experience differentials. This movement alone would tend to reduce aggregate differentials. However, in the nonmanufacturing sector, the greater task price raises skill and experience differentials, while in the manufacturing sector these differentials decline.50 Workers moving from manufacturing to nonmanufacturing tend to be at the bottom of the task distributions in the two sectors. Their movement raises average worker quality in manufacturing while reducing it in nonmanufacturing. The average wage in manufacturing declines both because of the lower task price and the lower average worker skill levels. In nonmanufacturing, the task price and average worker quality have opposite effects on the average wage and the net effect is ambiguous. Heckman and Sedlacek do not indicate what the net effect

49 Sattinger (1980) finds that inequality in the distribution of capital is significantly related to earnings inequality. The measure of inequality in the distribution of capital is derived from industry capital to labor ratios. Dickens and Katz (1987, p. 66) review studies that have found a positive relationship between capital to labor ratios and average industry wage rates. They also find a positive relationship in their own study (p. 78).

50 That is, the slope of the skill-earnings relationship increases in nonmanufacturing and decreases in manufacturing.
Note, however, that under the assumptions used by Heckman and Sedlacek, elasticities of earnings with respect to skills are constant.

of all these movements would be on aggregate educational and experience differentials. If the economy is not described by the manufacturing versus nonmanufacturing dichotomy, then movements within sectors will also contribute to changes in relative wages. The extensions to Roy's model to incorporate capital, discussed in Section IV.D, would explain how differentials could change within sectors independently of task price changes.

E. Research Questions

Each approach to the distribution of earnings suggests a set of relevant research questions. In the human capital approach, for example, important phenomena are decisions to invest and the returns to those investments. The existence of an assignment problem suggests a different set of questions.

First, how do workers differ in ways that are relevant to employment? What choices do workers face between occupations and industrial sectors? How do the choices facing workers differ by educational level? How many sectors do workers choose among? How do worker preferences affect choice of sector, wage differentials and earnings inequality?

On the employer side, how does the technology in each sector relate worker characteristics to output? How do these technologies generate different demands for workers? How are demands for workers related to features of the industry or occupation such as capital intensity or hierarchical span of control? Do workers occupy capital in the sense discussed in Section II.C? How has the mix of jobs changed over time, how is it related to shifts between manufacturing and nonmanufacturing, and how does trade affect the mix? What wage differentials do firms need to offer to attract a labor force with a given set of characteristics?

What mechanisms operate in the economy to assign workers to jobs?
What combinations of self-selection, search, and separated labor markets do workers use to find jobs? What are the costs of these assignment mechanisms, what are the efficiency properties of the assignments they bring about, and how do they affect the distribution of earnings? What are the relative contributions of unequal variances of worker performances among sectors, correlation among sector performances and number of sectors to earnings inequality?

A final set of questions is related to explaining observed changes in the distribution of earnings. How are shifts in the mix of jobs and workers related to changes in the wage rates for high school and college graduates and the returns to education? How do wage or unemployment differentials reconcile so-called mismatches between supplies and demands for workers?

Assignment models offer the promise of incorporating the influence of demand on the distribution of earnings, accurately representing the relation between worker characteristics and earnings, and rigorously explaining changes in earnings inequality and wage differentials over time. This promise has been met only partially through applications of assignment models to many aspects of the distribution of earnings. While assignment models indicate the shortcomings of earnings function and human capital approaches, empirical work has only begun to provide a comprehensive alternative.

REFERENCES

AITCHISON, JOHN AND BROWN, J. A. C. The lognormal distribution. Cambridge: Cambridge U. Press, 1957.
ARROW, KENNETH J. "Higher Education as a Filter," J. Pub. Econ., July 1973, 2(3), pp. 193-216.
BARRON, JOHN M.; BLACK, DAN A. AND LOEWENSTEIN, MARK A. "Job Matching and On-the-Job Training," J. Labor Econ., Jan. 1989, 7(1), pp. 1-19.
BARTIK, TIMOTHY J. "The Estimation of Demand Parameters in Hedonic Price Models," J. Polit. Econ., Feb. 1987, 95(1), pp. 81-88.
BECKER, GARY S. "A Theory of Marriage: Part 1," J. Polit. Econ., July/Aug. 1973, 81(4), pp. 813-46.
——. Human capital. 2nd ed. NY: Columbia U. Press for National Bureau of Economic Research, 1975.
BECKER, GARY AND CHISWICK, BARRY R. "Education and the Distribution of Earnings," Amer. Econ. Rev., May 1966, 56, pp. 358-69.
BLAUG, MARK. "The Empirical Status of Human Capital Theory: A Slightly Jaundiced Survey," J. Econ. Lit., Sept. 1976, 14(3), pp. 827-55.
BORJAS, GEORGE J. Friends or strangers: The impact of immigrants on the U.S. economy. NY: Basic Books, 1990.
"The Quality Dimension in Army Retention," Carnegie-Rochester Conf. Ser. Pub. Pol., Autumn 1990, 33, pp. 221-55.
A. "Hierarchy, Ability and Income Distribution," J. Polit. Econ., Oct. 1979, 87(5), Part 1, pp. 991-1010.
CHISWICK, BARRY R. AND MINCER, JACOB. "Time-Series Changes in Personal Income Inequality in the United States from 1939, with Projections to 1985," in Investment in education. Ed.: THEODORE W. SCHULTZ. Chicago: U. of Chicago Press, 1972, pp. 34-66.
CRAWFORD, VINCENT P. AND KNOER, ELSIE M. "Job Matching with Heterogeneous Firms and Workers," Econometrica, Mar. 1981, 49(2), pp. 437-50.
DICKENS, WILLIAM T. AND KATZ, LAWRENCE F. "Inter-industry Wage Differences and Industry Characteristics," in Unemployment and the structure of labor markets. Eds.: KEVIN LANG AND JONATHAN LEONARD. NY and Oxford: Basil Blackwell, 1987, pp. 48-89.
DICKENS, WILLIAM T. AND LANG, KEVIN. "A Test of Dual Labor Market Theory," Amer. Econ. Rev., Sept. 1985, 75(4), pp. 792-805.
DUNCAN, GREG J. AND HOLMLUND, BERTIL. "Was Adam Smith Right After All? Another Test of the Theory of Compensating Wage Differentials," J. Labor Econ., Oct. 1983, 1(4), pp. 366-79.
RONALD STEWART. Modern labor economics. 4th ed. NY: HarperCollins, 1991.
EPPLE, DENNIS. "Hedonic Prices and Implicit Markets: Estimating Demand and Supply Functions for Differentiated Products," J. Polit. Econ., Feb. 1987, 95(1), pp. 59-80.
FLINN, CHRISTOPHER J. AND HECKMAN, JAMES J. "New Methods for Analyzing Structural Models of Labor Force Dynamics," J. Econometrics, Jan. 1982, 18(1), pp. 115-68.
AKERLOF, GEORGE. "Structural Unemployment in a Neoclassical Framework," J. Polit. Econ., May/June 1969, 77(3), pp. 399-407.
——. "Jobs as Dam Sites," Rev. Econ. Stud., Jan. 1981, 48(1), pp. 37-49.
GALE, DAVID AND SHAPLEY, LLOYD. "College Admissions and the Stability of Marriage," Amer. Math. Mon., Jan. 1962, 69(1), pp. 9-15.
F. "Does Unmeasured Ability Explain Inter-Industry Wage Differentials?" Rev. Econ. Stud., July 1992, 59(3), pp. 515-35.
GODDEERIS, JOHN H. "Compensating Differentials and Self-selection: An Application to Lawyers," J. Polit. Econ., April 1988, 96(2), pp. 411-28.
GORDON, ROGER H. AND BLINDER, ALAN S. "Market Wages, Reservation Wages, and Retirement Decisions," J. Pub. Econ., Oct. 1980, 14(2), pp. 277-308.
GRANOVETTER, MARK. "Toward a Sociological Theory of Income Differences," in Sociological perspectives on labor markets. Ed.: IVAR BERG. NY: Academic Press, 1981, pp. 11-47.
GRONAU, REUBEN. "Wage Comparisons-A Selectivity Bias," J. Polit. Econ., Nov./Dec. 1974, 82(6), pp. 1119-43.
GRUBB, DAVID B. "Ability and Power over Production in the Distribution of Earnings," Rev. Econ. Statist., May 1985, 67(2), pp. 188-94.
HARTOG, JOOP. "On the Multicapability Theory of Income Distribution," Europ. Econ. Rev., Nov. 1977, 10(2), pp. 157-71.
——. "Earnings and Capability Requirements," Rev. Econ. Statist., May 1980, 62(2), pp. 230-40.
——. "Wages and Allocation under Imperfect Information," De Economist, 1981a, 129(3), pp. 311.
——. Personal income distribution: A multicapability theory. Boston: Martinus Nijhoff, 1981b.
——. "Earnings Functions: Testing for the Demand Side," Econ. Letters, 1985, 19(3), pp. 281-85.
——. "Earnings Functions: Beyond Human Capital," Applied Econ., Dec. 1986a, 18(12), pp. 1291-1309.
——. "Allocation and the Earnings Function," Empirical Econ., 1986b, 11(2), pp. 97-110.
——. "An Ordered Response Model for Allocation and Earnings," Kyklos, 1988, 41(1), pp. 113-41.
HECKMAN, JAMES J. "Shadow Prices, Market Wages, and Labor Supply," Econometrica, July 1974, 42(4), pp. 679-94.
——. "Sample Selection Bias as a Specification Error," Econometrica, Jan. 1979, 47(1), pp. 153-61.
HECKMAN, JAMES J. AND HONORÉ, BO E. "The Empirical Content of the Roy Model," Econometrica, Sept. 1990, 58(5), pp. 1121-49.
HECKMAN, JAMES J. AND SCHEINKMAN, JOSÉ. "The Importance of Bundling in a Gorman-Lancaster Model of Earnings," Rev. Econ. Stud., Apr. 1987, 54(2), pp. 243-55.
HECKMAN, JAMES J. AND SEDLACEK, GUILHERME L. "Heterogeneity, Aggregation and Market Wage Functions: An Empirical Model of Self-selection in the Labor Market," J. Polit. Econ., Dec. 1985, 93(6), pp. 1077-1125.
——. "Self-Selection and the Distribution of Hourly Wages," J. Labor Econ., Jan. 1990, 8(1), Part 2, pp. S329-63.
HOUTHAKKER, HENDRIK S. "The Size Distribution of Labour Incomes Derived from the Distribution of Aptitudes," in Econometrics and economic theory: Essays in honour of Jan Tinbergen. Ed.: WILLY SELLEKAERTS. White Plains, NY: International Arts and Sciences Press, 1974, pp. 177-87.
JOVANOVIC, BOYAN. "Job Matching and the Theory of Turnover," J. Polit. Econ., Oct. 1979, 87(5), Part 2, pp. 972-90.
——. "Matching, Turnover, and Unemployment," J. Polit. Econ., Feb. 1984, 92(1), pp. 108-22.
KALLEBERG, ARNE L. AND BERG, IVAR. Work and industry: Structures, markets and processes. NY: Plenum Press, 1987.
KASARDA, JOHN D. "Jobs, Migration, and Emerging Urban Mismatches," in Urban change and poverty. Eds.: MICHAEL G. H. MCGEARY AND LAURENCE E. LYNN, JR. Washington, DC: National Academy Press, 1988, pp. 148-98.
KELSO, ALEXANDER S., JR. AND CRAWFORD, VINCENT P. "Job Matching, Coalition Formation, and Gross Substitutes," Econometrica, Nov. 1982, 50(6), pp. 1483-504.
KIEFER, NICHOLAS M. AND NEUMANN, GEORGE R. "An Empirical Job-Search Model with a Test of the Constant Reservation-Wage Hypothesis," J. Polit. Econ., Feb. 1979a, 87(1), pp. 89-107.
——. "Estimation of Wage Offer Distributions and Reservation Wages," in Studies in the economics of search. Eds.: STEVEN A. LIPPMAN AND JAMES J. MCCALL. Amsterdam: North-Holland, 1979b, pp. 171-89.
KILLINGSWORTH, MARK R. "A Simple
Structural Model of Heterogeneous Preferences and Compensating Wage Differentials," in Unemployment, search and labour supply. Eds.: RICHARD BLUNDELL AND IAN WALKER. Cambridge: Cambridge U. Press, 1986, pp. 303-17.
——. "Heterogeneous Preferences, Compensating Wage Differentials, and Comparable Worth," Quart. J. Econ., Nov. 1987, 102(4), pp. 727-42.
KOOPMANS, TJALLING C. AND BECKMANN, MARTIN. "Assignment Problems and the Location of Economic Activities," Econometrica, Jan. 1957, 25(1), pp. 53-76.
KOSTIUK, PETER F. "Firm Size and Executive Compensation," J. Human Resources, Winter 1990, 25(1), pp. 90-105.
KRUEGER, ALAN B. AND SUMMERS, LAWRENCE H. "Efficiency Wages and the Inter-Industry Wage Structure," Econometrica, Mar. 1988, 56(2), pp. 259-93.
LAM, DAVID. "Marriage Markets and Assortative Mating with Household Public Goods: Theoretical Results and Empirical Implications," J. Hum. Res., Fall 1988, 23(4), pp. 462-87.
LANG, KEVIN. "Persistent Wage Dispersion and Involuntary Unemployment," Quart. J. Econ., Feb. 1991, 106(1), pp. 181-202.
LAZEAR, EDWARD. "Salaries and Piece Rates," J. Bus., July 1986, 59(3), pp. 405-31.
LAZEAR, EDWARD AND ROSEN, SHERWIN. "Rank-Order Tournaments as Optimum Labor Contracts," J. Polit. Econ., Oct. 1981, 89(5), pp. 841-64.
LEVY, FRANK AND MURNANE, RICHARD J. "U.S. Earnings Levels and Earnings Inequality: A Review of Recent Trends and Proposed Explanations," J. Econ. Lit., Sept. 1992, 30(3), pp. 1333-81.
LEWIS, H. GREGG. "Comments on Selectivity Biases in Wage Comparisons," J. Polit. Econ., Nov./Dec. 1974, 82(6), pp. 1145-55.
LUCAS, ROBERT E., JR. "On the Size Distribution of Business Firms," Bell J. Econ., Autumn 1978, 9(2), pp. 508-23.
LUCAS, ROBERT E. B. "The Distribution of Job Characteristics," Rev. Econ. Statist., Nov. 1974, 56(4), pp. 530-40.
——. "Is There a Human Capital Approach to Income Inequality?" J. Hum. Res., Summer 1977a, 12(3), pp. 387-95.
"sion and Interindustry Wage Differentials," Quart. J. Econ., Feb. 1991, 106(1), pp. 163-79.
MORTENSEN, DALE T.
"Matching: Finding a Partner for Life or Otherwise," in Organizations and institutions: Sociological and economic approaches to the analysis of social structure, supplement to Amer. J. Sociology, 94. Eds.: CHRISTOPHER WINSHIP AND SHERWIN ROSEN. Chicago: U. of Chicago Press, 1988, pp. S215-40.
LUCAS, ROBERT E. B. "Hedonic Wage Equations and Psychic Wages in the Returns to Schooling," Amer. Econ. Rev., Sept. 1977b, 67(4), pp. 549-58.
LYDALL, HAROLD. "The Distribution of Employment Incomes," Econometrica, Jan. 1959, 27, pp. 110.
——. The structure of earnings. Oxford: Oxford U. Press, 1968.
MACDONALD, GLENN M. "Person-Specific Information in the Labor Market," J. Polit. Econ., June 1980, 88(3), pp. 578-97.
——. "Information in Production," Econometrica, Sept. 1982a, 50(5), pp. 1143-62.
——. "A Market Equilibrium Theory of Job Assignment and Sequential Accumulation of Information," Amer. Econ. Rev., Dec. 1982b, 72(5).
——. "A Rehabilitation of Absolute Advantage," J. Polit. Econ., Apr. 1985, 93(2), pp. 277-97.
MADDALA, G. S. "Self-Selectivity Problems in Econometric Models," in Applications of statistics. Ed.: PARUCHURI R. KRISHNAIAH. Amsterdam: North-Holland, 1977, pp. 351-66.
——. Limited-dependent and qualitative variables in econometrics. Cambridge: Cambridge U. Press, 1983.
MAGNAC, T. "Segmented or Competitive Labor Markets?" Econometrica, Jan. 1991, 59(1), pp. 165-87.
MANDELBROT, BENOIT. "Paretian Distributions and Income Maximization," Quart. J. Econ., Feb. 1962, 76, pp. 57-85.
MAYER, THOMAS. "The Distribution of Ability and Earnings," Rev. Econ. Statist., May 1960, 42(2), pp. 189-95.
OI, WALTER Y. "Heterogeneous Firms and the Organization of Production," Econ. Inquiry, Apr. 1983, 21(2), pp. 147-71.
PETTENGILL, JOHN S. Labor unions and the inequality of earned income. Amsterdam: North-Holland.
PIGOU, A. C. The economics of welfare. London: Macmillan, 1952.
REDER, MELVIN W. "The Size Distribution of Earnings," in The distribution of national income. Eds.:
MINCER, JACOB. "Investment in Human Capital and Personal Income Distribution," J. Polit. Econ., Aug. 1958, 66(4), pp. 281-302.
——. Schooling, experience and earnings. NY: Columbia U. Press for the National Bureau of Economic Research, 1974.
MINCER, JACOB AND JOVANOVIC, BOYAN. "Labor Mobility and Wages," in Studies in labor markets. Ed.: SHERWIN ROSEN. Chicago: U. of Chicago Press
JEAN MARCHAL AND BERNARD DUCROS. NY: St. Martin's Press, 1968, pp. 583-610.
——. "A Partial Survey of the Theory of Income Size Distribution," in Six papers on the size distribution of wealth and income; Studies in income and wealth 33. Ed.: LEE SOLTOW. NY: Columbia U. Press for the National Bureau of Economic Research, 1969, pp. 205-53.
RICARDO, DAVID. On the principles of political economy and taxation. Cambridge: Cambridge U. Press, 1951.
RICART I COSTA, JOAN E. "Managerial Task Assignment and Promotions," Econometrica, Mar. 1988, 56(2), pp. 449-66.
ROSEN, SHERWIN. "Hedonic Prices and Implicit Markets: Product Differentiation in Pure Competition," J. Polit. Econ., Jan./Feb. 1974, 82(1), pp. 34-55.
——. "Human Capital: A Survey of Empirical Research," in Research in labor economics. Vol. 1. Ed.: RONALD EHRENBERG. Greenwich, CT: JAI Press, 1977, pp. 3-39.
——. "Substitution and Division of Labour," Economica, Aug. 1978, 45(179), pp. 235-50.
——. "The Economics of Superstars," Amer. Econ. Rev., Dec. 1981, 71(5), pp. 845-58.
——. "Authority, Control, and the Distribution of Earnings," Bell J. Econ., Autumn 1982, 13(2), pp. 311-23.
——. "A Note on Aggregation of Skills and Labor Quality," J. Hum. Res., Summer 1983, 18(3), pp. 425-31.
——. "The Theory of Equalizing Differences," in Handbook of labor economics. Vol. I. Eds.: ORLEY ASHENFELTER AND RICHARD LAYARD. Amsterdam: North-Holland, 1986a, pp. 641-92.
——. "Prizes and Incentives in Elimination Tournaments," Amer. Econ. Rev., Sept. 1986b, 76(4), pp. 701-15.
ROTH, ALVIN E. "Stability and Polarization of Interests in Job Matching," Econometrica, Jan. 1984,
52(1), &.47-57. for National Bureau of Economic Research, 1981, "Common and Conflicting Interests in Two- pp. 21-63. sided Matching Markets," Europ. Econ. Rev., MONTGOMERY, D. "Equilibrium Wage Disper- Feb. 1985, 27(1), pp. 75-96. JAMES ROTH, ALVIN E. AND SOTOMAYOR,MARILDAA. OLI- VEIRA. Two-sided matching: A study in game- theoretic modeling and analysis. Cambridge: Cambridge U. Press, 1990. ROTHSCHILD, AND STIGLITZ,MICHAEL JOSEPH. "A Model of Employment Outcomes Illustrating the Effects of the Structure of Information on the Level and the Distribution of Income," Econ. Letters, 1982, lO (3-4), pp. 231-36. ROY, ANDREW D. "The Distribution of Earnings and of Individual Output," Econ. I., Sept. 1950, 60, pp. 489-505. kets. Ed.: IVAR E. BERG. NY: Academic Press, 1981, pp. 49-74. TAUBMAN, PAUL J. Sources of inequality in earnings. Amsterdam: North-Holland, 1975. THOMPSON, L. "Pareto Optimal Determinis- GERALD tic Models for Bid and Offer Auctions," in Operations research uerfahren-methods of operations re- search 35. Eds.: WERNER OET~LI AND FRANZ STEFFENS. Konigstein, Germany: Athenaum-Hain- Scriptor-Hanstein, 1979, pp. 517-30. THUROW,LESTERC. Generating inequality: Mecha- nisms of distribution in the U.S. economy. NY: "Some Thoughts on the Distribution of Earn- Basic Books, 1975. ings," Oxford Econ. Papers, June 1951,3,pp. 135-TINBERGEN, JAN. "Some Remarks on the Distribution 46 of Labour Incomes," International economic pa- RUMBERGER, W. "The Impact of Surplus pers, no. 1. Translations prepared for the interna- RUSSELL Schooling on Productivity and Earnings," J. Hum. tional economic association. Eds.: ALAN T. PEA- Res., Winter 1987, 22(1), pp. 24-50. COCK ET AL. London: Macmillan, 1951, pp. 195 SAITINGER,MICHAEL."Comparative Advantage and 207. the Distributions of Earnings and Abilities," . "On the Theory of Income Distribution," Econometrica, May 1975, 43(3), pp. 45568. Weltwirtsch. Arch., 1956, 77(2), pp. 155-73. . "Compensating Wage Differences,"]. Econ. . 
"A Positive and a Normative Theory of In- Theory, Dec. 1977, 16(2), pp. 496-503. come Distribution," Rev. lncome Wealth, Sept. "Comparative Advantage in Individuals," 1970, 16(3), pp. 221-33. Rev. Econ. Statist., May 1978, 60(2), pp. 259--. "The Impact of Education on Income Distri- 67. bution. Rev. lncome Wealth. Se~t.1972. 18(3). "Differential Rents and the Distribution of pp. 255-65. Earnings," Oxford Econ. Pap., Mar. 1979, 31(1), "Substitution of Graduate by Other Labour," -. -. pp. 60-71. Kuklos. 1974. 27(2). DD. 217-26. lncome distribution: Analysis and policies. Capital and the distribution of labor earn- ings. Amsterdam: North-Holland, 1980. Amsterdam: North-Holland, 1975a. "Factor Pricing in the Assignment Problem," "Substitution of Academically Trained by Scand. J. Econ., 1984, 86(1), pp. 17-34. . Unemployment, choice, and inequality. Berlin: Springer-Verlag, 1985. . "Consistent Wage Offer and Reservation Wage Distributions," Quart. J. Econ., Feb. 1991, 106(1),pp. 277-88. SHAPLEY,LLOYD S. AND SHUBIK,MARTIN."The As- signment Game I: The Core," Int.]. Game Theory, 1972, 1, pp. 111-30. SICHERMAN,NACHUM." 'Overeducation' in the Labor Market,"]. Labor Econ., Apr. 1991, 9(2), pp. 101 SIMON, HERBERT A. "The Compensation of Executives," Sociometry, Mar. 1957, 20, pp. 32 SMITH,ROBERTS. "Compensating Wage Differentials and Public Policy: A Review," Ind. Lab. Rel. Rev., Apr. 1979, 32(3), pp. 39-352. SOLON, GARY. "Self-Selection Bias in Longitudinal Estimation of Wage Gaps," Econ. Letters, 1988, 28(3), pp. 285-90. SPENCE, A. MICHAEL. "Job Market Signaling," Quart.J. Econ., Aug. 1973, 87(3), pp. 355-74. SPURR,STEPHENJ. "HOW the Market Solves an As- signment Problem: The Matching of Lawyers with Legal Claims," J. Labor Econ., Oct. 1987, Part 1, 5(4), pp. 502-32. S~~RENSEN, ARNE L. "An AAGE B. AND KALLEBERG, Outline of a Theory of the Matching of Persons to Jobs," in Sociological perspectives on labor mar- Other Manpower," Weltwirtsch. Arch., 197513, 111(3), pp. 466-76. 
. "Income Distribution: Second Thoughts," De Economist, 1977, 125(3), pp. 315-39. TSANG,MUN C. AND LEVIN,HENRYM. "The Econom- ics of Overeducation," Econ. Educ. Rev., 1985, 4(2), pp. 93-104. TUCK,RONALDHUMPHREY.An essay on the economic theory of rank. Oxford: Basil Blackwell, 1954. TUNALI, INSAN. "Labor Market Segmentation and Earnings Differentials." Working Paper #405, Dept. of Economics, Cornell U., Apr. 1988. WALDMAN, "Job Assignments, Signalling, MICHAEL. and Efficiency," Rand J. Econ., Summer 1984a, 15(2),pp. 255-67. . "Worker Allocation, Hierarchies and the Wage Distribution," Rev. Econ. Stud., 1984b, 51(1), pp. 95-109. WELCH, FINIS. "Linear Synthesis of Skill Distribu- tion," J. Hum. Res., Summer 1969, 4(3), pp. 311 WILLIS, ROBERT J. "Wage Determinants: A Survey and Reinterpretation of Human Capital Earnings Functions," in Handbook of labor economics. Vol. I. Eds.: ORLEY ASHENFELTER ANDRICHARD LAYARD.Amsterdam: North-Holland Publishing GO.,1986, pp. 525-602. WILLIS, ROBERT J. AND ROSEN, SHERWIN. "Education and Self-selection," 1. Polit. Econ., Oct. 1979, 87(5), Part 2, pp. S7-36.
{"url":"http://www.academicroom.com/article/assignment-models-distribution-earnings","timestamp":"2014-04-20T18:26:51Z","content_type":null,"content_length":"260018","record_id":"<urn:uuid:482d9a41-02f7-4a74-8be5-d4ed0efc0b08>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2000

Re: Second Opinion

• To: mathgroup at smc.vnet.net
• Subject: [mg26382] Re: [mg26373] Second Opinion
• From: BobHanlon at aol.com
• Date: Sat, 16 Dec 2000 02:40:12 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com

dist = PoissonDistribution[mu];

Range[0, Infinity]

PDF[dist, x]

Verify that the distribution is valid on the domain

Sum[PDF[dist, x], {x, 0, Infinity}] == 1

mu == Mean[dist] == Variance[dist]

CDF[dist, x]

GammaRegularized[1 + Floor[x], mu]

FullSimplify[CDF[dist, x] == Sum[PDF[dist, k], {k, 0, x}],
  Element[x, Integers]]

Plot[CDF[PoissonDistribution[5], n], {n, 0, 10},
  PlotPoints -> 101, PlotStyle -> RGBColor[1, 0, 0]];

Plot3D[CDF[PoissonDistribution[mu], n], {mu, 1, 5}, {n, 0, 6},
  PlotPoints -> 25];

Demonstrating that the mean of Poisson data is the maximum likelihood
estimate for the parameter mu

data = Table["x" <> ToString[k], {k, 20}];

(mu /. Solve[D[(Plus @@ Log[PDF[dist, #] & /@ data]), mu] == 0,
    mu][[1]]) == Mean[data]

data = RandomArray[PoissonDistribution[10*Random[]], {250}];

(mu /. Solve[D[(Plus @@ Log[PDF[dist, #] & /@ data]), mu] == 0,
    mu][[1]]) == Mean[data]

Bob Hanlon

In a message dated 12/13/00 3:40:40 AM, john.lai at worldnet.att.net writes:

> I tried to calculate Poisson Distribution in a backdoor way and used
> mathematica to model it. I could not get what I wanted. I don't think it
> is a mathematica problem and more than likely my method is flawed. So I
> send this out to see if some of you may spot my error.
>
> Poisson Distribution, P(n) = 1 - Summation [exp(-n)*(n^x)]/Factorial(x)
> where x goes from 0 to N-1.
>
> For given n and N, P(n) can be determined easily. However, I want to
> determine N if P(n) and n are specified, and I do not want to get access
> to a Poisson lookup table. My idea is to calculate P(n) with a series of n
> and N (essentially generating the tables). Plot a surface curve whose
> variables are n, P(n) and N. The idea was once this surface is obtained,
> with x-axis as n, y-axis as P(n) and z-axis as N, then for a given n and
> P(n) I can obtain N.
>
> I wrote a C program to generate P(n) and use mathematica to plot this
> surface. I have 14 sets of n and in each set of n, I have 139 variables
> (i.e. N runs from 1 to 140), so there are 139 corresponding values of P(n)
> for each n. When I tried to use the function Fit to estimate this surface,
> it took about hr for my 500MHz desktop to calculate! And the resultant
> expression is huge!
>
> Then, I cut down the dimension of my data set. For each n, I generated
> values of N and repeated the process again. However, no matter what
> combination of polynomial I used (x, x^-1, Exp(-x), Exp(-x^2), Exp(-x-y)),
> the resulting equation of the surface is meaningless. It doesn't look
> right (at least I expected it to resemble some sort of Poisson or even
> Gaussian shape) and substituting P(n) and n back, I got garbage. I have
> enclosed a .nb file for reference.
> [Contact the author to obtain this file - moderator]
>
> So after all this, does it mean that my scheme of calculating Poisson
> Distribution is fundamentally wrong?
>
> Any suggestions are appreciated and thanks in advance.
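Since the tail P(n) = 1 - Σ_{x=0}^{N-1} e^{-n} n^x/x! is monotone decreasing in N, John's inversion problem — given the rate n and a target P, find N — needs no fitted surface at all: a direct scan over N suffices. A minimal sketch in Python (my own illustration, not from the thread; `poisson_tail` and `smallest_N` are names I made up):

```python
import math

def poisson_tail(mu, big_n):
    """P(X >= N) = 1 - sum_{x=0}^{N-1} e^{-mu} mu^x / x!  for X ~ Poisson(mu)."""
    term, cdf = math.exp(-mu), 0.0
    for x in range(big_n):
        cdf += term            # term == e^{-mu} mu^x / x!
        term *= mu / (x + 1)   # next term via the recurrence; avoids factorials
    return 1.0 - cdf

def smallest_N(mu, p):
    """Smallest N with P(X >= N) <= p; the tail decreases monotonically in N."""
    big_n = 0
    while poisson_tail(mu, big_n) > p:
        big_n += 1
    return big_n
```

For example, with rate mu = 5 and target p = 0.05 the scan stops at N = 10, matching the tabulated Poisson CDF.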
Princeton GMAT

Princeton Review for GMAT Preparation

So you think you have the business acumen in you and nothing could be better than to hone your skills by pursuing management education? One has to take a standardized test such as the GMAT, or Graduate Management Admission Test, the scores of which are considered mandatory for admission to the leading business and management schools. When preparing for the GMAT, one needs the right kind of mentor to help achieve the optimum score. Amongst the many that you might consider, Princeton GMAT is worth a mention as it has been mentoring GMAT test takers with guaranteed results.

• Why Select Princeton GMAT?

Princeton GMAT has been designed not only to suit your budget but also to provide a tailor-made learning experience at your convenience. Its smart, enthusiastic instructors will definitely guide you to achieve the best. To start with, Princeton GMAT offers a free GMAT practice test to identify your strengths and weaknesses, which is indeed helpful in deciding on your test preparation.

• Princeton GMAT Test Preparation Options:

Princeton GMAT offers a wide array of test preparation options to choose from, such as

□ Private tutoring
□ Small group instruction
□ GMAT classroom course
□ GMAT live online
□ GMAT online course
□ GMAT books

Princeton GMAT Private Tutoring: This option gives you the maximum flexibility as you get the undivided attention of your tutors. Even in terms of tutors, Princeton GMAT matches its tutors to the student's learning style. There are private tutors, master tutors and premier tutors, all of whom are highly skilled and efficient in delivering results.

Princeton GMAT Small Group Instruction: This includes 18 hours of live instruction for a class of four students. It also provides 7 simulated full-length Computer Adaptive GMAT practice tests.

Princeton GMAT Classroom Course: This option provides 21 hours of live instruction with 7 full-length Computer Adaptive GMAT practice tests.

Princeton GMAT Live Online: The live online option provides 22 hours of instruction covering all the important GMAT concepts.

Princeton GMAT Online Course: The online course is for self-directed learners, who get maximum flexibility through 30 hours of interactive self-paced lessons. The highly skilled instructors are available online 24/7. Besides the computer-adaptive practice tests, you will also have access to hundreds of highly interactive multimedia lessons, practice drills, and more.

Princeton GMAT Books: While there are a number of Princeton Review GMAT books to select from, Cracking the GMAT deserves a special mention as it contains proven strategies and techniques besides a number of tests and drills with full explanations. For more Princeton GMAT books you can log onto www.randomhouse.com/princetonreview.

Princeton GMAT also offers Math fundamentals workshops to brush up your mathematical concepts: four hours of live online sessions divided into two sessions of two hours each. You can attend these sessions as many times as you like; besides, there are also 250 questions available online for additional practice.

If you are aiming for a 700+ score to get into the best business schools, then Princeton GMAT offers the GMAT Hard Math live online classroom. The two-and-a-half-hour session includes practice drills with 300+ questions, with tips and tricks to master the concepts. The drills also work on your time management skills to give you the extra edge to tackle the computer-adaptive test.

If in spite of putting your best foot forward you are unable to get your expected scores, Princeton GMAT is willing to work with you all over again for free. Think about it!
Distribution Matching for Transduction

Novi Quadrianto, James Petterson and Alex Smola

In: NIPS 2009, 6-11 Dec 2009, Vancouver, Canada.

Many transductive inference algorithms assume that distributions over training and test estimates should be related, e.g. by providing a large margin of separation on both sets. We use this idea to design a transduction algorithm which can be used without modification for classification, regression, and structured estimation. At its heart we exploit the fact that for a good learner the distributions over the outputs on training and test sets should match. This is a classical two-sample problem which can be solved efficiently in its most general form by using distance measures in Hilbert Space. It turns out that a number of existing heuristics can be viewed as special cases of our approach.
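The "distance measures in Hilbert space" the abstract alludes to are kernel two-sample statistics such as the maximum mean discrepancy (MMD). As a rough sketch of the idea — my illustration, not the authors' code — here is the biased MMD² estimator with a Gaussian kernel on scalar samples:

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel on scalars
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=1.0):
    """Biased MMD^2 estimate: mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y).
    Near zero when the two samples come from matching distributions."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy
```

Identical samples give (up to rounding) zero; well-separated samples give a clearly positive value — which is the quantity a distribution-matching objective would penalize.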
Which of the following sets is uncountable?

P(N) (N is the natural number set.)
Q
Z
[0, 1] ∩ Q

It's P(N), but it's important you understand why the others are countable. (I take it P means power set).

First off: if a set is countable then a subset of it is also countable. So... Is $\mathbb{Q}$ countable? If it is, then $[0, 1] \cap \mathbb{Q}$ will be countable. Surely you can tell that $\mathbb{Z}$ is countable? Do you know anything about cardinality yet?

Ya no way could I be fully proving that off the top of my head. Could prove the rest are countable though... To be honest I kinda misinterpreted the question as 'which one is uncountable' so I figured try to show OP how to eliminate the countable ones. Never the less... If OP is learning about cardinality that could be used via Cantor's card(X) < card(P(X)) theorem.

I think i can see why P(N) is uncountable cus usually when you take all the subsets of N right? and thats not countable cus there's alot of sets. i really suck at proving stuff.

I looked at Cantor's theorem:

Proof: It suffices to show that [0, 1] is uncountable (see Exercise 7). If not, then we have a bijection from N to [0, 1]. This is a sequence $(x_n)$ that lists all numbers in [0, 1], in some order. By considering the canonical decimal expansions, we will construct a number not on the list.

$x_1 = .c_{1,1}\,c_{1,2}\,c_{1,3}\ldots$
$x_2 = .c_{2,1}\,c_{2,2}\,c_{2,3}\ldots$
$x_3 = .c_{3,1}\,c_{3,2}\,c_{3,3}\ldots$

Suppose that the expansions appear in order as indicated above. We build a canonical decimal expansion that disagrees with every expansion in our list. Let $a_n = 1$ if $c_{n,n} = 0$, and $a_n = 0$ if $c_{n,n} > 0$. Now $(a_n)$ disagrees in position n with the expansion of $x_n$. Furthermore, since $(a_n)$ has no 9, $(a_n)$ cannot be the alternative expansion of any number in our list. Therefore, the expansion $(a_n)$ does not represent a number in our list. By Theorem 13.25, $(a_n)$ is the canonical expansion of some real number. Thus our list does not contain expansions for all real numbers in [0, 1].

but i dont really get how to use it
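Since the thread invokes Cantor's theorem card(X) < card(P(X)) without proving it, here is the standard one-paragraph diagonal argument for it (my addition, not part of the thread), which handles $P(\mathbb{N})$ directly:

```latex
\textbf{Cantor's theorem.} For any set $X$ there is no surjection
$f \colon X \to \mathcal{P}(X)$; hence
$\operatorname{card}(X) < \operatorname{card}(\mathcal{P}(X))$.

\textbf{Proof sketch.} Given any $f \colon X \to \mathcal{P}(X)$, consider the
``diagonal'' set
\[
  D = \{\, x \in X \mid x \notin f(x) \,\}.
\]
If $D = f(x_0)$ for some $x_0 \in X$, then
$x_0 \in D \iff x_0 \notin f(x_0) = D$, a contradiction. So $D$ is not in the
image of $f$, and no $f$ is surjective. Taking $X = \mathbb{N}$ shows
$\mathcal{P}(\mathbb{N})$ is uncountable. \qed
```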
The most important thing you can know about Haskell and Functional Programming

Submitted by metaperl on Thu, 10/06/2005 - 6:16pm.

I bought "The Haskell Road to Logic, Maths, and Programming" but never looked at it until recently. Even though I had gone through 16 chapters of Simon Thompson's book, I had failed to grasp just what Haskell was about but I knew there was something that I was missing. And then I saw it in Section 1.9 of "The Haskell Road..." entitled "Haskell Equations and Equational Reasoning":

The Haskell equations f x y = ... used in the definition of a function f are genuine mathematical equations. They state that the left hand side and the right hand side of the equation have the same value. This is very different from the use of = in imperative languages like C or Java. In a C or Java program the statement x = x*y does not mean that x and x*y have the same value, but rather it is a command to throw away the old value of x and put the value of x*y in its place. It is a so-called destructive assignment statement: the old value of a variable is destroyed and replaced by a new value.

Reasoning about Haskell definitions is a lot easier than reasoning about programs that use destructive assignment. In Haskell, standard reasoning about mathematical equations applies. E.g. after the Haskell declarations x = 1 and y = 2, the Haskell declaration x = x + y will raise an error "x" multiply defined. ... = in Haskell has the meaning "is by definition equal to"...

This was a huge landslide victory for me. Because I quit trying to write programs to get data here, data there. Values here, values there. Instead, I simply began to rewrite the original function as a new definition. I became so confident that I was able to write a program to return all the leaves of a tree.
and here it is:

data Tree a = Empty | Node a (Tree a) (Tree a)

empty :: Tree a -> Bool
empty Empty = True
empty _     = False

terminal :: Tree a -> Bool
terminal (Node _ Empty Empty) = True
terminal _                    = False

currentNode :: Tree a -> a
currentNode (Node x _ _) = x

leftBranch, rightBranch :: Tree a -> Tree a
leftBranch  (Node _ l _) = l
rightBranch (Node _ _ r) = r

-- leaves takes a tree and an empty list and returns a list of leaves
-- of the tree
leaves :: Tree a -> [a] -> [a]
leaves tree lis
  | empty tree               = lis  -- an empty tree is just the leaves so far
  -- add on current node if it is terminal.. NO! scratch that! no add
  -- on! That is an action. We are simply rewriting leaves tree lis
  -- as something else based on what we found out about leaves tree lis
  | terminal tree            = currentNode tree : lis
  | empty (rightBranch tree) = leaves (leftBranch tree) lis
  | empty (leftBranch tree)  = leaves (rightBranch tree) lis
  | otherwise                = leaves (leftBranch tree) lis
                               ++ leaves (rightBranch tree) lis

Looking back at "Algorithms in Haskell" by Rabhi and "Craft of FP" by Simon Thompson, they do both make this same statement, but somehow it never really hit me right. One of my favorite spiritual quotes is actually said by a number of people. I think it covers what I was frantically trying to do in imperative languages and what Haskell never does:

Nothing is happening. Nothing ever has happened. Nothing ever will happen. All that has occurred is me being aware of the dance of light on my consciousness (or something like that). -- EJ Gold, the American Book of the Dead

"In Reality, nothing happens! It is a great gift to be able to understand this; if you perceive this, you are blessed, for inner vision has been granted to you." ~Ma Anandamayi

'Whatever you experience, realize that all of it is simply the unobstructed play of your own mind.' -Tsele Natsog Rangdrol

In the quantum mechanical model nothing ever happens! -- http://www.mtnmath.com/cat.html
One usually circumvents this by introducing a so-called accumulating parameter, very much like you did: leaves :: Tree a -> [a] leaves = flip leaves' [] leaves' Empty = id leaves' (Node x l r) = (x :) . leaves' l . leaves' r Why should ++ be expensive? I thought that a list was implemented internally as a lazy linked list, and I would have thought that ++ should be cheap, just like : is. xs ++ ys takes time proportional to the size of xs, rendering the running time of a naive version of leaves quadratic. ++ is defined as (in module GHC.Base): (++) :: [a] -> [a] -> [a] (++) [] ys = ys (++) (x:xs) ys = x : xs ++ ys So it adds a constant overhead to the lazy evaluation of (:) in the first part of the combined list (i.e. the pattern match to decide between the two cases). Unless I'm misunderstanding something this is not proportional to the sizes of the lists in the concatentation. (x : xs) ++ ys = x : (xs ++ ys) Note that the two appearances of the cons constructor actually refer to different nodes here: you're rebuilding the first list. So, you evaluate and reconstruct every cons node in the first argument list. Clearly, this takes time proportional to the length of the first argument list. Admittedly, implementations, e.g. GHC, do offer some ad-hoc optimizations. Perhaps, I'm not explaining this too clear. I'm sorry for that. However, although it has been a while for me, as far as I can remember, this stuff is convered in almost all beginner's tutorials on the language. Consult, for instance, Section 8.3 of A Gentle Introduction to Haskell. Furthermore, the text book on algorithms in functional languages by Rabhi and LaPalme as well as Okasaki's standard work on Purely Functional Data Structures do a good job in explaining how to analyze the efficiency of functional programs. Well, that's if you are using lists. Functional dequeues or something else could be better... 
Another very simple example is the standard way to transform a tree into a list: inorder Empty = [] inorder (Node l x r) = inorder l ++ [x] ++ inorder r Again we have quadratic running time, since ++ traverses and copies all of its left argument. Perhaps the reader is trained to transform that code to make it faster: one can use an accumulator to replace ++. But the simpler solution is to use a better Sequence implementation. The beauty of the program is preserved, and the program becomes asymptotically optimal. It's a pity that students are usually thaught the other way. How then should they learn Reuse and Abstraction?
What does it mean for a number to be the "solution" to an equation? Give an example of a variable equation and its solution. Explain in complete sentences how you know that number is the solution.
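The question above received no answer in the captured thread; as an illustration of the kind of example it asks for (my own, not from the thread):

```latex
A number is a \emph{solution} of an equation when substituting it for the
variable makes both sides equal. For example, take the equation
\[
  2x + 3 = 11 .
\]
The number $x = 4$ is its solution, because substituting it gives
$2(4) + 3 = 8 + 3 = 11$, which matches the right-hand side; any other value
of $x$ makes the two sides unequal.
```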
The streets of a city are arranged like the lines of a chess-board. Let the dimensions be \(m \times n\). Find the number of ways in which a man can travel from the North-West corner to the South-East corner, going the shortest possible distance.

thts a lot of turns i can take x(

Kinda sounds like problem asked by FFM

who know when i could get lost :/ @Diyadiya will drive me! then no need to worry!

It is similar to FFM's problem.

n+1th Motzkin's no

@Ishaan94 i have an answer

o wait not yet!

[drawing of a 2-by-2 grid] seems to be six in this case

yes...six...\({4\choose 2}\)

maybe I should write it \[{2+2\choose 2}\]

so for 2x3 ...we have 10

\[{5 \choose 2}\]

so adding one segment ... \[\binom{4 + 1}{2}\]

\[{3+2\choose 2}\]

for your 3x2 you will have to go 3 units up and 2 units to the right in total to get to the upper right hand corner: UUURR. The number of ways to arrange these letters is \({3+2\choose 2}\)

2*4+2*10+6*3 = \(\binom{9}{2} = \binom{3 + 3 + 3}{2}\)

there are only 20 ways for a 3x3

\[{m+n\choose n}\]

Yeah, because you take m+n steps and you must decide which n of them will be right, or which m of them will be down (it's equivalent).

isn't there 36 steps in 3x3??

show me a path that requires 36 steps

only 6 steps are needed to get to the bottom right

there are 6 ways to do a 2x2 not 10

Oh .. i made mistake then ... it must be 3x2

shouldn't it be 2*3*3+2*4+2*6 = 38 ??

try and draw 38 minimum distance paths using that graph

I'm getting ( m + n - 2)!/((m-1)!(n-1)!) ??????

Let each square formed be a unit square. Therefore if a man needs to get from NW to south east, he basically needs to cover "n-1" units on the east line and "m-1" units south --- for the shortest possible path, when he decides to never turn back. Now in order to do this each street taken by him must be in the south or east direction. Every time he goes east let's call that turn/street taken E. Every time he turns south we call that S. So any path followed by him can be written as:

SSESESESSSSEEE...

meaning the man goes first south, then south again without turning at the next intersection, then turns east at the next turn...

Here number of S's = m-1, number of E's = n-1. So the problem has become an equivalent of finding the permutations of n-1 E's and m-1 S's:

\[\frac{(m-1+n-1)!}{(m-1)!\,(n-1)!}\]
{"url":"http://openstudy.com/updates/4fd4c7a9e4b057e7d22300de","timestamp":"2014-04-18T16:17:12Z","content_type":null,"content_length":"625018","record_id":"<urn:uuid:cc54c463-5a41-411e-97ef-167406447f9c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Tools for "infinite-dimensional linear programming"

I was wondering, whether you could point me to some tools with which I could tackle the following "infinite-dimensional linear programming" problem:

$a=1,2,\ldots, A$, $x\in\Omega:=\left\{x\in\mathbb R^n|\sum_{i=1}^nx_i=1, x_i\geq 0, \forall i\right\}$

$r_a:\Omega\rightarrow [r_-,r_+]\subset\mathbb R$, $p_a:\Omega\times\Omega\rightarrow \mathbb R^+, \int_\Omega dy\ p_a(x,y)=1,\forall x,a$

$\pi_a:\Omega\rightarrow \mathbb R^+$

The Problem: Given $r=(r_1,\ldots,r_A)$ and $p=(p_1,\ldots,p_A)$. Let $\pi=(\pi_1,\ldots,\pi_A)$ and define

$\Pi(r,p)=\left\{\pi|\int_\Omega dx\sum_a\pi_a(x)=1 \land\int_\Omega dx\sum_a\left(\pi_a(y)- p_a(x,y)\pi_a(x)\right)=0,\forall y \right\}.$

Find $\pi^*$ such that

$\pi^*=\arg\max_{\pi\in\Pi(r,p)}\int_\Omega dx\sum_a r_a(x)\pi_a(x)$

Something is fishy: the volume of the simplex in question is well below $1$, so the integral operators are very strongly contracting, reducing the domain to a bunch of identically $0$ functions. Are you sure you wrote what you wanted to write? In any case, if you replace the measure, the operators are compact, so $\sum_a\pi_a$ is an element of a finite-dimensional space of functions. So the first step is to figure out what that subspace is. – fedja Jun 1 '13 at 0:03

Thanks for your comment. There was indeed something wrong: the range of $p_a$ and $\pi_a$ (now corrected). Furthermore I forgot to mention a constraint on $p_a$ (added). This may be trivial, but I don't see why the compactness implies that $\sum_a\pi_a$ is an element of a finite-dimensional space of functions. – bfrank Jun 2 '13 at 12:50
Spinrath, Martin (2010): New Aspects of Flavour Model Building in Supersymmetric Grand Unification. Dissertation, LMU München: Faculty of Physics

Abstract

We derive predictions for Yukawa coupling ratios within Grand Unified Theories generated from operators with mass dimension four and five. These relations are a characteristic property of unified flavour models and can reduce the large number of free parameters related to the flavour sector of the Standard Model. The Yukawa couplings of the down-type quarks and charged leptons are affected within supersymmetric models by tan beta-enhanced threshold corrections, which can be sizeable if tan beta is large. In this case their careful inclusion in the renormalisation group evolution is mandatory. We analyse these corrections and give simple analytic expressions and numerical estimates for them. The threshold corrections sensitively depend on the soft supersymmetry breaking parameters. In particular, they determine the overall sign of the corrections and therefore whether the affected Yukawa couplings are enhanced or suppressed. In the minimal supersymmetric extension of the Standard Model many free parameters are introduced by supersymmetry breaking, about which we make some plausible assumptions in our first, simplified approach. In a second, more sophisticated approach we use three common breaking schemes in which all the soft breaking parameters at the electroweak scale can be calculated from only a handful of parameters. Within the second approach, we apply various phenomenological constraints on the supersymmetric parameters and find in this way new viable Yukawa coupling relations, for example y_mu/y_s = 9/2 or 6 or y_tau/y_b = 3/2 in SU(5). Furthermore, we study a special class of quark mass matrix textures for small tan beta where theta_{13}^u = theta_{13}^d = 0. We derive sum rules for the quark mixing parameters and find a simple relation between the two phases delta_{12}^u and delta_{12}^d and the right unitarity triangle angle alpha, which suggests a simple phase structure for the quark mass matrices where one matrix element is purely imaginary and the remaining ones are purely real. To complement the aforementioned considerations, we give two explicit flavour models in an SU(5) context, one for large and one for small tan beta, which implement the Yukawa coupling relations mentioned before. The models have interesting phenomenological consequences, for example quasi-degenerate neutrino masses in the case of small tan beta.

Item Type: Thesis (Dissertation, LMU Munich)
Keywords: Flavour, Supersymmetry, Grand Unification
Subjects: 600 Natural sciences and mathematics; 600 Natural sciences and mathematics > 530 Physics
Faculties: Faculty of Physics
Language: English
Date Accepted: 23 July 2010
1. Referee: Raffelt, Georg
Persistent Identifier (URN): urn:nbn:de:bvb:19-119190
[plt-scheme] help on how to write a frequency-counting function in a more functional way

From: Noel Welsh (noelwelsh at gmail.com)
Date: Mon Apr 20 09:41:32 EDT 2009

I'd like to retract this statement. I think this is too confusing and possibly controversial to be useful in the context of this discussion.

On Mon, Apr 20, 2009 at 6:58 AM, Noel Welsh <noelwelsh at gmail.com> wrote:
> For this problem imperative = functional. The problem is basically a
> fold over a list. If the seed/accumulator of the fold never leaks
> outside the fold (i.e. it cannot be observed until the fold is
> complete) you can mutate it all you want and still have a functional
> implementation. This is related to the list monad, but I'm not
> entirely precise on the details so I won't attempt an explanation.

Posted on the users mailing list.
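The point in the quoted message — that frequency counting is a fold whose accumulator may be mutated freely as long as it never escapes before the fold completes — can be sketched as follows (in Python rather than Scheme, purely for illustration; the names are mine, not from the thread):

```python
# Frequency counting as a fold: the dict accumulator is mutated inside
# the fold, but since it cannot be observed until the fold is complete,
# the function as a whole behaves functionally (same input -> same
# output, no visible side effects).
from functools import reduce

def count_frequencies(items):
    def step(acc, item):
        acc[item] = acc.get(item, 0) + 1   # mutation confined to the fold
        return acc
    return reduce(step, items, {})

counts = count_frequencies(["a", "b", "a", "c", "a"])
# counts == {"a": 3, "b": 1, "c": 1}
```

A Scheme analogue would use `foldl` with a hash table as the seed; the observation in the thread is that the imperative update and a purely functional one are indistinguishable from outside the fold.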